Hi,
In a previous company I worked at, we had something very similar to what Carlos described. We used SmallLint to verify that there were no errors in the code (we extended SmallLint to be able to document false positives), we had what we called programming rules that were run as SUnit tests before and after integration, and we also had something we called architecture tests.
The programming rules ranged from simple ones, like checking the number of parameters, doing spell checking, or verifying that the initialize message is not sent by any object other than the class itself, up to complex validations for patterns like Visitor, State, etc.
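To give a feel for how such a rule could look as an SUnit test, here is a rough sketch of a parameter-count check in plain Smalltalk (the ProgrammingRulesTest class, the 'MyApp' prefix and the limit of four arguments are invented for the example; the real rules were more elaborate):

  TestCase subclass: #ProgrammingRulesTest
      instanceVariableNames: ''
      classVariableNames: ''
      poolDictionaries: ''
      category: 'MyApp-ProgrammingRules'

  ProgrammingRulesTest >> testMethodsDoNotTakeTooManyParameters
      "Collect every instance-side method of the (hypothetical) MyApp classes that
       takes more than four arguments and fail with the list of offenders."
      | offenders |
      offenders := OrderedCollection new.
      (Smalltalk allClasses select: [ :each | each name asString beginsWith: 'MyApp' ])
          do: [ :class |
              class selectors do: [ :selector |
                  selector numArgs > 4
                      ifTrue: [ offenders add: class name asString , '>>#' , selector asString ] ] ].
      self
          assert: offenders isEmpty
          description: 'Methods with too many parameters: ' , offenders printString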
The architecture rules were oriented towards dependencies between modules (applications in Envy), dependencies between systems, etc.
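An architecture rule could be written in the same style, for example scanning compiled methods for literal bindings to classes of a layer they are not allowed to use (again, the ArchitectureRulesTest class and the 'MyAppUI' / 'MyAppPersistence' prefixes are invented; in Envy the real rules worked at the application level):

  ArchitectureRulesTest >> testUiLayerDoesNotReferencePersistenceLayer
      "Fail if any UI class holds a literal binding to a class of the persistence layer."
      | offenders |
      offenders := OrderedCollection new.
      (Smalltalk allClasses select: [ :each | each name asString beginsWith: 'MyAppUI' ])
          do: [ :class |
              class selectors do: [ :selector |
                  (class compiledMethodAt: selector) literals do: [ :literal |
                      (literal isVariableBinding
                          and: [ literal value isBehavior
                              and: [ literal value name asString beginsWith: 'MyAppPersistence' ] ])
                          ifTrue: [ offenders add: class name asString , '>>#' , selector asString ] ] ] ].
      self
          assert: offenders isEmpty
          description: 'Forbidden dependencies: ' , offenders printString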
We had a process that defined when to run each type of test (tests were categorized as functional, programming rules, architecture rules, etc., using different suites). For example, if you changed something, first you had to run the tests related to the changed module, then the functional tests of the whole system (around 23,000 tests, run in 7 minutes), then the programming rules. After that the change was sent to the integrator, who used an automatic integration tool and also ran the architecture tests. Before closing the new release, the version was loaded into a fresh GemStone image and all the functional tests were run in GemStone. If everything ran correctly, the new version was released. The last steps (loading the code into GemStone, running the tests, etc.) are currently done automatically on a build server using Jenkins.
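One simple way to assemble and run such categorized suites in order could look like the snippet below (the category prefixes are invented, and this is only a sketch of the idea, not the tooling we actually used):

  | suiteFor |
  suiteFor := [ :prefix | | suite |
      "Collect every TestCase subclass whose class category starts with prefix into one suite."
      suite := TestSuite new.
      (TestCase allSubclasses select: [ :each | each category asString beginsWith: prefix ])
          do: [ :each | suite addTests: each buildSuite tests ].
      suite ].
  "Run the suites in the order described above, logging a one-line summary of each run."
  #('MyApp-Functional' 'MyApp-ProgrammingRules' 'MyApp-ArchitectureRules')
      do: [ :prefix |
          Transcript show: prefix , ' -> ' , (suiteFor value: prefix) run printString; cr ]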
This process allowed us to be very confident in the code we produced. Of course, it took some time to do the whole integration cycle, which is why sometimes, when the release was an internal one, we would skip running the tests in GemStone, etc.
Anyway, it feels really good to have a process like that, and it is still in use. The programmers add new rules when they find mistakes they make, sometimes to prevent errors in the evolution of the code (that is very nice too... design decisions are made explicit through these tests), etc. So, at the same time, it is a very "reflective" process...
Bye,
Hernan.
On Thu, Nov 24, 2011 at 5:49 AM, Joseph Pelrine
<jplists@metaprog.com> wrote:
On 23.11.11 22:30, stephane ducasse wrote:
At my job we have several programming rules that we check on pieces of code before integrating.
Some of these checks could be automated,
like what?
I integrate daily changes and I have a lot of simple questions. I'm quite sure that people would be interested in the answers to these questions. But for the moment I will not tell them; I want to know what you guys are asking yourselves.
I've always considered it important to have a set of code quality tests that would run before code was checked in and accepted into a common baseline: tests that are more than SUnit or functional acceptance tests, but which therefore take a bit longer to run. I'd like to have these tests automated and run when merging. SmallLint provides a number of these, and there are surely other quality and metric tools around.
This may go a bit far, but since merging is part of a build process, I'd also like to see whatever you do fit into a CRISP build:
*Complete, i.e. scorched-earth build
*Repeatable, i.e. consistent and reproducible
*Informative
*Schedulable
*Portable
FWIW
--
Joseph Pelrine [ | ]
MetaProg GmbH
Email: jpelrine@metaprog.com
Web: http://www.metaprog.com
As soon as you introduce people, things become complex.
--
Hernán Wilkinson
Agile Software Development, Teaching & Coaching
Mobile: +54 - 911 - 4470 - 7207
email: hernan.wilkinson@10Pines.com
site: http://www.10Pines.com
Address: Paraguay 523, Floor 7 N, Buenos Aires, Argentina