http://www.perlmonks.org?node_id=692804


in reply to How should I do (and document) effective semi-formal code review?

We have a semi-formal code-review tool that inspects our code tree, finds what has changed, creates diffs, and emails them to the reviewer. It also appends a note in the bug-tracking tool to say that this has been done. Once the reviewer is done, s/he runs the tool again to mark the review complete, which sends a note back to the originator AND updates the bug-tracking tool with the comments. This may need to be repeated if large enough changes are required to address the comments.
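
Roughly, the originator's half of that workflow might look like the sketch below. This is not the actual tool: the changed-file list comes from the command line, the "$file.orig" predecessor copies, the reviewer address, the defect ID, and the log file standing in for the bug tracker are all assumptions for illustration.

    #!/usr/bin/perl
    # Sketch of the "send for review" step: diff changed files, mail the
    # diffs to the reviewer, and note it against the defect.
    use strict;
    use warnings;

    my $reviewer  = 'reviewer@example.com';   # assumed address
    my $defect_id = 'DEF12345';               # hypothetical defect ID
    my @changed   = @ARGV;                    # files the real tool would discover itself

    die "usage: $0 file ...\n" unless @changed;

    # Build a unified diff for each changed file against its previous version.
    # The real tool asks the version-control system; here we assume a
    # "$file.orig" copy of the predecessor exists next to each file.
    my $diffs = '';
    for my $file (@changed) {
        my $orig = "$file.orig";
        next unless -e $orig;
        $diffs .= qx{diff -u "$orig" "$file"};
    }

    # Mail the diffs to the reviewer via the local sendmail binary.
    open my $mail, '|-', '/usr/sbin/sendmail -t'
        or die "cannot run sendmail: $!";
    print {$mail} "To: $reviewer\n",
                  "Subject: review request for $defect_id\n\n",
                  $diffs;
    close $mail;

    # Append a note saying the review was sent (a log file stands in for
    # the bug-tracking tool here).
    open my $log, '>>', "review-$defect_id.log" or die "cannot append: $!";
    print {$log} scalar(localtime), ": diffs for @changed sent to $reviewer\n";
    close $log;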

The advantage here is really asynchronicity. You don't need to find time for everyone to get together at the same time. In theory you can also have multiple reviewers all providing feedback (due to a limitation of the tools, we can't right now, but we've requested fixes to allow it). You definitely get documentation - it's in the defect remarks, along with the comments the review generates. Now, whether that's a GOOD place for the documentation is debatable.

In our case, we don't generally have a lot of experts in any given area of the code, so we can't really get a huge group together anyway. (Note that in our definition of "expert," if you even have a book on the subject in your office, you're considered an expert - we're not actually as picky as "expert" may sound).

We also hold much more formal reviews once in a while. But I find those to be rare, for good or bad.

Re^2: How should I do (and document) effective semi-formal code review?
by GrandFather (Saint) on Jun 19, 2008 at 03:01 UTC

    What are the tools that you use?


    Perl is environmentally friendly - it saves trees

      We use IBM Rational ClearCase/ClearQuest (not my choice). We wrote the code-review tool on top of that ourselves. It queries the current view to see what has been checked in, marks those files as 'under review', and sends the review. The reviewer then retrieves that into a temp directory, which marks those files as retrieved for review, and gets to see all the diffs as well as the full original and modified code. When done, the tool is invoked yet again to mark the files as having passed review, and it emails/logs the comments, if any, in ClearQuest. The problem here for me is that these marks are singletons - you can't mark a file more than once. The point of the marks is that a rejected review that leads to further changes only needs those changes reviewed, not the whole thing again, because the tool sees that some files are already reviewed; if you check out and check in again, though, the new versions of those files don't have the mark and get reviewed again.

      Of course, the tool is written in perl. ;-)
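
      The 'singleton mark' behaviour described above might be sketched like this. It is only an illustration of the idea, not the poster's tool: a flat state file keyed on a content digest stands in for ClearCase version attributes, so a re-checked-in file gets a new digest and shows up as unreviewed again. The file name, statuses, and command names are made up.

          #!/usr/bin/perl
          # Sketch of "review marks" as singletons: a given file version can be
          # marked once; a new checkin changes the digest, so the new version
          # shows up as unreviewed again.
          use strict;
          use warnings;
          use Digest::MD5;

          my $state_file = '.review-marks';    # stand-in for ClearCase metadata

          # Digest the file's contents; this plays the role of the version id.
          sub digest_of {
              my ($file) = @_;
              open my $fh, '<', $file or die "cannot read $file: $!";
              binmode $fh;
              return Digest::MD5->new->addfile($fh)->hexdigest;
          }

          # Read all recorded marks; later lines override earlier ones.
          sub load_marks {
              my %marks;
              if (open my $fh, '<', $state_file) {
                  while (my $line = <$fh>) {
                      chomp $line;
                      my ($digest, $status) = split /\t/, $line;
                      $marks{$digest} = $status;
                  }
              }
              return %marks;
          }

          sub save_mark {
              my ($digest, $status) = @_;
              open my $fh, '>>', $state_file or die "cannot write $state_file: $!";
              print {$fh} "$digest\t$status\n";
          }

          my ($action, @files) = @ARGV;
          die "usage: $0 {send|pass} file ...\n"
              unless $action && ($action eq 'send' || $action eq 'pass') && @files;

          my %marks = load_marks();
          for my $file (@files) {
              my $digest = digest_of($file);
              if ($action eq 'send') {
                  if ($marks{$digest}) {
                      print "$file: already $marks{$digest}, skipping\n";   # singleton mark
                  }
                  else {
                      save_mark($digest, 'under review');
                      print "$file: marked under review\n";
                  }
              }
              else {    # 'pass'
                  save_mark($digest, 'reviewed');
                  print "$file: passed review\n";
              }
          }

      The point mirrored here is the one the poster describes: a mark belongs to one specific version of a file, so files checked in again after a rejected review lose it and come back around for another look.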