Peer Reviews and the 50/50 Rule

Surprisingly, there is a fairly easy way to know whether your peer reviews are effective. They should be finding about 50% of your defects, and testing should be finding the other 50%. In other words, the 50/50 rule for peer reviews is that they should find half of the defects you find before shipping the product. If peer reviews aren't finding that many defects, something is wrong.

I base this rule of thumb on some study data and on my observations of a number of real projects. If you want to find out whether teams are really doing peer reviews (and doing ones that are effective), just ask what fraction of defects are found in peer reviews. If you get an answer in the 40%-60% range, they are probably doing well. If you get an answer below 40%, peer reviews are being skipped or are being done ineffectively. If the answer is "we do them but don't log them," then most of the time they are being done ineffectively, but you need to dig deeper to find out what is going on.
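If you log which phase found each defect, checking the rule is just a ratio. Below is a minimal sketch in Python; the function name and the defect counts are hypothetical illustrations, and only the 40%-60% band comes from the rule itself.

```python
# Minimal sketch of the 50/50 check. Assumes you log defects by the
# phase that found them; the counts below are made-up examples.

def review_yield(found_in_review: int, found_in_test: int) -> float:
    """Fraction of pre-ship defects caught by peer review."""
    total = found_in_review + found_in_test
    return found_in_review / total if total else 0.0

fraction = review_yield(found_in_review=37, found_in_test=45)

if 0.40 <= fraction <= 0.60:
    print(f"Review yield {fraction:.0%}: peer reviews look healthy")
elif fraction < 0.40:
    print(f"Review yield {fraction:.0%}: reviews are being skipped or done ineffectively")
else:
    # The rule only defines the 40%-60% healthy band; an unusually high
    # yield is worth investigating (e.g., is testing finding enough?).
    print(f"Review yield {fraction:.0%}: unusually high -- worth a closer look")
```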

If you are trying to find all your defects in test (instead of letting peer review catch half of them for you), you are taking some big risks. Testing is usually a more expensive way to find defects. More importantly, peer review tends to find many defects and poor design choices that are difficult to find in testing with any reasonable effort.

So, why make your testing expensive and your product more bug prone? Try some peer reviews and see what they find.

Embedded Software Risk Areas -- Five Forebodes Failure

Series Intro: this is one of a series of posts summarizing the different red flag areas I've encountered in more than a decade of doing design reviews of industry embedded system software projects. You can read more about the study here. The results of this study inspired the chapters in my book.


To conclude this series of postings, here is an observation to ponder.

One of the informal observations made across the course of these reviews was that developer teams with exactly 5 primary contributors have the most spectacular project failures. Invariably, these teams had previously completed a project successfully with 3 or 4 members, and then increased the team size to tackle a more complex project without making any changes to their software process. But they failed with the new, 5-person team.

While this is an anecdotal result, projects that grow past 4 developers should seriously consider switching to a heavier-weight software process (more paper, more formality, more methodical rigor). Smaller teams still seem to benefit from good process, but can basically get away with informality at less dramatic risk than larger teams (5 or more developers) working on more complex projects.

Does this mean that with fewer than 5 people you can simply ignore all the risk areas I've posted? No. Most of the reviews (perhaps 80%) were conducted on teams with fewer than 5 people. What it does mean is that with only one or a few developers you can get away with a lot and have only a few red flag risk areas. If you are lucky you will survive them, and if you are unlucky they will bite you. Hard. But most of the time you will slide by well enough that you will work nights and weekends and have no social life -- but you will still have a job.

But if you have 5 or more people, you probably have to do most or even all of the process activities listed in these risk areas. If you blow off using a rigorous process, it is pretty likely you will fail. Probably you will fail spectacularly.

At least this is what I have observed doing 95 reviews over 10 years in industry. Your mileage may vary.