I base this rule of thumb on some study data and my observations of a number of real projects. If you want to find out whether teams are really doing peer reviews (and doing ones that are effective), just ask what fraction of defects are found in peer reviews. If the answer is in the 40%-60% range, they are probably doing well. If the answer is below 40%, peer reviews are being skipped or are being done ineffectively. If the answer is "we do them but don't log them," then most of the time they are being done ineffectively, but you need to dig deeper to find out what is going on.
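To make the question concrete, here is a minimal sketch (in Python, with made-up defect counts and activity names) of the calculation behind it: the fraction is simply the number of defects logged in peer reviews divided by the total number of defects found across all activities.

```python
# Hypothetical defect counts by detection activity, e.g. pulled from a
# defect-tracking tool. These names and numbers are made up for illustration.
defects_by_activity = {
    "peer_review": 120,
    "unit_test": 60,
    "system_test": 50,
    "production": 20,
}

total = sum(defects_by_activity.values())
review_fraction = defects_by_activity["peer_review"] / total

print(f"Peer review share of defects found: {review_fraction:.0%}")  # ~48%

# Apply the rule of thumb from the paragraph above.
if 0.40 <= review_fraction <= 0.60:
    print("Within the 40%-60% range: reviews are likely effective.")
elif review_fraction < 0.40:
    print("Below 40%: reviews may be skipped or done ineffectively.")
```

Of course, this only works if the team actually logs where each defect was found, which is part of the point.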
If you are trying to find all your defects in test (instead of letting peer review catch half of them for you), you are taking some big risks. Test is usually a more expensive way to find defects. More importantly, peer review tends to find many defects and poor design choices that are difficult to find through testing with any reasonable effort.
So, why make your testing expensive and your product more bug-prone? Try some peer reviews and see what they find.