We're wrapping up the final report for an FAA-sponsored study of CRC and checksum performance for aviation applications, although the results generally apply to all uses of those error detection codes.
As part of our results, we came up with an informal list of "Seven Deadly Sins" (bad ideas):
- Picking a CRC based on a popularity contest instead of analysis (the first sketch after this list shows the kind of analysis that should drive the choice)
  - This includes using “standard” polynomials such as the IEEE 802.3 CRC-32
- Saying that a good checksum is as good as a bad CRC
  - Many “standard” polynomials have poor HD (Hamming Distance) at long dataword lengths
- Evaluating with randomly corrupted data instead of a BER (bit error rate) fault model (second sketch below)
  - Any useful error code looks good against totally random data corruption; it is independent random bit flips (the BER model) that separate good codes from bad ones
- Blindly using polynomial factorization to choose a CRC
  - It works for some long-dataword special cases, but not beyond that
  - Divisibility by (x+1) doubles the undetected error fraction for even numbers of bit errors (third sketch below)
- Failing to protect the message length field
  - A corrupted length field can point the receiver at data bytes as if they were the FCS, giving HD=1 (fourth sketch below)
- Failing to pick an accurate fault model and apply it
  - “We added a checksum, so ‘all’ errors will be caught” (untrue!)
  - Assuming a particular standard BER without checking the actual system
- Ignoring interaction with the bit encoding
  - E.g., bit stuffing compromises HD to give HD=2 (fifth sketch below)
  - E.g., 8b10b encoding seems to be OK, but that depends on the specific CRC polynomial
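
To make the first two sins concrete, here is a minimal sketch of the analysis that should replace the popularity contest: a brute-force search for the smallest error weight a candidate polynomial fails to detect (its HD) at your actual dataword length. The two degree-8 polynomials and the 32-bit dataword are illustrative choices only; the search relies on the fact that a CRC misses an error pattern exactly when that pattern, viewed as a polynomial, is divisible by the generator.

```python
from itertools import combinations

def gf2_mod(value, poly):
    """Remainder of 'value' divided by 'poly', both taken as GF(2) polynomials."""
    while value.bit_length() >= poly.bit_length():
        value ^= poly << (value.bit_length() - poly.bit_length())
    return value

def crc_hd(poly, dataword_bits, max_weight=4):
    """Smallest number of bit errors the CRC fails to detect (its HD).

    Relies on the fact that an error pattern goes undetected exactly when
    it is divisible by the generator polynomial.
    """
    n = dataword_bits + poly.bit_length() - 1        # dataword + FCS bits
    for weight in range(2, max_weight + 1):          # weight 1 is always caught
        for positions in combinations(range(n), weight):
            error = sum(1 << p for p in positions)
            if gf2_mod(error, poly) == 0:
                return weight
    return f"> {max_weight}"

# Two 8-bit generators checked at a 32-bit dataword; both choices are
# illustrative. Run the search at *your* dataword length before choosing.
for poly in (0x107,    # x^8 + x^2 + x + 1
             0x11D):   # x^8 + x^4 + x^3 + x^2 + 1
    print(hex(poly), "HD =", crc_hd(poly, 32))
```

Running the same search at longer dataword lengths is how you find where a polynomial's HD drops off; that is precisely the information a popularity contest ignores.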
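
For the evaluation sin, here is a hedged Monte Carlo sketch; the 8-byte dataword, the trial count, and the pairing of an 8-bit XOR checksum against an 8-bit CRC are all just illustrative. Replacing the whole codeword with random garbage makes any k-bit check value miss about 1 in 2^k corruptions, so every code looks equally good; a BER-style model of a few independent bit flips is what actually exposes the difference in HD.

```python
import random

DATA_BYTES = 8
TRIALS = 20_000

def crc8(data, poly=0x07):                 # low 8 bits of x^8 + x^2 + x + 1
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

def xor8(data):                            # simple longitudinal XOR checksum
    out = 0
    for byte in data:
        out ^= byte
    return out

def random_replacement(codeword):
    """'Random corruption': replace the whole codeword with random bytes."""
    return bytearray(random.randrange(256) for _ in range(len(codeword)))

def two_bit_flips(codeword):
    """BER-style fault: flip exactly two independently chosen bits."""
    out = bytearray(codeword)
    for pos in random.sample(range(len(out) * 8), 2):
        out[pos // 8] ^= 1 << (pos % 8)
    return out

def undetected_fraction(check, corrupt):
    misses = 0
    for _ in range(TRIALS):
        data = bytes(random.randrange(256) for _ in range(DATA_BYTES))
        codeword = bytearray(data) + bytes([check(data)])
        bad = corrupt(codeword)
        if bad != codeword and check(bad[:-1]) == bad[-1]:
            misses += 1
    return misses / TRIALS

for name, check in (("xor8 checksum", xor8), ("crc8", crc8)):
    print(name,
          "| random data misses:", undetected_fraction(check, random_replacement),
          "| 2-bit-flip misses:", undetected_fraction(check, two_bit_flips))
```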
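
The (x+1) observation can be seen by tabulating the weights of every undetectable error pattern, i.e., every multiple of the generator that fits within the codeword. A generator divisible by (x+1) catches all errors with an odd number of bit flips, but the flip side is that its misses pile up on even-weight errors at roughly twice the rate of a generator without that factor. The two polynomials and the 24-bit codeword below are illustrative only.

```python
from collections import Counter

def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def undetected_weight_histogram(poly, codeword_bits):
    """Tally every undetectable error pattern (i.e., every multiple of the
    generator that fits in the codeword) by how many bits it flips."""
    degree = poly.bit_length() - 1
    counts = Counter()
    for q in range(1, 1 << (codeword_bits - degree)):
        counts[bin(gf2_mul(q, poly)).count("1")] += 1
    return counts

CODEWORD_BITS = 24            # kept short so the enumeration runs instantly
for label, poly in (("divisible by (x+1): x^8+x^2+x+1", 0x107),
                    ("not divisible by (x+1): x^8+x^4+x^3+x^2+1", 0x11D)):
    print(label)
    hist = undetected_weight_histogram(poly, CODEWORD_BITS)
    for weight in sorted(hist):
        print(f"  {weight:2d} bit errors: {hist[weight]} undetected patterns")
```

Both generators have the same total number of undetectable patterns; the (x+1)-divisible one simply concentrates all of them on even weights, which is where the "doubling" comes from.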
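
For the length-field sin, here is a sketch of how a single flipped bit in an unprotected length field defeats the CRC outright. The frame format (one length byte, payload, 8-bit FCS over both) is invented for the example, and the payload is deliberately constructed so the miss happens; that is exactly what HD=1 means: there exists at least one single-bit error that the receiver will accept.

```python
def crc8(data, poly=0x07):                 # low 8 bits of x^8 + x^2 + x + 1
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

def build_frame(payload):
    """[length byte][payload bytes][8-bit FCS over length + payload]"""
    header = bytes([len(payload)])
    return header + payload + bytes([crc8(header + payload)])

def receive(frame):
    """The receiver trusts the received length byte to locate the FCS."""
    length = frame[0]
    payload, fcs = frame[1:1 + length], frame[1 + length]
    return payload if crc8(frame[:1 + length]) == fcs else None

# A 9-byte payload whose last byte happens to equal the CRC of the frame
# truncated to 8 payload bytes. Such datawords exist, which is all HD=1 needs.
payload = bytearray(b"ABCDEFGH\x00")
payload[8] = crc8(bytes([8]) + bytes(payload[:8]))

sent = build_frame(bytes(payload))
corrupted = bytearray(sent)
corrupted[0] ^= 0x01                       # ONE bit flip: length 9 -> 8

print("original frame accepted: ", receive(sent) == bytes(payload))
print("corrupted frame accepted:", receive(bytes(corrupted)) is not None)
print("payload the receiver sees:", receive(bytes(corrupted)))
```

The usual countermeasure is to give the length field its own protection, so the receiver never uses an unchecked length to decide where the FCS is.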
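
Finally, the bit-stuffing interaction. The stuffing rule sketched below (insert a bit of opposite value after five identical bits, in the style of CAN) is only for illustration. A single bit error on the wire can create or destroy a stuff bit, so everything after it shifts when destuffed; the CRC then faces a long burst rather than one flipped bit, and a second error that happens to re-align the stream can be enough to slip corrupted data past the CRC. That combination is why the effective HD collapses to 2.

```python
def stuff(bits):
    """Insert a stuff bit of opposite value after five identical bits."""
    out, prev, run = [], None, 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == prev else 1
        prev = b
        if run == 5:
            out.append(1 - b)
            prev, run = 1 - b, 1
    return out

def destuff(bits):
    """Remove the bit following every run of five identical bits."""
    out, prev, run, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == prev else 1
        prev = b
        if run == 5 and i + 1 < len(bits):
            i += 1                         # drop the expected stuff bit
            prev, run = bits[i], 1
        i += 1
    return out

original = [1] * 7 + [0] * 3 + [1] * 6 + [0] * 4
tx = stuff(original)
assert destuff(tx) == original             # error-free round trip is clean

rx = list(tx)
rx[4] ^= 1                                 # a single bit error on the wire
print("bits recovered without error:", len(destuff(tx)))
print("bits recovered with one error:", len(destuff(rx)),
      "(stream shifted; the CRC sees a burst, not a single flip)")
```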
(I haven't tried to map it onto the more traditional sin list... if someone comes up with a clever mapping I'll post it!)
Thanks to Kevin Driscoll and Brendan Hall at Honeywell for their work as co-investigators. You can read more about the research on my CRC and Checksum Blog. That blog has more detailed postings, slide sets, and will have the final research report when it is made publicly available.