A couple thoughts:
- Completely agree that all issues should be patched coming out of an audit. There’s a distinction between an Informational finding and one that affects security, but I think we can agree that all bugs classified at Minor risk and above should be resolved during the audit. We could also streamline this from a full risk assessment down to a simple GO/NO-GO call for each bug (i.e., is there some real likelihood of an exploit, or is it just a semantics argument?).
- Where I think this framework may be of better use is in change management. Applications are released, bugs are found, and as you note:

> I need a framework for ascertaining the severity of a bug and determining the speed with which I should patch it.

Yes! This is where Risk Assessment comes in.
Catastrophic: Mitigate immediately. Hopefully this bug is disclosed via secure channel to the developer. This means an immediate response must be organized using the upgradeability and/or emergency stop mechanisms you’ve put in place, as laid out in your Incident Response Plan (you did have one, right?). Follow up with a proper analysis and a patch that fixes the problem properly.
Major: Depends on likelihood/access. Can probably spend time doing proper analysis and responding appropriately within a short time period.
Medium: Depends on severity/impact. Can probably be deferred to a major upgrade, following a proper analysis. Let the public know it’s a problem so auditors out there can analyze the bug in concert with your code for additional vulnerabilities of higher severity.
Minor: Probably not a big deal. Wait for a minor upgrade. Keep a public record.
Informational: Up to developer discretion. Keep a public record.
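For concreteness, the tiers above could be encoded as a simple triage table. A minimal sketch in Python (the enum values and response strings are my own shorthand for the tiers described above, not an established standard):

```python
from enum import IntEnum

class Severity(IntEnum):
    """Severity tiers, ordered so comparisons work (CATASTROPHIC > MAJOR, etc.)."""
    INFORMATIONAL = 1
    MINOR = 2
    MEDIUM = 3
    MAJOR = 4
    CATASTROPHIC = 5

# Hypothetical mapping from severity to the suggested response, paraphrasing
# the tiers above; a real Incident Response Plan would carry far more detail.
RESPONSE_POLICY = {
    Severity.CATASTROPHIC: "mitigate immediately via emergency stop/upgrade, then patch properly",
    Severity.MAJOR: "proper analysis and patch within a short time period",
    Severity.MEDIUM: "defer to a major upgrade; disclose publicly after analysis",
    Severity.MINOR: "wait for a minor upgrade; keep a public record",
    Severity.INFORMATIONAL: "developer discretion; keep a public record",
}

def triage(severity: Severity) -> str:
    """Return the suggested response for a reported bug's severity."""
    return RESPONSE_POLICY[severity]
```

One nice property of encoding the policy as data is that the GO/NO-GO question from above becomes a threshold check, e.g. `severity >= Severity.MINOR`.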
What do you all think of the above?