OWASP Risk Assessment Model


#1

We have been talking in the chat about the need to formalize the risk assessment model we use for communicating vulnerabilities. There is a potential need to reduce or redefine certain qualities of the traditional model to account for specific properties of crypto networks, e.g. openness, access control of assets, etc.

Some think there is no need to re-formalize the model at all, and that a formal definition of “Impact” vs. “Likelihood” is entirely use-case/application-specific.


#2

My opinion on the Risk Assessment Model: I am an aerospace guy, and this process is very much derived from existing aerospace processes. But I am no expert in blockchain software, so this is just my starting opinion.

The risk model is the most important improvement we will make to the existing guidelines. It will allow us to assign a level of risk to an element of software. Software elements with high levels of risk will have more rigid guidelines to follow (meaning more documentation and testing). Break down those last two sentences and you have the tasks to accomplish.

  1. We must define the different levels of risk for Solidity smart contracts. How many levels will there be? How do we define each level? We must define a clear process that an auditor or developer can use to determine the risk level for their software (a rough sketch of one such process follows this list).
  2. We must define the elements of software on which a risk is defined. Is it a method, a contract, or something else?
  3. Will the guidelines recommend a more rigid process for software that has been defined as high risk? For example, they could require 100% code coverage or an additional set of static analysis tools. Or perhaps the risk level is just a label that gives awareness to the developer and the auditor without changing the development or documentation process. This should be defined.
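
As a minimal sketch, assuming an OWASP-style likelihood × impact matrix: the factor scores, band thresholds, level names, and the matrix itself below are placeholders to illustrate the shape of such a process, not a proposal for the actual levels.

```python
# Hypothetical starting point: an OWASP-style likelihood x impact matrix.
# Level names, thresholds, and the matrix are placeholders, not agreed values.

from enum import Enum
from statistics import mean


class RiskLevel(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"
    CRITICAL = "Critical"


def band(score: float) -> str:
    """Map a 0-9 factor score onto a Low/Medium/High band (OWASP-style thresholds)."""
    if score < 3:
        return "Low"
    if score < 6:
        return "Medium"
    return "High"


def risk_level(likelihood_factors: list[float], impact_factors: list[float]) -> RiskLevel:
    """Combine averaged likelihood and impact factors (each scored 0-9) into a risk level."""
    likelihood = band(mean(likelihood_factors))
    impact = band(mean(impact_factors))

    # Placeholder 3x3 matrix combining the two bands.
    matrix = {
        ("Low", "Low"): RiskLevel.LOW,
        ("Low", "Medium"): RiskLevel.LOW,
        ("Medium", "Low"): RiskLevel.LOW,
        ("Low", "High"): RiskLevel.MEDIUM,
        ("Medium", "Medium"): RiskLevel.MEDIUM,
        ("High", "Low"): RiskLevel.MEDIUM,
        ("Medium", "High"): RiskLevel.HIGH,
        ("High", "Medium"): RiskLevel.HIGH,
        ("High", "High"): RiskLevel.CRITICAL,
    }
    return matrix[(likelihood, impact)]


# Example: a publicly reachable function that moves user funds.
print(risk_level(likelihood_factors=[8, 7, 9], impact_factors=[9, 8, 6]))  # RiskLevel.CRITICAL
```

Whatever we decide for task 2 (function, contract, or whole system) and task 3 (what each level requires) would change the factor lists and thresholds, but not the overall shape of the process.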

#3

Excerpting from Telegram:

I think this discussion about risk models is heading in the wrong direction for smart contracts. I’ve never come across a problem on a security review where I wished I had a risk model, or where a risk model would have solved a problem we encountered. Unlike web applications (OWASP), all the code is public and (generally) unpatchable, and the risk of loss is immediate and total for any given issue. The correct model is that every identifiable bug should be marked MUSTFIX, rather than trying to segregate issues into high/medium/low/etc. to give yourself an excuse to ignore some.

All these contracts exist in an incredibly hostile environment, where essentially all issues do get exploited and there aren’t many secondary controls or possibilities for recovery. I think everyone at Trail of Bits strongly disagrees with copying any sort of approach from OWASP for smart contracts. The problems are just too different for it to be applicable.


#4

A couple thoughts:

  1. Completely agree that all issues should be patched coming out of an audit. There’s probably a bit of a distinction between an Informational bug and one that affects security, but I think we can agree that all bugs classified as Minor risk or above should be resolved in an audit. I think we can definitely streamline this process from a full risk assessment down to a GO/NO-GO decision for a given bug (i.e. there is some likelihood of an exploit and it’s not just a semantics argument).
  2. Where I think this framework may be of better use is in change management. Applications are released, bugs are found, and as you note:

Yes! This is where Risk Assessment comes in.

I need a framework for ascertaining the severity of a bug and determining the speed with which I should patch it.

Catastrophic: Mitigate immediately. Hopefully this bug is disclosed via a secure channel to the developer. An immediate response must be organized using the upgradeability and/or emergency stop mechanisms you’ve put in place, as laid out in your Incident Response Plan (you did have one, right?). Follow up with a proper analysis and a patch that fixes the problem properly.

Major: Depends on likelihood/access. You can probably spend time doing a proper analysis and respond appropriately within a short time period.

Medium: Depends on severity/impact. Can probably be deferred to a major upgrade, following a proper analysis. Let the public know it’s a problem so auditors out there can analyze the bug in concert with your code for additional vulnerabilities of higher severity.

Minor: Probably not a big deal. Wait for a minor upgrade. Keep a public record.

Informational: Up to developer discretion. Keep a public record.
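
To make the tiers above concrete, here is a minimal sketch of how they could be encoded as a response policy. The tier names come from the list above; the actions, patch windows, and disclosure flags are illustrative assumptions, not agreed values.

```python
# Illustrative encoding of the severity tiers above as a patch-response policy.
# Response windows and disclosure rules are assumptions for discussion, not agreed values.

from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    CATASTROPHIC = 5
    MAJOR = 4
    MEDIUM = 3
    MINOR = 2
    INFORMATIONAL = 1


@dataclass
class Response:
    action: str           # what to do with the deployed contract
    patch_window: str     # hypothetical target time to ship a fix
    public_record: bool   # whether to disclose once mitigated


RESPONSE_PLAN = {
    Severity.CATASTROPHIC: Response("trigger emergency stop / upgrade per Incident Response Plan",
                                    "immediately", True),
    Severity.MAJOR: Response("analyze, then patch via upgrade mechanism",
                             "days (depends on likelihood/access)", True),
    Severity.MEDIUM: Response("defer to next major upgrade after analysis",
                              "next major upgrade", True),
    Severity.MINOR: Response("defer to next minor upgrade",
                             "next minor upgrade", True),
    Severity.INFORMATIONAL: Response("developer discretion",
                                     "unscheduled", True),
}

print(RESPONSE_PLAN[Severity.MAJOR].patch_window)  # "days (depends on likelihood/access)"
```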


What do you guys think of the above?


#5

Also, this Risk Assessment model is probably useful for analyzing hacks after the fact. We have to ensure that high-likelihood attacks are mitigated by the process as soon as possible. Tools for high-criticality bugs should be written sooner than others.


#6

I’m not sure I see how the post-deployment scenario you outlined is appreciably different from the pre-deployment one. There are no secrets on the blockchain, and all your flaws are on display. If a flaw can be taken advantage of then, as we have learned empirically from past evidence on Ethereum, it will be. You are always in a race against bots, blackhats with automation, and criminals with a direct financial incentive to abuse your contract. I think the patch decision narrows to MUSTFIX vs. WONTFIX. In the MUSTFIX case, I think the only modifier is whether the issue is being actively exploited today or not. I’m somewhat reluctant to break these cases down into further levels of detail and provide “excuses” for delays.

I think this could use some more concrete examples to establish the use case for a risk model. In situations where clients have been affected by post-deployment issues, we were able to have effective conversations about patch strategy without a formalized taxonomy like the one you described above.


#7

Assuming the following:

  1. Tools are imperfect. You will never catch every bug before release.
  2. Smart contracts are used for wildly different things, by wildly different people, with wildly different risk tolerances.
  3. Not all bugs are attractive to exploit. Irritating users is technically an exploit, but at the end of the day a workaround is more desirable than an interruption of key business processes. A bug doesn’t have to be a vulnerability to be a “must fix”.
  4. Even with an upgrade mechanism, not all fixable bugs can be fixed immediately. Eventually, DAO-controlled upgrade mechanisms may become the norm in an attempt to reduce developer control. Signaling upgrades on such a system will be expensive, with a low tolerance for frequent use, and will thus be relegated to controlled updates when possible.

Some scenarios

  1. An ICO. Handles large amounts of funds in a short time period. A bug is found that siphons user funds slowly. MUSTFIX: Cancel the ICO. Return funds you have already collected. Apologize to your users.
  2. A large game using NFTs. Handles somewhat valuable items whose value is subjective. A bug is found that affects the transfer of items in certain cases where the recipient is a multi-sig wallet smart contract instead of an EOA. MUSTFIX? I disrupt gameplay and piss off everyone for the sake of a small subset of users who are suffering from this bug but have an easy workaround. WONTFIX? The bug is still a problem, and that subset of users eventually leaves, but no one cares.
  3. A DAO-controlled dapp where the token holders vote on upgrades. Considered critical infrastructure for users to run their business. A bug is discovered in the dapp that disables the ability to pay certain types of clients through the DAO treasury functionality. MUSTFIX? The dapp goes offline, disrupting business for days, which is deemed unacceptable by DAO users. WONTFIX? A workaround is discovered, but the decreased utility irritates users, who threaten to leave your DAO network.
  4. A timebound escrow contract produced from a factory. A bug is discovered that locks user funds if a transaction is triggered incorrectly by a third party when attempting a withdrawal. CANTFIX: There is no way to upgrade the child contract. Existing users are notified to avoid the buggy behavior.

In all of these scenarios a fix is possible, but different needs drive different timelines. A well-developed patch should be expected to go through the process again to ensure that it is not worse than what it replaces. If patches don’t follow the same process as the software itself, you run the risk of introducing extra vulnerabilities that may be more easily exploited. Sometimes a workaround or other mitigation is all that is practical to apply until a proper patch is developed. These are not “excuses” when disruption can be just as costly as using a buggy patch.

I just don’t think a one-size-fits-all process works here. I also don’t think “waiting for the experts to help” (when you have that option) is a good strategy. “Be prepared for failure” is the best strategy, and risk assessment frameworks allow you to formalize that.


#8

Hello!

We host a bug bounty platform that runs blockchain platform, smart contract, and web application programs. Currently we are using CVSS (https://www.first.org/cvss/) scores; however, they are not applicable to blockchain projects. My team is researching how to adapt CVSS to blockchain projects, and we have developed an alpha of an open-source, self-written score, BVSS (https://github.com/hknio/bvss). It includes some tests related to Ethereum smart contract vulnerabilities.

This is a first draft, and we ask for comments and contributions.
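
For context, here is a minimal sketch of the standard CVSS v3.1 base-score arithmetic (scope-unchanged case only, with the metric weights from the CVSS v3.1 specification) that a blockchain-specific variant like BVSS would adjust. This is not the BVSS formula, just the baseline it starts from.

```python
# Standard CVSS v3.1 base score, scope-unchanged case only (per the CVSS v3.1 spec).
# This is the baseline a blockchain-specific variant like BVSS would adjust;
# it is NOT the BVSS formula itself.

import math

# Metric weights from the CVSS v3.1 specification (scope unchanged).
ATTACK_VECTOR = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.2}
ATTACK_COMPLEXITY = {"low": 0.77, "high": 0.44}
PRIVILEGES_REQUIRED = {"none": 0.85, "low": 0.62, "high": 0.27}
USER_INTERACTION = {"none": 0.85, "required": 0.62}
CIA_IMPACT = {"high": 0.56, "low": 0.22, "none": 0.0}


def roundup(value: float) -> float:
    """CVSS 'Roundup': smallest one-decimal number >= value (simplified; the spec
    also guards against floating-point artifacts)."""
    return math.ceil(value * 10) / 10


def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    exploitability = 8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac] \
        * PRIVILEGES_REQUIRED[pr] * USER_INTERACTION[ui]

    iss = 1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a])
    impact = 6.42 * iss

    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# Example: network-reachable, low complexity, no privileges or interaction,
# high C/I/A impact, a rough stand-in for a fund-draining bug.
print(base_score("network", "low", "none", "none", "high", "high", "high"))  # 9.8
```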