Audit Listings


#1

I think an immutable listing of the public audits an auditor has performed, keyed to something like a hash of the audited bytecode that can be automatically validated via a “stamp”, would satisfy transparency and accountability.

The listing being public and immutable means you can’t go back on what you said when a hack occurs, so people know your record. Additional metrics such as funds held, users, etc. could also be useful.
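For illustration, here is a minimal off-chain sketch of how such a stamp might be checked, assuming a hypothetical registry keyed by the keccak256 hash of a contract’s deployed bytecode. The registry contents, the RPC endpoint, and the function name are placeholders, not an existing service; web3.py is used for chain access.

```python
# Hypothetical sketch: look up audit "stamps" by the hash of deployed bytecode.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder RPC endpoint

# Hypothetical registry: bytecode hash -> list of audit stamps.
AUDIT_REGISTRY: dict[str, list[dict]] = {
    # "0x<keccak256 of runtime bytecode>": [{"auditor": "...", "report": "ipfs://..."}],
}

def audits_for(address: str) -> list[dict]:
    """Return the audit stamps registered for the code deployed at `address`."""
    runtime_bytecode = w3.eth.get_code(Web3.to_checksum_address(address))
    code_hash = Web3.keccak(runtime_bytecode).hex()
    return AUDIT_REGISTRY.get(code_hash, [])
```

In the proposal, the dictionary would presumably be replaced by an on-chain contract that auditors write their stamps to, which is what would make the listing immutable.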

Thoughts?

Edit: this may end up being similar to or compatible with Panvala @maurelian


#2

I don’t understand what problem this solves, or why an auditor would agree to participate in it. You’ve succeeded in providing a way for normal people to find and read audit reports; however:

  • Individuals are ill-equipped to understand the output of an audit. They are typically not engineers, they are not aware of the codebase that was audited, and they are not security experts. They lack any context to understand the output. The appropriate audience for an audit report is the engineering team. Making a report public provides the report to the wrong audience. For better or worse, it also trains your attackers on the best methods to break your product.

  • It creates a perverse incentive where the person paying for the audit thinks they’re buying a clean bill of health they can use as a marketing document, rather than engineering help to design a secure product. The client will optimize for the goal of getting a document in the registry and a document that looks good, rather than getting the help that they need. If the auditor documents negative information, then the client won’t want to publish it. If they can’t publish it, they won’t want to pay for it. This scenario pits the auditor against the client, rather than positions them to work together.

  • There are many audits that you want to work on but the client’s budget is too low to consider the full scope or to staff the most senior people. This type of work is still valuable to do, since the client is being helped regardless. They are in a better spot than where they started, although the output looks diminutive. In the situation that you outlined, this creates a huge liability for the auditor, since now it looks like they were malfeasant if they missed something.

  • Finally, this creates a massive transfer of risk from the owner of the code to the auditor. It sets up the auditor as the “fall guy” so that when problems are encountered in the future, the owner can simply point out that they received an audit and believed they were ok, thus taking no responsibility for their own code. On the other hand, the auditor lacks agency to implement fixes on the owner’s behalf to protect them. This kind of system would easily 10x the rate I’d need to charge for each audit since any future criticism of my work would sink my entire company.

I’ve seen several projects that claim to do exactly what you described, and they are all severely flawed in these ways or others. Trail of Bits has steadfastly refused to participate in them, and will not in the future.

Ultimately, security assessments do not make a product secure. The transparency that you want to encourage should come from the product owner. If it is coming from a security reviewer, then you must acknowledge that you will create a necessarily adversarial relationship.


#3

Not quite the intent, but allowing anyone (including other security professionals) to read the audit documentation can help in understanding where the process broke down if a vulnerability is discovered by anyone. An exploited project is not incentivized to share this after the fact, but if it’s publicly available beforehand then they have no choice. We can’t always assume the auditor did a good job, just like we can’t assume the developer listened to the auditor or followed a secure process. And even if everyone did their jobs to the best of their abilities, a vulnerability slipping through means the process itself is broken.

It’s not about blame; it’s about understanding.


The real problem is that users need to understand security guarantees in some way or another, otherwise why would I trust a dapp with my money?

How would you propose we solve this problem?


#4

The real problem is that users need to understand security guarantees in some way or another, otherwise why would I trust a dapp with my money?

I think the real problem is that developers cannot prove to themselves that their own code is safe. Don’t you think it’s a little bit premature to try and communicate something that no one understands yet? I think we’re racing to help people create the perception of security rather than real secure code.

Further, I don’t think that users are equipped to understand any type of contextualized security metrics. I mentioned this several times in this thread:

The typical issue with this approach is that individuals are either security experts or they are not. There is no real in between. Efforts at transparency typically fail because consumers are as likely to believe vendor FUD and snake oil as they are real, vetted truth. They are not sophisticated enough to tell the difference between the two. You can try to tell them that your transparent metrics are the ones they should believe, but ultimately the psychology behind what consumers will trust is hard to predict.

Individuals are ill-equipped to understand the output of an audit. They are typically not engineers, they are not aware of the codebase that was audited, and they are not security experts. They lack any context to understand the output. The appropriate audience for an audit report is the engineering team. Making a report public provides the report to the wrong audience. For better or worse, it also trains your attackers on the best methods to break your product.

In the cases where we have released audit reports and been highly critical of our clients’ code, users did not react to this information at all and simply displayed irrational exuberance that an audit was completed. The parties that are most interested in reading audit reports are those that have an outsized financial incentive to do so, e.g., hackers that intend to use the information to launch attacks down the line.

What you want is for the product owner to make a bet and put money on the line that their code is safe to use. In the real world, this is typically “insurance” or, in a stricter case, “liability.”

I think the most productive thing that we can do right now is to create languages and frameworks that people can trust, and provide tools and guidance to help them get better. I see all the rest of this (like “help product owners convince users that their product is worth paying for and using”) as out of scope, at least for right now.


#5

This is the crux of the problem, yes. You cannot prove to anyone that a piece of code is secure. You can only create degrees of likelihood. Even a formal proof of correctness is still only a high likelihood of security, since the proving engine could have a bug, the underlying VM might have a bug, or the setup of the proof could be incorrect, as some things are hard to fully specify in a formal logic framework. The core of the problem is that we’re human, and we make mistakes and do irrational things sometimes. I don’t think any tool or compiler can ever fix this. I hope we all agree on this.

To me, the only way to build security is through layering. Yes, one proving engine could be wrong, but several tools and reviews together can build a robust underpinning for security. It requires a lot of effort, but the reward is a very low likelihood of an exploit occurring. At the end of the day, the only true test of security is simply exposure to a caustic environment. The longer a system goes without an exploit, despite a real incentive to attack it, the less likely it is that an exploitable flaw remains. At the very least, most of the layers are holding up.

Layered security is a process. Without developing that process, you risk making mistakes that can prove fatal, because you’re playing with highly valued assets with no room for error. Being unaware that you need to develop a process is the number one problem with newcomers to this space. They come in with bright ideas of how to change the world and don’t take seriously just how hard we tell them it is. They learn the hard way that they need to develop a process, and sometimes that lesson comes too late. The “hard knocks” school of learning is cruel and ultimately unnecessary; I would rather try to distill our collective guidance for reference so we can build a culture of learning from previous mistakes. Tools, languages, and frameworks can help, but the real solution is developing a security mindset through process and understanding. We need to show them how to be like us.


I agree. The average person has no idea what they’re looking at, and would prefer to delegate that judgment to some sort of trusted logo or green checkmark, or at the very least a grading system that contextualizes the risk into digestible rankings. We can build a system like that together if we choose to. That requires having all of the information available in a reasonably standard format, so we can at least argue effectively about placing that logo or giving a set of software a certain grade. It also needs to be based in a provable reality, so it doesn’t end up being used as snake oil, as you mention. A system where the developer places a bet on a certain grading, inviting others to disprove that grading based on logic, might be a plan for doing this.

The other option is keeping it all private, forcing the developer to build trust through usage. That information asymmetry makes exploits harder to find, but users then face the same asymmetry and can easily fall prey to scams as well.

Putting money on the line is a great way to accomplish this. What would that money be used for? Insurance for users is one option: “at the very least I’ll get some of my money back.” A public bug bounty is another, encouraging white hat reporting over private channels instead of exploitation. We can build a few of these systems for others to use in their quest to ensure post-deployment safety nets. They do require some transparency to be effective, however.
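For concreteness, a rough sketch of how the “developer places a bet on a grading” idea could be modeled, with the staked bond doubling as the bounty paid on a successful challenge. Everything here is hypothetical (the names, the rubric, the adjudication step); it is not a reference to any existing system.

```python
# Hypothetical sketch: a developer stakes a bond behind a claimed security
# grade; a successful challenge slashes the bond and pays it to the challenger.
from dataclasses import dataclass

@dataclass
class GradedClaim:
    developer: str
    grade: str       # e.g. "A" through "F" under some agreed rubric
    bond_wei: int    # funds staked behind the claim

class GradingRegistry:
    def __init__(self) -> None:
        # Keyed by contract address; a real system would live on-chain.
        self.claims: dict[str, GradedClaim] = {}

    def post_claim(self, address: str, claim: GradedClaim) -> None:
        self.claims[address] = claim

    def challenge(self, address: str, challenger: str) -> int:
        """Assume adjudication (deciding the disproof is valid) happened
        elsewhere; here we only slash the bond and return the payout."""
        claim = self.claims[address]
        payout, claim.bond_wei = claim.bond_wei, 0
        return payout  # paid to `challenger` as the bounty
```

An on-chain version would need an adjudication mechanism for deciding whether a disproof actually stands, which is the hard part this sketch deliberately leaves out.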


#6

@fubuloubu as a mod, you can/should move this (beneficial) sidetrack to a new thread.

Moderator: Done


@dguido Thank you for the excellent critiques.

