This is the crux of the problem, yes. You cannot prove to anyone that a piece of code is secure. You can only create degrees of likelihood. Even a formal proof of correctness is still only a high likelihood of security since the proofing engine could have a bug, or the underlying VM might have a bug, or the setup of the proof could be incorrect as some things are hard to fully specify into a formal logic framework. The core of the problem is that we’re human, and we make mistakes and do irrational things sometimes. I don’t think any tool or compiler can ever fix this. I hope we all agree on this.
To me, the only way to build security is through layering. Yes, one proofing engine could be wrong, but several tools and reviews together can build a robust underpinning for security. It requires a lot of effort, but the reward is a very low likelihood of an exploit occurring. At the end of the day, the only true test of security is simply exposure to a caustic environment. The longer something survives without an exploit despite a standing incentive to attack it, the less likely it is to contain one. At the very least, most of the layers are holding up.
Layered security is a process. Without developing that process, you risk making mistakes that can prove fatal, because you're playing with highly valued assets with no room for error. Being unaware that you need to develop a process is the number one problem with newcomers to this space. They come in with bright ideas of how to change the world and don't take seriously just how hard we tell them it is. They learn the hard way that they need to develop a process, and sometimes that lesson comes too late. The "hard knocks" school of learning is cruel and ultimately unnecessary; I would rather try to distill our collective guidance for reference so we can build a culture of learning from previous mistakes. Tools, languages, and frameworks can help, but the real solution is developing a security mindset through process and understanding. We need to show them how to be like us.
I agree. The average person has no idea what they're looking at, and would prefer to delegate that judgment to some sort of trusted logo or green checkmark, or at the very least a grading system that contextualizes the risk into digestible rankings. We can build a system like that together if we choose to. That requires having all of the information available in a reasonably standard format, so we can at least argue effectively about placing that logo or giving a set of software a certain grade. It also needs to be grounded in provable reality, so it doesn't end up being used as snake oil, as you mention. A system where the developer places a bet on a certain grade, inviting others to disprove that grade with evidence, might be one way to do this.
The other option is keeping it all private, forcing the developer to build trust through usage. That information asymmetry makes exploits harder to find, but users face the same asymmetry and can easily fall prey to scams as well.
This is a great way to accomplish that. What would the money be used for? Insurance for users is one option: "at the very least I'll get some of my money back." A public bug bounty is another, encouraging white-hat reporting through private channels instead of exploitation. We can build a few of these systems for others to use in their quest to ensure post-deployment safety nets. However, they require some transparency to be effective.