ETH Berlin Unconference Lightning Talks


That’s not really a security bug, but an economic one.

Assuming it wasn’t programmed in by accident, I think that falls under the category of “intentional hoodwinking”, and the creator of such a project would probably not run the tools in the first place. Such a bug, if found in another project, should be raised as an issue with that project. I don’t think there is a need to put it on the security bug list.


Are there security professionals incentivized by the network who can interpret the results for the developers?


Maybe… do we have a clear-cut distinction between the two? What are your definitions? In your opinion, was batchOverflow a security issue or an economic one? If economic, then why do we care about integer under-/overflows in the context of security audits?


Not in the current version of the protocol, but it’s definitely something to consider, unless we get more precise analyses or experiment with AI.


Maybe not the best delineation, but I would say a security bug can be exploited by someone for personal gain, while an economic one just breaks the economics as written in the spec, with no extrinsic gains made by anyone. Both are bugs; I was trying to make a distinction about who benefits.

This is all as opposed to a malicious contract writer who may program in some obscure behaviors that are economically gameable. They are unlikely to conduct their own audit, IMO.


Are the tools large enough in operating footprint (RAM, disk space, CPU time) that having a network for execution of these instead of your own computer makes sense?


So, in the case of BeautyChain, the attacker could mint themselves tokens by exploiting an integer overflow, and then sell those tokens on an exchange. That’s a personal gain if the minted tokens reach exchanges and get traded.
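For concreteness, here’s a minimal Python sketch of that batchOverflow-style pattern, with Solidity’s uint256 wrapping arithmetic simulated via modular arithmetic. The function and variable names are illustrative, not BeautyChain’s actual code:

```python
# A minimal sketch of the batchOverflow-style bug (CVE-2018-10299),
# with Solidity's 256-bit wrapping arithmetic simulated in Python.
# Names are illustrative, not BeautyChain's actual code.
UINT256 = 2 ** 256

def batch_transfer(balances, sender, receivers, value):
    """Vulnerable pattern: `cnt * value` silently wraps modulo 2**256."""
    cnt = len(receivers)
    amount = (cnt * value) % UINT256  # with cnt=2, value=2**255: wraps to 0
    # The balance check passes because `amount` wrapped to 0 ...
    assert balances.get(sender, 0) >= amount, "insufficient balance"
    balances[sender] = (balances.get(sender, 0) - amount) % UINT256
    # ... yet each receiver is still credited the enormous `value`.
    for r in receivers:
        balances[r] = (balances.get(r, 0) + value) % UINT256

balances = {"attacker": 0}
batch_transfer(balances, "attacker", ["addr1", "addr2"], 2 ** 255)
print(balances["addr1"] == 2 ** 255)  # True: tokens minted from nothing
```

The attacker pays nothing (their balance stays at 0) but both receiver addresses end up holding 2**255 tokens each, which can then be sold.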


It depends on the contract, the scope of the analysis, and the desired precision.

Currently, the majority (but not all!) of the contracts we analyzed with Oyente or Mythril finished within seconds. We believe, however, that the complexity of contracts, and the analysis times they require, will tend to increase, similarly to how it has happened with other software.


I think my original comment may have gotten muddled.

Being able to change the supply (without minting new tokens to oneself) is an economic bug: it changes the supply and has other undesirable effects, but no one can directly benefit.

Being able to mint oneself new tokens (separate from increasing the supply) allows an attacker to steal funds by selling on an exchange. There is a direct benefit to doing that, so an attacker is more incentivized to do it.

Security vs. Economic bug is probably a bad distinction on my part.


We, from NuCypher, propose the following talks:

  • “Using proxy re-encryption for access control in end-to-end encrypted apps”. We’ll have a testnet working by then, so we can probably do a good demo;
  • “Private smart contracts with Fully Homomorphic Encryption”. Recent research we’ve been doing on accelerating fully homomorphic encryption (we’re at around several thousand ops/s there), in order to apply it to private smart contracts.
  • “Sidechain design with a smart contract counting the number of confirmations”. I came up with yet another sidechain design recently []. The trick here is that a smart contract on the main chain tracks all the forks and counts confirmations. It seems fairly simple, and I wouldn’t mind discussing it with the broader community.

If any (or all!) of this is of interest, we’d love to talk!


That’s obviously a big component of what we are doing, so can definitely go into detail and discussion. It would be great to get everyone’s feedback. The more critical the better.


I’d love to see the talk on fully homomorphic encryption!


I would also love to see that talk, but at another event.

Personally, I see homomorphic encryption as a fairly speculative research topic that isn’t immediately applicable to improving the state of Ethereum smart contract security. That’s based on my perception of the scope I originally proposed.


I think this is fair. I’m personally really excited about homomorphic encryption; I think it’s really cool (databases! file sharing!), but it’s definitely out of scope unless it relates to security concerns of using it in smart contracts.

Even then, it’s a pretty underused concept in the smart contract space, where it’s not likely to have much impact overall, so unfortunately it’s probably not a good topic.

What do you guys think?


For the security unconference, I propose this 30 min talk:

Unpopular Opinions about Smart Contract Security

Since the DAO hack, a lot of progress has been made in the field of smart contract security. However, some suboptimal security practices have emerged. This talk will challenge some of the current practices:
- Putting more money into audits than into bug bounty reward pools.
- Assuming that SafeMath protects against overflow issues.
- Using withdrawal patterns in all circumstances.
- Over-engineering smart contracts.
- Protecting users from their own mistakes at the smart contract level.
- Over-classifying auditor comments as vulnerabilities.
This talk will cover both why those practices have emerged and why they are not optimal.


Why doesn’t it?

About the rest, I think these are excellent and interesting discussion points. I think I already understand the argument for each of them, though, so I wonder if there’s an opportunity to approach it more as a dialogue than a one-to-many talk?


No, it doesn’t; it turns overflow issues into bugs that block the function of the contract.
For example, in the finalize function of a token sale, it’s far better not to use SafeMath than to use it, since the failure mode with SafeMath is everything being locked forever, which is the worst possible failure mode, worse than any potential overflow issue.
(But I don’t want to spoil everything :slight_smile: )
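To illustrate the trade-off, here’s a minimal Python sketch of the two failure modes, with uint256 simulated via modular arithmetic. `safe_add`, `wrapping_add`, and the finalize scenario are illustrative assumptions, not real contract code:

```python
# A sketch contrasting SafeMath-style checked arithmetic with raw EVM
# wrapping arithmetic, simulated in Python. Names are illustrative.
UINT256 = 2 ** 256

def safe_add(a, b):
    """SafeMath-style addition: raise (i.e., revert the tx) on overflow."""
    if a + b >= UINT256:
        raise OverflowError("SafeMath: addition overflow, tx reverts")
    return a + b

def wrapping_add(a, b):
    """Raw EVM semantics: silently wrap around modulo 2**256."""
    return (a + b) % UINT256

# Suppose a token sale's finalize() adds to a counter that has somehow
# reached a pathological value. The two failure modes differ sharply:
total = UINT256 - 1
try:
    safe_add(total, 10)       # reverts, and will revert on every retry:
except OverflowError as e:    # finalize() can never complete, funds locked
    print(e)
print(wrapping_add(total, 10))  # prints 9: a wrong number, but finalize()
                                # completes and funds can still move
```

The point being argued above is that the checked version converts a wrong-number bug into a permanent denial of service on that code path, which in a finalize-style function can be the more damaging outcome.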

A dialogue could also be nice if we want to do it unconference-style.


I will officially put forward “Guidelines”. I believe our group (not just SecurEth, but the whole group) should curate a set of guidelines that includes recommendations on tools, documents, and resources for developers before their code is presented for a security assessment. This is worth a short talk.


Hopefully still in time, here’s a proposal for a 15-minute Mythril talk:

The Mythril Roadmap (sorry, I can’t think of a better title right now)

  • Symbolic Execution Engine Improvements
  • Public API & Developer Tools
  • Integration with development environments, IDEs and CI
  • Benchmarking
  • Analysis-at-scale on the mainnet