I’ve recently spent altogether too much time putting together an analysis of the limits on block size and transactions per second imposed by various technical bottlenecks. The methodology is to choose specific operating goals and then estimate throughput and maximum block size for each of several operating requirements for Bitcoin nodes and for the Bitcoin network as a whole. The smallest of those estimates represents the actual throughput limit for the chosen goals, so solving that bottleneck should be the highest priority.
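
To make that concrete, here’s a minimal sketch of the “smallest bottleneck wins” calculation. The resource names and numbers below are invented placeholders, not estimates from the paper:

```python
# Toy sketch of the methodology: estimate the maximum throughput each resource
# allows under the chosen goals, then take the minimum -- that bottleneck sets
# the effective limit and is the highest priority to solve.

# Hypothetical per-resource throughput estimates, in transactions per second
# (made-up numbers for illustration only).
bottleneck_limits_tps = {
    "bandwidth": 140.0,
    "block relay latency": 90.0,
    "blockchain/UTXO storage": 12.0,
    "UTXO memory": 9.0,
    "CPU validation": 200.0,
}

limiting_resource = min(bottleneck_limits_tps, key=bottleneck_limits_tps.get)
print(f"Effective limit: {bottleneck_limits_tps[limiting_resource]:.0f} tps, "
      f"set by '{limiting_resource}'")
```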

The goals I chose are supported by some research into available machine resources in the world, and to my knowledge this is the first paper that suggests any specific operating goals for Bitcoin. However, the goals I chose are rough and very much up for debate. I strongly recommend that the Bitcoin community come to some consensus on what the goals should be and how they should evolve over time. Agreeing on goals makes unambiguous quantitative analysis possible, which would make the blocksize debate much more clear-cut and decisions about it much simpler. Specifically, it would make clear whether people disagree about the goals themselves or about the solutions for achieving them.

There are many simplifications I made in my estimations, and I fully expect to have made plenty of mistakes. I would appreciate it if people could review the paper and point out any mistakes, insufficiently supported logic, or missing information so those issues can be addressed and corrected. Any feedback would help (especially review of my math)!

Here’s the paper: https://github.com/fresheneesz/bitcoinThroughputAnalysis

Oh, I should also mention that there’s a spreadsheet you can download and use to play around with the goals yourself and look closer at how the numbers were calculated. Also, there was a discussion on r/BitcoinDiscussion a while back.

tl;dr

  • Bitcoin’s most constraining current bottlenecks, according to the goals I chose, are storage of the blockchain and UTXO set, and the memory used by the UTXO database. These two almost definitely don’t meet the chosen goals.
  • The highest-priority improvement should probably be fraud proofs, since they drastically improve network security in an environment where most users run SPV nodes.
  • The second-highest-priority improvement should be Assume-UTXO, done in a way that lets most nodes ignore historical data (fraud proofs are required for this).
  • The most effective future improvement will probably be some kind of accumulator (like Utreexo), since that will eliminate the size of the UTXO set as a bottleneck entirely.
  • In 10 years, if all these existing ideas are implemented, we can safely get to 100 transactions per second on-chain (see the sketch after this list for a rough sense of what that implies for block size).
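
For a rough sense of scale on that last point, here’s a back-of-envelope sketch relating on-chain tps to average block size. The average transaction size used here is my own assumption, not a number from the paper:

```python
# Back-of-envelope arithmetic: block size needed to sustain a given on-chain
# transactions-per-second rate. The ~600 s block interval is Bitcoin's target;
# the average transaction size is an assumption and varies in practice.

AVG_BLOCK_INTERVAL_S = 600      # one block per ~10 minutes on average
ASSUMED_AVG_TX_BYTES = 400      # assumption; real averages drift over time

def required_block_size_mb(tps: float) -> float:
    """Average block size (MB) needed to sustain a given tx/second rate."""
    return tps * AVG_BLOCK_INTERVAL_S * ASSUMED_AVG_TX_BYTES / 1_000_000

print(f"{required_block_size_mb(100):.0f} MB blocks at 100 tps")  # ~24 MB
```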

submitted by /u/fresheneesz