Recently, Blockstream CSO and long-time bitcoin evangelist Samson Mow gave an interview in which he put forward an unusual viewpoint about bitcoin’s block size: it was perhaps “too big.” His reasoning was that, thanks to changes introduced by the 2017 SegWit soft fork, the effective limit on bitcoin’s block size is now around 4 MB, as opposed to the official 1 MB cap that Satoshi put in place in 2010 and that has been fought over just about every year since. In this article we review some of the highlights of the most divisive technical issue in the history of bitcoin: the cap on the size of its blocks.


What is the Bitcoin Block Size Limit Debate?

The block size debate has been ongoing for about five years now and is probably the most contentious issue bitcoin’s long-standing cryptocurrency community has faced. It comes down to the question of whether bitcoin’s current 1 MB block size limit should be raised. The debate was initially spearheaded by bitcoin developer and, at one point, Satoshi Nakamoto’s right-hand man, Gavin Andresen. In October 2014, Andresen proposed a hard fork that would allow the block size limit to grow by 50% every year. In the blog post outlining his reasoning for the increase, he stated the following:

“Because Satoshi Said So isn’t a valid reason… I think the maximum block size must be increased for the same reason the limit of 21 million bitcoins must NEVER be increased: because people were told that the system would scale up to handle lots of transactions, just as they were told that there will only ever be 21 million bitcoins…

Agreeing on exactly how to accomplish that goal is where people start to disagree – there are lots of possible solutions. Here is my current favorite: roll out a hard fork that increases the maximum block size, and implements a rule to increase that size over time, very similar to the rule that decreases the block reward over time.” – Gavin Andresen.
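As a rough illustration of the rule Andresen described (a sketch, assuming the 50%-per-year growth applied to the 1 MB cap; his actual proposal went through several revisions), the limit would compound much like the block reward halving schedule in reverse:

```python
# Hypothetical sketch of the growth rule described above:
# the block size limit grows by 50% per year from the 1 MB cap.
def projected_limit_mb(years: int, base_mb: float = 1.0, growth: float = 1.5) -> float:
    """Block size limit after `years` of 50% annual growth."""
    return base_mb * growth ** years

for year in range(0, 11, 2):
    print(f"year {year:2d}: {projected_limit_mb(year):8.2f} MB")
```

Compounding at 50% per year, the cap would pass 57 MB within a decade, which is precisely what worried the small-block camp.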

Pros of Raising the Block Size Limit

The main “pro” of increasing the block size limit is that more transactions could fit into a single block, thereby making bitcoin more scalable by allowing faster, cheaper transactions. After all, bigger blocks = more room for transactions = faster confirmation times and lower fees. As a form of “proof” of this, we can take a look at the average transaction fees of Bitcoin vs. Bitcoin Cash vs. Bitcoin SV:

Coin    Block Size Limit    Average Tx Fee (as of 10/8/19)
BTC     1 MB                $0.476
BCH     24 MB               $0.0025
BSV     2,000 MB            $0.00045

Thus, it is easy to see an inverse relationship between block size limit and transaction fee, ignoring several other factors that may or may not play a role in this dynamic. Does this mean that one of the three bitcoins is more “bitcoinier” than the others? As Satoshi Nakamoto is not around to tell us, we simply don’t know. What we do know is that Satoshi probably didn’t have a particular block size limit in mind, and that the block size was initially uncapped. Early developer Hal Finney suggested capping blocks at 1 MB in order to prevent denial-of-service (DoS) attacks on the bitcoin network, and after some consideration, the cap was implemented in a 2010 release of the client software.

Though it is impossible to guess what his stance would be today, with things having evolved considerably since his involvement ended, it is evident that Satoshi wanted bitcoin to be able to scale to the likes of Visa, meaning that its current throughput of around 7 transactions per second just wasn’t enough. So why have the Bitcoin Core developers (of BTC) been so reluctant to increase the maximum block size, following in the footsteps of Bitcoin Cash and Bitcoin SV?
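The oft-quoted ~7 transactions-per-second figure follows from simple back-of-the-envelope arithmetic (assuming an average transaction size of roughly 250 bytes, a commonly cited estimate, and the 10-minute target block interval):

```python
# Back-of-the-envelope throughput estimate for a 1 MB block limit.
# The ~250-byte average transaction size is a rough assumption;
# real transactions vary widely.
BLOCK_SIZE_BYTES = 1_000_000
AVG_TX_BYTES = 250
BLOCK_INTERVAL_SECONDS = 600  # 10-minute target

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES       # ~4,000 transactions
tps = txs_per_block / BLOCK_INTERVAL_SECONDS           # ~6.7 tx/s

print(f"{txs_per_block} txs per block, ~{tps:.1f} tx/s")
```

Visa, by comparison, routinely handles thousands of transactions per second, which makes the gap Satoshi hoped to close obvious.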

Cons of Raising the Block Size Limit

The “con” side of the block size debate is a bit more complicated and requires some patience to fully understand. First, it should be noted that the BTC blockchain is currently 284 GB in size. While that is small enough to fit on the hard drive of almost any modern computer, it would still take several days (or even weeks) for an average internet connection to download it in its entirety, making it difficult for anybody to just suddenly decide to run a full bitcoin node.
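To put that download burden in perspective, here is a rough calculation (assuming the 284 GB chain size above and a range of typical consumer connection speeds; it counts download time only, and in practice block validation adds considerably more):

```python
# Rough initial-download time for the ~284 GB chain (download only;
# validating every block adds significant extra time in practice).
CHAIN_SIZE_GB = 284

def download_days(mbps: float, chain_gb: float = CHAIN_SIZE_GB) -> float:
    """Days to download `chain_gb` gigabytes at `mbps` megabits/second."""
    seconds = chain_gb * 8_000 / mbps  # 1 GB = 8,000 megabits
    return seconds / 86_400            # 86,400 seconds per day

for speed in (5, 25, 100):  # Mbps
    print(f"{speed:4d} Mbps -> {download_days(speed):5.1f} days")
```

At 5 Mbps the download alone takes over five days, and every increase in the block size limit makes this first hurdle higher.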

One of bitcoin’s core principles is decentralization: anybody should be able to run their own node and validate blocks from miners if they so choose. Having a large number of nodes is a sign of a healthy, robust network, ensuring that transaction data can be propagated to all corners of the globe quickly. If the blockchain were to become too big or unwieldy, it would discourage potential operators from running a node. This factor, combined with nodes being unable to download large amounts of transaction data fast enough, would lead to an overall decrease in the number of nodes, concentrating the network among those with the economic means to continue running them. As Core developer Gregory Maxwell explained in an interview, the problem with running a node on a huge, recklessly expanding blockchain is as follows:

“There’s an inherent tradeoff between scale and decentralization when you talk about transactions on the network… You’d need a lot of bandwidth, on the order of a gigabit connection. It would work. The problem is that it wouldn’t be very decentralized, because who is going to run a node?” – Gregory Maxwell

Bitcoin’s Civil War: a Nation Divided

Gavin Andresen’s proposal was ultimately rejected, as were the many other Bitcoin Improvement Proposals (BIPs) featuring block size increases that were presented in subsequent years. The rejections fueled a bitter argument within the bitcoin community and ultimately resulted in the splintering off of a “big blocker” faction that would go on to support the Bitcoin Cash hard fork in 2017. The community was deeply divided on the issue, with some prominent members throwing in the towel on bitcoin altogether, dismayed by the personal attacks and unprofessionalism on display on both sides.

Those who remained in the BTC camp were more willing to support the SegWit soft fork, which squeezes more transactions into each block by segregating signature (witness) data from the rest of the transaction and discounting it when measuring block size. In addition to solving other problems such as transaction malleability, SegWit is thus a compromise in lieu of a big-block solution, even if it is not the one the big blocker camp desired.
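SegWit’s accounting, defined in BIP 141, can be sketched as follows: the old block size cap is replaced by a block *weight* cap, where each non-witness byte counts four weight units and each witness byte counts one, against a 4,000,000-unit limit. This is why the effective limit is often quoted as “around 4 MB”:

```python
# Sketch of SegWit's block-weight rule (BIP 141):
# weight = 4 * non-witness bytes + 1 * witness bytes, capped at 4,000,000.
MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_bytes: int, witness_bytes: int) -> int:
    """Weight units for a block with the given byte breakdown."""
    return 4 * base_bytes + witness_bytes

# A legacy-only block has no witness data, so the old 1 MB cap still binds.
assert block_weight(1_000_000, 0) == MAX_BLOCK_WEIGHT

# A witness-heavy block can carry more total bytes under the same cap:
# 600 KB of base data + 1.6 MB of witness data = 2.2 MB, weight 4,000,000.
print(block_weight(600_000, 1_600_000))
```

In practice blocks are rarely witness-heavy enough to approach 4 MB of raw data, which is why average post-SegWit blocks land well below that ceiling.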

Another solution to bitcoin’s scaling problem that does not involve raising the block size limit is the introduction of second-layer solutions such as the Lightning Network. Though it is still largely experimental and sorely lacking a decent user interface, transactions performed on the Lightning Network do not take place on the blockchain, which gives it immense potential for scaling bitcoin.
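At a very high level, the idea can be modeled like this (a deliberately simplified toy sketch, not the actual Lightning protocol with its HTLCs and penalty transactions): two parties open a payment channel, exchange any number of off-chain balance updates, and only the opening and closing states ever touch the chain.

```python
# Toy payment-channel model: only the opening and closing states reach
# the blockchain; every intermediate balance update stays off-chain.
class Channel:
    def __init__(self, alice_sats: int, bob_sats: int):
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.onchain_txs = 1       # funding transaction
        self.offchain_updates = 0

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_updates += 1  # no blockchain transaction needed

    def close(self) -> dict:
        self.onchain_txs += 1       # settlement transaction
        return self.balances

ch = Channel(alice_sats=50_000, bob_sats=50_000)
for _ in range(1_000):
    ch.pay("alice", "bob", 10)
print(ch.close(), ch.onchain_txs, ch.offchain_updates)
```

A thousand payments settle with just two on-chain transactions, which is the essence of the second-layer scaling argument.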

Regardless of how bitcoin approaches its scaling solutions going forward, lessons about the value of consensus, civility and a soundly intellectual approach (rather than emotionally- or financially-driven ones) can be learned from the past.