
51% Attack in Blockchain Technology [Explained]

By design, a blockchain is resistant to attack from any individual member of the network. However, what happens if it comes under attack from a large group of participants acting together? More precisely, what happens if a group successfully takes control of over 50% of the computing power of the network?
Such a scenario is known as a 51% attack, and it is one of the few real vulnerabilities of a blockchain.
To understand the problems posed by a 51% attack, we must return to the fundamentals of the blockchain and recall the process of adding a new block to the chain. Members of the network compete to be the first to compute a valid seal for the block and claim a reward. A group in control of over half the computing power of the network can monopolize this process, preventing other members from adding blocks to the chain and claiming all the rewards for itself. This is possible because majority rule is among the fundamental concepts of a blockchain.
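As a rough illustration, here is a minimal Python sketch of this sealing race, assuming a simplified Bitcoin-style proof of work in which the seal is a nonce that makes the block's SHA-256 hash start with a fixed number of zero digits (the function name, block contents and difficulty here are illustrative, not any real network's parameters):

```python
import hashlib

def find_seal(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce (the block's "seal") until the SHA-256 hash
    of the block contents starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Whoever finds a valid seal first publishes the block and claims the
# reward; more computing power simply means more guesses per second.
print("valid seal:", find_seal("block #42: A pays B 10 coins"))
```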
Another possible consequence of a 51% attack, significantly more harmful than the first, is known as double spending. Double spending occurs when a group successfully reverses completed transactions in a blockchain, allowing it to retrieve its money and spend it again: the digital equivalent of counterfeiting. It becomes possible during a 51% attack because of the fundamental rule that the longest chain of a blockchain is the true one. Ironically, this is the same safeguard that makes double spending impossible when attempted by an ordinary member of the network.
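To make the rule concrete, here is a toy sketch of fork choice, with chain length standing in for the accumulated proof-of-work that real nodes actually compare:

```python
def fork_choice(chain_a: list, chain_b: list) -> list:
    """Longest-chain rule: a node adopts whichever valid chain is
    longer (length standing in for accumulated proof-of-work)."""
    return chain_a if len(chain_a) >= len(chain_b) else chain_b
```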
In order to understand how double spending may occur, let us consider the following example. Imagine a network in which there exists an alliance that controls over half the computing power of the network. Suppose that A, a member of the alliance, buys a house from B, an ordinary member of the blockchain.
The transfer of money from A to B is recorded by every ordinary member of the network, but the members of the alliance secretly omit this transaction from their records. The ongoing block is then completed and added to the chain by the honest members but ignored by the alliance. There are now two versions of the blockchain in the network: the actual one containing the recorded transaction, and the false one. For the moment, the true blockchain is longer and accepted by the network, so the alliance keeps its false version secret.
The alliance now continues to record the ongoing transactions of the network on its secret chain, while privately conducting meaningless transactions among its own members. None of this is announced to the network. Because the alliance possesses over half the computing power of the network, it can add blocks to the false blockchain at a higher rate than blocks are added to the true blockchain.
At some point, the length of the false blockchain exceeds that of the true blockchain, and the alliance now broadcasts this to the entire network. The network is then forced to accept this version of the blockchain as it is the longest version, and the transaction conducted between A and B is effectively reversed, allowing A to spend the same money on something else.
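The race between the two chains can be made concrete with a small Monte Carlo sketch. It assumes, as a simplification, that each new block goes to the alliance with probability equal to its share of the hash power, and that the honest chain starts one block ahead (the block recording A's payment to B); the names and numbers are illustrative:

```python
import random

def overtake_probability(alliance_share=0.51, head_start=1,
                         trials=2_000, horizon=500):
    """Estimate how often the alliance's secret chain overtakes the
    honest chain within `horizon` blocks, given its hash-power share.
    The honest chain starts `head_start` blocks ahead (the block that
    recorded A's payment to B)."""
    successes = 0
    for _ in range(trials):
        honest, secret = head_start, 0
        for _ in range(horizon):
            if random.random() < alliance_share:
                secret += 1   # alliance extends its secret chain
            else:
                honest += 1   # honest miners extend the public chain
            if secret > honest:
                # Secret chain is now longest: broadcasting it forces
                # the network to adopt it, reversing A's payment.
                successes += 1
                break
    return successes / trials

for share in (0.30, 0.45, 0.51):
    print(f"hash share {share:.0%}: "
          f"overtake probability ≈ {overtake_probability(share):.2f}")
```

With a genuine majority, the secret chain eventually overtakes the honest one with near certainty; below 50%, the chance falls off sharply, which is why the attack hinges on crossing the halfway mark.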
Thus, the alliance can monopolize the claiming of rewards and double spend money. It can also block the transactions of other members using a procedure similar to that of double spending. It cannot, however, forge a new transaction between ordinary members, as this requires the private keys of the members involved. It is also quite difficult for the alliance to rewrite blocks that are already buried deep in the chain, as catching up from far behind requires an overwhelming share of the network's computing power or an impractical amount of time. The further back a block is in the chain, the more secure it is.
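The intuition that deeper blocks are safer can be quantified. The Bitcoin whitepaper derives the probability that an attacker with fraction q of the hash power ever catches up from z blocks behind; the sketch below is a direct transcription of that result:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of the hash
    power ever rewrites a block buried z confirmations deep (from the
    analysis in the Bitcoin whitepaper). With a true majority
    (q >= 0.5) success is eventually certain; below that, the chance
    decays exponentially with depth z."""
    p = 1.0 - q  # honest share of the hash power
    return 1.0 if q >= p else (q / p) ** z

for z in (1, 6, 12):
    print(f"z={z:2d}:",
          f"q=0.30 -> {catch_up_probability(0.30, z):.4f},",
          f"q=0.45 -> {catch_up_probability(0.45, z):.4f}")
```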
The frailties exposed by a 51% attack lead us to the conclusion that the more nodes in a network, the more secure it is. This is simply because it is far easier to gain a majority in a network of 10 participants than in a network of a million. To compete with the computing power of a distributed network with nodes worldwide, an individual would have to spend vast amounts of resources in the form of money, electricity and time, or form an alliance far too large and dispersed to organize efficiently.
In general, the largest blockchains are quite safe from a 51% attack, but there have been notable instances in the past. For example, GHash.io, a mining pool in the Bitcoin network, briefly controlled over half the network's computing power in July 2014; the pool voluntarily relinquished some of its share so as not to monopolize the mining of cryptocurrency. Bitcoin Gold was not as fortunate: it suffered a 51% attack in May 2018, in which the attackers successfully double spent over $18 million worth of cryptocurrency.
The very fundamentals of a blockchain rest on the assumption that the majority of the network is honest. This assumption is the blockchain's single greatest point of vulnerability: a dishonest majority can cause vast damage.
