In Part 2, we saw how the simple concept of a linked list can morph into complex, distributed systems. This is, of course, a simplified conceptual evolution leading up to blockchain, and it is not the only way distributed systems arise. Distributed systems need coordination, fault tolerance, consensus, and several layers of supporting technology (systems and protocols).
Distributed systems raise a number of other complex issues as well. When the nodes in a distributed system are also decentralized (in terms of ownership and control), security becomes essential; that is where cryptographic mechanisms come into play. The huge volume of transactions makes the performance of any shared or replicated data a pressing concern, paving the way to notions of scaling, sharding, and verification of distributed data to ensure that it has not fallen out of sync or been compromised. In this segment, we will see that these ideas are not new; they have been known and worked on for several decades.
One important requirement in distributed systems is the security of data and participants, which motivates the introduction of cryptographic techniques. Ralph Merkle, for example, introduced the concept of a binary tree of hashes (now known as a Merkle tree) in 1979. Cryptographic hash-linking of blocks was implemented in 1991 by Stuart Haber and W. Scott Stornetta, who in 1992 incorporated Merkle trees into their scheme for efficiency.
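To make the Merkle tree idea concrete, here is a minimal sketch in Python that computes a Merkle root over a list of leaves using SHA-256. The function name and the leaf values are illustrative choices, not from any particular blockchain's specification; real systems differ in details such as how they handle an odd number of nodes (this sketch duplicates the last node).

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root: hash the leaves, then repeatedly hash pairs."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
print(root.hex())
```

The appeal of the structure is that changing any single leaf changes the root, so one short hash commits to the entire data set, and membership of a leaf can be proven with only a logarithmic number of sibling hashes.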
Hash functions are well-researched, standard techniques that provide the foundation for much of modern cryptography, including the familiar SSL/TLS certificates and the HTTPS protocol. The Merkle-Damgard construction, named in part after Merkle, underlies both SHA-1 and SHA-2. The SHA family has evolved steadily: the original SHA-0 appeared in 1993, SHA-1 in 1995 (used, for example, by Hashcash), followed by the more secure SHA-2 family (which includes SHA-256 and SHA-512, among others). The still more secure SHA-3 is the next upgrade.
Partitioning, Scaling, Replicating, and Sharding
Since the core of a blockchain is a database in the form of a distributed ledger, the question of how to deal with its rapidly growing size becomes increasingly urgent. Partitioning, replicating, scaling, and sharding are all closely related concepts. These techniques, historically used in enterprise systems, are now being employed in blockchains to address performance limitations.
As with all things blockchain, these are not new concepts either: large companies have been struggling with these issues for many decades, though not from a blockchain perspective. The intuitively obvious solution for a growing database is to split it up into pieces and store the pieces separately. Underlying this seemingly simple solution lie a number of technical challenges: how does the application layer know in which "piece" a particular data record is found, how are queries managed across multiple partitions, and so on. While these scalability problems are tractable in enterprise systems or in ecosystems with known, permitted participants (the equivalent of permissioned blockchains), they get trickier in public blockchains. The permutations of malicious strategies seem endless and practically impossible to enumerate in advance, and the need to preserve reasonable anonymity further increases the complexity of robust solutions.
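The simplest classical answer to "which piece holds this record?" is hash-based partitioning: hash the record's key and take the result modulo the number of shards. The sketch below illustrates the idea with hypothetical shard counts and keys; it is not how any particular blockchain shards its state.

```python
import hashlib

NUM_SHARDS = 4  # hypothetical partition count

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a record key to one of num_shards partitions."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Any node (or the application layer) can compute the same mapping,
# so a lookup for a key can be routed directly to the right shard.
for key in ("alice", "bob", "carol"):
    print(key, "-> shard", shard_for(key))
```

A known weakness of this naive scheme is that changing `NUM_SHARDS` remaps most keys at once; techniques such as consistent hashing were developed precisely to soften that resharding cost.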
Verification and Validation
Zero-knowledge proofs (ZKPs) are techniques by which one party (the prover) can convince another party (the verifier) that the prover knows something, without having to disclose what that something is. (This sounds magical, but there are many simple examples showing how it is possible, which I'll cover in a later post.) ZKPs were first described in the 1985 paper "The Knowledge Complexity of Interactive Proof-Systems" by Shafi Goldwasser, Silvio Micali, and Charles Rackoff (the work was apparently developed as early as 1982 but not published until 1985). Zcash, a Bitcoin-derived cryptocurrency, uses ZKPs (specifically a variant called zk-SNARKs, first introduced in 2012 by four researchers) to ensure the validity of transactions without revealing any information about the sender, the receiver, or the amount itself.
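To give a flavor of how such a proof can work, here is a sketch of the classic Schnorr identification protocol, an interactive proof of knowledge of a discrete logarithm. The parameters below are deliberately tiny toy values chosen for readability, not security, and this is not the scheme Zcash uses (zk-SNARKs are far more elaborate, and non-interactive).

```python
import random

# Toy parameters only (NOT secure): g = 2 generates a subgroup of prime
# order q = 11 in the multiplicative group modulo the prime p = 23.
p, q, g = 23, 11, 2

x = 7             # the prover's secret
y = pow(g, x, p)  # public value; the claim is "I know x with y = g^x mod p"

def prove_and_verify(secret: int) -> bool:
    r = random.randrange(q)    # prover picks a random nonce
    t = pow(g, r, p)           # prover -> verifier: commitment
    c = random.randrange(q)    # verifier -> prover: random challenge
    s = (r + c * secret) % q   # prover -> verifier: response
    # Verifier checks g^s == t * y^c (mod p). The transcript (t, c, s)
    # reveals nothing about x beyond the fact that the prover knows it.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

print(prove_and_verify(x))  # a genuine prover always passes
```

A prover who does not know `x` passes a single round only by luck (here, when the challenge happens to be 0), so repeating the protocol drives the cheating probability toward zero, which is the essential shape of an interactive zero-knowledge proof.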
Some of these proofs, and indeed the transactions themselves, could be implemented by automated code, popularly known as smart contracts, first conceived by Nick Szabo in 1996. Despite the name, it is debatable whether these automated pieces of code can be called smart, given the relatively advanced current state of artificial intelligence; similarly, smart contracts are not quite contracts in the legal sense. A credit card transaction, for example, involves a tremendous amount of computation (checking balances, holds, fraud, unusual spending patterns, and so on), along with service-level agreements and contractual bindings among the various parties in the complex web of modern financial transactions, yet we do not usually call it a 'smart contract'. By comparison, even today's 'smart contracts' are fairly simplistic.