This is a chapter from the book Token Economy (Third Edition) by Shermin Voshmgir. Paper & audio formats are available on Amazon and other bookstores. Find copyright information at the end of the page.
The original Bitcoin protocol was groundbreaking but incomplete, as it had not yet fully addressed many critical issues, such as privacy or scalability. Additionally, new challenges emerged as blockchain networks gained broader adoption. Questions arose about how to decentralize off-chain storage solutions, how to trust information flowing into blockchain systems, how to make blockchain networks interoperable, and how to connect machines and objects to blockchains.
The initial launch of the Bitcoin network was designed as a live experiment by Bitcoin’s creators. At the time of the deployment of version 0.1 of the Bitcoin protocol in 2009, it was still unclear whether the economic robustness assumptions—upon which Proof-of-Work relied—would withstand manipulation attempts. It was expected that further research and development would lead to a refined Bitcoin protocol. However, once the first version of the Bitcoin network went live and its economic incentives proved stable, the network and the Bitcoin asset quickly gained traction and significant interest. Given its resilience, the network was maintained rather than replaced by a redesigned successor. Although the protocol was improved many times, especially in the early years, protocol upgrades soon became entangled in political ideologies about how the Bitcoin protocol should evolve. Conflicting ideas about network governance and vested economic interests further politicized network upgrades, and protocol improvements became subject to political inertia.
Many developers began to fork the Bitcoin codebase to address unresolved issues, such as privacy, scalability, or asset exchange, on their own modified versions of Bitcoin. These forks gave rise to new online communities, each contributing to its own network and services. Although some of Satoshi's intended innovations were realized within Bitcoin, much of the innovation occurred outside of it. New protocols emerged to solve specific issues that Bitcoin was not optimized to handle.
The promise of a collectively managed internet also depended on the development and integration of complementary protocols, because blockchain networks by themselves only record token transactions and perform on-chain computation for decentralized applications. Other P2P protocols needed to be developed, such as decentralized file storage, decentralized identity, decentralized interoperability protocols, decentralized oracle services, and a decentralized physical web for connecting objects and machines to blockchain networks in a trustful and decentralized manner. The transition from client-server architectures to a decentralized web has been gradual, with many challenges along the way and standards still evolving. Partial decentralization currently dominates, as novel points of centralization continue to emerge.
Privacy & Identity Challenge
The transparent nature of public and permissionless blockchain networks, which allows node operators to collectively verify the correctness of transactions, also makes those transactions publicly traceable. While this enhances institutional accountability, it comes at the cost of user privacy. In the Bitcoin network and other early blockchain networks, transactions are recorded in plaintext on the ledger, visible via block explorers. Transaction data—including sender and receiver addresses, the connection between them, and the transaction amount—is accessible to anyone. More complex smart contracts generate additional on-chain data, further increasing the visibility of interactions when data is recorded in plaintext. Even the process of broadcasting transactions can leak IP addresses, allowing user identification through metadata analysis, despite anonymization tools like Tor or VPNs.
The decentralized public key infrastructure used to create blockchain accounts enables users to trustfully interact without relying on centralized identity verification, such as Know-Your-Customer processes. However, blockchain addresses are only pseudonymous, not fully anonymous. Effective privacy can only be preserved if wallet addresses are never linked to real-world identities. In early blockchain networks, users who publicly shared their blockchain addresses—through social media posts, public token transfers, or blockchain-based interactions—risked de-anonymization through blockchain data analysis.
Since most cryptocurrency users purchase tokens using fiat currencies on cryptocurrency exchanges, which require KYC compliance, blockchain addresses can often be linked to real-world identities through exchange records. Law enforcement agencies and regulatory bodies use blockchain forensic services like “Chainalysis” or “Elliptic” to analyze transaction histories and track the movement of funds. These forensic tools allow authorities to trace the flow of assets, identifying past transactions, recipients, and spending patterns. While, in theory, customer data should only be shared with authorities in cases of fraud or money laundering investigations, this level of surveillance raises serious concerns about privacy rights and the potential for state overreach. Furthermore, token fungibility is affected, as certain merchants or platforms may reject transactions involving “tainted” coins—those previously associated with illicit activity—leading to potential discrimination between otherwise identical tokens.
To address these issues, blockchain protocols have started to implement privacy-enhancing techniques focusing on different levels: (i) wallet/address anonymity, which prevents blockchain actions from being linked to real-world identities; (ii) transaction data privacy, which conceals sender, recipient, and transaction amounts using cryptographic tools like zero-knowledge proofs, ring signatures, and confidential transactions; and (iii) network state privacy, which limits who can see the overall ledger state by restricting data visibility to authorized participants. However, implementing these privacy measures involves technological trade-offs, particularly in balancing individual privacy with network security, transparency, and regulatory compliance. The goal is to ensure sufficient privacy while maintaining trust and functionality within the system. These trade-offs are not only technological but also reflect legal challenges, where privacy laws increasingly conflict with anti-money laundering and counter-terrorism laws (read more on this topic in chapter “Token Privacy”).
Furthermore, the pseudonymous nature of blockchain addresses often limits their use in more complex decentralized applications that require real-world identity verification. At the time of writing, most decentralized applications rely on centralized mechanisms to verify real-user identities. More user-centric identity solutions—which allow users to control their digital identities and personal data without relying on centralized authorities—are currently being developed by various initiatives (read more on this topic in chapter “User-Centric Identities”).
Interoperability Challenge
Blockchain networks function as isolated ecosystems, with no built-in mechanisms to communicate or share information with one another. Each network operates independently, meaning that tokens managed on one blockchain cannot be transferred directly to another. Nodes in the Bitcoin network have no information about the state of tokens in the Ethereum network and vice versa. They also have no knowledge of whether other networks have idle capacities (block space) to settle transactions. Interoperability of blockchain networks, however, is crucial for enabling cross-chain token transfers and smart contract execution across ecosystems. Without it, decentralized applications remain confined to the individual blockchain ecosystem, limiting their functionality and liquidity across ecosystems.
Atomic swaps were one of the first interoperability solutions, enabling peer-to-peer cross-chain token trades without requiring centralized intermediaries. They use hashed time-locked contracts (HTLCs), a type of smart contract that ensures both parties fulfill trade requirements. Transactions are wallet-to-wallet, preserving user control over private keys and tokens. However, atomic swaps require both blockchain networks to support HTLCs and the same cryptographic hash function. Wallet applications must also have atomic swap capabilities. While atomic swaps offer a secure and efficient mechanism for asset-to-asset exchanges across blockchain systems, their scope is typically limited to token swaps. For broader interoperability challenges—such as transferring arbitrary data or enabling cross-chain smart contract interactions—more holistic bridging protocols or general-purpose interoperability frameworks are preferable.
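The hashlock-plus-timelock logic behind an HTLC can be sketched in a few lines. The following is a simplified off-chain model, not a real on-chain contract; all names and parameters are illustrative. The recipient can claim the locked funds only by revealing the secret preimage, and the sender can refund only after the timeout expires unclaimed:

```python
import hashlib
import time

class HTLC:
    """Minimal hashed time-locked contract sketch (illustrative only)."""

    def __init__(self, sender, recipient, amount, secret_hash, timeout_s):
        self.sender = sender
        self.recipient = recipient
        self.amount = amount
        self.secret_hash = secret_hash          # sha256 digest of the secret
        self.deadline = time.time() + timeout_s
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        # Claim succeeds only if the revealed preimage hashes to the lock.
        if not self.claimed and hashlib.sha256(preimage).hexdigest() == self.secret_hash:
            self.claimed = True
            return True
        return False

    def refund(self) -> bool:
        # The sender can reclaim funds only after the deadline, if unclaimed.
        return not self.claimed and time.time() >= self.deadline

# Alice locks funds for Bob behind a hash of her secret. In a cross-chain
# swap, claiming on one chain reveals the preimage, which lets the
# counterparty claim on the other chain — this is what makes the swap atomic.
secret = b"my-swap-secret"
lock = hashlib.sha256(secret).hexdigest()
contract = HTLC("alice", "bob", 1.0, lock, timeout_s=3600)
wrong = contract.claim(b"wrong-secret")   # fails: preimage does not match
ok = contract.claim(secret)               # succeeds: hash matches the lock
```

The timelock is the safety valve: if the counterparty never reveals the secret, both sides can refund after their respective deadlines instead of losing funds.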
Wrapped tokens are synthetic assets that represent a token from one blockchain on another blockchain network. The actual token is never transferred between networks. Instead, a smart contract or another escrow solution locks the original token on the primary network and issues a wrapped token on a secondary network, where it can then be used. The locked tokens can only be released by the smart contract once the wrapped tokens on the destination network are burned. In theory, developers can create such tokens without relying on a dedicated interoperability protocol or service provider. In practice, however, many projects opt for specialized bridging solutions or general interoperability protocols, which standardize the wrapping process, automate operations, and reduce the risks associated with manually managing the lock-and-mint procedure.
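The lock-and-mint and burn-and-release flow described above can be sketched as a simple ledger model. This is an illustrative toy, not any real bridge implementation; the key property is the invariant that every wrapped token is backed one-to-one by a locked original:

```python
class WrappedTokenBridge:
    """Illustrative lock-and-mint sketch for wrapped tokens."""

    def __init__(self):
        self.locked = {}   # user -> amount locked in escrow on the primary chain
        self.wrapped = {}  # user -> wrapped balance minted on the secondary chain

    def lock_and_mint(self, user, amount):
        # Lock originals in escrow, mint an equal amount of wrapped tokens.
        self.locked[user] = self.locked.get(user, 0) + amount
        self.wrapped[user] = self.wrapped.get(user, 0) + amount

    def burn_and_release(self, user, amount):
        # Burning wrapped tokens is the only way to release the originals.
        if self.wrapped.get(user, 0) < amount:
            raise ValueError("cannot burn more than the wrapped balance")
        self.wrapped[user] -= amount
        self.locked[user] -= amount

    def is_fully_backed(self):
        # Invariant: total wrapped supply equals total locked collateral.
        return sum(self.wrapped.values()) == sum(self.locked.values())

bridge = WrappedTokenBridge()
bridge.lock_and_mint("alice", 10)    # 10 wrapped tokens usable on chain B
bridge.burn_and_release("alice", 4)  # burning 4 releases 4 originals
```

Real bridges add signatures, finality checks, and dispute mechanisms around this core accounting; many of the largest bridge exploits amounted to breaking exactly this backing invariant.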
Bridges enable direct cross-chain asset transfers without requiring the creation of wrapped tokens. Unlike wrapped tokens, which are issued as synthetic assets on a secondary chain while the original tokens remain locked, bridges can facilitate burn-and-mint mechanisms or liquidity-based transfers. In a burn-and-mint model, tokens on the source chain are burned, and an equivalent amount is minted natively on the destination chain, eliminating the need for custodial storage of locked tokens. Some bridges use so-called “liquidity pools,” where users deposit tokens on one blockchain and withdraw pre-existing tokens from a pool on another blockchain, ensuring faster transfers without locking assets. Bridges can be trusted (operated by centralized entities or federations) or trustless (using decentralized validation mechanisms). Unlike atomic swaps, which only allow peer-to-peer token trades, bridges enable continuous liquidity movement between blockchains, supporting cross-chain applications and seamless token interoperability. However, bridges introduce additional security risks, particularly when relying on external validators, multi-signature schemes, or smart contracts, which have been frequent targets of exploits.
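The liquidity-pool model mentioned above can be sketched as follows. This is a hypothetical toy with made-up parameters (a 0.3% fee), not any real bridge's design; the point is that tokens are neither locked nor minted per transfer — users deposit into a pool on one chain and withdraw pre-existing tokens from a pool on the other:

```python
class LiquidityBridge:
    """Sketch of a liquidity-pool bridge between two chains, "A" and "B"."""

    def __init__(self, pool_a, pool_b, fee=0.003):
        self.pools = {"A": pool_a, "B": pool_b}  # pre-funded token pools
        self.fee = fee                           # fee paid to liquidity providers

    def transfer(self, src, dst, amount):
        payout = amount * (1 - self.fee)
        if self.pools[dst] < payout:
            # Transfers are bounded by destination-side liquidity.
            raise ValueError("insufficient destination liquidity")
        self.pools[src] += amount   # user deposits on the source chain
        self.pools[dst] -= payout   # user withdraws on the destination chain
        return payout

bridge = LiquidityBridge(pool_a=10_000.0, pool_b=10_000.0)
received = bridge.transfer("A", "B", 100.0)  # user receives ~99.7 on chain B
```

The trade-off is visible in the liquidity check: such bridges are fast because nothing is minted, but they can only move as much value as liquidity providers have pre-funded on the destination side.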
General interoperability protocols extend beyond token transfers and allow blockchains to exchange arbitrary data, execute cross-chain smart contracts, and facilitate decentralized application interoperability. Unlike simple bridges, these protocols aim to provide a standardized communication layer between blockchains, enabling networks to interact natively. These systems allow multiple blockchains to function as an interconnected ecosystem, removing the need for centralized intermediaries and making cross-chain transactions more secure and efficient. Solutions vary greatly and are complex. Among the most important are Cosmos’ Inter-Blockchain Communication (IBC) protocol, Polkadot’s Relay Chain, Chainlink’s Cross-Chain Interoperability Protocol (CCIP), and the Interledger protocol.
Scalability Challenge
The topic of blockchain scalability is comparable to the early days of the Internet when bandwidth was low, communication was slow, and pages loaded pixel by pixel. The introduction of the 56k modem was considered a major improvement over the 28k modem, yet video streaming was still a distant dream. While data throughput was a bottleneck, these issues were eventually resolved, and they did not prevent the Internet's evolution to its current form. The same is true for blockchain bandwidth, though the challenges differ. Early blockchain networks using Proof-of-Work could not process many transactions simultaneously because block size was limited, and blocks were created at a fixed interval to maintain security and network stability. This created trade-offs between decentralization, transaction security, and network capacity and became known as the “scalability trilemma.”
"Security" in token transactions is critical in public and permissionless blockchain networks, as they rely on the collaboration of anonymous network nodes.
"Decentralization" refers to the distribution of network nodes, ensuring that independent social and economic agents, scattered across the globe, coordinate for transaction verification and block creation. Decentralization also determines the level of inclusion of network nodes. It is generally assumed that the more nodes participate in the process—including geographically distant and less resourceful nodes that might suffer from network latencies—the more decentralized and secure the network becomes. This reduces risks of centralization and collusion.
Scalability has both narrow and broad definitions. Narrowly, it refers to the number of transactions a blockchain network can process per block (epoch), specifically the throughput (transactions per time unit) and payload (transaction information). Broadly, it also encompasses the cost of block creation, which affects transaction fees and user experience. High transaction fees during network congestion can deter users, particularly those executing small-value transactions. Improved network scalability can lower fees, increase accessibility, reduce hardware and energy costs, and enhance user adoption.
In the early days of blockchain networks, scalability was not a major concern due to low traffic. However, as usage grew, scalability became one of the most pressing research and development questions limiting mass adoption. Blockchain protocols attempted to resolve the issue of limited block size and throughput—either by modifying the protocol itself (Layer 1 solutions) or through off-chain methods (Layer 2 solutions).
Layer 1 scaling solutions address scalability issues directly on the blockchain protocol level. However, on-chain scalability suffers from the above-mentioned trade-offs between decentralization, security, and efficiency and can offer only limited improvements.
“Block size:” Smaller blocks keep hardware and bandwidth requirements low, promoting decentralization by lowering resource demands on node operators. Larger blocks increase transaction throughput and scalability but require more computational resources, which in turn reduces inclusivity and decentralization. The Bitcoin protocol originally enforced a 1 MB block size limit, and early scalability debates centered on increasing it. While larger blocks allow more transactions to be processed per block, every transaction still has to be processed by every single node in the network, which means that the operational costs per block are not reduced, making the network very energy-intensive in the case of Proof-of-Work.
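The block-size trade-off can be made concrete with a back-of-the-envelope throughput bound. The numbers below are illustrative approximations (Bitcoin-like 1 MB blocks, roughly 250-byte transactions, 10-minute block interval), not precise protocol figures:

```python
def max_throughput_tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    """Upper bound on transactions per second for a chain with a fixed
    block size and a fixed block interval."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

# Bitcoin-like parameters: ~4,000 transactions per block, one block
# every 600 seconds, giving roughly 6-7 transactions per second.
tps_1mb = max_throughput_tps(1_000_000, 250, 600)
# Doubling the block size doubles throughput, but every node still has
# to validate and store every transaction, so per-node costs also double.
tps_2mb = max_throughput_tps(2_000_000, 250, 600)
```

The calculation shows why bigger blocks alone cannot resolve the trilemma: throughput scales linearly with block size, and so do the resource demands on every full node.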
“Block interval:” Shorter block intervals allow for faster transaction finality and overall scalability but make the network less inclusive for less powerful nodes. Network latencies compound this issue, as nodes farther from the block proposer have less time to validate transactions, leading to higher centralization risks.
“Hardware intensity:” Some protocols required node operators to use more powerful hardware to improve network capacity. While this solution increases overall transaction throughput, it raises cost barriers for node operators, making node operation less inclusive and more centralized.
“Consensus mechanism:” Alternative consensus mechanisms were another means of resolving the scalability challenge, especially in the early years of blockchain networks. Proof-of-Stake (PoS), Delegated Proof-of-Stake (DPoS), Byzantine Fault Tolerance (BFT), and hybrid models like Solana’s Proof-of-History (PoH) combined with PoS were designed to increase scalability. However, these mechanisms typically achieve higher throughput by granting greater influence to a limited number of validators, reducing decentralization. Hybrid consensus mechanisms that blend Proof-of-Work with Proof-of-Stake promise better scalability without compromising security or decentralization.
“Alternative cryptographic algorithms:” Different cryptographic algorithms from those used in early blockchain networks—such as collective signature schemes and zero-knowledge proofs—can also improve network efficiency and scalability by reducing signature size, data storage needs, and computational overhead.
“Sharding” is a solution that divides the ledger into smaller partitions (shards), each managed by different nodes. Instead of all nodes storing the full transaction history, each node handles only a specific shard, reducing data burden and enabling parallel transaction processing. Each shard maintains a "sub-state" of the network, processing transactions independently while following cross-shard communication rules to maintain consistency. A coordination layer verifies shard activity. While sharding increases scalability, it introduces new attack risks, such as validator collusion or cross-shard inconsistencies, which must be carefully managed. To counter this, the Ethereum ecosystem developed Proto-Danksharding, and the Celestia ecosystem developed Data Availability Sampling (DAS). Both are variations of Layer 1 sharding solutions designed to support second-layer solutions like rollups. They do not execute transactions but optimize data availability to ensure that second-layer scalability solutions like rollups have sufficient data to finalize state changes securely.
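The core sharding idea — deterministically partitioning state so every node agrees on the assignment without coordination — can be sketched as follows. This is a simplified illustration; real sharding designs additionally handle cross-shard transactions, shard rotation, and data availability:

```python
import hashlib

def shard_of(address: str, num_shards: int) -> int:
    """Deterministically map an account address to a shard by hashing it,
    so every node computes the same assignment independently."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def partition(transactions, num_shards):
    """Group transactions by the sender's shard; each group can then be
    processed in parallel by the nodes assigned to that shard."""
    shards = {i: [] for i in range(num_shards)}
    for tx in transactions:
        shards[shard_of(tx["from"], num_shards)].append(tx)
    return shards

txs = [{"from": "alice"}, {"from": "bob"}, {"from": "carol"}]
shards = partition(txs, num_shards=4)
```

Note that this naive sender-based partitioning is exactly where cross-shard complexity enters: a transaction whose sender and recipient live on different shards requires the cross-shard communication rules mentioned above to keep both sub-states consistent.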
Layer 2 scaling solutions shift token transactions from the mainchain onto a second layer—either entirely or only certain functions. Processing transactions off-chain reduces the computational burden on the main network while leveraging its security mechanisms. The main idea is to outsource certain blockchain processes to secondary networks that can execute transactions independently and settle the final results on the mainchain.
“Payment channels & state channels” allow users to conduct multiple transactions off-chain, recording only the initial and final states on-chain. While payment channels handle simple value transfers, state channels can support off-chain smart contract execution, enabling more complex interactions beyond payments. Only the net balance of transactions is settled on-chain, reducing costs, improving speed, and enhancing privacy. Within the channel, balances are updated off-chain through signed transactions between participants, which are later aggregated to reflect the final balance when the channel is closed. The only publicly visible transactions are the opening and closing transactions; all other interactions remain private. This setup leverages the security of the main network while minimizing the on-chain transaction load. Smart contracts govern the process, locking tokens as a security mechanism in case of disputes. Adding or removing participants requires modifying the smart contract or creating a new channel. A key limitation is that participants must remain available. If a malicious actor submits a fraudulent closing state while the other party is offline, funds could be at risk. To prevent this, watchtowers (third-party monitoring services) can dispute fraudulent claims in exchange for a fee. State channels are most efficient when participants frequently exchange state updates over time, justifying the initial cost of setup and monitoring. To mitigate this limitation and improve usability, networks have developed interconnected channel networks, reducing the need for bilateral channels between every participant.
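The channel mechanics above — off-chain signed updates, with only the latest agreed state settled on-chain — can be sketched with a toy two-party channel. Signature exchange and the on-chain challenge period are simulated away; only the state-versioning logic is shown:

```python
class PaymentChannel:
    """Sketch of a two-party payment channel: balances are updated off-chain
    through mutually signed states; the state with the highest nonce wins
    when the channel closes. Signatures are omitted for brevity."""

    def __init__(self, deposit_a, deposit_b):
        self.deposits = (deposit_a, deposit_b)  # tokens locked on-chain at open
        self.latest = {"nonce": 0, "balances": (deposit_a, deposit_b)}

    def update(self, nonce, balance_a, balance_b):
        # Each off-chain update must conserve the total locked deposit and
        # carry a higher nonce than the previously agreed state.
        if balance_a + balance_b != sum(self.deposits):
            raise ValueError("balances must conserve the channel deposit")
        if nonce > self.latest["nonce"]:
            self.latest = {"nonce": nonce, "balances": (balance_a, balance_b)}

    def close(self):
        # On-chain settlement pays out the most recent agreed state. A close
        # attempted with a stale nonce can be disputed during a challenge
        # period — the job watchtowers perform for offline participants.
        return self.latest["balances"]

channel = PaymentChannel(50, 50)
channel.update(1, 40, 60)   # first off-chain payment: A sends 10 to B
channel.update(2, 30, 70)   # second payment: A sends another 10
channel.update(1, 50, 50)   # a stale (lower-nonce) state is ignored
```

The nonce rule is the heart of the fraud-proof mechanism: submitting an old state on-chain is detectable precisely because the counterparty (or a watchtower) holds a signed state with a higher nonce.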
“Sidechains” are separate and autonomous blockchain networks—with their own security mechanisms, block creation parameters, and network tokens—to which transactions can be outsourced or on which features not supported by the main network can be implemented. Sidechains are anchored to and compatible with the mainchain—linked via a “two-way peg”—and can transfer tokens between mainchain and sidechain, writing transactions to the mainchain for finalization. The mainchain guarantees overall security and dispute resolution, while the transactions outsourced to the sidechain can sacrifice decentralization or security in return for scalability. Some sidechains use federations to manage asset transfers, while others implement their own Proof-of-Stake or Byzantine Fault Tolerance-based consensus. As opposed to state channels, transactions between participants are not private: they are published on the sidechain network and thus visible to anyone with reading access to its ledger. On the upside, participants don’t have to remain online or monitor the network to secure their funds, as the sidechain operates continuously and independently, maintaining a persistent ledger of all transactions. Unlike state channels, which require direct interaction between participants and off-chain transaction signing, sidechains function as autonomous blockchain networks where users can freely enter and exit without predefined counterparties or locked collateral.
“Rollups & data availability:” Rollups are a scaling solution developed by the Ethereum ecosystem. They aggregate multiple transactions on a second-layer network, reducing data storage and processing costs on the Ethereum mainchain. Transactions are processed by rollup nodes and finalized on the mainchain via a smart contract that enforces security guarantees. Rollup nodes submit a batched transaction summary with either validity proofs (ZK-Rollups) or fraud proofs (Optimistic Rollups) to Ethereum L1, ensuring anyone can verify state integrity. This ensures that rollup transactions can be reconstructed and verified independently. The Ethereum mainnet records essential transaction data, such as Merkle roots or validity proofs, effectively acting as the data availability layer for rollups. Data availability refers to the minimum amount of data that must be available on-chain so users can verify transaction integrity independently, even though transactions have been processed off-chain. Different rollup types implement different data availability models based on their security and scalability trade-offs. The system architecture essentially differs in how much data is posted on-chain vs. kept off-chain:
- Optimistic rollups, such as “Arbitrum” or “Optimism,” assume transactions are valid by default and use fraud proofs to detect invalid activity. They post only a summary of transaction data to L1, not the full transaction details.
- ZK-Rollups, such as “zkSync” or “StarkNet,” submit a cryptographic validity proof (a zero-knowledge proof) with every batch, allowing the mainchain to verify the correctness of all state transitions immediately, without a challenge period. They typically post compressed transaction data to L1 to guarantee data availability.
- Validium is a ZK-based variant that keeps both transaction data and execution off-chain, increasing scalability but sacrificing Ethereum’s full security guarantees since data availability is managed externally.
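The commitment that a rollup posts to L1 can be illustrated with a Merkle root over a transaction batch. This is a generic sketch of the data structure, not any specific rollup's format: hundreds of off-chain transactions are compressed into a single 32-byte value that the mainchain stores, against which individual transactions can later be proven with Merkle proofs:

```python
import hashlib
import json

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Compute a Merkle root over a batch of transactions. Posting only
    this root (plus a validity or fraud proof) on L1 is what lets rollups
    commit to many transactions with a small on-chain footprint."""
    level = [_h(json.dumps(tx, sort_keys=True).encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

batch = [{"from": "alice", "to": "bob", "amount": 5},
         {"from": "carol", "to": "dave", "amount": 2}]
commitment = merkle_root(batch)  # 32-byte commitment posted on-chain
```

Data availability is then the question of whether enough of the underlying batch data is published for anyone to recompute this root and verify (or dispute) the rollup's claimed state.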
Managing blockspace while increasing network throughput capacity involves balancing supply and demand for transaction verification to ensure fairness toward users, incentivize node operators, and maintain adequate levels of decentralization. In the Bitcoin ecosystem, the dominant second-layer scaling solution is payment channels, implemented by the “Lightning Network.” Sidechains like “Liquid” and “RSK” also play a role in other use cases, such as smart contracts and asset issuance that use Bitcoin as a reserve asset; they enable wrapped BTC to be used in DeFi applications, token issuance, and programmable financial contracts. Rollup-based solutions for Bitcoin, such as “BitVM” and “Ark,” are still in the research phase and may offer additional scaling alternatives in the future. In the Ethereum ecosystem, state channels were once considered a promising Layer 2 scaling solution, but their adoption has declined significantly in favor of rollups. While state channels still have niche applications, they are no longer the dominant scaling approach. Other blockchain networks take alternative approaches to scalability, often integrating multiple L1 scaling techniques (such as sharding or parallel transaction execution) with L2-inspired features (such as rollups, state channels, or interoperability layers). Networks like “Polkadot,” “Cosmos,” “Near,” and “Avalanche” implement heterogeneous multi-chain architectures, where multiple execution environments interact under a shared security model.
MEV Challenge
The term MEV stands for “Maximal Extractable Value” or “Miner Extractable Value” and refers to the extra profit that node operators—who control the order in which transactions are processed—can make. Depending on the system architecture of a blockchain network, this could be a miner, validator, or sequencer. Whoever has the power to rearrange the order of transactions can potentially insert their own transaction at the right moment for arbitrage opportunities or other types of financial gain.
The MEV problem is as old as Bitcoin itself: miners occasionally found that by selectively reordering transactions, they could capture extra fees or profit from arbitrage opportunities—hence the original term “Miner Extractable Value.” Bitcoin’s limited scripting language, however, constrained these opportunities. With the emergence of Ethereum and its versatile smart contract ecosystem, the problem of MEV started to escalate. Ethereum’s rich programming capabilities and the rise of decentralized financial applications created a myriad of avenues for MEV extraction—first by miners (under Proof-of-Work), then by validators (under Proof-of-Stake), and more recently by Layer 2 node operators who order transactions. As the ecosystem evolved, the term was broadened from “Miner Extractable Value” to “Maximal Extractable Value” to include any network node with the power to rearrange transactions and profit from financial market activities like “front-running,” “sandwich attacks,” or other value-extraction strategies. MEV opportunities, however, are not limited to financial use cases. They can affect any decentralized application where transaction ordering determines outcomes, allowing some participants to extract profit or other benefits at the expense of others and potentially undermining trust in the network.
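The mechanics of a sandwich attack can be sketched with a toy constant-product market maker. All numbers are hypothetical and fees are omitted; the point is only to show how control over transaction ordering converts into profit:

```python
class ConstantProductPool:
    """Toy x*y=k automated market maker (no fees, illustrative only)."""

    def __init__(self, reserve_x, reserve_y):
        self.x, self.y = reserve_x, reserve_y

    def buy_y(self, dx):
        # Swap dx of token X for token Y, preserving x*y = k.
        dy = self.y - (self.x * self.y) / (self.x + dx)
        self.x += dx
        self.y -= dy
        return dy

    def buy_x(self, dy):
        # Swap dy of token Y for token X.
        dx = self.x - (self.x * self.y) / (self.y + dy)
        self.y += dy
        self.x -= dx
        return dx

# Whoever orders transactions sees a pending victim trade and wraps it:
pool = ConstantProductPool(1000.0, 1000.0)
attacker_y = pool.buy_y(50.0)        # 1) front-run: buy Y, pushing its price up
victim_y = pool.buy_y(100.0)         # 2) victim's trade executes at a worse price
attacker_x = pool.buy_x(attacker_y)  # 3) back-run: sell Y back at the inflated price
profit = attacker_x - 50.0           # attacker ends with more X than they spent
```

Without the sandwich, the victim's 100 X would have bought about 90.9 Y; squeezed between the attacker's two trades, it buys only about 82.8 Y, and the difference (minus slippage the attacker absorbs) becomes the attacker's profit.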
Different blockchain ecosystems experience MEV differently. Historically, Bitcoin’s simple transaction model limited opportunities for MEV extraction. However, emerging applications like Runes and Ordinals are adding layers of complexity to the Bitcoin ecosystem by enabling richer data and meta-protocol functionalities directly on Bitcoin. These innovations allow Bitcoin to support more complex interactions—similar to smart contracts—thus creating new avenues where transaction ordering could be exploited for extra profit. As Bitcoin becomes more versatile with protocol upgrades and additional Layer 2 solutions, the potential for MEV may increase. Ethereum’s decentralized application landscape provides multiple layers of complexity, making MEV a significant concern for both individual traders and large institutional players. On Layer 2 networks, specialized sequencers—nodes that order and batch transactions before relaying them to Layer 1—introduce another dimension of MEV risk, as their centralized role in transaction ordering can lead to concentrated extraction opportunities. Meanwhile, other blockchain networks, such as Solana or BNB Chain, also encounter MEV issues, but their different consensus mechanisms and higher throughput impact the nature and scale of these opportunities differently.
Reducing these opportunities is an ongoing challenge for blockchain designers who want to ensure that the system treats all users fairly. In Ethereum, initiatives like Flashbots have emerged to bring transparency to MEV extraction by enabling block producers and searchers to collaborate in a more open market, reducing the negative externalities of adversarial practices. Protocol-level proposals include fair ordering mechanisms, such as randomized or first-in-first-out ordering, and methods to obscure transaction details until after the order is fixed—using techniques like threshold encryption or zero-knowledge proofs. For Layer 2 solutions, efforts are underway to decentralize the sequencing process so that no single entity can manipulate transaction ordering for undue profit.
Oracle Challenge
The Bitcoin network does not need information from outside of its network to operate. The system logic only needs to verify who owns how many Bitcoins in the network, which is done via the consensus process among all participating nodes, who check the state of all Bitcoins on-chain. The emergence of smart contract networks such as Ethereum, however, created new requirements: their applications often need data from outside the blockchain network to be fully functional, and network nodes must be able to collectively verify this external data before predefined smart contract actions are triggered and state changes are written to the ledger. These data feeds are referred to as “oracles” and can come from software, hardware, or humans.
Hardware oracles refer to data that is directly provided by a piece of hardware. For example, when a car crosses a garage barrier, movement sensors detect the vehicle and send the data to a smart contract, which could trigger an inspection call or a payment. Hardware oracles require dedicated devices with a blockchain identity that are sufficiently secure and designed to communicate natively with a blockchain network.
Software oracles refer to data that originates from online sources, such as prices of stocks or commodities, flight or train arrival times, etc.
Consensus-based oracles rely on expert evaluation through human intelligence and effort. They can get their data from human consensus through some form of evaluation, voting, or prediction process.
In the car-buying smart contract scenario described in the previous chapter, oracles are essential for integrating real-world data with blockchain-based automation of buying and selling a used car. Ownership verification requires oracles to confirm that Bob truly owns the car before the transaction can proceed. The mechanic provides human oracle services, inspecting the car and certifying that it is intact. Pricing oracles can provide fair market values or loan eligibility criteria if Alice needs financing. Insurance oracles can dynamically adjust policy rates based on Alice’s driving history and the car's condition. Additionally, hardware oracles such as smart locks verify that Alice, the rightful new owner, can access the garage where the car is stored. Software oracles may also be needed to fetch data from the vehicle registration authority or traffic databases.
Smart contracts can actively request data from external sources when needed (pull-based oracles). Alternatively, external data sources can proactively send updates to the blockchain at predetermined intervals or when specific conditions are met (push-based oracles). Push oracles can be more efficient for time-sensitive applications that require regular updates, as they eliminate the need for constant polling from the smart contract side. Many modern oracle solutions like Chainlink utilize hybrid approaches that combine both push and pull mechanisms, optimizing for both efficiency and reliability based on the specific use case requirements.
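The pull and push patterns can be sketched side by side. This is a generic illustration, not any specific oracle network's interface; `fetch_price` stands in for a hypothetical external data source, and the 0.5% deviation threshold is an assumed parameter:

```python
class PriceOracle:
    """Sketch of pull-based vs. push-based oracle update patterns."""

    def __init__(self, fetch_price, deviation=0.005):
        self.fetch_price = fetch_price   # hypothetical external data source
        self.deviation = deviation       # push threshold: 0.5% price move
        self.on_chain_price = None       # last value written to the chain

    def pull(self):
        # Pull model: the contract requests a fresh value on demand.
        self.on_chain_price = self.fetch_price()
        return self.on_chain_price

    def maybe_push(self):
        # Push model: the oracle writes an update only when the price has
        # moved beyond the deviation threshold (real push oracles usually
        # also push on a fixed "heartbeat" interval).
        fresh = self.fetch_price()
        if (self.on_chain_price is None or
                abs(fresh - self.on_chain_price) / self.on_chain_price > self.deviation):
            self.on_chain_price = fresh
            return True
        return False

prices = iter([100.0, 100.2, 101.0])  # simulated market price over time
oracle = PriceOracle(lambda: next(prices))
pulled = oracle.pull()               # contract pulls: on-chain price is 100.0
pushed_small = oracle.maybe_push()   # 100.2 is within 0.5%: no on-chain write
pushed_big = oracle.maybe_push()     # 101.0 deviates by 1%: update is pushed
```

The deviation threshold is the cost lever of the push model: every on-chain write costs gas, so the oracle writes only when the change is material.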
A smart contract that manages a stable token such as DAI needs to periodically verify the exchange rate between the stable token DAI and the collateral assets used to back it. The exact exchange rate is needed so that certain actions can be triggered if the collateral asset loses value, preventing the stable token from becoming undercollateralized. This exchange rate information does not exist on-chain but has to be collected from various crypto exchanges that publish this data on their websites and via APIs. Blockchain oracles are even more important in the context of tokenizing real-world assets, such as the deed to a house, physical artworks, a kWh of energy, or any type of asset along the supply chain of goods. Linking the state of a physical object to a digital asset that represents the ownership, usage, or management rights to that object is a critical issue that needs to be resolved, since smart contracts and blockchain networks have no native mechanisms to interact with the physical world.
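A minimal sketch of such a collateralization check follows. The numbers and the 150% minimum ratio are purely illustrative assumptions; actual stable token systems set their own parameters per collateral type.

```python
def collateral_ratio(collateral_units: float,
                     collateral_price: float,
                     debt: float) -> float:
    """Ratio of collateral value (from an oracle price) to issued debt."""
    return (collateral_units * collateral_price) / debt

def needs_liquidation(ratio: float, minimum: float = 1.5) -> bool:
    """Trigger protective action when the ratio falls below the minimum."""
    return ratio < minimum

# 10 units of collateral at an oracle-reported $200 each, backing 1,000 DAI:
r = collateral_ratio(10, 200.0, 1000.0)   # 2.0 -> safely overcollateralized
print(needs_liquidation(r))               # False

# The oracle reports a price drop to $120:
r = collateral_ratio(10, 120.0, 1000.0)   # 1.2 -> below the 1.5 minimum
print(needs_liquidation(r))               # True
```

The entire mechanism hinges on the oracle-reported price: a wrong price triggers wrong liquidations, which is why the quality of the price feed matters as much as the contract logic itself.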
Oracle services can be inbound or outbound, depending on whether they flow from the external world into a smart contract, or flow from the smart contract to the outside world. Oracles can furthermore be centralized or decentralized:
Centralized oracles are operated by a single institution that provides the necessary data, meaning that the smart contract queries only one predefined source of information. These oracle providers need to have a set of security features in place so that their data can be trusted.
Decentralized oracles rely on multiple external sources to verify data that can trigger an action of a smart contract. The smart contract queries not only one external source but multiple oracle services to determine the validity and accuracy of the data—effectively avoiding unique points of failure and distributing the trust between many data providers.
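The aggregation step of a decentralized oracle can be sketched as taking the median of several independent reports, which keeps any single manipulated source from moving the result arbitrarily. This is a simplified illustration; production oracle networks layer staking, reputation, and outlier filtering on top of the aggregation.

```python
from statistics import median

def aggregate(reports: list[float]) -> float:
    """Combine reports from independent oracle sources.

    The median is robust: as long as a majority of sources are honest,
    one compromised source cannot drag the result to an extreme value.
    """
    if not reports:
        raise ValueError("no oracle reports available")
    return median(reports)

honest = [99.8, 100.1, 100.0]
print(aggregate(honest))                    # 100.0

# One compromised source reports a wildly manipulated value:
print(aggregate(honest + [1_000_000.0]))    # median stays near 100
```

With a simple average, the manipulated report would have pulled the result above 250,000; the median illustrates why decentralized oracles distribute trust across many providers rather than summing their inputs naively.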
Both centralized and decentralized oracle services have a high degree of power over how smart contracts are executed. They are third-party services that are not part of the blockchain consensus mechanism, which makes them prone to security issues. One could, for example, mount a “man-in-the-middle attack,” standing between contracts and oracles to manipulate the data. Cryptographic tools and trusted computing techniques, such as authenticity proofs and trusted execution environments, can be used to mitigate these issues and make oracle services safer.
Storage Challenge
Blockchain networks are not suited for storing large amounts of data. Trying to store large text or video files or other large documents directly on-chain is inefficient and costly, compounding the already existing scalability problem. Depending on the blockchain network used, privacy would also be an issue. Decentralized applications that require extensive data storage—such as a decentralized YouTube—need dedicated peer-to-peer file storage systems. Decentralized storage networks address this challenge.
“IPFS” (InterPlanetary File System) and “Filecoin” were among the first and most popular solutions. IPFS provides a foundation for distributed file sharing and content addressing without relying on central servers, while Filecoin builds on IPFS by adding a native token economy that rewards storage providers for lending their unused disk space. Other protocols have filled important niches. “Arweave” has gained prominence for its permanent data storage solution—a “permaweb” that uses an innovative endowment model to fund one-time, permanent storage. More seasoned protocols such as “Swarm,” “Storj,” and “Sia” (with its Skynet interface) also remain active, each incentivizing participation through its own tokens and economic models, thereby turning decentralized storage into a competitive marketplace. Newer services like “Web3.Storage” build on established protocols such as IPFS and Filecoin, offering a more user-friendly interface and simplified integration for Web3 applications. The “SAFE Network” takes a more holistic approach, seeking to redesign the storage and data management model from the ground up as an entirely new decentralized infrastructure for both data storage and communication.
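The core idea behind IPFS-style systems is content addressing: a file's address is derived from a hash of its content rather than from its location on a server. A minimal sketch of the principle (real IPFS CIDs additionally wrap the hash in multihash and multibase encodings):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself.

    The same bytes always map to the same address, and any
    modification to the content yields a different address,
    making stored data self-verifying.
    """
    return hashlib.sha256(data).hexdigest()

a = content_address(b"hello web3")
b = content_address(b"hello web3")
c = content_address(b"hello web3!")

print(a == b)   # True: identical content, identical address
print(a == c)   # False: any change produces a new address
```

Because the address commits to the content, any node in the network can serve the file and the requester can verify it received exactly what it asked for, without trusting the node.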
The above-mentioned protocols vary significantly in their degrees of decentralization, privacy features, and incentive mechanisms. Designing robust economic models that reward network participants with tokens without reliance on centralized parties remains the greatest challenge. Various consensus mechanisms have been explored to secure decentralized file storage. Early models include “Proof-of-Retrievability,” “Proof-of-Storage,” and “Proof-of-Spacetime.” Newer approaches, such as “Proof-of-Replication” and “Proof-of-Access,” further enhance the security and efficiency of these systems. Despite these advances, fully decentralized and economically incentivized storage solutions that meet the demands of decentralized applications are still evolving, with ongoing research needed to balance scalability, cost, and privacy.
Physical Infrastructure Challenge
Blockchain networks are a great infrastructure for issuing, managing, and settling digital assets, but they have no native way to communicate with the physical world. Decentralized physical infrastructure networks (DePIN) resolve this by connecting machines and other real-world objects with blockchain networks—enabling physical assets and infrastructure to be collectively tracked, traced, and managed without reliance on centralized authorities issuing certificates of origin and other provenance-related information. In such a setup, blockchain networks and complementary protocols are used to identify, digitize, and control data collected by machines or along the supply chain of goods and services—creating unique digital identities and related product passport solutions. This allows the tracking and tracing of provenance, ownership, and performance of real-world assets and services across multiple parties. Examples include tokenized energy, tokenized telecommunication services, and tokenized mobility data.
To achieve this, every physical object—from telecommunication antennas and solar panels to vehicles and industrial sensors—must have a unique, tamper-proof blockchain identity. These objects can be equipped with crypto accelerators—small microcontrollers optimized to run critical cryptographic algorithms—to create a digital twin of each object. This forms the basis for generating unique digital signatures, allowing each object to securely send, receive, and verify tokenized real-world assets and their credentials. By providing tamper-proof digital identities for physical assets, decentralized physical infrastructure networks facilitate secure tracking and provenance verification. Control is distributed across multiple stakeholders of a decentralized physical infrastructure network—from local communities and individual investors to participating companies offering services—which minimizes single points of failure.
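The tamper-evidence idea can be sketched as follows. This is illustrative only: it uses a symmetric HMAC from the Python standard library, whereas real DePIN devices sign with an asymmetric key pair held inside a secure element, so the private key never leaves the hardware. The device name and reading are hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for key material inside a crypto accelerator.
DEVICE_KEY = b"secret-device-key"

def sign_reading(reading: dict) -> str:
    """Produce a tamper-evident tag over a canonicalized sensor reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_reading(reading: dict, signature: str) -> bool:
    """Check that the reading was produced by the key holder, unmodified."""
    return hmac.compare_digest(sign_reading(reading), signature)

reading = {"sensor": "solar-panel-42", "kwh": 3.7, "ts": 1700000000}
sig = sign_reading(reading)

print(verify_reading(reading, sig))    # True: data is intact
tampered = dict(reading, kwh=9.9)
print(verify_reading(tampered, sig))   # False: modification detected
```

A smart contract receiving such signed readings can reward only data that verifiably originates from a registered device, which is the basis of the proof-of-productivity schemes described below.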
In P2P energy grids, for example, solar-power producers and consumers can use decentralized physical infrastructure networks to issue and trade energy tokens directly via a blockchain infrastructure, bypassing the need for traditional utility companies managing large solar parks (read more in chapter “Asset Tokens & NFTs”). P2P telecommunications networks, on the other hand, allow participants to operate mini cell towers, collectively providing a telecommunications network that others can consume. Infrastructure investment and maintenance are rewarded with network tokens upon proof of data relayed. Both productivity proofs and payments are settled via public blockchain infrastructure (read more in chapter “Helium: P2P Telecom Network”). In transportation and logistics, vehicles and supply chain components can interact directly using decentralized physical infrastructure networks, enabling real-time asset tracking and settlement, as well as P2P data exchange. Other real-world applications include “Hivemapper” (decentralized mapping), “DIMO” (vehicle data sharing), and “WeatherXM” (community-powered weather stations).
The greatest technical challenge of decentralized physical infrastructure networks is ensuring reliable, secure, and tamper-proof connectivity between physical objects and blockchain networks. This requires robust crypto accelerators that can withstand harsh environmental conditions, as well as their integration with blockchain networks and with AI-driven data analytics for predictive maintenance and optimization. Universal standards for digital identities (such as DIDs), which ensure that different systems can communicate with each other, are still being developed. Fragmentation across various protocols can hinder widespread adoption, although efforts such as the W3C standardization of DIDs are working toward greater interoperability (read more in chapter “User-Centric Identities”). The greatest economic challenge is designing incentive models that fairly reward all participants while keeping the network decentralized and tamper-resistant.
Decentralization Challenge
Blockchain networks and the applications that operate on them are often described as “decentralized,” categorically opposing centralization—implying a more democratic or inclusive Web. While they are collaboratively managed and are generally more decentralized than traditional client-server infrastructure, the degree of decentralization and the level of inclusion can vary greatly. Though it is true that none of the different network actors in a blockchain network or other Web3 protocol have exclusive power over the network, points of power concentration always develop over time. Such power structures and points of centralization are a political and economic reality—not only in crypto, but in all systems, digital or analog, where institutions and individuals pursuing their own best interests collide.
Let’s take the example of Bitcoin. As will be analyzed in greater detail in a dedicated chapter of this book, Bitcoin has fallen short of its own original value proposition, at least to a certain extent. While the codebase and the governance process are open, meaning that in theory anyone can participate in the policymaking process of the network rules, the network today is effectively controlled by a handful of large mining pool operators, a limited number of full node operators, and only a relatively small group of protocol developers contributing to protocol maintenance. Most users manage their Bitcoin holdings in custodial wallets that are operated by third-party providers instead of self-hosted wallets, meaning they have no control over their own assets. The reality of Bitcoin network operation and usage is far less inclusive, decentralized, and self-sovereign than it initially set out to be.
Decentralization, therefore, is not an absolute term but exists on a gradient. The degree of inclusion needs to be measured and put into context when designing the initial token governance rules or trying to evaluate how decentralized a network is. Depending on the type and purpose of the network, different levels of decentralization and inclusion might be optimal. Various aspects need to be considered when trying to assess the level of inclusion of network participants:
Metrics: Is a network decentralized if it is controlled by 5, 50, or 50,000 participants? Should we measure the absolute number of nodes or their ratio to users? Without metrics and context regarding what is being measured, the term “decentralization” remains a shallow buzzword.
Aspects: When claiming “decentralization,” what aspect is one referring to? Code development (who can contribute to policymaking?), governance (who can vote on policy upgrades?), operational maintenance (who can become an infrastructure operator?), or users (who can use the network, and how do they use it?)
The degree of inclusion often depends on barriers to participation, which can be influenced by factors such as (i) whether or not participation requires some form of exclusive permission, (ii) the know-how necessary to operate a network node, (iii) the amount of money required to operate a network node, or (iv) legal barriers. When node operation becomes more permissioned, expensive, or requires specialized know-how, the blockchain network becomes less inclusive and more centralized.
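One widely used way to put a number on decentralization is the Nakamoto coefficient: the smallest number of entities whose combined share of a critical resource (hash rate, stake, storage) exceeds 50%. The distribution below is illustrative, not a measurement of any real network.

```python
def nakamoto_coefficient(shares: list[float]) -> int:
    """Smallest number of entities jointly controlling >50% of a resource.

    A lower coefficient means fewer parties could collude to control
    the network; a higher coefficient indicates broader distribution.
    """
    total = sum(shares)
    running, count = 0.0, 0
    for share in sorted(shares, reverse=True):
        running += share
        count += 1
        if running > total / 2:
            return count
    return count

# Illustrative hash-rate distribution across mining pools (percent):
pools = [25, 20, 15, 12, 10, 10, 8]
print(nakamoto_coefficient(pools))   # 3 (25 + 20 + 15 = 60 > 50)
```

The metric only answers the quantitative question; as noted above, it must still be paired with the qualitative question of *which* aspect (development, governance, operation, usage) is being measured.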
Some argue that fully decentralized systems have inherent drawbacks, and evidence for this assumption can be found in the history of government, markets, biology, and engineering. Direct democracy has proven not to work—at least in larger communities. The decision overload it imposes on citizens is one of many issues, which is why most democratically run societies have resorted to various forms of representative democracy instead, where—for better and worse—points of centralization have emerged. While the Internet’s future may lean toward more decentralized system architectures, points of centralization will always emerge where they offer advantages, such as speed, simplicity, or security. The appropriate level of collaboration and decentralization in Web3 networks depends on the network’s purpose, the community’s values, and the socio-economic realities of its participants. The initial design of the network rules defines how future power structures will unfold. The last part of this book will examine five different use cases in depth, analyzing the evolution of power structures for each in more detail.