Decentralized computing
Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out or obtained from a remote centralized location. Decentralized computing is a trend in modern-day business environments, the opposite of the centralized computing that prevailed during the early days of computers. A decentralized computer system has many benefits over a conventional centralized network.[1] Desktop computers have advanced so rapidly that their potential performance far exceeds the requirements of most business applications; as a result, most desktop computers remain idle relative to their full potential. A decentralized system can harness this unused capacity to maximize efficiency, although it is debatable whether such networks increase overall effectiveness.
Unlike in a centralized system, each computer has to be updated individually with new software. Decentralized systems still enable file sharing, and all computers can share peripherals such as printers and scanners, as well as modems, allowing every computer in the network to connect to the internet.
A collection of decentralized computer systems forms the components of a larger computer network, held together by local stations of equal importance and capability. These systems are capable of running independently of each other.
Origins of decentralized computing
Decentralized computing originates in the work of David Chaum.[citation needed]
In 1979 he conceived the mix network, the first concept of a decentralized computer system. It provided an anonymous email communications network that decentralized the authentication of messages, in a protocol that became the precursor to onion routing, the protocol underlying the Tor browser. Building on this anonymous communications network, Chaum applied his mix-network philosophy to design the world's first decentralized payment system, which he patented in 1980.[2] In 1982, for his PhD dissertation, he wrote about the need for decentralized computing services in the paper Computer Systems Established, Maintained and Trusted by Mutually Suspicious Groups.[3] That same year Chaum proposed an electronic payment system called ecash, which his company DigiCash implemented from 1990 until 1998.[non-primary source needed]
Peer-to-peer
Based on a "grid model", a peer-to-peer (P2P) system is a collection of applications run on several computers that connect remotely to each other to complete a function or task. There is no main operating system to which satellite systems are subordinate. This approach to software development (and distribution) affords developers great savings, as they do not have to create a central control point. An example application is LAN messaging, which allows users to communicate without a central server.
Peer-to-peer networks, where no entity controls an effective or controlling number of the network nodes, running open source software also not controlled by any entity, are said to effect a decentralized network protocol. These networks are harder for outside actors to shut down, as they have no central headquarters.[4][better source needed]
File sharing applications
One of the most notable debates over decentralized computing involved Napster, a music file-sharing application that granted users access to an enormous database of files. Record companies brought legal action against Napster, blaming the system for lost record sales. Napster was found liable for copyright infringement for facilitating the distribution of unlicensed music files, and was shut down.[5]
After the fall of Napster, there was demand for a file sharing system that would be less vulnerable to litigation. Gnutella, a decentralized system, was developed. This system allowed files to be queried and shared between users without relying upon a central directory, and this decentralization shielded the network from litigation related to the actions of individual users.
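The decentralized search that Gnutella popularized can be pictured as TTL-limited query flooding: a peer asks its neighbors, who ask their neighbors, until a hop budget runs out. The following toy simulation is illustrative only (it is not the Gnutella wire protocol, and all names such as `flood_query` are invented for this sketch); it shows how a query can reach file holders without any central directory.

```python
def flood_query(peers, start, filename, ttl=4):
    """Propagate a query through neighbors, decrementing TTL at each hop.

    peers: dict mapping node id -> (set of neighbor ids, set of local files)
    Returns the set of nodes that reported a hit.
    """
    hits, seen = set(), {start}
    frontier = [(start, ttl)]
    while frontier:
        node, t = frontier.pop()
        neighbors, files = peers[node]
        if filename in files:
            hits.add(node)
        if t == 0:
            continue  # hop budget exhausted: stop forwarding here
        for n in neighbors:
            if n not in seen:  # avoid re-flooding peers that already saw the query
                seen.add(n)
                frontier.append((n, t - 1))
    return hits

# Tiny example network: a chain A-B-C-D with a shortcut A-C.
peers = {
    "A": ({"B", "C"}, set()),
    "B": ({"A", "C"}, {"song.mp3"}),
    "C": ({"A", "B", "D"}, set()),
    "D": ({"C"}, {"song.mp3"}),
}
print(sorted(flood_query(peers, "A", "song.mp3", ttl=2)))  # both holders reached
```

Raising or lowering `ttl` trades reach against traffic, which is exactly the inefficiency later structured overlays were designed to avoid.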
Decentralized web
See also
References
1. Pandl, Konstantin D.; Thiebes, Scott; Schmidt-Kraepelin, Manuel; Sunyaev, Ali (2020). "On the Convergence of Artificial Intelligence and Distributed Ledger Technology: A Scoping Review and Future Research Agenda". IEEE Access. 8: 57075–57095. arXiv:2001.11017. Bibcode:2020IEEEA...857075P. doi:10.1109/ACCESS.2020.2981447. ISSN 2169-3536.
2. US patent 4529870.
3. Chaum, David (1982). Computer Systems Established, Maintained and Trusted by Mutually Suspicious Groups.
4. Croman, Kyle; Decker, Christian; Eyal, Ittay; Gencer, Adem Efe; Juels, Ari; Kosba, Ahmed; Miller, Andrew; Saxena, Prateek; Shi, Elaine; Gün Sirer, Emin; Song, Dawn; Wattenhofer, Roger (2016). "On Scaling Decentralized Blockchains (A Position Paper)". Financial Cryptography and Data Security. Lecture Notes in Computer Science, vol. 9604. Springer. pp. 106–125. doi:10.1007/978-3-662-53357-4_8. ISBN 978-3-662-53356-7. Retrieved 9 March 2021.
5. Evangelista, Benny (2002-09-04). "Napster runs out of lives -- judge rules against sale". SFGate. Archived from the original on 2021-03-09. Retrieved 2019-07-25.
Notes
- Crowcroft, Jon; Moreton, Tim; Pratt, Ian; Twigg (2003). "Peer-to-Peer Systems and the Grid" (PDF). Retrieved 2013-11-06.
- Reid, Alex (1995). "IT Strategy Review, Distributed Computing – Rough Draft". Retrieved 2013-11-06.
Fundamentals
Definition and Distinctions
Decentralized computing encompasses architectures in which computational resources, data storage, and decision-making authority are distributed across multiple independent nodes in a network, eliminating dependence on a single central server or entity for operation.[4][5] In such systems, nodes collaborate via peer-to-peer protocols to perform tasks like data processing and validation, ensuring no single point of failure or control.[6] This model contrasts with traditional mainframe-era computing, where resources were concentrated in centralized facilities, and has gained prominence through technologies enabling resilient, scalable operations as of the early 2020s.[7]

A primary distinction lies between decentralized and centralized computing: centralized systems route all requests through a single authoritative hub, which manages resources and enforces policies, whereas decentralized systems devolve control to autonomous nodes that collectively maintain system integrity without hierarchical oversight.[8][9] Centralized approaches offer streamlined administration but introduce vulnerabilities, such as outages from hub failure, as evidenced by historical incidents like the 2021 Facebook downtime affecting 3.5 billion users due to single-point reliance.[10] Decentralized systems mitigate this by distributing workloads, enhancing fault tolerance, though they demand robust consensus mechanisms to prevent inconsistencies.[11]

Decentralized computing further differs from distributed computing, where tasks and data are spread across networked components that communicate and coordinate, often under a central orchestrator or coordinator to synchronize actions.[12][13] While all decentralized systems are inherently distributed—spanning multiple locations for parallelism—distributed systems may retain centralized elements, such as a master node directing subordinates, as in many enterprise database clusters.[14] True decentralization requires peer-level autonomy, where no node dominates, fostering applications like blockchain networks that achieved global scale by 2017 with Bitcoin's proof-of-work consensus distributing validation across thousands of participants.[15] This autonomy introduces challenges like higher coordination overhead but enables censorship resistance, absent in federated distributed models with trusted intermediaries.[16]

Core Principles
Decentralized computing fundamentally distributes control and decision-making across independent nodes, eschewing a central authority that coordinates or possesses complete system knowledge. In such systems, no single entity accesses all inputs or dictates outputs; instead, solutions emerge from local computations on partial data, with nodes collaborating through limited, peer-to-peer interactions.[2] This principle contrasts with centralized computing, where a master node imposes command-and-control, and even with many distributed systems that retain centralized knowledge aggregation despite physical dispersion.[2][17]

A key tenet is node autonomy, where each participant operates independently, processing local information without reliance on a hierarchical overseer. Nodes make decisions based on their own data and minimal communication with peers, enabling responsiveness and innovation but potentially leading to duplicated efforts or suboptimization if not balanced.[17] This autonomy fosters impartial standards and simplified resource allocation, as no master enforces uniformity, though it demands mechanisms for conflict resolution among equals.[17]

Resilience arises from the absence of a single point of failure or control, as the system's functionality persists through redundancy and distributed functions across nodes. Independent peers must collaborate to achieve collective goals, distributing intelligence rather than concentrating it, which enhances fault tolerance but requires robust local recovery and synchronization protocols.[2][17] Scalability follows from this structure, as growth involves adding nodes without bottlenecking a core authority, though efficiency depends on effective peer coordination to avoid overload from excessive duplication.[17]

Historical Development
Early Foundations (Pre-1990s)
The origins of decentralized computing trace to Paul Baran's 1964 RAND Corporation memos, which analyzed vulnerabilities in centralized and hierarchical networks and proposed distributed alternatives using packet switching to enhance survivability against failures or attacks.[18] Baran's design divided messages into small packets routed independently across nodes, allowing reconfiguration around damaged links without a central controller, a concept formalized in his 11-volume report On Distributed Communications Networks.[19] This work emphasized redundancy and digital encoding over analog circuits, influencing subsequent military and research networking efforts.[20]

ARPANET, launched in 1969 by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA), implemented these principles as the first operational packet-switched network, connecting four university nodes (UCLA, Stanford Research Institute, UC Santa Barbara, and the University of Utah) via Interface Message Processors (IMPs).[21] The system's decentralized topology avoided single points of failure, enabling dynamic routing and resource sharing among heterogeneous computers, with initial data transmission speeds of 50 kbps.[22] By 1972, ARPANET supported 23 nodes and demonstrated public packet switching at the International Conference on Computer Communication, while early protocols like NCP facilitated remote login and file transfer.[23]

Distributed computing theory advanced in the 1970s through ARPANET experiments, including Ray Tomlinson's 1971 implementation of email (using the @ symbol for addressing), which operated without central servers by relaying messages peer-to-peer.[24] Programs like Creeper (a self-replicating crawler) and Reaper (its tracker), developed in the early 1970s, demonstrated autonomous propagation across nodes, highlighting challenges in coordination and fault tolerance.[24] Leslie Lamport's 1978 paper "Time, Clocks, and the Ordering of Events in a Distributed System" introduced logical clocks to resolve causality in asynchronous environments lacking global time.[25]

In the late 1970s and 1980s, decentralized messaging networks emerged outside ARPANET. Usenet, created in 1979 by Tom Truscott and Jim Ellis at Duke University using UUCP over dial-up links, formed a distributed hierarchy of newsgroups where servers exchanged messages via flood-fill propagation, serving over 500 hosts by 1987 without centralized moderation.[26] FidoNet, founded in 1984 by Tom Jennings, enabled bulletin board systems (BBSes) to interconnect via scheduled phone calls for store-and-forward email and file echos, growing to thousands of nodes by the late 1980s and demonstrating scalable peer coordination amid varying connectivity.[27] These systems underscored practical trade-offs in decentralization, such as propagation delays and polling overhead, prefiguring resilience in resource-constrained environments.[28]

Peer-to-Peer Emergence (1990s-2000s)
The peer-to-peer (P2P) paradigm in computing gained prominence in the late 1990s amid expanding internet connectivity, rising broadband adoption, and demands for efficient resource sharing beyond centralized servers, which often suffered from bottlenecks and single points of failure.[29] Early P2P implementations focused on file distribution, leveraging end-user devices for storage and bandwidth to enable scalable, fault-tolerant networks without intermediary control.[30] This shift was catalyzed by the digitization of media and the limitations of prior models like FTP and Usenet, which lacked direct peer interactions for dynamic discovery and transfer.[31]

Napster, launched on June 1, 1999, by Shawn Fanning and Sean Parker, represented the breakthrough application, employing a hybrid architecture with a central directory for indexing MP3 files while routing actual data transfers directly between user machines.[32] The service rapidly scaled to over 80 million registered users by early 2001, demonstrating P2P's potential for massive parallelism in content dissemination and challenging traditional media distribution monopolies.[33] However, its centralized indexing vulnerability led to shutdown injunctions in July 2001 following lawsuits from the Recording Industry Association of America for facilitating copyright violations, underscoring regulatory risks in decentralized systems.[34]

Napster's demise accelerated fully decentralized P2P designs. Gnutella, developed by Nullsoft's Justin Frankel and Tom Pepper and released in March 2000 under the GPL, introduced a protocol for query flooding across peer connections, enabling search without servers and fostering open-source clients like LimeWire.[35] Concurrently, Freenet, conceived by Ian Clarke in a 1999 University of Edinburgh report and first released in March 2000, prioritized anonymity and censorship resistance through distributed key-based routing and encrypted data insertion, treating the network as a collective, location-independent storage layer.[36] MojoNation, publicly beta-launched in July 2000, advanced incentivized participation via a Mojo currency for micropayments, aiming to balance load and deter free-riding in file storage and retrieval.[37] These systems highlighted P2P's resilience against takedowns but revealed challenges like inefficient searches, bandwidth waste, and sybil attacks, informing later refinements in decentralized architectures.[30]

Blockchain Era (2008 Onward)
The publication of the Bitcoin whitepaper on October 31, 2008, by the pseudonymous Satoshi Nakamoto introduced blockchain as a distributed ledger secured by proof-of-work (PoW) consensus, enabling peer-to-peer electronic cash transactions without trusted third parties.[38] This system solved the double-spending problem through cryptographic hashing of blocks into an immutable chain, where nodes compete to validate transactions via computational puzzles, achieving probabilistic finality via the longest-chain rule.[39] The Bitcoin network activated on January 3, 2009, with the genesis block, which embedded a headline referencing bank bailouts to underscore its critique of centralized finance.[40] By decentralizing monetary verification, Bitcoin demonstrated a mechanism for tamper-resistant, consensus-driven state management, foundational to extending blockchain beyond currency to verifiable computation across untrusted networks.[41]

Bitcoin's PoW model incentivized participation through block rewards, fostering a self-sustaining network that processed its first real-world transaction on May 22, 2010—10,000 BTC for two pizzas—validating economic utility.[42] However, limitations in scripting capabilities confined it primarily to simple value transfer, prompting innovations in programmable blockchains.

Ethereum, conceptualized in a November 2013 whitepaper by Vitalik Buterin, launched its mainnet on July 30, 2015, introducing the Ethereum Virtual Machine (EVM) for executing smart contracts—deterministic code snippets stored and run on-chain.[43] These contracts enabled decentralized applications (dApps), where computation is replicated and attested by network validators, shifting blockchain from passive ledgers to active platforms for logic enforcement without intermediaries.[44] Ethereum's Turing-complete design facilitated complex interactions, such as automated escrow or governance rules, but faced scalability bottlenecks, with transaction throughput averaging 15-30 per second under PoW.[45] To mitigate this, the network transitioned to proof-of-stake (PoS) consensus via "The Merge" on September 15, 2022, slashing energy use by over 99% by selecting validators based on staked ether rather than computational races, while introducing slashing penalties for invalid attestations to preserve security.[46] Complementary advancements, including layer-2 rollups for off-chain computation settlement and sharding for parallel processing, have boosted effective capacity, enabling dApps in decentralized finance (DeFi) protocols that executed over $1 trillion in transaction volume by 2021.[47]

Alternative blockchains, such as those employing proof-of-history for timestamping or directed acyclic graphs for non-linear consensus, emerged to prioritize speed and cost-efficiency for compute-intensive tasks, with networks like Solana achieving thousands of transactions per second by 2021.[48] These evolutions have underpinned decentralized autonomous organizations (DAOs), where on-chain voting and treasury management distribute decision-making, and oracle networks that feed real-world data to smart contracts, though vulnerabilities like flash loan exploits highlight ongoing risks in assuming perfect decentralization.[49] By 2025, blockchain's consensus primitives have integrated with edge computing and AI, enabling verifiable, incentive-aligned distributed processing resistant to single points of failure.[50]

Technical Architectures
Peer-to-Peer Systems
Peer-to-peer (P2P) systems comprise networks of nodes that directly share computational resources, storage, and bandwidth to deliver collective services, with each peer serving dual functions as both resource supplier and requester.[51] This design distributes authority across participants, fostering scalability as aggregate capacity expands proportionally with network size, unlike centralized models constrained by server limitations.[51] Published in November 2009, RFC 5694 defines P2P systems by their emphasis on mutual benefit through resource sharing, allowing adaptation to dynamic node populations and failures via redundancy and replication.[51]

P2P architectures divide into pure forms, devoid of central coordinators, and hybrid variants incorporating minimal centralized elements for tasks like initial peer discovery.[51] Overlay topologies classify further as unstructured, featuring arbitrary peer connections and resource discovery through flooding queries that propagate exponentially but inefficiently in large networks, or structured, imposing logical overlays like distributed hash tables (DHTs) to map keys to nodes deterministically for logarithmic routing efficiency.[51] Structured systems mitigate unstructured scalability issues by embedding routing geometry in node identifiers, enabling O(log N) lookup times where N denotes peer count.[52]

Exemplary DHT protocols include Chord, developed in 2001 by Ion Stoica et al., which organizes peers into a circular keyspace via consistent hashing, assigning each node a successor and finger-table pointers to distant nodes for fault-tolerant, decentralized indexing and O(log N) message routing even amid churn.
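The core of Chord's consistent hashing can be sketched in a few lines: nodes and keys are hashed onto one circular identifier space, and each key lives at its successor, the first node clockwise from it. This is a toy sketch only (the helper names `node_id` and `successor` are invented here, the identifier space is shrunk, and real Chord adds finger tables for O(log N) routing plus churn handling).

```python
import hashlib
from bisect import bisect_left

def node_id(name: str, bits: int = 16) -> int:
    """Hash a name onto a circular identifier space of 2**bits slots."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

def successor(ring: list, key_id: int) -> int:
    """First node id at or clockwise after key_id; keys live at their successor."""
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]  # wrap around past the highest id

# Illustrative four-node ring; every key is stored at its successor node.
nodes = {node_id(n): n for n in ["peer-a", "peer-b", "peer-c", "peer-d"]}
ring = sorted(nodes)
key = node_id("some-file.dat")
print("key", key, "is stored on", nodes[successor(ring, key)])
```

Because only the arc between a departing node and its predecessor is remapped, joins and leaves move O(1/N) of the keys, which is why DHTs tolerate churn far better than flooding overlays.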
Kademlia, proposed in 2002 by Petar Maymounkov and David Mazières, employs 160-bit node identifiers and an XOR-based distance metric to partition the identifier space into binary prefixes, maintaining k-buckets of diverse contacts per prefix for parallel, asynchronous queries yielding low-latency lookups and resilience to targeted failures.[53] These protocols underpin decentralized computing by enabling self-organizing resource location without trusted intermediaries, as evidenced in applications from file distribution to blockchain propagation.[54]

While P2P systems enhance robustness through inherent decentralization—sustaining operations despite peer departures via replicated data—they face challenges like free-riding, where non-contributing nodes erode efficiency, and security threats including Sybil attacks that flood the network with fake identities to subvert routing or data integrity.[51] Mitigation often involves reputation-based incentives or cryptographic verification, though persistent churn in transient populations demands ongoing protocol adaptations for sustained performance.[51] In decentralized contexts, these trade-offs highlight P2P's causal strength in fault tolerance but underscore the need for layered defenses against adversarial incentives inherent to open participation.[55]

Consensus Mechanisms
Consensus mechanisms are protocols enabling nodes in a distributed, decentralized network to agree on the system's state or transaction validity without relying on a trusted central authority, thereby ensuring fault tolerance and consistency amid potential node failures or malicious actions. These algorithms address the consensus problem formalized in distributed systems research, where nodes must select a single value from proposed options despite asynchronous communication and limited trust. In decentralized computing, they underpin peer-to-peer agreement in applications like blockchains and distributed ledgers, tolerating either crash failures (non-responsive nodes) or Byzantine faults (arbitrary, potentially adversarial behavior).[56][57]

Consensus algorithms are broadly categorized into crash-fault-tolerant (CFT), Byzantine-fault-tolerant (BFT), and proof-based mechanisms. CFT algorithms, such as Paxos (proposed in 1989 by Leslie Lamport) and Raft (introduced in 2014 by Diego Ongaro and John Ousterhout), assume nodes fail only by crashing and halting, achieving agreement through leader election and log replication while fewer than half the nodes are faulty; they are widely used in distributed systems like Google's Chubby or etcd for state machine replication but offer no protection against malicious nodes.[56]

BFT algorithms extend tolerance to Byzantine faults, where up to one-third of nodes can behave arbitrarily, including lying or colluding. Practical Byzantine Fault Tolerance (PBFT), developed by Miguel Castro and Barbara Liskov in 1999, operates in phases—pre-prepare, prepare, and commit—where a primary node proposes values and replicas vote via message exchanges to achieve quorum; it provides deterministic finality with low latency in permissioned networks but scales poorly beyond dozens of nodes due to quadratic communication overhead (O(n²) messages).
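The replica-count arithmetic behind PBFT's fault model is worth making concrete. A short sketch (the helper name `pbft_parameters` is invented here; it shows only the sizing rules and the source of the quadratic message overhead, not the protocol itself):

```python
def pbft_parameters(f: int):
    """Sizing rules from the PBFT fault model: n = 3f + 1 replicas
    tolerate f Byzantine nodes, and a quorum needs 2f + 1 matching votes."""
    n = 3 * f + 1
    quorum = 2 * f + 1
    messages_per_phase = n * (n - 1)  # all-to-all prepare/commit: O(n^2)
    return n, quorum, messages_per_phase

for f in (1, 3, 10):
    n, q, m = pbft_parameters(f)
    print(f"tolerate f={f}: n={n} replicas, quorum={q}, ~{m} messages/phase")
```

Note how the message count grows roughly tenfold between f=3 and f=10, which is why PBFT deployments rarely exceed a few dozen replicas.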
PBFT and its variants, like those in Hyperledger Fabric since 2016, suit consortium blockchains where participants are known, offering resilience against up to f faulty nodes in systems of 3f + 1 total nodes.[58][59]

Proof-based mechanisms, prevalent in permissionless decentralized networks, incentivize honest participation through economic costs rather than identity verification. Proof of Work (PoW), pioneered in Bitcoin's 2008 whitepaper by Satoshi Nakamoto, requires nodes (miners) to solve computationally intensive puzzles—finding a nonce yielding a hash below a target difficulty—to propose blocks, securing the chain via the longest-proof-of-work rule; this deters attacks by imposing high energy costs (Bitcoin's network consumed about 121 TWh annually as of 2023) but limits throughput to roughly 7 transactions per second (TPS) and raises environmental concerns. Proof of Stake (PoS), first implemented in Peercoin in 2012 and adopted by Ethereum in its September 2022 Merge, selects validators probabilistically based on staked cryptocurrency holdings, with slashing penalties for misbehavior; it reduces energy use by over 99% compared to PoW while enabling higher scalability (Ethereum post-merge targets 100,000+ TPS via sharding), though it risks validator centralization among large holders and "nothing-at-stake" attacks mitigated by mechanisms like checkpoints.[60][61][62]

| Mechanism | Fault Model | Scalability | Energy Efficiency | Example Systems |
|---|---|---|---|---|
| PoW | Byzantine (economic) | Low (e.g., 7 TPS) | Low | Bitcoin (2009) |
| PoS | Byzantine (slashing) | Higher (sharded variants) | High | Ethereum (2022+), Cardano |
| PBFT | Up to 1/3 Byzantine | Low (O(n²) comm.) | High | Hyperledger Fabric |
| Raft | Crash (majority) | Moderate | High | Consul, etcd |
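The PoW row in the table above reduces to a brute-force nonce search: try nonces until the block hash falls below a difficulty target. A minimal sketch, assuming a single SHA-256 pass and a leading-zero-bit difficulty (Bitcoin actually double-hashes the header and uses a compact target encoding, and the function name `mine` is illustrative):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose SHA-256 digest is below the target.
    Halving the target (one more difficulty bit) doubles the expected work."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found: anyone can verify with one hash
        nonce += 1

nonce = mine(b"block header bytes", difficulty_bits=16)
print("valid nonce found:", nonce)
```

The asymmetry is the point: finding the nonce takes on the order of 2**difficulty_bits hash attempts, while any peer can verify the claim with a single hash, which is what lets economic cost substitute for trusted identities.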