Computing


Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery.[1] It includes the study and experimentation of algorithmic processes, and the development of both hardware and software. Computing has scientific, engineering, mathematical, technological, and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering.[2]
The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers.[3]
History
The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate) with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, thought to have been invented in Babylon sometime between 2700 and 2300 BC. Abaci of a more modern design are still used as calculation tools today.
The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.[4] Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations.
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947.[5][6] In 1953, the University of Manchester built the first transistorized computer, the Manchester Transistor Computer.[7] However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications.[8]
In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface.[9] Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960.[10][11] The MOSFET made it possible to build high-density integrated circuits,[12][13] leading to what is known as the computer revolution[14] or microcomputer revolution.[15]
Computers
A computer is a machine that manipulates data according to a set of instructions called a computer program.[16] The program has an executable form that the computer can use directly to execute the instructions. The same program, in its human-readable source code form, enables a programmer to study and develop a sequence of steps known as an algorithm.[17] Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type.[18]
The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.
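To make the translation step concrete, the short Python sketch below uses the standard library's dis module to display the lower-level instructions generated from a human-readable function. CPython targets its own virtual machine rather than a specific CPU, so this is only an analogy for the source-to-machine-instruction conversion described above; a native compiler would emit different machine code for each CPU type.

```python
import dis

def add_numbers(a, b):
    # Human-readable source code expressing a simple algorithm.
    return a + b

# Show the lower-level instructions this source is translated into.
# CPython emits bytecode for its virtual machine; a native compiler
# would instead emit machine instructions specific to the target CPU.
dis.dis(add_numbers)
```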
Computer hardware
Computer hardware includes the physical parts of a computer, including the central processing unit, memory, and input/output.[19] Computational logic and computer architecture are key topics in the field of computer hardware.[20][21]
Computer software
Computer software, or just software, is a collection of computer programs and related data, which provides instructions to a computer. Software refers to one or more computer programs and data held in the storage of the computer. It is a set of programs, procedures, and algorithms, together with their documentation, concerned with the operation of a data processing system.[citation needed] Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible.[22]
Software is also sometimes used in a more narrow sense, meaning application software only.
System software
System software, or systems software, is computer software designed to operate and control computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software.[23] System software and middleware manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user, unlike application software.
Application software
Application software, also known as an application or an app, is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents.[24] Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. The system software manages the hardware and serves the application, which in turn serves the user.
Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps, such as Microsoft Office, are developed in multiple versions for several different platforms; others have narrower requirements and are generally referred to by the platform they run on, for example a geography application for Windows, an Android application for education, or a Linux game. Applications that run only on one platform and increase the desirability of that platform due to their popularity are known as killer applications.[25]
Computer networks
A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow the sharing of resources and information.[26] When at least one process in one device is able to send or receive data to or from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. One well-known communications protocol is Ethernet, a hardware and link layer standard that is ubiquitous in local area networks. Another common protocol is the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, host-to-host data transfer, and application-specific data transmission formats.[27]
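As an illustration of the network programming these protocols enable, the hedged Python sketch below exchanges a few bytes over a TCP connection on the loopback interface; the port is chosen by the operating system and the message is arbitrary.

```python
import socket
import threading

def echo_once(server_sock):
    # Accept a single client connection and echo back whatever it sends.
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Server socket bound to the loopback interface; port 0 lets the OS choose.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client side: open a TCP connection and transfer data host-to-host.
with socket.create_connection((host, port)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))  # b'hello over TCP/IP'

server.close()
```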
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of these disciplines.[28]
Internet
The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users. This includes millions of private, public, academic, business, and government networks, ranging in scope from local to global. These networks are linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email.[29]
Computer programming
Computer programming is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. Programming is used to invoke some desired behavior (customization) from the machine.[30]
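A minimal Python illustration of this write-and-test cycle, with a small algorithm and a couple of checks (the function name and test values are arbitrary):

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """Euclid's algorithm expressed as ordinary source code."""
    while b:
        a, b = b, a % b
    return abs(a)

# Testing and debugging are part of programming: these checks would fail
# loudly if a change broke the algorithm.
assert greatest_common_divisor(54, 24) == 6
assert greatest_common_divisor(0, 7) == 7
print("all tests passed")
```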
Writing high-quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. The highest-quality software is thus often developed by a team of domain experts, each a specialist in some area of development.[31] However, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. It is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application.[32]
Computer programmer
A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst.[33] A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with Web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming.[34]
Computer industry
The computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components, and providing information technology services, including system administration and maintenance.[35]
The software industry includes businesses engaged in development, maintenance, and publication of software. The industry also includes software services, such as training, documentation, and consulting.[citation needed]
Sub-disciplines of computing
Computer engineering
Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software.[36] Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, rather than just software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering includes not only the design of hardware within its own domain, but also the interactions between hardware and the context in which it operates.[37]
Software engineering
Software engineering is the application of a systematic, disciplined, and quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[38][39][40] It is the act of using insights to conceive, model, and scale a solution to a problem. The term was first used at the 1968 NATO Software Engineering Conference, where it was intended to provoke thought regarding the perceived software crisis at the time.[41][42][43] Software development, a widely used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of software engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard, published as ISO/IEC TR 19759:2015.[44]
Computer science
Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems.[45]
Its subfields can be divided into practical techniques for its implementation and application in computer systems, and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming investigates the use of programming languages and complex systems. The field of human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans.[46]
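To give a flavour of the efficiency questions that complexity theory and algorithm analysis study, the sketch below times a linear scan against a binary search on the same sorted data; the input size and repetition count are arbitrary, and the absolute timings will vary by machine.

```python
import bisect
import timeit

data = list(range(1_000_000))   # sorted input
target = 999_999

# Linear scan inspects elements one by one: cost grows with the input size.
linear = timeit.timeit(lambda: target in data, number=10)
# Binary search halves the search range each step: cost grows logarithmically.
binary = timeit.timeit(lambda: bisect.bisect_left(data, target), number=10)

print(f"linear scan:   {linear:.4f} s")
print(f"binary search: {binary:.6f} s")
```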
Cybersecurity
The field of cybersecurity pertains to the protection of computer systems and networks. This includes information and data privacy, the prevention of disruption to IT services, and the prevention of theft of and damage to hardware, software, and data.[47]
Data science
Data science is a field that uses scientific and computing tools to extract information and insights from data, driven by the increasing volume and availability of data.[48] Data mining, big data, statistics, machine learning and deep learning are all interwoven with data science.[49]
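As a toy illustration of extracting a quantitative insight from data, the sketch below fits an ordinary least-squares line using only the standard library; the numbers are invented, and statistics.linear_regression requires Python 3.10 or later.

```python
import statistics

hours_studied = [1, 2, 3, 4, 5, 6]
exam_scores = [52, 55, 61, 64, 70, 74]   # hypothetical observations

# Fit a simple linear model (ordinary least squares) to summarize the trend.
fit = statistics.linear_regression(hours_studied, exam_scores)
print(f"score ~ {fit.slope:.1f} * hours + {fit.intercept:.1f}")
```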
Information systems
Information systems (IS) is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data.[50][51][52] The ACM's Computing Careers describes IS as:
"A majority of IS [degree] programs are located in business schools; however, they may have different names such as management information systems, computer information systems, or business information systems. All IS degrees combine business and computing topics, but the emphasis between technical and organizational issues varies among programs. For example, programs differ substantially in the amount of programming required."[53]
The study of IS bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline.[54][55][56] The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society[57][58] while IS emphasizes functionality over design.[59]
Information technology
Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data,[60] often in the context of a business or other enterprise.[61] The term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce, and computer services.[62][63]
Research and emerging technologies
DNA-based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. Potential infrastructure for future technologies includes DNA origami on photolithography[64] and quantum antennae for transferring information between ion traps.[65] By 2011, researchers had entangled 14 qubits.[66][67] Fast digital circuits, including those based on Josephson junctions and rapid single flux quantum technology, are coming closer to being realizable with the discovery of nanoscale superconductors.[68]
Fiber-optic and photonic (optical) devices, which already have been used to transport data over long distances, are starting to be used by data centers, along with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects.[69] IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted CMOS-integrated nanophotonics (CINP).[70] One benefit of optical interconnects is that motherboards, which formerly required a certain kind of system on a chip (SoC), can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs.[71]
Another field of research is spintronics. Spintronics can provide computing power and storage, without heat buildup.[72] Some research is being done on hybrid chips, which combine photonics and spintronics.[73][74] There is also research ongoing on combining plasmonics, photonics, and electronics.[75]
Cloud computing
Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for interaction between the owner of these resources and the end user. It is typically offered as a service, taking the form of Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on the functionality offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling.[76] It allows individual users or small businesses to benefit from economies of scale.
One area of interest in this field is its potential to support energy efficiency. Allowing thousands of instances of computation to occur on one single machine instead of thousands of individual machines could help save energy. It could also ease the transition to renewable energy sources, since it would suffice to power one server farm with renewable energy, rather than millions of homes and offices.[77]
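A back-of-the-envelope sketch of that consolidation argument; every figure below (power draws, consolidation ratio) is an assumption chosen only to show the shape of the calculation.

```python
workloads = 1000                 # independent workloads to run
dedicated_server_watts = 200     # assumed draw of a lightly used dedicated server
consolidated_host_watts = 800    # assumed draw of one heavily utilized cloud host
instances_per_host = 100         # assumed number of virtual instances per host

dedicated_total = workloads * dedicated_server_watts
consolidated_total = (workloads / instances_per_host) * consolidated_host_watts

print(f"dedicated servers:  {dedicated_total / 1000:.0f} kW")    # 200 kW
print(f"consolidated hosts: {consolidated_total / 1000:.0f} kW")  # 8 kW
```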
However, this centralized computing model poses several challenges, especially in security and privacy. Current legislation does not sufficiently protect users from companies mishandling their data on company servers. This suggests potential for further legislative regulations on cloud computing and tech companies.[78]
Quantum computing
Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics.[79] Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both the one and zero states simultaneously. Thus, the value of a qubit is not simply 1 or 0; it is determined only when the qubit is measured. This trait of qubits is known as quantum superposition, and it is a core idea of quantum computing that allows quantum computers to perform large-scale computations.[80] Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations.[81]
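The classical Python sketch below is not a quantum computation, but it illustrates what superposition means operationally: a qubit state assigns amplitudes to 0 and 1, and measurement returns one of them with probability given by the squared amplitude (here an equal superposition).

```python
import math
import random

# State |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
alpha = beta = 1 / math.sqrt(2)   # equal superposition

def measure():
    # Measurement yields 0 or 1 with the Born-rule probabilities.
    return 0 if random.random() < abs(alpha) ** 2 else 1

samples = [measure() for _ in range(10_000)]
print("fraction measured as 0:", samples.count(0) / len(samples))  # about 0.5
```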
See also
- Artificial intelligence
- Computational science
- Computational thinking
- Computer algebra
- Confidential computing
- Creative computing
- Data-centric computing
- Electronic data processing
- Enthusiast computing
- Index of history of computing articles
- Instruction set architecture
- Internet of things
- Lehmer sieve
- Liquid computing
- List of computer term etymologies
- Mobile computing
- Outline of computers
- Outline of computing
- Scientific computing
- Spatial computing
- Ubiquitous computing
- Unconventional computing
- Urban computing
- Virtual reality
References
[edit]- ^ "Computing Classification System". Digital Library. Association for Computing Machinery.
- ^ "Computing Careers & Disciplines: A Quick Guide for Prospective Students and Career Advisors (2nd edition, ©2020)". CERIC. 17 January 2020. Retrieved 4 July 2022.
- ^ "The History of Computing". mason.gmu.edu. Retrieved 12 April 2019.
- ^ Wynn-Williams, C. E. (2 July 1931), "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena", Proceedings of the Royal Society A, 132 (819): 295–310, Bibcode:1931RSPSA.132..295W, doi:10.1098/rspa.1931.0102
- ^ Lee, Thomas H. (2003). The Design of CMOS Radio-Frequency Integrated Circuits (PDF). Cambridge University Press. ISBN 978-1-139-64377-1. Archived from the original (PDF) on 9 December 2019. Retrieved 16 September 2019.
- ^ Puers, Robert; Baldi, Livio; Voorde, Marcel Van de; Nooten, Sebastiaan E. van (2017). Nanoelectronics: Materials, Devices, Applications, 2 Volumes. John Wiley & Sons. p. 14. ISBN 978-3-527-34053-8.
- ^ Lavington, Simon (1998), A History of Manchester Computers (2 ed.), Swindon: The British Computer Society, pp. 34–35
- ^ Moskowitz, Sanford L. (2016). Advanced Materials Innovation: Managing Global Technology in the 21st century. John Wiley & Sons. pp. 165–167. ISBN 978-0-470-50892-3.
- ^ Frosch, C. J.; Derick, L (1957). "Surface Protection and Selective Masking during Diffusion in Silicon". Journal of the Electrochemical Society. 104 (9): 547. doi:10.1149/1.2428650.
- ^ KAHNG, D. (1961). "Silicon-Silicon Dioxide Surface Device". Technical Memorandum of Bell Laboratories: 583–596. doi:10.1142/9789814503464_0076. ISBN 978-981-02-0209-5.
- ^ Lojek, Bo (2007). History of Semiconductor Engineering. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg. p. 321. ISBN 978-3-540-34258-8.
- ^ "Who Invented the Transistor?". Computer History Museum. 4 December 2013. Retrieved 20 July 2019.
- ^ Hittinger, William C. (1973). "Metal-Oxide-Semiconductor Technology". Scientific American. 229 (2): 48–59. Bibcode:1973SciAm.229b..48H. doi:10.1038/scientificamerican0873-48. ISSN 0036-8733. JSTOR 24923169.
- ^ Fossum, Jerry G.; Trivedi, Vishal P. (2013). Fundamentals of Ultra-Thin-Body MOSFETs and FinFETs. Cambridge University Press. p. vii. ISBN 978-1-107-43449-3.
- ^ Malmstadt, Howard V.; Enke, Christie G.; Crouch, Stanley R. (1994). Making the Right Connections: Microcomputers and Electronic Instrumentation. American Chemical Society. p. 389. ISBN 978-0-8412-2861-0.
The relative simplicity and low power requirements of MOSFETs have fostered today's microcomputer revolution.
- ^ "Definition of computer". PCMAG. Retrieved 5 February 2024.
- ^ Denny, Jory (16 October 2020). "What is an algorithm? How computers know what to do with data". The Conversation. Retrieved 5 February 2024.
- ^ Butterfield, Andrew; Ngondi, Gerard Ekembe; Kerr, Anne (21 January 2016), Butterfield, Andrew; Ngondi, Gerard Ekembe; Kerr, Anne (eds.), "computer", A Dictionary of Computer Science, Oxford University Press, doi:10.1093/acref/9780199688975.001.0001, ISBN 978-0-19-968897-5, retrieved 5 February 2024
- ^ "Common CPU components – The CPU – Eduqas – GCSE Computer Science Revision – Eduqas – BBC Bitesize". www.bbc.co.uk. Retrieved 5 February 2024.
- ^ Paulson, Laurence (28 February 2018). "Computational logic: its origins and applications". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 474 (2210). arXiv:1712.04375. Bibcode:2018RSPSA.47470872P. doi:10.1098/rspa.2017.0872. PMC 5832843. PMID 29507522.
- ^ Paulson, Lawrence C. (February 2018). "Computational logic: its origins and applications". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 474 (2210) 20170872. arXiv:1712.04375. Bibcode:2018RSPSA.47470872P. doi:10.1098/rspa.2017.0872. PMC 5832843. PMID 29507522.
- ^ "Wordreference.com: WordNet 2.0". Princeton University, Princeton, NJ. Retrieved 19 August 2007.
- ^ Rouse, Margaret (March 2019). "system software". WhatIs.com. TechTarget.
- ^ "Basic Computer Terms". web.pdx.edu. Retrieved 18 April 2024.
- ^ Morris, Jeremy Wade; Elkins, Evan (October 2015). "The Fibreculture Journal: 25 | FCJ-181 There's a History for That: Apps and Mundane Software as Commodity". The Fibreculture Journal (FCJ-181). Retrieved 5 February 2024.
- ^ "Computer network definition". Archived from the original on 21 January 2012. Retrieved 12 November 2011.
- ^ "TCP/IP: What is TCP/IP and How Does it Work?". Networking. Retrieved 14 March 2024.
- ^ Dhavaleswarapu, Ratna. (2019). The Pallid Image of Globalization in Kiran Desai's The Inheritance of Loss. Retrieved 19 April 2024.
- ^ "Internet | Description, History, Uses & Facts". Encyclopedia Britannica. 3 June 2024. Retrieved 7 June 2024.
- ^ McGee, Vanesha (8 November 2023). "What is Coding and What Is It Used For?". ComputerScience.org. Retrieved 23 June 2024.
- ^ Nagl, Manfred, ed. (1995). Graph-Theoretic Concepts in Computer Science. Lecture Notes in Computer Science. Vol. 1017. doi:10.1007/3-540-60618-1. ISBN 978-3-540-60618-5. ISSN 0302-9743.
- ^ Parsons, June (2022). "New Perspectives Computer Concepts Comprehensive | 21st Edition". Cengage. 21st edition. ISBN 978-0-357-67481-9.
- ^ "Become a Programmer Analyst at PERI Software Solutions – The Middlebury Sites Network". sites.middlebury.edu. Retrieved 6 March 2025.
- ^ "5 Skills Developers Need Beyond Writing Code". 23 January 2019.
- ^ Bresnahan, Timothy F.; Greenstein, Shane (March 1999). "Technological Competition and the Structure of the Computer Industry". The Journal of Industrial Economics. 47 (1): 1–40. doi:10.1111/1467-6451.00088. ISSN 0022-1821.
- ^ IEEE Computer Society; ACM (12 December 2004). Computer Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Computer Engineering (PDF). p. iii. Archived from the original (PDF) on 12 June 2019. Retrieved 17 December 2012.
Computer System engineering has traditionally been viewed as a combination of both electronic engineering (EE) and computer science (CS).
- ^ Trinity College Dublin. "What is Computer System Engineering". Retrieved 21 April 2006., "Computer engineers need not only to understand how computer systems themselves work, but also how they integrate into the larger picture. Consider the car. A modern car contains many separate computer systems for controlling such things as the engine timing, the brakes and the air bags. To be able to design and implement such a car, the computer engineer needs a broad theoretical understanding of all these various subsystems & how they interact.
- ^ Abran, Alain; Moore, James W.; Bourque, Pierre; Dupuis, Robert; Tripp, Leonard L. (2004). Guide to the Software Engineering Body of Knowledge. IEEE. p. 1. ISBN 978-0-7695-2330-9.
- ^ ACM (2006). "Computing Degrees & Careers". ACM. Archived from the original on 17 June 2011. Retrieved 23 November 2010.
- ^ Laplante, Phillip (2007). What Every Engineer Should Know about Software Engineering. Boca Raton: CRC. ISBN 978-0-8493-7228-5. Retrieved 21 January 2011.
- ^ Sommerville, Ian (2008). Software Engineering (7 ed.). Pearson Education. p. 26. ISBN 978-81-7758-530-8. Retrieved 10 January 2013.
- ^ Naur, Peter; Randell, Brian (7–11 October 1968). Software Engineering: Report of a conference sponsored by the NATO Science Committee (PDF). Garmisch, Germany: Scientific Affairs Division, NATO. Retrieved 26 December 2008.
- ^ Randell, Brian (10 August 2001). "The 1968/69 NATO Software Engineering Reports". Brian Randell's University Homepage. The School of the Computer Sciences, Newcastle University. Retrieved 11 October 2008.
The idea for the first NATO Software Engineering Conference, and in particular that of adopting the then practically unknown term software engineering as its (deliberately provocative) title, I believe came originally from Professor Fritz Bauer.
- ^ "Software Engineering – Guide to the software engineering body of knowledge (SWEBOK)". International Organization for Standardization. ISO/IEC TR 19759:2015. Retrieved 21 May 2019.
- ^ "WordNet Search – 3.1". Wordnetweb.princeton.edu. Retrieved 14 May 2012.
- ^ "The Interaction Design Foundation - What is Human-Computer Interaction (HCI)?".
- ^ Schatz, Daniel; Bashroush, Rabih; Wall, Julie (2017). "Towards a More Representative Definition of Cyber Security". The Journal of Digital Forensics, Security and Law. 12 (2). doi:10.15394/jdfsl.2017.1476.
- ^ Dhar, Vasant (2013). "Data science and prediction". Communications of the ACM. 56 (12): 64–73. doi:10.1145/2500499. ISSN 0001-0782.
- ^ Cao, Longbing (31 May 2018). "Data Science: A Comprehensive Overview". ACM Computing Surveys. 50 (3): 1–42. arXiv:2007.03606. doi:10.1145/3076253. ISSN 0360-0300. S2CID 207595944.
- ^ "Definition of Application Landscape". Software Engineering for Business Information Systems (sebis). 21 January 2009. Archived from the original on 5 March 2011. Retrieved 14 January 2011.
- ^ Denning, Peter (July 1999). "COMPUTER SCIENCE: THE DISCIPLINE". Encyclopaedia of Computer Science (2000 Edition).
The Domain of Computer Science: Even though computer science addresses both human-made and natural information processes, the main effort in the discipline has been directed toward human-made processes, especially information processing systems and machines
- ^ Jessup, Leonard M.; Valacich, Joseph S. (2008). Information Systems Today (3rd ed.). Pearson Publishing. pp. –, 416.
- ^ "Computing Degrees & Careers " Information Systems". Association for Computing Machinery. Archived from the original on 6 July 2018. Retrieved 6 July 2018.
- ^ Davis, Timothy; Geist, Robert; Matzko, Sarah; Westall, James (March 2004). "τέχνη: A First Step". Technical Symposium on Computer Science Education: 125–129. ISBN 1-58113-798-2.
In 1999, Clemson University established a (graduate) degree program that bridges the arts and the sciences... All students in the program are required to complete graduate level work in both the arts and computer science
- ^ Khazanchi, Deepak; Bjorn Erik Munkvold (Summer 2000). "Is information system a science? an inquiry into the nature of the information systems discipline". ACM SIGMIS Database. 31 (3): 24–42. doi:10.1145/381823.381834. ISSN 0095-0033. S2CID 52847480.
From this we have concluded that IS is a science, i.e., a scientific discipline in contrast to purportedly non-scientific fields
- ^ "Bachelor of Information Sciences (Computer Science)". Massey University. 24 February 2006. Archived from the original on 19 June 2006.
Computer Science is the study of all aspects of computer systems, from the theoretical foundations to the very practical aspects of managing large software projects
- ^ Polack, Jennifer (December 2009). "Planning a CIS Education Within a CS Framework". Journal of Computing Sciences in Colleges. 25 (2): 100–106. ISSN 1937-4771.
- ^ Hayes, Helen; Onkar Sharma (February 2003). "A decade of experience with a common first year program for computer science, information systems and information technology majors". Journal of Computing Sciences in Colleges. 18 (3): 217–227. ISSN 1937-4771.
In 1988, a degree program in Computer Information Systems (CIS) was launched with the objective of providing an option for students who were less inclined to become programmers and were more interested in learning to design, develop, and implement Information Systems, and solve business problems using the systems approach
- ^ Freeman, Peter; Hart, David (August 2004). "A Science of Design for Software-Intensive Systems". Communications of the ACM. 47 (8): 19–21. doi:10.1145/1012037.1012054. ISSN 0001-0782. S2CID 14331332.
Computer science and engineering needs an intellectually rigorous, analytical, teachable design process to ensure development of systems we all can live with ... Though the other components' connections to the software and their role in the overall design of the system are critical, the core consideration for a software-intensive system is the software itself, and other approaches to systematizing design have yet to solve the "software problem"—which won't be solved until software design is understood scientifically.
- ^ Daintith, John, ed. (2009), "IT", A Dictionary of Physics, Oxford University Press, ISBN 978-0-19-923399-1, retrieved 1 August 2012 (subscription required)
- ^ "Free on-line dictionary of computing (FOLDOC)". Archived from the original on 15 April 2013. Retrieved 9 February 2013.
- ^ Chandler, Daniel; Munday, Rod (January 2011), "Information technology", A Dictionary of Media and Communication (first ed.), Oxford University Press, ISBN 978-0-19-956875-8, retrieved 1 August 2012 (subscription required)
- ^ On the later more broad application of the term IT, Keary comments- "In its original application 'information technology' was appropriate to describe the convergence of technologies with application in the broad field of data storage, retrieval, processing, and dissemination. This useful conceptual term has since been converted to what purports to be concrete use, but without the reinforcement of definition...the term IT lacks substance when applied to the name of any function, discipline, or position." Anthony Ralston (2000). Encyclopedia of computer science. Nature Pub. Group. ISBN 978-1-56159-248-7. Retrieved 12 May 2013..
- ^ Kershner, Ryan J.; Bozano, Luisa D.; Micheel, Christine M.; Hung, Albert M.; Fornof, Ann R.; Cha, Jennifer N.; Rettner, Charles T.; Bersani, Marco; Frommer, Jane; Rothemund, Paul W. K.; Wallraff, Gregory M. (2009). "Placement and orientation of individual DNA shapes on lithographically patterned surfaces". Nature Nanotechnology. 4 (9): 557–561. Bibcode:2009NatNa...4..557K. CiteSeerX 10.1.1.212.9767. doi:10.1038/nnano.2009.220. PMID 19734926. supplementary information: DNA origami on photolithography
- ^ Harlander, M. (2011). "Trapped-ion antennae for the transmission of quantum information". Nature. 471 (7337): 200–203. arXiv:1011.3639. Bibcode:2011Natur.471..200H. doi:10.1038/nature09800. PMID 21346764. S2CID 4388493.
- "Atomic antennas transmit quantum information across a microchip". ScienceDaily (Press release). 26 February 2011.
- ^ Monz, Thomas (2011). "14-Qubit Entanglement: Creation and Coherence". Physical Review Letters. 106 (13) 130506. arXiv:1009.6126. Bibcode:2011PhRvL.106m0506M. doi:10.1103/PhysRevLett.106.130506. PMID 21517367. S2CID 8155660.
- ^ "World record: Calculations with 14 quantum bits". www.nanowerk.com.
- ^ Hla, Saw-Wai; et al. (31 March 2010). "World's smallest superconductor discovered". Nature Nanotechnology. Archived 28 May 2010 at the Wayback Machine. Four pairs of certain molecules have been shown to form a nanoscale superconductor, at a dimension of 0.87 nanometers. Retrieved 31 March 2010.
- ^ Simonite, Tom (4 August 2010). "Computing at the speed of light". MIT Technology Review.
- ^ Anthony, Sebastian (10 December 2012). "IBM creates first commercially viable silicon nanophotonic chip". Retrieved 10 December 2012.
- ^ "Open Compute: Does the data center have an open future?". Retrieved 11 August 2013.
- ^ "Putting electronics in a spin". 8 August 2007. Retrieved 23 November 2020.
- ^ "Merging spintronics with photonics" (PDF). Archived from the original (PDF) on 6 September 2019. Retrieved 6 September 2019.
- ^ Lalieu, M. L. M.; Lavrijsen, R.; Koopmans, B. (10 January 2019). "Integrating all-optical switching with spintronics". Nature Communications. 10 (1): 110. arXiv:1809.02347. Bibcode:2019NatCo..10..110L. doi:10.1038/s41467-018-08062-4. ISSN 2041-1723. PMC 6328538. PMID 30631067.
- ^ Farmakidis, Nikolaos; Youngblood, Nathan; Li, Xuan; Tan, James; Swett, Jacob L.; Cheng, Zengguang; Wright, C. David; Pernice, Wolfram H. P.; Bhaskaran, Harish (1 November 2019). "Plasmonic nanogap enhanced phase-change devices with dual electrical-optical functionality". Science Advances. 5 (11) eaaw2687. arXiv:1811.07651. Bibcode:2019SciA....5.2687F. doi:10.1126/sciadv.aaw2687. ISSN 2375-2548. PMC 6884412. PMID 31819898.
- ^ "The NIST Definition of Cloud Computing" (PDF). U.S. Department of Commerce. September 2011. Archived (PDF) from the original on 9 October 2022.
- ^ Berl, A.; Gelenbe, E.; Girolamo, M. Di; Giuliani, G.; Meer, H. De; Dang, M. Q.; Pentikousis, K. (September 2010). "Energy-Efficient Cloud Computing". The Computer Journal. 53 (7): 1045–1051. doi:10.1093/comjnl/bxp080. ISSN 1460-2067.
- ^ Kaufman, L. M. (July 2009). "Data Security in the World of Cloud Computing". IEEE Security & Privacy. 7 (4): 61–64. Bibcode:2009ISPri...7d..61H. doi:10.1109/MSP.2009.87. ISSN 1558-4046. S2CID 16233643.
- ^ Steane, Andrew (1 February 1998). "Quantum computing". Reports on Progress in Physics. 61 (2): 117–173. arXiv:quant-ph/9708022. Bibcode:1998RPPh...61..117S. doi:10.1088/0034-4885/61/2/002. ISSN 0034-4885. S2CID 119473861.
- ^ Horodecki, Ryszard; Horodecki, Paweł; Horodecki, Michał; Horodecki, Karol (17 June 2009). "Quantum entanglement". Reviews of Modern Physics. 81 (2): 865–942. arXiv:quant-ph/0702225. Bibcode:2009RvMP...81..865H. doi:10.1103/RevModPhys.81.865. S2CID 59577352.
- ^ Baiardi, Alberto; Christandl, Matthias; Reiher, Markus (3 July 2023). "Quantum Computing for Molecular Biology*". ChemBioChem. 24 (13) e202300120. arXiv:2212.12220. doi:10.1002/cbic.202300120. PMID 37151197.
External links
Computing
Fundamentals
Definition and Scope
Computing encompasses the systematic study of algorithmic processes that describe, transform, and manage information, including their theoretical foundations, analysis, design, implementation, efficiency, and practical applications.[1] This discipline centers on computation as the execution of defined procedures by mechanical or electronic devices to solve problems or process data, distinguishing it from mere calculation by emphasizing discrete, rule-based transformations rather than continuous analog operations. The scope of computing extends across multiple interconnected subfields, including computer science, which focuses on abstraction, algorithms, and software; computer engineering, which integrates hardware design with computational principles; software engineering, emphasizing reliable system development; information technology, dealing with the deployment and management of computing infrastructure; and information systems, bridging computing with organizational needs.[6]
These areas collectively address challenges from foundational questions of computability, such as those posed by Turing's 1936 halting problem, which demonstrates inherent limits in determining algorithm termination, to real-world implementations in data storage, where global data volume reached approximately 120 zettabytes in 2023.[7] Computing's breadth also incorporates interdisciplinary applications, drawing on mathematics for complexity theory (e.g., the P versus NP problem, unresolved since 1971), electrical engineering for circuit design, and domain-specific adaptations in fields like bioinformatics and financial modeling, while evolving to tackle contemporary issues such as scalable distributed systems and ethical constraints in automated decision-making.[8] This expansive framework underscores computing's role as both a foundational science and an enabling technology, with professional bodies like the ACM, founded in 1947, standardizing curricula to cover these elements across undergraduate programs worldwide.[9]
Theoretical Foundations
The theoretical foundations of computing rest on Boolean algebra, which provides the logical framework for binary operations essential to digital circuits and algorithms. George Boole introduced this system in 1847 through The Mathematical Analysis of Logic, treating logical propositions as algebraic variables that could be manipulated using operations like AND, OR, and NOT, formalized as addition, multiplication, and complement in a binary field.[10] He expanded it in 1854's An Investigation of the Laws of Thought, demonstrating how laws of thought could be expressed mathematically, enabling the representation of any computable function via combinations of these operations, which underpins all modern digital logic gates.[11]
Computability theory emerged in the 1930s to formalize what functions are mechanically calculable, addressing Hilbert's Entscheidungsproblem on algorithmically deciding mathematical truths. Kurt Gödel contributed primitive recursive functions in 1931 as part of his incompleteness theorems, defining a class of total computable functions built from basic operations like successor and projection via composition and primitive recursion.[12] Alonzo Church developed lambda calculus around 1932–1936, a notation for expressing functions anonymously (e.g., λx.x for identity) and supporting higher-order functions, proving it equivalent to recursive functions for computability.[13] Alan Turing formalized the Turing machine in his 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem," describing an abstract device with a tape, read/write head, and state register that simulates any algorithmic process by manipulating symbols according to a finite table of rules, solving the Entscheidungsproblem negatively by showing undecidable problems like the halting problem exist.[14]
The Church-Turing thesis, formulated independently by Church and Turing in 1936, posits that these models (lambda calculus, recursive functions, and Turing machines) capture all effective methods of computation, meaning any function intuitively computable by a human with paper and pencil can be computed by a Turing machine, though unprovable as it equates informal intuition to formal equivalence.[15] This thesis implies inherent limits: not all real numbers are computable (most require infinite non-repeating decimals), and problems like determining if two Turing machines compute the same function are undecidable.[14]
These foundations extend to complexity theory, classifying problems by resource requirements (time, space) on Turing machines or equivalents, with classes like P (polynomial-time solvable) and NP (verifiable in polynomial time) highlighting open questions such as P = NP, which, if false, would confirm some optimization problems resist efficient algorithms despite feasible verification.[16] Empirical implementations, like digital circuits realizing Boolean functions and software interpreting Turing-complete languages (e.g., via interpreters), validate these theories causally: physical constraints mirror theoretical limits, as unbounded computation requires infinite resources, aligning abstract models with realizable machines.
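A minimal sketch of the Turing-machine model in Python (not drawn from any of the cited sources): a finite rule table, an unbounded tape, and a read/write head. The example machine increments a binary number written on the tape.

```python
BLANK = " "
RULES = {
    # (state, symbol read): (symbol to write, head movement, next state)
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", BLANK): (BLANK, -1, "carry"),  # reached the end: start carrying
    ("carry", "1"): ("0", -1, "carry"),      # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", 0, "halt"),        # 0 + carry -> 1, done
    ("carry", BLANK): ("1", 0, "halt"),      # carried past the left edge
}

def run(tape_string):
    tape = dict(enumerate(tape_string))      # sparse tape: position -> symbol
    head, state = 0, "right"
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, BLANK) for i in cells).strip()

print(run("1011"))   # binary 11 + 1 -> "1100" (binary 12)
```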
Historical Development
Early Concepts and Precursors
The abacus, one of the earliest known mechanical aids to calculation, originated in ancient Mesopotamia around 2400 BCE and consisted of beads slid on rods to perform arithmetic operations like addition and multiplication through positional notation.[17] Its design allowed rapid manual computation by representing numbers in base-10, influencing later devices despite relying on human operation rather than automation.[18]
In 1642, Blaise Pascal invented the Pascaline, a gear-based mechanical calculator capable of addition and subtraction via a series of dials and carry mechanisms, primarily to assist his father's tax computations.[19] Approximately 50 units were produced, though limitations in handling multiplication, division, and manufacturing precision restricted its widespread adoption.[20] Building on this, Gottfried Wilhelm Leibniz designed the Stepped Reckoner in 1671 and constructed a prototype by 1673, introducing a cylindrical gear (stepped drum) that enabled the first mechanical multiplication and division through repeated shifting and addition.[18] Leibniz's device aimed for full four-operation arithmetic but suffered from mechanical inaccuracies in carry propagation, foreshadowing challenges in scaling mechanical computation.[21]
The 1801 Jacquard loom, invented by Joseph Marie Jacquard, employed chains of punched cards to automate complex weaving patterns by controlling warp threads, marking an early use of perforated media for sequential instructions.[22] This binary-encoded control system, where holes represented selections, demonstrated programmable automation outside pure arithmetic, influencing data input methods in later computing.[23]
Charles Babbage proposed the Difference Engine in 1822 to automate the computation of mathematical tables via the method of finite differences, using mechanical gears to eliminate human error in polynomial evaluations up to seventh degree.[24] Though never fully built in his lifetime due to funding and precision issues, a portion demonstrated feasibility, and a complete version was constructed in 1991, confirming its operability.[25] Babbage later conceived the Analytical Engine around 1837, a general-purpose programmable machine with separate mills for processing, stores for memory, and conditional branching, powered by steam and instructed via punched cards inspired by Jacquard.[24]
Ada Lovelace, in her 1843 notes expanding on Luigi Menabrea's description of the Analytical Engine, outlined an algorithm to compute Bernoulli numbers using looping operations, widely regarded as the first published computer program due to its explicit sequence of machine instructions.[26] Her annotations emphasized the engine's potential beyond numerical calculation to manipulate symbols like music, highlighting conceptual generality. Concurrently, George Boole formalized Boolean algebra in 1847's The Mathematical Analysis of Logic and expanded it in 1854's An Investigation of the Laws of Thought, reducing logical operations to algebraic manipulation of binary variables (0 and 1), providing a symbolic foundation for circuit design and algorithmic decision-making.[27] These mechanical and logical precursors established core principles of automation, programmability, and binary representation, enabling the transition to electronic computing despite technological barriers like imprecision and scale.[10]
Birth of Electronic Computing
The birth of electronic computing occurred in the late 1930s and early 1940s, marking the shift from mechanical and electromechanical devices to machines using electronic components like vacuum tubes for high-speed digital operations. This era was propelled by the demands of World War II for rapid calculations in ballistics, cryptography, and scientific simulations, enabling computations orders of magnitude faster than predecessors. Key innovations included binary arithmetic, electronic switching, and separation of memory from processing, laying the groundwork for modern digital systems.[28][29]
The Atanasoff-Berry Computer (ABC), developed from 1939 to 1942 by physicist John Vincent Atanasoff and graduate student Clifford Berry at Iowa State College, is recognized as the first electronic digital computer. It employed approximately 300 vacuum tubes for logic operations, a rotating drum for regenerative capacitor-based memory storing 30 50-bit words, and performed parallel processing to solve systems of up to 29 linear equations. Unlike earlier mechanical calculators, the ABC used electronic means for arithmetic (adding, subtracting, and logical negation) and was designed for specific numerical tasks, though it lacked full programmability. A prototype was operational by October 1939, with the full machine tested successfully in 1942 before wartime priorities halted further development.[28][30]
In Britain, engineer Tommy Flowers designed and built the Colossus machines starting in 1943 at the Post Office Research Station for code-breaking at Bletchley Park. The first Colossus, operational by December 1943, utilized 1,500 to 2,400 vacuum tubes to perform programmable Boolean operations on encrypted teleprinter messages, achieving speeds of 5,000 characters per second. Ten such machines were constructed by war's end, aiding in deciphering high-level German Lorenz ciphers and shortening the war. Classified until the 1970s, Colossus demonstrated electronic programmability via switches and plugs for special-purpose tasks, though it did not employ a stored-program architecture.[31][32]
The Electronic Numerical Integrator and Computer (ENIAC), completed in 1945 by John Mauchly and J. Presper Eckert at the University of Pennsylvania for the U.S. Army Ordnance Department, represented the first general-purpose electronic digital computer. Spanning 1,800 square feet with 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, and 10,000 capacitors, it weighed over 27 tons and consumed 150 kilowatts of power. ENIAC performed 5,000 additions per second and was reprogrammed via wiring panels and switches for tasks like ballistic trajectory calculations, though reconfiguration took days. Funded at $487,000 (equivalent to about $8 million today), it was publicly demonstrated in February 1946 and influenced subsequent designs despite reliability issues from tube failures every few hours.[33][29][34]
These pioneering machines highlighted the potential of electronic computing but were constrained by vacuum tube fragility, immense size, heat generation, and manual reprogramming. Their success validated electronic digital principles, paving the way for stored-program architectures proposed by John von Neumann in 1945 and the transistor revolution in the 1950s.[29]
Transistor Era and Miniaturization
The transistor, a semiconductor device capable of amplification and switching, was invented in December 1947 by physicists John Bardeen, Walter Brattain, and William Shockley at Bell Laboratories in Murray Hill, New Jersey.[35] Unlike vacuum tubes, which were bulky, power-hungry, and prone to failure due to filament burnout and heat generation, transistors offered compact size, low power consumption, high reliability, and solid-state operation without moving parts or vacuum seals.[36] This breakthrough addressed key limitations of first-generation electronic computers like ENIAC, which relied on thousands of vacuum tubes and occupied entire rooms while dissipating massive heat.[37]
Early transistorized computers emerged in the mid-1950s, marking the transition from vacuum tube-based systems. The TRADIC (TRAnsistor DIgital Computer), developed by Bell Laboratories for the U.S. Air Force, became the first fully transistorized computer operational in 1955, utilizing approximately 800 transistors in a compact three-cubic-foot chassis with significantly reduced power needs compared to tube equivalents.[38] Subsequent machines, such as the TX-0 built by MIT's Lincoln Laboratory in 1956, demonstrated practical viability, offering speeds up to 200 kHz and programmability that foreshadowed minicomputers.[39] These systems halved physical size and power requirements while boosting reliability, enabling deployment in aerospace and military applications where vacuum tube fragility was prohibitive.[40]
Despite advantages, discrete transistors (individual components wired by hand) faced scalability issues: manual assembly limited density, increased costs, and introduced failure points from interconnections. This spurred the integrated circuit (IC), where multiple transistors, resistors, and capacitors formed a single monolithic chip. Jack Kilby at Texas Instruments demonstrated the first working IC prototype on September 12, 1958, etching components on a germanium slice to prove passive and active elements could coexist without wires.[41] Independently, Robert Noyce at Fairchild Semiconductor patented a silicon-based planar IC in July 1959, enabling reproducible manufacturing via photolithography and diffusion processes.[42] ICs exponentially reduced size and cost; by 1961, Fairchild produced the first commercial ICs with multiple transistors, paving the way for hybrid circuits in Apollo guidance computers.[43]
Miniaturization accelerated through scaling laws, formalized by Gordon Moore in his 1965 Electronics magazine article. Moore observed that the number of components per integrated circuit had doubled annually since 1960, from about 3 to 60, and predicted this trend would continue for a decade, driven by manufacturing advances like finer linewidths and larger wafers. Revised in 1975 to doubling every two years, this "Moore's Law" held empirically for decades, correlating transistor counts from thousands in 1970s microprocessors (e.g., Intel 4004 with 2,300 transistors) to billions today, while feature sizes shrank from micrometers to nanometers via processes like CMOS fabrication.[44] Causally, denser integration lowered costs per transistor (halving roughly every 1.5-2 years), boosted clock speeds, and diminished power per operation, transforming computing from mainframes to portable devices, evident in the 1971 Intel 4004, the first microprocessor integrating CPU functions on one chip.[45]
These dynamics, rooted in semiconductor physics and engineering economies, not only miniaturized hardware but catalyzed mass adoption by making computation ubiquitous and affordable. Beyond Moore's Law on transistor density, computational power for applications like AI training has scaled faster through parallelism and distributed systems; since around 2010, training compute for notable AI models has doubled approximately every six months.[46][47][36]
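A worked illustration of the two-year doubling rule, starting from the Intel 4004's 2,300 transistors in 1971; this is an idealized projection, not a record of actual chips.

```python
transistors, year = 2300, 1971   # Intel 4004 as the starting point

while year < 2021:
    transistors *= 2             # one doubling per two-year period
    year += 2

print(f"{year}: roughly {transistors:,} transistors per chip (idealized)")
# 25 doublings give about 77 billion, the right order of magnitude for 2021-era chips.
```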
Personal and Ubiquitous Computing
The advent of personal computing marked a shift from centralized mainframe systems to affordable, individual-owned machines, enabling widespread adoption among hobbyists and professionals. The Altair 8800, introduced by Micro Instrumentation and Telemetry Systems (MITS) in January 1975 as a kit for $397 or assembled for $439, is recognized as the first commercially successful personal computer, powered by an Intel 8080 microprocessor with 256 bytes of RAM and featuring front-panel switches for input.[48] This device sparked the microcomputer revolution by inspiring homebrew clubs and software development, including the first product from Microsoft, a BASIC interpreter for the Altair.[3] Subsequent models like the Apple II, released in June 1977, advanced accessibility with built-in color graphics, sound capabilities, and expandability via slots, selling over 6 million units by 1993 and popularizing applications such as VisiCalc, the first electronic spreadsheet in 1979.[3]
The IBM Personal Computer (Model 5150), announced on August 12, 1981, standardized the industry with its open architecture, Intel 8088 processor running at 4.77 MHz, 16 KB of RAM (expandable), and Microsoft MS-DOS as the operating system, priced starting at $1,565.[49] This design allowed third-party hardware and software compatibility, leading to clones like the Compaq Portable in 1982 and fostering a market that grew from niche to mainstream, with IBM generating $1 billion in revenue in the PC's first year.[50] Portability emerged concurrently, exemplified by the Osborne 1 in April 1981, an early portable computer weighing 24 pounds with a 5-inch CRT display, Zilog Z80 CPU, and bundled software, though limited by non-upgradable components.[3] By the late 1980s, personal computers had proliferated in homes and offices, driven by falling costs (average prices dropped below $2,000 by 1985) and software ecosystems, transitioning computing from institutional tools to everyday utilities.
Ubiquitous computing extended this personalization by envisioning computation embedded seamlessly into the environment, rendering devices "invisible" to users focused on tasks rather than technology.
The concept was formalized by Mark Weiser, chief technologist at Xerox PARC, who coined the term around 1988 and articulated it in his 1991 Scientific American article "The Computer for the 21st Century," proposing a progression from desktops to mobile "tabs" (inch-scale), "pads" (foot-scale), and "boards" (yard-scale) devices that integrate with physical spaces via wireless networks and sensors.[51] Weiser's prototypes at PARC, including active badges for location tracking (1990) and early tablet-like interfaces, demonstrated context-aware systems where computation anticipates user needs without explicit interaction, contrasting the visible, user-initiated paradigm of personal computers.[52]
This vision influenced subsequent developments, such as personal digital assistants (PDAs) like the Apple Newton in 1993 and PalmPilot in 1997, which combined portability with basic synchronization and handwriting recognition, paving the way for always-on computing.[3] By the early 2000s, embedded systems in appliances and wearables began realizing Weiser's calm technology principles, where devices operate in the background, evident in the rise of wireless sensor networks and early smartphones like the BlackBerry (1999), prioritizing human-centered augmentation over explicit control, though challenges like power constraints and privacy persisted.[51] These eras collectively democratized computing power, evolving from isolated personal machines to pervasive, interconnected fabrics of daily life.
Networked and Cloud Era
Networked and Cloud Era
The development of computer networking began with ARPANET, launched by the U.S. Advanced Research Projects Agency (ARPA) in October 1969 as the first large-scale packet-switching network connecting heterogeneous computers across research institutions.[53] The initial connection succeeded on October 29, 1969, linking a UCLA computer to one at the Stanford Research Institute and transmitting the partial message "LO" before the system crashed.[54] This system demonstrated resource sharing and resilience through decentralized routing, foundational to modern networks.[55] ARPANET's evolution accelerated with the standardization of the TCP/IP protocols, adopted network-wide on January 1, 1983, enabling seamless interconnection of diverse systems and giving rise to the Internet as a "network of networks."[56] By the mid-1980s, NSFNET extended high-speed connectivity to U.S. academic supercomputing centers in 1986, fostering broader research collaboration, while ARPANET itself was decommissioned in 1990.[57] Commercial data networking had begun with Telenet, which emerged in 1974 as the first commercial packet-switched network open to the public, and private backbone providers such as UUNET enabled business access by the early 1990s.[54] The World Wide Web, proposed by Tim Berners-Lee at CERN in 1989 and publicly released in 1991, integrated hypertext with TCP/IP, spurring exponential user growth from under 1 million Internet hosts in 1993 to over 50 million by 1999.[58]
The cloud era built on these networked foundations, shifting computing from localized ownership to scalable, on-demand services delivered via virtualization and distributed infrastructure. Amazon Web Services (AWS) pioneered this model in 2006 with the launches of the Simple Storage Service (S3) for durable object storage and the Elastic Compute Cloud (EC2) for resizable virtual servers, allowing pay-per-use access without hardware procurement.[59] Preceding these, Amazon's Simple Queue Service (SQS) debuted in 2004 for decoupled message processing, addressing scalability needs exposed during the 2000 dot-com bust.[60] Competitors followed, with Microsoft Azure in 2010 and Google Cloud Platform in 2011, driving cloud market growth to over $500 billion annually by 2023 through economies of scale in data centers and automation.[61] This paradigm reduced capital expenditures for enterprises, enabling rapid deployment of applications such as streaming and AI, though it introduced dependencies on provider reliability and raised data-sovereignty concerns.[62] By the 2010s, hybrid and multi-cloud strategies emerged alongside edge computing to minimize latency, with 5G networks from 2019 enhancing mobile connectivity for IoT and real-time processing.[63] Cloud adoption correlated with efficiency gains; Netflix, for example, completed its migration to AWS in 2016, handling petabytes of data via automated scaling.[64] Despite the benefits, challenges persist, including vendor lock-in and the energy demands of global data centers, which consumed about 1-1.5% of worldwide electricity by 2022.[62]
Core Technologies
Hardware Components
Hardware components form the physical infrastructure of computing systems, enabling the manipulation of binary data through electrical, magnetic, and optical means to perform calculations, store information, and interface with users. These elements operate on principles of electron flow in semiconductors, electromagnetic storage, and signal transduction, with designs rooted in the von Neumann model that integrates processing, memory, and input/output via shared pathways.[65] This architecture, outlined in John von Neumann's First Draft of a Report on the EDVAC, dated June 30, 1945, established the sequential fetch-execute cycle central to modern hardware.[66]
The central processing unit (CPU) serves as the core executor of instructions, comprising an arithmetic logic unit (ALU) for computations, a control unit for orchestration, and registers for temporary data. Early electronic computers like ENIAC (1945) used vacuum tubes for logic gates, but the transistor's invention in 1947 at Bell Labs enabled denser integration.[67] The first single-chip microprocessor, Intel's 4004 with 2,300 transistors operating at 740 kHz, debuted on November 15, 1971, revolutionizing scalability by embedding CPU functions on silicon.[67] Contemporary CPUs, such as those from AMD and Intel, feature billions of transistors, multi-core parallelism, and clock speeds exceeding 5 GHz, driven by Moore's Law observations of exponential density growth.[68]
Memory hardware divides into primary (fast, volatile access for runtime data) and secondary (slower, non-volatile storage for persistence). Primary memory relies on random-access memory (RAM), predominantly dynamic RAM (DRAM) cells that store bits as capacitor charge and require periodic refresh to combat leakage; a 2025-era DDR5 module might offer 64 GB at a transfer rate of 8,400 MT/s.[69] Static RAM (SRAM) in CPU caches uses flip-flop circuits for constant access without refresh, trading density for speed. Read-only memory (ROM) variants such as EEPROM retain firmware without power via trapped charges in floating-gate transistors.[69]
Secondary storage evolved from magnetic drums (1932) to hard disk drives (HDDs), with IBM's RAMAC (1956) providing 5 MB on 50 platters.[70] HDDs employ spinning platters and read/write heads, with areal densities now surpassing 1 terabit per square inch via perpendicular recording. Solid-state drives (SSDs), leveraging NAND flash developed in the 1980s but commercialized widely after 2006, eliminate mechanical parts for latencies under 100 μs, with 3D-stacked cells enabling capacities over 8 TB in consumer units; endurance limits stem from finite program/erase cycles, typically around 3,000 for TLC NAND.[70]
Input/output (I/O) components facilitate data exchange, including keyboards (scanning matrix switches), displays (LCD/OLED pixel arrays driven by GPUs), and network interfaces (Ethernet PHY chips modulating signals). Graphics processing units (GPUs), originating in 1980s arcade hardware, specialize in parallel tasks such as rendering, with NVIDIA's GeForce 256 (1999) marketed as the first dedicated GPU, boasting 23 million transistors for transform and lighting.[67] Motherboards integrate these components via buses such as PCIe 5.0 (reaching consumer platforms in 2021), which supports 32 GT/s per lane for high-bandwidth interconnects. Power supplies convert AC to DC, with efficiencies over 90% in 80 PLUS Platinum units to mitigate heat from Joule losses.[71] Cooling systems, from fans to liquid loops, dissipate thermal energy proportional to power draw, per the P = V × I relation.[72]
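The figures above lend themselves to quick back-of-envelope arithmetic. The following Python sketch, using illustrative values assumed for this example (a 64-bit memory channel, a 1.2 V CPU rail drawing 125 A), computes the theoretical peak bandwidth of a DDR5-8400 module and the heat output implied by P = V × I:

```python
# Back-of-envelope hardware arithmetic; the specific values are illustrative assumptions.

def ddr5_peak_bandwidth_gbs(transfer_rate_mt_s: float, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s: transfers per second times bytes moved per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mt_s * 1e6 * bytes_per_transfer / 1e9

def dissipated_power_watts(voltage_v: float, current_a: float) -> float:
    """Electrical power P = V x I, which ultimately becomes heat the cooling system must remove."""
    return voltage_v * current_a

if __name__ == "__main__":
    # A DDR5-8400 module on a 64-bit channel peaks at roughly 67 GB/s.
    print(f"DDR5-8400 peak bandwidth: {ddr5_peak_bandwidth_gbs(8400):.1f} GB/s")
    # A CPU drawing 125 A from its 1.2 V rail dissipates about 150 W of heat.
    print(f"CPU package power: {dissipated_power_watts(1.2, 125):.0f} W")
```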
Software Systems
Software systems consist of interacting programs, data structures, and documentation organized to achieve specific purposes through computer hardware execution. They form the intermediary layer between hardware and users, translating high-level instructions into machine-readable operations while managing resources such as memory, processing, and input/output.[73][74]
System software and application software represent the primary classifications. System software operates at a low level to control hardware and provide a platform for other software, encompassing operating systems (OS), device drivers, utilities, and compilers that allocate CPU time, manage storage, and handle peripherals.[75] Application software, in contrast, addresses user-specific needs, such as word processing, data analysis, or web browsing, relying on system software for underlying support without direct hardware interaction.[75][76]
Operating systems exemplify foundational software systems, evolving from rudimentary monitors in the 1950s that sequenced batch jobs on vacuum-tube computers to multitasking environments by the 1960s. A pivotal advancement occurred in 1969 when Unix was developed at Bell Laboratories on a PDP-7 minicomputer, introducing hierarchical file systems and pipes for inter-process communication and, after its rewrite in the C programming language, gaining a portability that facilitated widespread adoption in research and industry.[77] Subsequent milestones include the 1981 release of MS-DOS for the IBM PC, which enabled personal computing dominance with a command-line interface, and the 1991 debut of Linux, an open-source Unix-like kernel by Linus Torvalds that powers over 96% of the world's top supercomputers as of 2023 owing to its modularity and community-driven enhancements.[77][78]
Beyond operating systems, software systems incorporate middleware for distributed coordination, such as message queues and APIs that enable scalability in enterprise environments, and database management systems such as Oracle's offerings since 1979, which enforce data integrity via the ACID properties (atomicity, consistency, isolation, durability). Development of large-scale software systems applies engineering disciplines, including modular design and testing to mitigate complexity, as scaling from thousands to millions of lines of code sharply increases error rates without rigorous verification.[74] Real-time systems, critical for embedded applications in aviation and automotive sectors, prioritize deterministic response times, with examples such as VxWorks deployed on NASA's Mars missions since 1997.[78]
Contemporary software systems increasingly integrate cloud-native architectures, leveraging containers such as Docker (introduced in 2013) for portability across hybrid infrastructures, reducing deployment times from weeks to minutes while supporting microservices that decompose monoliths into independent, fault-tolerant components. Security remains integral: vulnerabilities such as buffer overflows were exploited in historical incidents like the 1988 Morris worm, which affected an estimated 10% of Internet-connected hosts, underscoring the need for formal verification and least-privilege principles in design.[78]
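To illustrate the atomicity component of the ACID properties mentioned above, the following minimal Python sketch uses the standard-library sqlite3 module with a hypothetical two-account transfer (the table and amounts are invented for this example): either both balance updates commit, or an error rolls both back.

```python
import sqlite3

# Minimal sketch of ACID atomicity: a funds transfer either fully commits or fully rolls back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # triggers rollback of both updates
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
    except ValueError:
        pass  # the database is left exactly as it was before the failed transaction

transfer(conn, "alice", "bob", 30)    # succeeds: alice 70, bob 80
transfer(conn, "alice", "bob", 500)   # fails and rolls back: balances unchanged
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 70, 'bob': 80}
```

The `with conn:` block opens a transaction that sqlite3 commits on success and rolls back if an exception escapes, which is what makes the two updates indivisible.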
Networking and Distributed Computing
Networking in computing refers to the technologies and protocols that interconnect multiple computing devices, enabling the exchange of data and resources across local, wide-area, or global scales. The foundational packet-switching technique, which breaks data into packets for independent routing, originated with ARPANET, the first operational packet-switching network, whose first link went live on October 29, 1969, and which connected four university nodes by the end of that year under U.S. Department of Defense funding.[79] This approach addressed limitations of circuit switching by improving efficiency and resilience, as packets could take varied paths to their destinations. By 1977, ARPANET was interconnected with satellite and packet radio networks, demonstrating heterogeneous internetworking.[80]
The TCP/IP protocol suite, developed in the 1970s by Vint Cerf and Bob Kahn, became the standard for ARPANET on January 1, 1983, facilitating reliable, connection-oriented (TCP) and best-effort (IP) data delivery across diverse networks.[56][81] This transition enabled the modern Internet's scalability, with IP handling addressing and routing while TCP ensures ordered, error-checked transmission. The OSI reference model, standardized by ISO in 1984, conceptualizes networking in seven layers (physical, data link, network, transport, session, presentation, and application) to promote interoperability, though TCP/IP's four-layer structure (link, internet, transport, application) dominates implementations for its pragmatism over theoretical purity.[82][83] Key protocols include HTTP for web data transfer, introduced in 1991, and DNS for domain-name resolution, standardized in its current form in 1987, both operating at the application layer.[84]
Distributed computing builds on networking by partitioning computational tasks across multiple interconnected machines, allowing systems to handle workloads infeasible for single nodes, such as massive data processing or fault-tolerant services. Because nodes lack shared memory, systems communicate via message passing over networks, coordinating actions despite latency, failures, and partial synchrony.[85][86] A core challenge is achieving consensus on shared state amid node failures, addressed by algorithms such as those in the Paxos family, which reach agreement through proposal and acceptance phases even when nodes crash or messages are delayed, although tolerating Byzantine faults requires further extensions.[87]
Major advances include the MapReduce programming model, introduced by Google in 2004 for parallel processing on large clusters, which separates data mapping and reduction phases to simplify distributed computation over fault-prone hardware.[88] Apache Hadoop, an open-source implementation released in 2006, popularized this approach for big-data ecosystems, enabling scalable storage via HDFS and batch processing on commodity clusters.[89] Cloud platforms such as AWS further integrate distributed computing, spreading workloads such as encryption and simulation across virtualized resources for efficiency.[85] These systems prioritize scalability and fault tolerance, with replication algorithms ensuring data availability across nodes, though trade-offs persist per the CAP theorem's constraints on consistency, availability, and partition tolerance.[87]
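The MapReduce model described above can be sketched on a single machine in a few lines of Python; the function names below are illustrative and do not correspond to Google's or Hadoop's actual APIs. A map phase emits key-value pairs, a shuffle step groups them by key, and a reduce phase aggregates each group, here counting words:

```python
from collections import defaultdict

# Word count in the MapReduce style: map -> shuffle (group by key) -> reduce.
# In a real cluster each phase would run in parallel across many nodes.

def map_phase(document: str):
    """Emit a (word, 1) pair for every word in the input split."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Sum the counts recorded for one word."""
    return key, sum(values)

documents = ["the cat sat", "the cat ran", "a dog sat"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
result = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(result)  # {'the': 2, 'cat': 2, 'sat': 2, 'ran': 1, 'a': 1, 'dog': 1}
```

Because each map call touches only its own input split and each reduce call touches only one key's values, the phases can be distributed across machines with failures handled by simply re-running the affected tasks.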
Disciplines and Professions
Computer Science
Computer science is the systematic study of computation, algorithms, and information processes, encompassing both artificial and natural systems.[90] It applies principles from mathematics, logic, and engineering to design, analyze, and understand computational methods for solving problems efficiently.[91] Unlike applied fields focused solely on hardware implementation, computer science emphasizes abstraction, formal models of computation, and the limits of what can be computed.[92] The theoretical foundations trace to early 20th-century work in mathematical logic, including Kurt Gödel's 1931 incompleteness theorems, which highlighted limits in formal systems.[93] A pivotal development occurred in 1936 when Alan Turing introduced the Turing machine in his paper "On Computable Numbers," providing a formal model for mechanical computation and proving the undecidability of the halting problem.[14] This established key concepts such as universality in computation, whereby a single machine can simulate any algorithmic process given sufficient resources.
As an academic discipline, computer science formalized in the 1960s, with the term popularized early in that decade by numerical analyst George Forsythe to distinguish the field from numerical analysis and programming.[94] The first dedicated department in the United States was founded at Purdue University in 1962, with Stanford University establishing its department in 1965, marking the field's separation from electrical engineering and mathematics.[95] By the 1970s, growth in algorithms research, programming language theory, and early artificial intelligence efforts solidified its scope, driven by advances in hardware that enabled complex simulations.[96]
Core subfields include:
- Theory of computation: Examines what problems are solvable, using models like Turing machines and complexity classes (e.g., P vs. NP).[97]
- Algorithms and data structures: Focuses on efficient problem-solving methods, such as sorting algorithms whose time complexities are analyzed via Big O notation (e.g., quicksort averaging O(n log n); see the sketch after this list).[91]
- Programming languages and compilers: Studies syntax, semantics, and type systems, with paradigms like functional (e.g., Haskell) or object-oriented (e.g., C++) enabling reliable software construction.[98]
- Artificial intelligence and machine learning: Develops systems for pattern recognition and decision-making, grounded in probabilistic models and optimization (e.g., neural networks trained via backpropagation since the 1980s).[98]
- Databases and information systems: Handles storage, retrieval, and querying of large datasets, using relational models formalized by E.F. Codd in 1970 with SQL standards emerging in the 1980s.[99]
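As a concrete illustration of the average-case O(n log n) behavior cited in the list above, here is a minimal (not in-place) Python sketch of quicksort with a randomly chosen pivot:

```python
import random

def quicksort(items):
    """Recursively sort by partitioning around a randomly chosen pivot.
    Average-case time is O(n log n); consistently poor pivots degrade it to O(n^2)."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```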
