Software portability

Software portability is a design objective for source code to be easily made to run on different platforms. Portability is aided by a generalized abstraction between the application logic and system interfaces. When software with the same functionality must be produced for several computing platforms, portability is the key issue for reducing development cost.
Strategies
Software portability may involve:
- Transferring installed program files to another computer of basically the same architecture.
- Reinstalling a program from distribution files on another computer of basically the same architecture.
- Building executable programs for different platforms from source code; this is what is usually understood by "porting".
Similar systems
When operating systems of the same family are installed on two computers with processors with similar instruction sets, it is often possible to transfer the files implementing a program between them.
In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases the software is installed on a computer in a way that depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using particular installed operating system and supporting software components, and using particular drives or directories.
In some cases, software, usually described as "portable software", is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation; porting is no more than transferring specified directories and their contents. Software installed on portable mass storage devices such as USB sticks can be used on any compatible computer simply by plugging the storage device in, and stores all configuration information on the removable device. Hardware- and software-specific information is often stored in configuration files in specified locations (such as the registry on Windows).
Software which is not portable in this sense must be modified much more to support the environment on the destination machine.
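The portable-application convention described above can be sketched in a few lines of C. This is only an illustration, and the file name app.conf is invented for the example: the program keeps all its settings in a file addressed by a relative path, so the configuration travels with the software on the removable device rather than living in machine-specific locations.

```c
#include <stdio.h>

/* Sketch of the portable-application convention: all settings live
   in a file next to the program (the name app.conf is illustrative),
   instead of machine-specific locations such as the Windows registry. */
int main(void) {
    FILE *cfg = fopen("app.conf", "r");  /* relative path: moves with the app */
    char line[256];

    if (cfg == NULL) {
        puts("no app.conf found; using built-in defaults");
        return 0;
    }
    while (fgets(line, sizeof line, cfg) != NULL)
        printf("setting: %s", line);
    fclose(cfg);
    return 0;
}
```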
Different processors
As of 2011 the majority of desktop and laptop computers used microprocessors compatible with the 32- and 64-bit x86 instruction sets. Smaller portable devices use processors with different and incompatible instruction sets, such as ARM. The difference between larger and smaller devices is such that detailed software operation is different; an application designed to display suitably on a large screen cannot simply be ported to a pocket-sized smartphone with a tiny screen even if the functionality is similar.
Web applications are required to be processor-independent, so portability can be achieved by using web programming techniques, such as writing in JavaScript. Such a program can run in any common web browser. Such web applications must, for security reasons, have limited control over the host computer, especially regarding reading and writing files. Non-web programs, installed upon a computer in the normal manner, can have more control and yet achieve system portability by linking to portable libraries that provide the same interface on different systems.
Source code portability
Software can be compiled and linked from source code for different operating systems and processors if written in a programming language supporting compilation for the platforms. This is usually a task for the program developers; typical users have neither access to the source code nor the required skills.
In open-source environments such as Linux the source code is available to all. In earlier days source code was often distributed in a standardised format, and could be built into executable code with a standard Make tool for any particular system by moderately knowledgeable users if no errors occurred during the build. Some Linux distributions distribute software to users in source form. In these cases there is usually no need for detailed adaptation of the software for the system; it is distributed in a way which modifies the compilation process to match the system.
Effort to port source code
Even with seemingly portable languages like C and C++, the effort to port source code can vary considerably. The authors of UNIX/32V (1979) reported that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable."[1]
Sometimes the effort consists of recompiling the source code, but sometimes it is necessary to rewrite major parts of the software. Many language specifications describe implementation-defined behaviour (e.g. right-shifting a signed integer in C can do a logical or an arithmetic shift). Operating system functions or third-party libraries might not be available on the target system. Some functions are available on a target system but exhibit slightly different behaviour (e.g. utime() fails under Windows with EACCES when it is called for a directory). The program code itself can contain unportable things, like the paths of include files, drive letters, or the backslash as a path separator. Implementation-defined things like the byte order and the size of an int can also increase the porting effort. In practice the claim of languages like C and C++ to be "write once, compile anywhere" (WOCA) is arguable.
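The following minimal C sketch illustrates some of the implementation-defined points above; the exact output depends on the compiler and target platform rather than on the language standard.

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Right-shifting a negative signed integer is implementation-
       defined in C: the compiler may use an arithmetic (sign-
       extending) or a logical (zero-filling) shift. */
    int x = -8;
    printf("-8 >> 1     = %d\n", x >> 1);  /* commonly -4, not guaranteed */

    /* The size of int varies between platforms; code assuming a
       fixed width breaks when ported. */
    printf("sizeof(int) = %zu bytes, INT_MAX = %d\n", sizeof(int), INT_MAX);

    /* Byte order (endianness) can be probed at run time. */
    unsigned int probe = 1;
    printf("byte order  = %s-endian\n",
           *(unsigned char *)&probe ? "little" : "big");
    return 0;
}
```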
References
1. Thomas B. London and John F. Reiser (1978). A Unix operating system for the DEC VAX-11/780 computer. Bell Labs internal memo 78-1353-4.
Core Concepts
Definition
Software portability refers to the degree to which a system, product, or component can be effectively and efficiently transferred from one hardware, software, or other operational or usage environment to another without requiring significant modifications. This capability relies on abstraction layers that separate application logic from underlying system interfaces, such as operating system calls or hardware-specific features, thereby minimizing dependencies on particular platforms.[8]

Key components of software portability include the ability to operate across different operating systems (e.g., from Linux to Windows), hardware architectures (e.g., from x86 to ARM processors), and runtime environments (e.g., virtual machines or containerized setups). These elements ensure that the software maintains its functionality and performance when deployed in diverse settings, addressing variations in instruction sets, memory management, and input/output mechanisms.

Software portability is distinct from interoperability, which concerns the exchange and use of information between two or more systems, products, or components, often across different environments.[9] It also differs from compatibility, defined as the degree to which a product, system, or component can exchange information with others and perform required functions while sharing the same hardware or software environment, potentially relying on specific dependencies rather than full transferability.

A basic prerequisite for understanding software portability is recognizing computing platforms as integrated combinations of hardware (e.g., processors and peripherals), operating systems (e.g., providing core services like file handling), and supporting libraries (e.g., standard runtime components for common operations).[10] This holistic view highlights how portability targets seamless adaptation across such platform variations.
Historical Development

In the 1960s, software portability was severely limited by the dominance of proprietary mainframe systems from vendors like IBM, where each machine ran unique operating systems and software stacks that were incompatible across hardware architectures, making code reuse nearly impossible without extensive rewriting.[11] This era's batch-processing environments prioritized hardware-specific optimizations over standardization, as mainframes were custom-built for large organizations and lacked shared interfaces.[11]

The 1970s marked a pivotal shift with the development of UNIX at Bell Labs, where early porting efforts highlighted the challenges of adapting code across machines. For instance, in 1977, Thomas L. Lyon's work on porting UNIX to the Interdata 8/32 involved manual scans of source code to identify and isolate non-portable elements, such as hardcoded file paths and byte-order assumptions that differed between the PDP-11's little-endian format and the target system's big-endian layout.[12] By 1979, the UNIX/32V port to the DEC VAX, led by John Reiser and Tom London, built on these lessons, requiring similar meticulous reviews to address architecture-specific issues like word lengths and alignment, as documented in Bell Labs' internal efforts that refined C's role in facilitating such transitions.[13] The design of the C language by Dennis Ritchie in the early 1970s further advanced portability by emphasizing machine-independent constructs, enabling UNIX to be retargeted more efficiently than assembly-based predecessors.[14]

During the 1980s and 1990s, standardization efforts accelerated portability in UNIX-like environments. The POSIX (Portable Operating System Interface) standard, ratified as IEEE 1003.1 in 1988, defined common APIs, utilities, and behaviors to ensure software compatibility across diverse UNIX implementations, reducing vendor-specific divergences and promoting source code reuse.[15] The rise of open-source initiatives, including the GNU project in 1983 and Linux in 1991, amplified this by fostering collaborative development of cross-platform tools, while C's widespread adoption solidified its status as a portable systems language.[14]

From the 2000s onward, the proliferation of web and mobile platforms drove new portability paradigms. Java, introduced by Sun Microsystems in 1995, achieved platform independence through its virtual machine, allowing bytecode to run unchanged across diverse hardware via just-in-time compilation, a design rooted in addressing embedded device fragmentation.[16] Similarly, JavaScript, developed by Netscape in 1995, enabled client-side scripting in browsers, rendering web applications inherently portable across operating systems without native recompilation.[17] By 2011, x86 architectures dominated desktops while ARM powered most mobiles, underscoring ongoing challenges in binary portability and spurring advancements in cross-compilation tools within the Linux ecosystem, which supported seamless targeting of multiple architectures for open-source software.[18]
Types of Portability

Source Code Portability
Source code portability refers to the capability of compiling and linking the same or minimally modified source code across diverse operating systems, compilers, and hardware architectures, typically requiring recompilation for the target environment. This form of portability assumes access to the human-readable source files, allowing developers to adapt code as needed without relying on pre-compiled binaries. Unlike binary portability, which focuses on executable reuse, source code portability emphasizes reuse through adaptation, where the cost of porting remains lower than full redevelopment.[3]

Key techniques for achieving source code portability include adhering to standardized programming languages that minimize platform dependencies. For instance, the ANSI C standard (now ISO/IEC 9899) promotes portability by defining consistent syntax, semantics, and standard libraries that enable efficient execution across varied computing systems, provided the code avoids non-standard extensions. Similarly, interpreted languages like Python facilitate source code portability by running unmodified scripts on any platform with a compatible interpreter, leveraging a virtual machine that abstracts underlying hardware differences. Handling dependencies, such as external libraries and header files, involves using conditional compilation directives (e.g., preprocessor macros in C) or modular designs to isolate platform-specific code, ensuring core logic remains environment-agnostic.[19][20]

The effort required for porting source code often hinges on identifying and addressing platform-specific elements embedded in the codebase. Common issues include hardcoded file paths that vary between operating systems (e.g., Unix-style forward slashes versus Windows backslashes) and assumptions about data representation, such as byte order (endianness), where little-endian systems like x86 differ from big-endian ones like some PowerPC architectures. Scanning tools or manual reviews detect these, but extensive rewrites may be needed if the code relies on non-portable assumptions. Parameterizing such dependencies (replacing fixed values with configurable options) reduces porting effort, as sketched after this section.[3]

Open-source software enhances source code portability by providing unrestricted access to the full codebase, allowing community-driven modifications tailored to specific platforms. In Linux distributions, this availability integrates with automated build systems like GNU Autotools or CMake, which configure, compile, and link code across architectures by detecting environment variables and dependencies during the build process, thereby streamlining ports without proprietary barriers.
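As a minimal illustration of parameterizing a platform-specific detail, the C sketch below isolates the path separator behind a single conditional-compilation macro, so the rest of the program never mentions the platform (_WIN32 is a macro predefined by Windows compilers):

```c
#include <stdio.h>

/* Conditional compilation isolates a platform-specific detail (the
   path separator) behind one macro; the application logic stays
   environment-agnostic. */
#ifdef _WIN32
#define PATH_SEPARATOR "\\"
#else
#define PATH_SEPARATOR "/"
#endif

int main(void) {
    /* Prints config\app.conf on Windows, config/app.conf elsewhere. */
    printf("config%sapp.conf\n", PATH_SEPARATOR);
    return 0;
}
```

Only the macro definition needs attention when targeting a new platform; the code that builds paths is untouched.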
Binary and Platform Portability

Binary portability refers to the capability of transferring and executing a software application's compiled binary files across different computing environments without requiring recompilation or source code modifications. This form of portability is typically feasible only within environments that are highly similar in terms of hardware architecture and operating system (OS) configuration, as binaries are tightly coupled to specific instruction sets and runtime behaviors.

A primary challenge in binary portability arises from incompatible instruction sets between hardware platforms, such as x86 and ARM architectures, where machine code instructions designed for one processor family cannot directly execute on another without translation or emulation. For instance, binary translation techniques can map instructions from a source architecture like PowerPC to a destination like x86, but they often incur performance overheads of 33% or more due to the need to handle architecture-specific semantics, including condition codes and indirect jumps. Additionally, differences in byte order (endianness), where big-endian systems (e.g., some PowerPC variants) store multi-byte values with the most significant byte first while little-endian systems (e.g., x86) do the opposite, can lead to data corruption if not addressed, complicating direct binary transfers even on compatible processors.[21][22]

OS-specific binary formats further hinder portability, as executables are formatted according to platform conventions; for example, Windows uses the Portable Executable (PE) format for .exe files, while Linux employs the Executable and Linkable Format (ELF) for binaries, rendering them incompatible without conversion or emulation. Dynamic linking exacerbates these issues, as binaries rely on shared libraries that may differ in location, version, or API across OSes, potentially causing runtime failures if dependencies are not resolved identically. To mitigate such platform mismatches, techniques like binary translation or virtualization are employed, though they introduce emulation overheads that can slow execution by factors of 5-10 compared to native performance.[23][24][21]

Representative examples of binary portability include portable applications distributed on USB drives, which can run on similar Windows systems without installation by bundling dependencies and avoiding OS-specific registry changes, enabling seamless execution across compatible machines. In contrast, web applications leveraging JavaScript achieve a form of processor independence by executing interpreted code in browsers, bypassing traditional binary ties to hardware and OS instruction sets altogether. However, these approaches are limited; for instance, USB-based portability often fails across OS boundaries like Windows to Linux due to format incompatibilities, and while source code recompilation offers a broader alternative for dissimilar platforms, it shifts the effort away from pure binary transfer.[25][26]
Data and Format Portability

Data and format portability refers to the ability of systems to transfer, read, write, and process data files or databases across diverse software applications and hardware environments while preserving data integrity, structure, and meaning. This aspect of portability emphasizes interoperability for non-executable data elements, such as files and records, independent of the underlying software execution. According to ISO/IEC 19941:2017, data portability is achieved through the ease of moving data, often by supplying it in a format directly compatible with the target system, thereby minimizing conversion efforts and potential errors.

A primary challenge in data and format portability arises from format dependencies, where proprietary binary structures hinder cross-system access, contrasting with open standards that facilitate seamless exchange. For instance, early Microsoft Word documents in the .doc binary format often suffered from limited interoperability, as their closed specification restricted reading and editing in non-Microsoft applications, leading to data loss or corruption during transfers. This issue was mitigated with the 2007 transition to the .docx format, based on the Office Open XML (OOXML) standard, which uses XML for structured, human-readable content to enhance compatibility across platforms. Similarly, character encoding discrepancies, such as the limitations of ASCII (restricted to 128 basic Latin characters), impede portability for multilingual data; the adoption of Unicode, particularly UTF-8 encoding, addresses this by supporting over 159,000 characters across scripts (as of Unicode 17.0) while maintaining backward compatibility with ASCII, ensuring consistent rendering in global software environments.[27]

Open standards play a crucial role in promoting data and format portability by reducing vendor lock-in and enabling broad adoption. Formats like CSV, defined in RFC 4180, provide a simple, text-based structure for tabular data exchange, widely supported in spreadsheet and database tools for reliable import/export without proprietary constraints. XML, as specified by the W3C, offers extensible markup for complex, hierarchical data, allowing self-describing documents that processors can validate and transform across systems. JSON, outlined in RFC 8259, serves as a lightweight alternative for web-based data interchange, using UTF-8 for universal compatibility. For documents, the PDF format under ISO 32000 ensures consistent viewing and printing regardless of originating software, preventing alterations and lock-in by providing an open, device-independent specification. This aspect is reinforced by regulations like the European Union's General Data Protection Regulation (GDPR, Article 20), which grants individuals the right to receive personal data in a structured, commonly used, and machine-readable format for transfer to another controller.[28][29][30][31][32]
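A common low-level instance of format portability is fixing the byte order of binary data at the file-format level. The following C sketch (the file name and helper names are invented for the example) writes and reads a 32-bit value in big-endian order, so the file's contents are interpreted identically on little- and big-endian hosts:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Fix the on-disk byte order (big-endian here), independent of the
   host's native endianness. */
static void put_u32_be(uint32_t v, FILE *f) {
    fputc((v >> 24) & 0xFF, f);
    fputc((v >> 16) & 0xFF, f);
    fputc((v >>  8) & 0xFF, f);
    fputc(v & 0xFF, f);
}

static uint32_t get_u32_be(FILE *f) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v = (v << 8) | (uint32_t)fgetc(f);
    return v;
}

int main(void) {
    FILE *f = fopen("value.bin", "wb");
    if (f == NULL) return 1;
    put_u32_be(0x12345678u, f);          /* written MSB first */
    fclose(f);

    f = fopen("value.bin", "rb");
    if (f == NULL) return 1;
    /* Reads back as 0x12345678 on any host. */
    printf("read back: 0x%08" PRIX32 "\n", get_u32_be(f));
    fclose(f);
    return 0;
}
```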
Strategies for Achieving Portability

Abstraction and Modular Design
Abstraction layers serve as a fundamental strategy in software portability by encapsulating platform-specific details behind a unified interface, allowing core application logic to remain independent of underlying operating systems or hardware.[8] This approach involves designing APIs that abstract away OS-dependent functions, such as file handling or threading, thereby enabling developers to write portable code that interacts solely with the abstraction rather than direct system calls. For instance, an operating system abstraction layer (OSAL) can unify disparate OS architectures into a common API, facilitating seamless application deployment across environments like wireless sensor networks.[8]

Modular architecture complements abstraction by dividing software into loosely coupled components, where each module handles a specific function and interfaces with others through well-defined boundaries.[33] This separation isolates platform dependencies within dedicated modules, making it easier to replace or adapt them without affecting the overall system.[34] In modular designs, core logic is decoupled from dependencies, promoting reusability and reducing the scope of changes required during porting.[35]

Key techniques for implementing these principles include preprocessor directives for conditional compilation, which allow code to include platform-specific implementations only when needed. In C, directives like #ifdef enable selective inclusion of code blocks based on compiler flags, such as defining different I/O routines for various systems.[36] This approach maintains a single codebase while accommodating variations, as seen in widely ported libraries where conditional blocks handle OS differences.[37]
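To make the abstraction-layer idea concrete, here is a hedged sketch of a tiny OSAL-style function (the name portable_sleep_ms is invented for illustration): callers see one interface, and #ifdef selects the Win32 or POSIX implementation at compile time.

```c
#include <stdio.h>

/* A minimal OSAL-style sketch: one portable interface, with the
   platform-specific implementation chosen at compile time. */
#ifdef _WIN32
#include <windows.h>
static void portable_sleep_ms(unsigned ms) {
    Sleep(ms);                            /* Win32 call */
}
#else
#include <time.h>
static void portable_sleep_ms(unsigned ms) {
    struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);                 /* POSIX call */
}
#endif

int main(void) {
    puts("sleeping for 100 ms...");
    portable_sleep_ms(100);               /* caller never sees the OS */
    puts("done");
    return 0;
}
```

Because the OS dependency is confined to this one function, porting to a further platform means adding one more conditional branch rather than touching every caller.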
Design patterns, such as the facade pattern, further enhance uniformity by providing a simplified interface to a complex subsystem, hiding portability concerns from client code. The facade acts as a gateway, routing calls to appropriate platform-specific implementations while presenting a consistent API.[38] This pattern supports source code portability by ensuring that higher-level modules remain agnostic to underlying variations.[39]
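As a schematic illustration of the facade idea (storage_save is a hypothetical name, not a library API), a single entry point can hide multi-step subsystem work and its individual failure modes from client code:

```c
#include <stdio.h>

/* Facade sketch: one simple call hides the subsystem steps
   (open, write, flush, close) behind a uniform interface. */
static int storage_save(const char *path, const char *data) {
    FILE *f = fopen(path, "w");
    if (f == NULL) return -1;
    if (fputs(data, f) == EOF || fflush(f) != 0) {
        fclose(f);
        return -1;
    }
    return fclose(f) == 0 ? 0 : -1;
}

int main(void) {
    /* Client code never touches FILE handles or step-by-step errors;
       a platform-specific variant of storage_save could be swapped in
       without changing this caller. */
    return storage_save("settings.txt", "theme=dark\n") == 0 ? 0 : 1;
}
```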
The benefits of abstraction and modular design include significantly reduced porting effort, as changes are confined to isolated layers or modules rather than pervasive modifications.[40] For example, the UNIX philosophy emphasizes writing small, modular tools that perform single tasks well and compose via standard interfaces, influencing portable software by minimizing assumptions about the environment and leveraging text streams for interoperability. This approach assumes minimal platform differences, such as relying on standard I/O functions defined in POSIX for input/output operations, which ensures broad compatibility across UNIX-like systems.
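In that spirit, a classic UNIX-style filter touches only the standard I/O interface, which is what makes it both portable across POSIX-like systems and composable with other tools; a minimal sketch:

```c
#include <ctype.h>
#include <stdio.h>

/* A UNIX-style filter: read text from stdin, write the uppercased
   text to stdout. Using only standard I/O lets it compose with
   other tools via pipes, e.g.:
       cat notes.txt | ./upper | sort */
int main(void) {
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}
```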
