SPECint
SPECint is a computer benchmark specification for CPU integer processing power, maintained by the Standard Performance Evaluation Corporation (SPEC). It is the integer-performance component of the SPEC CPU test suites. The first SPEC suite was released in 1989, followed by CPU92, CPU95, CPU2000, and CPU2006. The current standard, SPEC CPU 2017, comprises the SPECspeed and SPECrate suites.
SPEC INT 2006
CPU2006 is a set of benchmarks designed to test the CPU performance of a modern server computer system. It is split into two components: CINT2006 for integer testing and CFP2006 (SPECfp) for floating-point testing.
SPEC defines a reference runtime for each of the 12 benchmark programs; for SPECint2006 these reference times range from several thousand to over twenty thousand seconds. Each benchmark is timed on the system under test, the measured time is compared with the reference time, and a ratio is computed. That ratio becomes the SPECint score for that test. (This differs from the rating in SPECint2000, which multiplies the ratio by 100.)
As an example for SPECint2006, consider a processor which can run 400.perlbench in 2000 seconds. The time it takes the reference machine to run the benchmark is 9770 seconds.[1] Thus the ratio is 4.885. Each ratio is computed, and then the geometric mean of those ratios is computed to produce an overall value.
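The scoring arithmetic can be illustrated with a short, hedged sketch. The Python snippet below (not SPEC's tooling; the perlbench reference time comes from the example above, while the other reference times and all measured times are made-up placeholders) computes per-benchmark ratios and their geometric mean:

```python
import math

# (reference_time_s, measured_time_s) per benchmark.
# 400.perlbench's reference time (9770 s) matches the example above;
# the remaining values are illustrative placeholders, not official numbers.
results = {
    "400.perlbench": (9770, 2000),
    "401.bzip2": (9600, 2400),
    "403.gcc": (8000, 1900),
}

# Ratio for each benchmark: reference time divided by measured time.
ratios = [ref / measured for ref, measured in results.values()]

# Overall score: geometric mean of the per-benchmark ratios.
score = math.prod(ratios) ** (1 / len(ratios))
print(f"SPECint-style score: {score:.2f}")
```

A full SPECint2006 result would use all 12 benchmarks; the geometric mean keeps any single outlier ratio from dominating the composite score.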
Background
For a fee, SPEC distributes source code files to users wanting to test their systems. These files are written in a standard programming language and are compiled for each particular CPU architecture and operating system. Thus, the performance measured is that of the CPU, RAM, and compiler; it does not reflect I/O, networking, or graphics.
Two metrics are reported for a particular benchmark, "base" and "peak"; the difference between the two numbers comes down to compiler options. Because the SPEC benchmarks are distributed as source code, it is up to the party performing the test to compile them. There is agreement that the benchmarks should be compiled the way a user would compile a program, but users compile programs in no single consistent way, so SPEC defines two reference points. Base has the stricter set of compilation rules: less optimization may be done, the compiler flags must be the same for each benchmark, applied in the same order, and limited in number. Base is therefore closest to how a user would compile a program with standard flags. The peak metric may be produced with maximum compiler optimization, even different optimizations for each benchmark, and so represents maximum system performance achieved through full compiler tuning.
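As a rough illustration of the base/peak distinction, the sketch below compiles each benchmark twice with gcc: once with a single shared flag set (base-like) and once with per-benchmark tuning (peak-like). The file names and flag choices are hypothetical, and real SPEC submissions are built by SPEC's own toolchain rather than a script like this.

```python
import subprocess

# Hypothetical benchmark sources; real SPEC builds are driven by SPEC's tools.
BENCHMARKS = {"perlbench": "perlbench.c", "bzip2": "bzip2.c"}

# Base-like: one flag set, identical and identically ordered for all benchmarks.
BASE_FLAGS = ["-O2"]

# Peak-like: per-benchmark tuning is permitted (illustrative gcc flags).
PEAK_FLAGS = {
    "perlbench": ["-O3", "-funroll-loops"],
    "bzip2": ["-O3", "-fomit-frame-pointer"],
}

def build(name, source, flags, suffix):
    """Compile one benchmark binary with the given optimization flags."""
    subprocess.run(["gcc", *flags, "-o", f"{name}.{suffix}", source], check=True)

for name, src in BENCHMARKS.items():
    build(name, src, BASE_FLAGS, "base")        # same flags for every benchmark
    build(name, src, PEAK_FLAGS[name], "peak")  # benchmark-specific tuning
```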
SPECint tests are carried out on a wide range of hardware, with results typically published for the full range of system-level implementations employing the latest CPUs. For SPECint2006, these include Intel and AMD x86 and x86-64 processors, Sun SPARC, IBM Power, and IA-64 CPUs. Despite this range of system capabilities, the SPECint benchmark is usually run on only a single CPU, even if the system has many CPUs; if a single CPU has multiple cores, only a single core is used, and hyper-threading is also typically disabled.
A more complete system-level benchmark that allows all CPUs to be used is known as SPECint_rate2006, also called "CINT2006 Rate".
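The speed/rate distinction can be sketched in a few lines: a speed-style run times one copy of a job, while a rate-style run launches one copy per CPU and measures aggregate throughput. The workload function below is a stand-in CPU-bound loop, not a SPEC benchmark.

```python
import multiprocessing as mp
import time

def workload(_):
    # Stand-in for a SPEC benchmark: a CPU-bound integer loop.
    total = 0
    for i in range(10_000_000):
        total += i * i
    return total

if __name__ == "__main__":
    # Speed-style: one copy; the elapsed time measures latency.
    start = time.perf_counter()
    workload(0)
    print(f"speed run: {time.perf_counter() - start:.2f} s")

    # Rate-style: one copy per CPU; throughput is copies finished per unit time.
    copies = mp.cpu_count()
    start = time.perf_counter()
    with mp.Pool(copies) as pool:
        pool.map(workload, range(copies))
    print(f"rate run: {copies} copies in {time.perf_counter() - start:.2f} s")
```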
Benchmarks
The SPECint2006 test suite consists of 12 benchmark programs, designed to test exclusively the integer performance of the system.
The benchmarks are:[2]
| Benchmark | Language | Category | Description |
|---|---|---|---|
| 400.perlbench | C | Perl Programming Language | Derived from Perl V5.8.7. The workload includes SpamAssassin, MHonArc (an email indexer), and specdiff (SPEC's tool that checks benchmark outputs). |
| 401.bzip2 | C | Compression | Julian Seward's bzip2 version 1.0.3, modified to do most work in memory, rather than doing I/O. |
| 403.gcc | C | C Compiler | Based on gcc Version 3.2, generates code for Opteron. |
| 429.mcf | C | Combinatorial Optimization | Vehicle scheduling. Uses a network simplex algorithm (which is also used in commercial products) to schedule public transport. |
| 445.gobmk | C | Artificial Intelligence: go playing | Plays the game of Go, a simply described but deeply complex game. |
| 456.hmmer | C | Search Gene Sequence | Protein sequence analysis using profile hidden Markov models (profile HMMs) |
| 458.sjeng | C | Artificial Intelligence: chess playing | A highly ranked chess program that also plays several chess variants. |
| 462.libquantum | C | Physics: Quantum Computing | Simulates a quantum computer, running Shor's polynomial-time factorization algorithm. |
| 464.h264ref | C | Video Compression | A reference implementation of H.264/AVC that encodes a video stream using two parameter sets. The H.264/AVC standard is expected to replace MPEG-2. |
| 471.omnetpp | C++ | Discrete Event Simulation | Uses the OMNet++ discrete event simulator to model a large Ethernet campus network. |
| 473.astar | C++ | Path-finding Algorithms | Pathfinding library for 2D maps, including the well known A* algorithm. |
| 483.xalancbmk | C++ | XML Processing | A modified version of Xalan-C++, which transforms XML documents to other document types. |
References
[edit]- ^ "The SPEC Benchmarks". 2003-02-03. Retrieved 2008-09-01.
- ^ "SPEC CINT2006 Benchmarks".
External links
- SPEC Website
- CINT2006 (Integer Component of SPEC CPU2006)
- Published results of SPECint2006
- Description of each benchmark
SPECint
Overview
Definition and Purpose
SPECint is the integer computation component of the SPEC CPU benchmark suite, designed to evaluate the performance of computer processors on non-floating-point intensive workloads. It consists of a set of standardized benchmarks that simulate real-world applications involving integer operations, such as data compression, XML processing, and artificial intelligence simulations like game tree searches. These benchmarks focus on compute-intensive tasks that stress the CPU's integer arithmetic, branch prediction, and memory subsystem capabilities, providing a measure of how effectively a system handles such operations without relying on floating-point processing.[2]

The primary purpose of SPECint is to deliver vendor-neutral and reproducible performance metrics that enable objective comparisons between different CPU architectures and systems in integer-heavy environments. By establishing a common framework, it assists hardware vendors, system integrators, and procurement teams in making informed decisions about processor selection and system design, particularly for applications where integer performance is critical, such as software compilation or network packet processing. This standardization ensures that results are comparable across diverse hardware configurations, promoting transparency and trust in performance claims.[6]

A core principle of SPECint is its emphasis on workloads derived from actual user applications rather than abstract synthetic tests, which helps capture realistic performance characteristics under controlled conditions. To maintain fairness, strict run rules govern benchmark execution, requiring the use of unmodified source code provided by SPEC, standard compiler optimizations without benchmark-specific tweaks, and multiple runs to ensure repeatability. These guidelines, including full disclosure of hardware and software configurations, prevent misleading optimizations and facilitate verification of results by the community.[6]

Relation to SPEC Organization
The Standard Performance Evaluation Corporation (SPEC) was founded in October 1988 by leading hardware vendors, including Apollo, Hewlett-Packard, MIPS Computer Systems, and Sun Microsystems, with the primary goal of developing standardized, industry-agreed benchmarks to replace earlier flawed metrics such as Dhrystone and Whetstone that lacked realism and portability.[7] This initiative addressed widespread dissatisfaction in the computing industry over inconsistent and vendor-biased performance claims, establishing SPEC as a neutral authority for objective evaluations.[8] As a non-profit consortium, SPEC operates collaboratively with more than 120 members spanning hardware and software vendors, educational institutions, and research organizations, including prominent entities like Intel Corporation, Advanced Micro Devices (AMD), and IBM.[9]

SPECint, the integer-intensive component of SPEC's CPU benchmarks, falls under the governance of the SPEC CPU subcommittee within the Open Systems Group (OSG), which coordinates development, updates, and standardization efforts among members.[10] This subcommittee ensures that benchmarks evolve to reflect contemporary computing workloads while maintaining vendor neutrality. SPEC maintains SPECint through a structured process of periodic suite releases, such as those in the SPEC CPU family, where new benchmarks are selected and validated collaboratively before public availability.[11] Validated results are published exclusively on spec.org following a mandatory submission and peer-review process that enforces strict compliance with run rules.[12] Compliance is further upheld via licensing agreements required for benchmark access and use, coupled with detailed audits during result reviews to verify hardware configurations, compilation methods, and execution fidelity.[13][6]

A key aspect of this framework is that official SPECint results require submission to SPEC for approval prior to any public disclosure, thereby mitigating cherry-picking by ensuring all reported metrics undergo independent validation and are presented comprehensively.[14] This policy promotes transparency and comparability across systems, reinforcing SPEC's role in fostering trustworthy performance assessments.

Historical Development
Origins and Early Suites (1988–1995)
The Standard Performance Evaluation Corporation (SPEC) was founded in 1988 by leading computer vendors, including Apollo Computer, DEC, Hewlett-Packard, IBM, MIPS Computer Systems, Pyramid Technology, and Sun Microsystems, in response to widespread dissatisfaction with misleading and non-standardized performance claims in the industry.[8] These early benchmarks often exaggerated capabilities or lacked comparability across systems, prompting the consortium to develop portable, source-code-based tests for evaluating compute-intensive workloads on UNIX systems.[7] The organization's inaugural release, known as the SPEC Benchmark Suite version 1.0 (later retroactively called SPEC89), arrived in October 1989 and included 10 benchmarks total: four integer-focused programs written in C (eqntott for logic simulation, espresso for logic minimization, gcc for compilation, and li for Lisp interpretation) and six floating-point benchmarks in Fortran.[7] Performance was measured relative to a VAX 11/780 reference machine (normalized to 1.0), with the overall SPECmark89 score calculated as the geometric mean of the execution-time ratios across all 10 benchmarks, emphasizing single-threaded CPU and memory subsystem performance.[15]

By 1992, rapid advancements in processor architectures and the need for more targeted evaluation led SPEC to obsolete the unified SPEC89 suite and introduce separate integer and floating-point categories in the SPEC92 release (January 1992).[16] The integer component, CINT92 (later termed SPECint92), comprised six benchmarks: compress for data compression, eqntott for logic simulation, espresso for logic minimization, gcc for compilation, li for Lisp interpretation, and sc, a spreadsheet calculator.[15] This separation allowed distinct SPECint92 and SPECfp92 scores, each computed as the geometric mean of the normalized time ratios for its respective benchmarks against the VAX 11/780 reference, so that integer and floating-point performance were no longer folded into the single composite SPECmark figure.[17] SPEC also introduced SPECrate metrics for throughput on multiprocessor systems, such as SPECrate_int92, which measured how much integer work a system could complete in a given time when running multiple concurrent copies of the benchmarks.[15] These changes addressed criticisms of SPEC89's short runtimes (averaging around 2.5 billion dynamic instructions) and potential for cache residency, aiming for broader applicability to emerging RISC-based workstations.[8]

The SPEC95 suite, released in June 1995, further refined the integer benchmarks to reflect evolving workloads and hardware, retiring SPEC92 by the end of 1996 while maintaining backward compatibility in methodology.[18] CINT95 included eight integer benchmarks in C: 099.go for AI game playing, 124.m88ksim for Motorola 88000 simulation, 126.gcc for compilation, 129.compress for data compression, 130.li for Lisp interpretation, 132.ijpeg for image processing, 134.perl for scripting, and 147.vortex for object-oriented database operations.[18] Run times were extended significantly (up to 520 billion dynamic instructions per benchmark) to minimize variability, incorporate larger datasets, and better stress memory hierarchies and compiler optimizations, while stricter portability rules based on POSIX and ANSI standards ensured cross-platform consistency.[16] The reference machine shifted to a Sun SPARCstation 10 Model 40 (40 MHz SuperSPARC, 64 MB memory), with SPECint95 and SPECint_base95 scores computed as geometric means of peak and base (restricted-optimization) ratios, respectively; the composite SPECmark metric was fully retired in favor of the separate integer and floating-point scores.[17] These updates established foundational principles for SPECint, emphasizing resistance to benchmark-specific tuning and real-world relevance in integer computing tasks.[18]

SPEC CPU2000
SPEC CPU2000, released on December 30, 1999, marked a significant evolution in CPU benchmarking, introducing 12 dedicated integer benchmarks under the CINT2000 component, designed to evaluate compute-intensive integer performance across a broader range of real-world applications.[8] These benchmarks, such as 164.gzip for data compression, 175.vpr for FPGA circuit placement and routing, and 252.eon for 3D visualization rendering, simulated larger-scale workloads including text processing and graphics-intensive tasks, replacing the older SPEC CPU95 suite entirely with no overlapping programs.[19] The selection emphasized portability and realism, with benchmarks written primarily in C and one in C++, totaling over 500,000 lines of code across the integer set, to better reflect contemporary software demands such as compilers (176.gcc) and scripting (253.perlbmk).[20]

A key refinement in SPEC CPU2000 was the introduction of the SPECint_base2000 and SPECint_peak2000 metrics, which provided standardized ways to compare systems while accommodating varying optimization strategies.[20] The base metric enforced portable compilation flags and a single high-optimization level across all benchmarks, ensuring fair, repeatable results without architecture-specific tweaks, whereas the peak metric permitted individualized tuning, such as processor-specific flags or feedback-directed optimization, to capture maximum potential performance.[20] This dual approach balanced conservatism with realism, with peak scores exceeding base by margins that varied with the hardware, as seen in early results on Alpha and UltraSPARC systems.[20]

To mitigate the clock-speed dominance observed in earlier benchmarks, where short runtimes favored higher-frequency processors over architectural efficiency, SPEC CPU2000 scaled its workloads to execute billions of instructions per benchmark, substantially extending execution times on reference hardware.[21] This adjustment promoted measurement of sustained performance, including memory subsystem interactions, rather than transient startup effects. The suite was retired on February 24, 2007, with no further results accepted; by that time, published scores had begun reflecting the industry's transition from single-core dominance to early multi-core configurations, particularly through the SPECint_rate2000 metric, which evaluated throughput with multiple concurrent instances.[22]

SPEC CPU2006
SPEC CPU2006, released on August 24, 2006, by the Standard Performance Evaluation Corporation (SPEC), introduced the SPECint2006 benchmark suite as a standardized tool for evaluating integer-intensive CPU performance in compute-heavy workloads.[23] This suite comprises 12 benchmarks written primarily in C and C++, targeting diverse applications such as scripting, compression, compilation, and simulation to reflect real-world integer computation demands.[24] Representative examples include 400.perlbench, which simulates Perl scripting tasks involving email processing with tools like SpamAssassin and MHonArc, and 403.gcc, a C-compiler benchmark that generates code for a specific processor architecture, emphasizing compilation efficiency.[25] These benchmarks were designed to stress CPU integer units, memory hierarchies, and compiler optimizations, providing a more comprehensive assessment than prior suites by incorporating larger, more complex workloads.[26]

A key innovation in SPEC CPU2006 was the distinction between the speed metric (SPECint2006) and the throughput metric (SPECint_rate2006), enabling targeted evaluations of single-threaded latency versus multi-processor throughput.[23] The speed metric measures the time to complete a single instance of each benchmark, focusing on per-task execution speed for latency-sensitive applications, while SPECint_rate2006 runs multiple concurrent copies of the benchmarks to gauge system throughput on multi-core or multi-processor configurations.[27] This separation allowed vendors to highlight strengths in both uniprocessor efficiency and parallel scalability, with results normalized as geometric means of individual benchmark ratios.[28]

Testing protocols require three runs per benchmark, with the median execution time taken to ensure repeatability, and workloads are scaled to run for extended periods on reference hardware to stress CPU and memory systems adequately.[26] Performance scores are derived from ratios relative to reference execution times on a baseline system, a Sun Microsystems Ultra Enterprise 2 workstation equipped with a 296 MHz UltraSPARC II processor, providing a consistent scale for comparisons across hardware generations.[29] Although SPEC CPU2006 laid groundwork for efficiency considerations by encouraging documentation of power-related configurations in submissions, formal power metrics were not integrated until subsequent suites.[27]

SPEC CPU2017
SPEC CPU2017, released in June 2017, represents the latest iteration of the SPEC CPU benchmark suite and serves as the current standard for evaluating integer compute-intensive performance through its SPECint component.[30] The SPECint2017 suite comprises 10 integer workloads, each available in both rate (SPECrate2017_int) and speed (SPECspeed2017_int) variants, covering diverse applications such as scripting, compression, and AI algorithms.[31] Representative examples include 500.perlbench_r, which benchmarks Perl scripting language interpretation, and 525.x264_r, which assesses H.264/AVC video encoding tasks. A key advancement in SPECint2017 is the introduction of larger input datasets, often up to 10 times the size of those in prior suites like SPEC CPU2006, to better reflect modern workload demands and stress memory hierarchies more realistically.[32] The suite also enhances multi-core support through OpenMP integration in the speed variants, allowing evaluation of thread-level scalability in parallel environments.[33] In addition, SPEC CPU2017 can optionally measure and report energy consumption alongside performance scores, addressing power-efficiency concerns in data centers.[34] Version 1.1 of SPEC CPU2017, released in September 2019, formalized and expanded these power-measurement capabilities, enabling comprehensive reporting of energy metrics alongside traditional performance scores.[35] As of November 2025, SPECint2017 remains the active benchmark for integer evaluations, with SPEC publishing results from vendors worldwide; development of a successor, SPEC CPU v8, is ongoing, with candidate benchmarks under evaluation following the closure of submissions in 2023.[5][11] Its relevance persists in emerging areas, including integer-intensive tasks in AI and machine learning, such as the tree-search algorithms exemplified by 541.leela_r for Go game AI.

Benchmark Components
Integer Workloads in SPEC CPU2006
The integer workloads in SPEC CPU2006, collectively known as the CINT2006 suite, consist of 12 benchmarks derived from real-world applications to evaluate compute-intensive integer performance across diverse domains such as scripting, compression, compilation, and simulation. These benchmarks emphasize integer arithmetic, control flow, and memory access patterns while avoiding significant floating-point operations, distinguishing them from the floating-point suite. The benchmarks were selected for representativeness and lack of overlap with floating-point tasks; completing the full suite requires approximately 10-20 hours on reference hardware, with inputs scaled for substantially longer execution times than prior versions.[26] The benchmarks are primarily implemented in ANSI C or C++, with input datasets designed to stress system components like processors, memory hierarchies, and compilers. Below is a description of each:

- 400.perlbench: This benchmark simulates the execution of a cut-down version of the Perl v5.8.7 interpreter, including third-party modules, processing scripts like SpamAssassin for email filtering and MHonArc for mailing-list archives. Written in ANSI C, it uses no file I/O and focuses on string manipulation and regular expressions; the reference input involves multiple scripts totaling around half a million lines of effective code.[26]
- 401.bzip2: It tests the bzip2 v1.0.3 compression and decompression algorithms by processing data entirely in memory, using three blocking factors on six input files (including JPEG images, binary executables, tar archives, HTML text, and a mixed source code collection). Implemented in ANSI C without file I/O, the benchmark highlights data compression efficiency on large datasets up to several megabytes.[26]
- 403.gcc: Based on the GNU C compiler with optimizations enabled, this benchmark compiles nine preprocessed C files (.i inputs) of varying sizes, generating x86-64 assembly code. Written in C, it features altered inlining decisions and high memory usage, simulating real compilation workloads with inputs ranging from small test files to larger programs.[26]
- 429.mcf: This optimizes vehicle routing and scheduling using a network simplex algorithm on timetabled and deadhead trip data, requiring large memory footprints (860 MB in 32-bit mode, 1.7 GB in 64-bit). Implemented in ANSI C, the reference input models complex supply chain scenarios with extensive graph structures.[26]
- 445.gobmk: Derived from the GNU Go program, it performs tactical analysis of Go game positions using AI heuristics on Smart Game Format (.sgf) files. Written in C, the benchmark evaluates multiple game states without portability issues, focusing on pattern recognition and search algorithms across various input sizes.[26]
- 456.hmmer: This searches protein databases using profile Hidden Markov Models (HMMs), employing functions like hmmsearch and hmmcalibrate on inputs such as the sprot41.dat sequence database and nph3.hmm model. Implemented in C, it simulates bioinformatics tasks with large biological datasets emphasizing sequence alignment.[26]
- 458.sjeng: It conducts game tree searches for chess and variants like Shatranj using alpha-beta pruning and transposition tables on nine Forsyth-Edwards Notation (FEN) positions. Written in ANSI C, the benchmark requires at least 32-bit integers and tests AI decision-making under computational constraints.[26]
- 462.libquantum: Simulating a quantum computer, this implements Shor's algorithm to factorize a command-line specified integer, modeling decoherence effects. Using C99, the reference input targets a modestly sized number, focusing on quantum bit operations and modular exponentiation.[26]
- 464.h264ref: Based on the H.264/AVC video compression reference software v9.3, it encodes video sequences using baseline and main profiles on YUV-format inputs like the 120-frame Foreman clip and 171-frame Soccer sequence. Written in C, the benchmark stresses integer-based motion estimation and transform coding for multimedia processing.[26]
- 471.omnetpp: This discrete-event simulation models an Ethernet network backbone with 8,000 computers and 900 switches, using NED topology files and omnetpp.ini configurations. Implemented in C++, the reference workload simulates packet routing in a large-scale campus environment.[26]
- 473.astar: Employing three variants of the A* pathfinding algorithm for 2D game AI, it navigates binary map files representing terrains with obstacles. Written in C++, the benchmark processes grid-based searches, with inputs scaled to test heuristic efficiency in route optimization (a minimal illustrative sketch of A* follows this list).[26]
- 483.xalancbmk: A modified XSLT processor using Xerces-C++ v2.5.0, it transforms large XML documents with XSL stylesheets, such as converting DocBook to HTML. Implemented in C++, the reference input involves substantial XML parsing and tree manipulation for data processing tasks.[26]
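To make concrete the kind of work 473.astar performs (see the note in that item), here is a minimal A* search over a small 4-connected 2D grid. This is an illustration of the algorithm family only; SPEC's benchmark uses its own C++ library and large binary map inputs.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g  # length of the shortest path
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry; a shorter route was already found
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> 6 moves around the obstacle wall
```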
Integer Workloads in SPEC CPU2017
The integer workloads in SPEC CPU2017 comprise 10 benchmarks designed to evaluate compute-intensive integer processing across a range of contemporary applications, reflecting advances in areas such as multimedia, AI, simulation, and data processing. These benchmarks form the core of both the SPECrate 2017 Integer suite (throughput-oriented, "_r" suffix) and the SPECspeed 2017 Integer suite (response-time-oriented, "_s" suffix), with updates to accommodate 2010s-era technologies, including multi-core processors and increased memory demands, up to 256 GB in high-end configurations for running multiple instances or large datasets. Unlike earlier suites, these workloads incorporate modern elements such as advanced video codecs and AI-driven search algorithms, emphasizing scalability on multi-socket systems while minimizing I/O to focus on CPU performance.[2][36]

The benchmarks draw from real-world inspirations, simulating tasks in scripting, compilation, optimization, networking, encoding, AI, puzzle solving, and compression. Below is a catalog of the 10 integer benchmarks (rate-suite numbering), highlighting their inspirations and key computational demands:

| Benchmark | Real-World Inspiration | Computational Demands |
|---|---|---|
| 500.perlbench_r | Interpreted Perl scripting for text processing (e.g., email filtering with SpamAssassin). | High instruction counts (over 1.7 billion per run) involving string manipulation, regular expressions, and hash computations; demands efficient interpreter overhead handling.[37][38] |
| 502.gcc_r | GCC compiler for C code generation in software development. | Intensive code parsing, optimization, and assembly; features billions of instructions with complex control flow and aliasing challenges.[2][38] |
| 505.mcf_r | Network flow optimization for route planning (e.g., vehicle scheduling in logistics). | Solves minimum-cost flow problems using graph algorithms; low IPC (around 0.9) due to branch-heavy loops and high cache miss rates (up to 66% L2 misses).[2][38] |
| 520.omnetpp_r | Discrete event simulation for computer networks (e.g., protocol modeling). | Event scheduling and queue management in C++; moderate IPC with emphasis on pointer chasing and dynamic memory allocation.[2][38] |
| 523.xalancbmk_r | XSLT processing, transforming XML documents into HTML via the Xalan-C++ processor. | String handling and document-tree traversal in C++; pointer-heavy code with irregular memory access and frequent dynamic allocation.[2] |
| 525.x264_r | H.264 video encoding for multimedia compression, simulating multiple resolutions. | Motion estimation and discrete cosine transforms; high IPC (over 3.0) with intensive loop unrolling and SIMD operations on frame data.[2][38] |
| 531.deepsjeng_r | Chess AI (Deep Sjeng) using alpha-beta game-tree search; a minimal alpha-beta sketch follows this table. | Branch-and-bound search with transposition tables; high L3 cache misses (around 68%) from irregular access patterns in position analysis.[2][38] |
| 541.leela_r | Go-playing AI (Leela) using Monte Carlo tree search. | Tree search and position evaluation dominated by integer arithmetic; branch-heavy code with data-dependent, irregular memory access.[2] |
| 548.exchange2_r | Recursive puzzle generation (Sudoku-style grids). | Array manipulations and recursive generation; high store instructions (over 15%) with a low memory footprint but intensive combinatorial exploration.[2][38] |
| 557.xz_r | Data compression using the LZMA algorithm for file archiving. | Dictionary-based match finding and range coding; high instruction throughput with emphasis on bit-level operations and buffer management.[2][38] |
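As referenced in the 531.deepsjeng_r row, game-tree benchmarks revolve around alpha-beta pruning. The sketch below shows the technique on a toy tree; the `children` and `evaluate` callbacks are stand-ins for a real engine's move generator and position evaluator, not SPEC's code.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree."""
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)  # leaf or depth limit: score the position
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff
        return value

# Toy game tree: inner nodes are strings, leaves are integer scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda s: tree.get(s, []) if isinstance(s, str) else []
evaluate = lambda s: s if isinstance(s, int) else 0
print(alphabeta("root", 2, float("-inf"), float("inf"),
                True, children, evaluate))  # -> 3
```

On this toy tree the second subtree is cut off after its first leaf (2), because the maximizer already has a guaranteed 3; such pruning is what lets engines like Deep Sjeng search deeply within a time budget.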
