List of compilers
from Wikipedia

This page lists notable software that can be classified as a compiler, a compiler generator, an interpreter, a translator, a tool foundation, an assembler, an automatable command-line interface (shell), or similar.

Ada compilers

Compiler Author Windows Unix-like Other OSs License type
PTC ObjectAda[citation needed] PTC, Inc. Yes Yes Yes Proprietary
GCC (GNAT) GNU Project Yes Yes Yes GPLv3+
GNAT LLVM[citation needed] AdaCore Yes Yes Yes GPLv3+
GreenHills Ada Optimizing Compiler[citation needed] Green Hills Software Yes Yes No Proprietary
PTC ApexAda[citation needed] PTC, Inc. No Yes Yes Proprietary
SCORE Ada[citation needed] DDC-I Yes Yes Yes Proprietary
Symbolics Ada[citation needed] Symbolics No No Symbolics Genera Proprietary
Tandem Ada[1] Tandem Computers No Yes Guardian, NonStop Kernel Proprietary

ALGOL 60 compilers

Compiler Author Windows Unix-like Other OSs License type
ALGOL 60 RHA (Minisystems) Ltd No No DOS, CP/M Free for personal use
ALGOL 60 (Whetstone)[citation needed] Randell and Russell No No KDF9 Unknown
ALGOL 60 (Kidsgrove)[citation needed] Hawkins and Huxtable No No KDF9 Unknown
Persistent S-algol Paul Cockshott Yes No DOS Copyright only
MCP Burroughs No No MCP Proprietary

ALGOL 68 compilers


cf. ALGOL 68's specification and implementation timeline

Name Year Purpose State Description Target CPU Licensing Implementation Language
ALGOL 68[citation needed] 1968 Standard Intl IFIP WG 2.1 Final Report Specification – August ACM
ALGOL 68-R 1970 Military UK ICL 1900 ALGOL 60
EPOS ALGOL[citation needed] 1971 Scientific
ALGOL 68RS 1972 Military UK Portable compiler system ICL 2900/Series 39, Multics, VMS & C generator (1993) Crown Copyright ALGOL 68RS
Mini ALGOL 68[citation needed] 1973 Research NL "An interpreter for simple Algol 68 Programs" (PDF, Centrum Wiskunde & Informatica; archived from the original 2011-07-18) Portable interpreter Mathematisch Centrum ALGOL 60
ALGOL 68C 1975 Scientific UK Cambridge Algol 68 ICL, IBM 360, PDP 10 & Unix, Telefunken, Tesla & Z80 (1980)[2] Cambridge ALGOL 68C
ALGOL 68 Revised Report[citation needed] 1975 Standard Intl IFIP WG 2.1 Revised Report Specification ACM
Odra Algol 68[citation needed] 1976 Practical uses USSR/Poland Odra 1204/IL Soviet ALGOL 60
FLACC 1977 Multi-purpose CA Revised Report complete implementation with debug features System/370 Lease, Chion Corporation Assembler
Leningrad ALGOL 68[citation needed] 1980 Telecommunications USSR Full Language + Modules IBM, DEC, CAMCOH, PS 1001 & PC Soviet
Interactive ALGOL 68 1983 UK Incremental compilation PC Noncommercial shareware
ALGOL 68S 1985 Scientific Intl Sun version of ALGOL 68 Sun-3, Sun SPARC (under SunOS 4.1 & Solaris 2), Atari ST (under GEMDOS), Acorn Archimedes (under RISC OS), VAX-11 (under Ultrix-32)
Algol68toC[3][better source needed] (ctrans) 1985 Electronics UK ctrans from ELLA ALGOL 68RS Portable C generator Open sourced & public domain (1995) ALGOL 68RS
Algol 68 Genie[citation needed] 2001 Full Language NL Includes standard collateral clause Portable interpreter GPL C
GCC (ga68) 2025 Full Language ES GCC Front-End Portable compiler GPL C

Assemblers (Intel *86)

Assembler Author Windows Unix-like Other OSs License type
A86 assembler Eric Isaacson Yes No No Proprietary
FASM Tomasz Grysztar Yes Yes Yes BSD
GNU Assembler The GNU Project Yes Yes Yes GPLv3
High Level Assembly (HLA) Randall Hyde Yes Yes Yes Public domain
JWasm Assembler[4] Japheth and others Yes Yes Yes Sybase Open Watcom Public License
Microsoft Macro Assembler Microsoft Yes No No Proprietary
Netwide Assembler Simon Tatham and Julian Hall Yes Yes Yes BSD
Turbo Assembler Borland Yes No No Proprietary

Assemblers (Motorola 68*)

Assembler Author Windows Unix-like Other OSs License type
Assembler[citation needed] Motorola Yes No No Proprietary
Devpac[citation needed] HiSoft Systems No No Amiga, Atari ST Proprietary
GNU Assembler The GNU Project Yes Yes Yes GPLv3
rmac[citation needed] James Hammons, George Nakos, Landon Dyer Yes Yes Yes Freeware

Assemblers (Zilog Z80)

Assembler Author Windows Unix-like Other OSs License type
Microsoft MACRO-80 Microsoft No No Yes Proprietary
Zeus Assembler Neil Mottershead, Simon Brattel No No Yes Proprietary
Prometheus Proxima software No No No Proprietary

Assemblers (other)

Assembler Author Windows Unix-like Other OSs License type
TMS 9900 assembler Texas Instruments Yes Yes Yes Proprietary
GNU Assembler The GNU Project Yes Yes Yes GPLv3
TAL (Tandem Application Language)[citation needed] Tandem Computers No Yes Guardian, NonStop Kernel Proprietary
pTAL (The Application Language)[citation needed] Tandem Computers No Yes NonStop Kernel, NonStop OS Proprietary
epTAL (The Application Language)[citation needed] HPE No Yes NonStop OS Proprietary

BASIC compilers


Compiler Author Working state Windows Unix-like Other OSs License type Standard conformance
Minimal BASIC Full BASIC
AppGameKit The Game Creators Current Yes Yes No Proprietary ? ?
BASIC-PLUS-2[citation needed] Digital Equipment Corporation Discontinued No ? RSTS/E, RSX-11M Proprietary ? ?
BBC BASIC for SDL 2.0 Richard T. Russell Current Yes Yes (Linux, macOS, Android) Yes (Raspberry Pi OS) zlib License No No
BlitzMax Blitz Research Discontinued Yes Yes (Linux, macOS) No zlib License No No
DarkBASIC The Game Creators Inactive Yes No No MIT License No No
ECMA-55 Minimal BASIC compiler[5] John Gatewood Ham Current No Linux No GPLv2 Yes No
FreeBASIC FreeBASIC Development Team Current Yes Yes MS-DOS, FreeBSD, Linux GPLv2+ Partial[6][unreliable source?] No
FutureBASIC Brilor Software Current No macOS Classic Mac OS Proprietary Partial No
Gambas Benoît Minisini Current No Yes No GPLv2+ No No
GFA BASIC Frank Ostrowski Abandoned Yes No Amiga, Atari ST, MS-DOS Proprietary No No
Mercury RemObjects Current Yes Yes (Linux, macOS, Android, iOS) Yes (WebAssembly) Proprietary No No
PowerBASIC (formerly Turbo Basic) PowerBASIC, Inc. Inactive Yes No DOS Proprietary ? ?
PureBasic Fantaisie Software Current Yes Yes Yes Proprietary No No
QB64 Galleon Current Yes Yes Yes LGPLv2.1 Partial No
QuickBASIC Microsoft Discontinued No No MS-DOS Proprietary Partial No
Tandem BASIC[citation needed] Tandem Computers Historic No No Guardian, NonStop Kernel, NonStop OS Proprietary No No
True BASIC True BASIC Current Yes No No Proprietary Yes Partial[7]
VSI BASIC for OpenVMS VMS Software, Inc. Current No No OpenVMS Proprietary No No
Xojo (formerly REALbasic) Xojo Inc. (formerly Real Software) Current Yes Yes Yes Proprietary No No

BASIC interpreters


C compilers

Compiler Author Operating system[i] Bare machine License type Standard conformance
Microsoft Windows Unix-like Other OSs C89 C99 C11 C17
8cc[citation needed] Rui Ueyama Yes Yes ? ? MIT Yes Yes Yes No
Acorn C/C++ Acorn and Codemist No No RISC OS ? Proprietary Yes Yes No Yes
AMD Optimizing C/C++ Compiler (AOCC) AMD No Yes No ? Proprietary Yes Yes Yes Yes
Aztec C Manx Software Systems No No CP/M, CP/M-86, DOS, Classic Mac OS ? Proprietary ? ? ? ?
Amsterdam Compiler Kit Andrew Tanenbaum and Ceriel Jacobs No Yes Yes ? BSD ? ? ? ?
BDS C BD Software No No CP/M ? Public domain ? ? ? ?
bcc (Bruce's C Compiler)[8] Bruce Evans No Yes No ? GNU License ? ? ? ?
C++Builder Embarcadero Yes Yes (iOS, Android) No ? Proprietary Yes Yes Partial ?
cc65 No Yes No ? Zlib License No No No No
Ch SoftIntegration, Inc Yes macOS, FreeBSD, Linux, Solaris, HP-UX, AIX, QNX Yes ? Freeware Yes Yes No ?
Clang LLVM Project Yes Yes Yes ? Apache (LLVM Exception) Yes Yes Yes Yes
CompCert INRIA Yes Yes No ? Freeware (source code available for non-commercial use) or GPL Yes Partial No ?
cproc[citation needed] Michael Forney Yes Yes No ? ISC Yes Yes Yes Yes
DEC C[citation needed] Originally Digital Equipment Corporation, now VSI No Tru64, Linux OpenVMS ? Proprietary Yes Yes ? ?
Digital Mars Digital Mars Yes No No ? Proprietary ? ? ? ?
Digital Research C[9][better source needed] Digital Research ? ? CP/M, DOS ? Proprietary ? ? ? ?
Edison Design Group Edison Design Group Yes Yes Yes ? Proprietary Yes Yes Yes Yes
GCC (gcc) GNU Project MinGW, Cygwin, WSL Yes IBM mainframe, AmigaOS, VMS, RTEMS, DOS[10] Yes GPL Yes Partial[ii] Partial[ii] Partial[ii]
IAR C/C++ Compilers[citation needed] IAR Systems Yes Yes[note 1] No ? Proprietary Yes Yes Yes Yes
Intel oneAPI DPC++/C++ Compiler (icx) Intel Yes Linux No ? Freeware (optional priority support) Yes Yes Yes[11] Yes
Intel C++ Compiler Classic (icc) Intel Yes Linux, macOS No ? Freeware (optional priority support) Yes Partial[12] Partial[12] ?
Interactive C KISS Institute for Practical Robotics Yes Unix, macOS, Linux, IRIX, Solaris, SunOS No ? Freeware Partial No No ?
Keil C/C++ Compilers Keil Yes Yes No ? Proprietary ? ? ? ?
Lattice C Lifeboat Associates No Yes DOS, OS/2, Commodore, Amiga, Atari ST, Sinclair QL ? Proprietary ? ? ? ?
lcc Chris Fraser and David Hanson Yes Yes Yes ? Freeware (source code available for non-commercial use) Yes No No ?
Mark Williams C Mark Williams Company Yes Coherent Yes ? Proprietary; Coherent compiler later under 3-clause BSD[clarification needed] ? ? ? ?
MCP Unisys No No MCP ? Proprietary ? ? ? ?
MikroC Compiler Mikroelektronika Yes Yes Yes ? Proprietary ? ? ? ?
MPW C Apple No No Classic Mac OS ? Proprietary ? ? ? ?
Open64 AMD, SGI, Google, HP, Intel, Nvidia, PathScale, Tsinghua University and others No Yes Yes ? GPL ? ? ? ?
Pacific C[citation needed] Hi-tech software No No DOS ? Freeware[13] ? ? ? ?
Pelles C[citation needed] Pelle Orinius Yes No No ? Freeware No Yes Yes Yes
Personal C Compiler (PCC)[citation needed] DeSmet No No DOS ? GPL[14] Yes[iii] No No No
PGCC The Portland Group Yes Yes Unknown ? Proprietary ? ? ? ?
Portable C Compiler Stephen C. Johnson, Anders Magnusson and others Yes Yes Yes ? BSD Yes Partial No ?
QuickC Microsoft Yes No No ? Proprietary ? ? ? ?
Ritchie C Compiler (PDP-11) Dennis Ritchie and John Reiser; converted to cross-compiler by Doug Gwyn Yes Yes Yes ? Freeware Partial Partial Partial Partial
Alan Snyder's Portable C Compiler Alan Snyder and current Maintainer larsbrinkhoff|Snyder-C-compiler No Yes No ? MIT License ? ? ? ?
SEGGER Compiler[15] SEGGER Microcontroller Yes Yes Yes ? Proprietary Yes Yes Partial Partial
SCC[citation needed] Roberto E. Vargas Caballero Yes Yes ? ? ISC Yes Yes No No
Small-C Ron Caine, James E. Hendrix, Byte magazine Yes Yes CP/M, DOS ? Public domain Partial No No ?
Small Device C Compiler Sandeep Dutta and others Yes Yes Unknown Yes GPL ? ? ? ?
Symbolics C[citation needed] Symbolics No No Symbolics Genera ? Proprietary ? No No No
Tandem C[16] Tandem Computers No Yes Guardian, NonStop Kernel, NonStop OS No Proprietary ? ? No No
Tasking[citation needed] Altium Yes Linux, MacOS No ? Proprietary ? ? ? ?
THINK C, Lightspeed C THINK Technologies No No Classic Mac OS ? Proprietary ? ? ? ?
Tiny C Compiler Fabrice Bellard Yes Yes No ? LGPL Yes Partial Partial ?
(Borland) Turbo C Embarcadero Yes No Yes ? Proprietary - V 2.01 freely available ? ? ? ?
VBCC Volker Barthelmann Yes Yes Yes ? Freeware (source code available, modification not allowed) Yes Partial No ?
Microsoft Visual C++ Microsoft Yes No No ? Proprietary (Freeware) Yes No[17] Partial[iv] Yes[iv]
Oracle C compiler Oracle No Solaris, Linux No ? Proprietary (Freeware) Yes Yes Yes No
Watcom C/C++, Open Watcom C/C++ Watcom Yes Experimental DOS, OS/2 ? Sybase Open Watcom Public License Yes Partial No ?
Wind River (Diab) Compiler Wind River Systems Yes Yes Yes ? Proprietary ? ? ? ?
Whitesmiths C compiler Whitesmiths Ltd No Yes No ? proprietary (source code available for non-commercial use) No ? No ?
XL C, XL C/C++ IBM No AIX, Linux z/OS, z/VM ? Proprietary Yes[18][19][20] Yes[18][19][20] Yes[18][19][20] Yes[18][19][20]
Zig cc Zig Software Foundation Yes Yes Yes ? MIT License ? ? ? ?

Notes:

  1. ^ List of host operating systems and/or cross-compilation targets.
  2. ^ a b c Complete except for floating point.
  3. ^ ANSI 89 compliant from version 3.1h and up
  4. ^ a b Visual Studio v16.8.

C++ compilers

Compiler Author Operating system[i] License type IDE Standard conformance
Windows Unix-like Other C++11 C++14 C++17 C++20 C++23
AMD Optimizing C/C++ Compiler (AOCC) AMD No Yes No Proprietary (Freeware) No Yes Yes Yes Partial Partial
C++Builder (classic Borland, bcc*) Embarcadero (CodeGear) Yes (bcc32) macOS (bccosx)[21] No Proprietary (Free Community Edition)[22] Yes Yes[23][24] No No ? ?
C++Builder (modern, bcc*c) Embarcadero (LLVM)[25] Yes (bcc32c, bcc64, bcc32x, bcc64x) iOS (bccios*), Android (bcca*)[21] No Proprietary (Freeware - 32bit CLI,[26] Free Limited Commercial Edition)[22] Yes Yes[ii][23][24] Yes[27] Yes[28] ? ?
Turbo C++ (tcc) Borland (CodeGear) Yes No DOS Freeware Yes No No No ? ?
CINT CERN Yes Yes BeBox, DOS, etc. X11/MIT Yes No No No ? ?
Cfront Bjarne Stroustrup No Yes No ? No No No No ? ?
Clang (clang++) LLVM Project Yes Yes Yes UoI/NCSA Xcode, QtCreator (optional) Yes[ii][29][30][24] Yes Yes Partial Partial
Comeau C/C++ Comeau Computing Yes Yes Yes Proprietary No No[iii] No No ? ?
Cray C/C++ (CC) Cray No No No Proprietary No Yes[iv][31][32] Yes[iv] Yes Partial No
Digital Mars C/C++ (dmc) Digital Mars Yes No DOS Proprietary No Partial[33][24] No No ? ?
EDG C++ Front End (eccp, edgcpfe) Edison Design Group Yes Yes Yes Proprietary No Yes[iii][34][24] Yes Yes Partial Partial
EKOPath (pathCC) PathScale and others No Yes Yes Mixed (Proprietary, Open-source & GPL) No Yes[v][35] Partial No ? ?
GCC (g++) GNU Project MinGW, MSYS2, Cygwin, Windows Subsystem for Linux Yes Yes GPLv3 QtCreator, KDevelop, Eclipse, NetBeans, Code::Blocks, Dev-C++, Geany Yes[v][36][37][24] Yes Yes Partial Partial
HP aC++ (aCC) Hewlett-Packard No HP-UX No Proprietary No Partial[38][24] No No ? ?
IAR C/C++ Compilers (icc*) IAR Systems Yes No Yes Proprietary IAR Embedded Workbench Yes[39] Yes Partial ? ?
Intel C++ Compiler (icc) Intel Yes Linux, macOS, FreeBSD; Android (x86) No Proprietary (Freeware)[40] Visual Studio, Eclipse, Xcode Yes[iii][41][24] Yes[42] Yes[43] Partial Partial
KAI C++ (KCC) Kuck & Associates, Inc. (subsumed by Intel) No TOPS-20, Digital Unix, HP-UX, Linux (x86), IRIX 5.3 & 6.x, Solaris 2.x, UNICOS No Proprietary No No[iii][44] No No ? ?
Microtec C/C++ (mcc) Mentor (Siemens) Yes Yes Yes Proprietary EDGE Developer Suite No No No ? ?
EDGE C/C++[vi] Mentor (Siemens) Yes Yes Yes Proprietary EDGE Developer Suite No No No ? ?
Open64 (openCC) HP, AMD, Tsinghua University and others No Yes No Modified GPLv2 No No[v][vii][45] No No ? ?
PGC++ (pgc++) PGI (Nvidia) Unsupported[46] Linux, macOS No Proprietary Eclipse, Xcode, Visual Studio Yes[iii][47][24] Yes Partial ? ?
ProDev WorkShop Silicon Graphics No IRIX 5.3 & 6.x Yes Proprietary Yes ? ? ? ? ?
RealView Compilation Tools (armcc) Keil (Arm) Yes Yes Yes Proprietary RealView Development Suite No[iii][48] No No ? ?
Arm Compiler (armcc) Keil (Arm) Yes Yes Yes Proprietary μVision, DS-5 Yes[iii][49][50] No No ? ?
Arm Compiler (armclang) Keil (Arm, LLVM) Yes No Yes Proprietary μVision, DS-5 Yes[ii][51][52] Yes No ? ?
Salford C++ Compiler Silverfrost Yes No No Proprietary Yes ? ? ? ? ?
SAS/C C++ SAS Institute Windows NT/95 AIX, Solaris/SunOS, Linux IBM mainframe, DOS Proprietary No ? ? ? ? ?
SCORE C++ (tpp) DDC-I Yes Yes Yes Proprietary Yes Yes No No ? ?
SEGGER Compiler SEGGER Microcontroller Yes Yes Yes Proprietary Yes Yes Partial Partial ? ?
Oracle C++ Compiler (CC) Oracle No Linux, Solaris No Proprietary (Freeware) Oracle Developer Studio, NetBeans Yes[53][54][24] Yes No ? ?
Tandem C++[55] Tandem Computers No Yes NonStop Kernel, NonStop OS Proprietary Eclipse ? No No ?
TenDRA (tcc) TenDRA Project No Yes No BSD No No[56] No No ? ?
VectorC Codeplay Yes No PS2, PS3, etc. Proprietary Visual Studio, CodeWarrior Partial[57] No No ? ?
Visual C++ (cl) Microsoft Yes Linux, macOS; Android, iOS DOS Proprietary (Free for Individuals and Enterprise under $1M Profit Cap)[58] Visual Studio, QtCreator Yes[59][60][24] Yes Yes[61] Yes[62] Partial
XL C/C++ (xlc++) IBM No Linux (Power), AIX z/OS, z/VM Proprietary Eclipse Yes[18][19][20] Yes[18][19][20] Yes[18][19][20] Experimental for AIX[19] No
Diab Compiler (dcc) Wind River (TPG Capital) Yes Linux, Solaris VxWorks Proprietary Wind River Workbench No[iii][63] No No ? ?
Zig c++ Zig Software Foundation Yes Yes Yes MIT License ? ? ? ? ? ?

Notes:

  1. ^ List of host operating systems and/or cross-compilation targets.
  2. ^ a b c Uses a Clang Front End.[29][30]
  3. ^ a b c d e f g h Uses an EDG Front End.[34]
  4. ^ a b The Cray C++ Libraries do not support wide characters and only support a single locale.
  5. ^ a b c Uses a GCC Front End.[36][37]
  6. ^ The EDGE C/C++ compiler is based on the Microtec C/C++ compiler.
  7. ^ Last Open64 v5.0 uses GCC 4.2 as its Front End, which doesn't support any C++11.[36][37]

C# compilers

Compiler Author Type Windows Unix-like Other OSs License type IDE?
Visual C# Microsoft JIT Yes iOS No Proprietary Yes
Visual C# Express Microsoft JIT Yes No No Freeware Yes
Mono Xamarin JIT Yes Yes Yes GPLv2 Yes
Portable.NET DotGNU AOT Yes Yes No GPL No
SharpDevelop IC#Code Team JIT Yes No No LGPL Yes
Roslyn .NET Foundation JIT/AOT Yes Partial No Apache 2.0[64] No
RemObjects C# RemObjects AOT Yes Yes (Linux, macOS, Android, iOS) Yes (WebAssembly) Proprietary Yes
IL2CPP Unity Technologies AOT Yes Yes Yes Proprietary No
IL2CPU COSMOS AOT Yes Yes Yes BSD licenses[65] No
Bartok Microsoft Research AOT Yes No No Proprietary No
RyuJIT .NET Foundation JIT Yes Yes Yes MIT License[66] Yes
CoreRT .NET Foundation AOT/JIT Yes Yes Yes MIT License[66] Yes
bflat[67] Michal Strehovský AOT Yes Yes Yes GPL[68] No

COBOL compilers

Compiler Author Operating system License type IDE? Standard conformance
Windows Unix-like Other COBOL-85 COBOL 2002
IBM COBOL IBM Yes AIX, Linux z/OS, z/VM, z/VSE, IBM i Proprietary IBM Developer for z/OS Yes Partial
NetCOBOL Fujitsu, GTSoftware Yes Yes No Proprietary Yes Yes Partial
GnuCOBOL (formerly OpenCOBOL) Keisuke Nishida, Roger While, Simon Sobisch Yes Yes Yes GPL OpenCobolIDE, GIX, HackEdit Yes Partial
GCC (gcobol)[69] COBOLworx (Symas) Yes Yes Yes GPL No Yes[70] Planned[70]
Otterkit[71][72] Gabriel Gonçalves Yes Yes Yes (Common Language Infrastructure) Apache 2.0 Yes Partial Release candidate
Visual COBOL Micro Focus Yes Yes Yes Proprietary Yes Yes No
isCOBOL Evolve Veryant Yes Yes Yes Proprietary Eclipse Yes Partial
VMS COBOL Originally Digital Equipment Corporation, now VSI No No OpenVMS Proprietary Visual Studio Code Yes No
MCP COBOL Unisys No No MCP Proprietary CANDE Yes[73] No
OS 2200 COBOL Unisys No No OS 2200 Proprietary ? Yes[74] No
Tandem COBOL[75][16] Tandem Computers No No Guardian, NonStop Kernel, NonStop OS Proprietary Eclipse, Micro Focus COBOL Workbench[76] ? ?
PDP-11 COBOL Digital Equipment Corporation No No RSTS/E, RSX-11M Proprietary ? No No
COBOL-85 Digital Equipment Corporation No No RSTS/E, RSX-11M, VMS Proprietary ? ? ?
Austec Cobol Esmond & David Pitt and Derek Trusler, Austec International Inc. No Yes Yes Proprietary No Partial Partial

Common Lisp compilers

Compiler Author Target Windows Unix-like Other OSs License type IDE?
Allegro Common Lisp Franz, Inc. Native code Yes Yes Yes Proprietary Yes
Armed Bear Common Lisp Peter Graves JVM Yes Yes Yes GPL Yes
CLISP GNU Project Bytecode Yes Yes Yes GPL No
Clozure CL Clozure Associates Native code Yes Yes No LGPL Yes
CMU Common Lisp Carnegie Mellon University Native code, Bytecode No Yes No Public domain Yes
Corman Common Lisp Corman Technologies Native code Yes No No MIT license Yes
Embeddable Common Lisp Juanjo Garcia-Ripoll Bytecode, C Yes Yes Yes LGPL Yes
GNU Common Lisp GNU Project C Yes Yes No GPL No
LispWorks LispWorks Ltd Native code Yes Yes No Proprietary Yes
mocl Wukix Native code No Yes Yes Proprietary No
Movitz Frode V. Fjeld Native code, own OS No No Yes BSD No
Open Genera Symbolics Ivory emulator, own OS No No Yes Proprietary Yes
Scieneer Common Lisp Scieneer Pty Ltd Native code No Yes No Proprietary No
Steel Bank Common Lisp sbcl.org Native code Yes Yes Yes Public domain Yes

D compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
D (DMD) Digital Mars and others Yes 32-bit Linux, macOS, FreeBSD No Boost No
D for .NET ? Yes Yes ? ? ?
GCC (GDC) GNU Project Yes Yes No GPL No
LDC LLVM Yes Yes No multiple Open Source license depending on module No

DIBOL/DBL compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
DIBOL Digital Equipment Corporation No No RSTS/E, VMS Proprietary No
Synergy DBL[77][78][79] Synergex Yes Yes Yes Proprietary Yes

ECMAScript interpreters


Eiffel compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
EiffelStudio Eiffel Software / Community developed (SourceForge) Yes Yes Yes GPL Yes
LibertyEiffel (fork of SmartEiffel) D. Colnet and community ? Yes ? GPLv2 ?
SmartEiffel D. Colnet ? Yes ? GPLv2 ?

Forth compilers and interpreters

Compiler Author Windows Unix-like Other OSs License type
Win32Forth[80] Andrew McKewan, Tom Zimmer, et al. Yes No No public domain
VFX Forth[81][82] MPE Yes Yes Yes ?
SwiftForth[83] Forth Inc. Yes Yes No Proprietary
SP-Forth Andrey Cherezov Yes Yes ? GPL3
Retro Forth[84] Charles Childers Yes Yes ? ISC license
pForth Phil Burk Yes Yes Yes public domain
Open Firmware ? ? ? ? BSD license
iForth[85] Marcel Hendrix Yes Yes No Proprietary
Gforth Bernd Paysan and Anton Ertl Yes Yes No GPL3
colorForth Charles H. Moore ? ? ? public domain
ciforth[86][87][88] Albert van der Horst Yes Yes No GPL
Atlast[89] John Walker ? Yes No public domain
Collapse OS[90] Virgil Dupras No Yes Yes ?
FreeForth[91] ? Yes Yes (Linux) ? public domain
ByteForth[92] ? ? ? ? ?
noForth[93] ? ? ? RISC-V baremetal ?
4tH[94] Hans Bezemer Yes Yes Yes LGPL

Fortran compilers

Compiler Author Working state Operating system License type IDE?
Windows Unix-like Other
AMD Optimizing C/C++ Compiler (AOCC) AMD Current No Yes No Freeware No
PDP-11 FORTRAN IV Digital Equipment Corporation Discontinued No No Yes Proprietary No
PDP-11 FORTRAN-IV-Plus Digital Equipment Corporation Discontinued No No Yes Proprietary No
Fortran 77 Digital Equipment Corporation Discontinued No ? RSTS/E, VMS Proprietary ?
Fortran H (equivalent to Fortran IV) IBM Discontinued No No Yes Proprietary No
Oracle Fortran Oracle Discontinued No Linux, Solaris No Freeware Oracle Developer Studio
PGFORTRAN The Portland Group Discontinued Yes Linux only Yes Proprietary Visual Studio on Windows
PathScale Compiler Suite SiCortex Discontinued No Linux only No Proprietary Yes
Absoft Pro Fortran Absoft Discontinued Yes Linux, macOS Yes Proprietary Yes
G95 Andy Vaught Inactive Yes Yes Yes GPL No
VS/9 Fortran IV Unisys Discontinued No No Yes Proprietary No
GCC (GNU Fortran) GNU Project Current Yes Yes Yes GPLv3 Photran (part of Eclipse), Simply Fortran, Lahey Fortran
Intel Fortran Compiler Classic (ifort) Intel Current Yes Linux and macOS No Freeware, optional priority support Yes (plugins), Visual Studio on Windows, Eclipse on Linux, XCode on Mac
Intel Fortran Compiler (ifx) Intel Current Yes Linux No Freeware, optional priority support Yes (plugins), Visual Studio on Windows, Eclipse on Linux
Open64 Google, HP, Intel, Nvidia, PathScale, Tsinghua University and others Finished No Yes Yes GPL No
Classic Flang LLVM Project Current Yes Yes Yes NCSA Yes
LLVM Flang LLVM Project Current Yes Yes Yes NCSA Yes
LFortran The LFortran team Current Yes Yes Yes BSD Yes
FTN95 Silverfrost Current Yes No No Proprietary Yes
NAG Fortran Compiler Numerical Algorithms Group ? Yes Yes No Proprietary Yes
Tandem Fortran[16] Tandem Computers Discontinued No ? Guardian, NonStop Kernel, NonStop OS Proprietary No
VS Fortran IBM Current No No z/OS, z/VSE and z/VM Proprietary Eclipse
XL Fortran IBM Current No Linux (Power), AIX No Proprietary Eclipse
sxf90 / sxmpif90 NEC ? No Yes SUPER-UX Proprietary Yes
MCP Unisys Discontinued No No MCP Proprietary CANDE
Open Watcom Sybase and Open Watcom Contributors Current Yes Yes DOS, OS/2 Sybase Open Watcom Public License on Windows, OS/2
Symbolics Fortran Symbolics Discontinued No No Symbolics Genera Proprietary Yes
Cray Cray Current Yes Yes Yes Proprietary Yes

Go compilers

Compiler Working state Operating system License type
Windows Unix-like Other
Gc Current Yes Yes Yes BSD 3-Clause
GCC (gccgo) Current MinGW, Cygwin Yes Yes GPL
RemObjects Gold Current Yes Linux, macOS, Android, iOS Yes (WebAssembly) Proprietary
LLVM (llgo) Dropped[95] No Yes No NCSA
Gopherjs Current Yes Yes Yes BSD 2-Clause
TinyGo Current Yes Yes Yes BSD 3-Clause

Haskell compilers

Compiler Author Windows Unix-like Other OSs License type Actively maintained?
HBC Lennart Augustsson, Thomas Johnsson ? Yes No Open source No
GHC GHC Yes Yes No Open source Yes
YHC YHC Yes Yes No Open source No
JHC John Meacham Yes Yes No Open source Yes

ISLISP compilers and interpreters

Name Author Working state Target Written in Operating system License type Standard conformance
Windows Unix-like Other
Easy-ISLisp[96] Kenichi Sasagawa Current C, bytecode C, Lisp No Linux, macOS, OpenBSD No BSD 2-Clause Yes
OpenLisp Eligis Current C, bytecode C, Lisp Yes macOS, Linux, BSD, AIX, Solaris, QNX ? Proprietary Yes
dayLISP[97] Matthew Denson Inactive Java bytecode Java, Lisp Yes Yes Yes (JVM) BSD 3-Clause Partial
Iris[98] Masaya Taniguchi[99] Inactive Bytecode Go Yes Yes Yes MPL 2.0 Yes
Iris web REPL[100] Masaya Taniguchi[99] Inactive JavaScript Go, JavaScript Yes Yes Yes MPL 2.0 Yes
Kiss[101] Yuji Minejima Inactive Bytecode C, Lisp Yes Yes ? GPLv3+ Partial
OKI ISLISP[102] Kyoto University and Oki Electric Industry Co. Finished Bytecode C Yes No No Freeware Yes
PRIME-LISP Mikhail Semenov Discontinued Bytecode C# Yes No No Shareware, freely redistributable binaries No
ISLisproid Hiroshi Gomi Discontinued Bytecode Java No Android No Proprietary ?

Java compilers

Compiler Author Working state Windows Unix-like Other OSs License type IDE?
Edison Design Group Edison Design Group Discontinued Yes Yes Yes Proprietary No
GCC (gcj) GNU Project Inactive No Yes No GPL No
javac Sun Microsystems (Owned by Oracle) Current Yes Yes Yes BCL Yes
javac OpenJDK Sun Microsystems (Owned by Oracle) Current Yes Yes Yes GPLv2 Yes
ECJ (Eclipse Compiler for Java) Eclipse project ? Yes Yes Yes EPL Yes
Jikes IBM Inactive ? Yes ? IPL ?
Power J[103] Sybase (Owned by SAP) Discontinued Yes ? ? ? Yes
Iodine RemObjects Current Yes Yes (Linux, macOS, Android, iOS) Yes (WebAssembly) Proprietary Yes

Pascal compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
Amsterdam Compiler Kit Andrew Tanenbaum, Ceriel Jacobs No Yes Yes BSD No
Delphi Embarcadero (CodeGear) Yes Yes (Linux, Mac OS) Yes (iOS, Android) Proprietary Yes
Oxygene (formerly Delphi Prism) RemObjects Yes Yes (Linux, macOS, Android, iOS) Yes (WebAssembly) Proprietary Yes
Free Pascal Florian Paul Klämpfl Yes Yes Yes (OS/2, FreeBSD, Solaris, Haiku, Android, DOS, etc.[note 2]) GPL FPIDE, Lazarus, Geany (on Ubuntu)
GCC (GNU Pascal) GNU Project Yes Yes Yes GPL No
Kylix Borland (CodeGear) No Yes (Linux) No Proprietary Yes
Turbo Pascal for Windows Borland (CodeGear) Yes (3.x) No No Proprietary Yes
Microsoft Pascal Microsoft No No Yes (DOS) Proprietary Yes
OMSI Pascal Oregon Software No No Yes (RT-11, RSX-11, RSTS/E) Proprietary No
Symbolics Pascal Symbolics No No Symbolics Genera Proprietary Yes
Tandem Pascal[16] Tandem Computers No ? Guardian, NonStop Kernel Proprietary ?
VSI Pascal VMS Software Inc No No Yes (OpenVMS) Proprietary Yes
Turbo Pascal CodeGear (Borland) No No Yes Freeware Yes
Vector Pascal Glasgow University Yes Yes No Open source No
Virtual Pascal Vitaly Miryanov Yes Yes Yes (OS/2) Freeware Yes
MCP Unisys No No MCP Proprietary CANDE

Perl interpreters

Interpreter Author Windows Unix-like Other OSs License type
ActivePerl interpreter[citation needed] ActiveState Yes Yes Yes Noncommercial or Proprietary
Perl interpreter[citation needed] Wall/Perl developers Yes Yes Yes Artistic or GPL v1

PHP compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
Phalanger Devsense Yes No Partial Apache 2.0 Yes
PeachPie iolevel Yes Yes Yes Apache 2.0 Yes

PL/I compilers

Compiler Author Windows Unix-like Other OSs License type IDE?
IBM Enterprise PL/I for z/OS[citation needed] IBM No No z/OS Proprietary No
IBM PL/I for AIX[citation needed] IBM No AIX No Proprietary No
IBM PL/I(F)[citation needed] IBM No No z/OS Freeware No
IBM VisualAge PL/I Enterprise for OS/2 and Windows NT[citation needed] IBM Yes No OS/2 Proprietary No
Iron Spring PL/I for Linux and OS/2[citation needed] Iron Spring Software No Linux OS/2 Warp and EComStation Proprietary; library source is LGPL No
Micro Focus Open PL/I[citation needed] Micro Focus Yes Yes No Proprietary Yes
GCC (pl1gcc) Henrik Sorensen Yes Yes Yes GPL No

Python compilers and interpreters

Compiler Author Target Windows Unix-like Other OSs License type IDE?
Cython C Yes Yes Yes PSFL No
IronPython CLI Yes Yes Yes (CLI) Apache 2.0 No
Jython JVM Yes Yes Yes (JVM) PSFL No
Nuitka Kay Hayen C, C++ Yes Yes Yes Apache 2.0 No
Numba Anaconda LLVM (JIT) Yes Yes Yes BSD 2-Clause No
Psyco Armin Rigo, Christian Tismer x86-32 (JIT) Yes Yes Yes MIT No
PyPy Own VM (JIT) Yes Yes Yes MIT No
Shed Skin C++ Yes Yes Yes GPLv3 and BSD No

Ruby compilers and interpreters

Compiler Author Target Windows Unix-like Other OSs License type IDE?
YARV Koichi Sasada bytecode Yes Yes Yes Ruby License No
IronRuby Microsoft .NET Yes Yes Yes Apache 2.0 No
JRuby JVM Yes Yes Yes EPL, GPL, LGPL No
Mruby Yukihiro Matsumoto bytecode Yes Yes Yes MIT No
TruffleRuby Oracle native, JVM Yes Yes Yes EPL, GPL No

Rust compilers

Compiler Author Windows Unix-like Other OSs Bare machine License type
rustc Rust Foundation Yes Yes Yes Yes Apache License
GCC Rust[104] GNU Project MinGW, Cygwin, WSL Yes No Yes GPL

Scheme compilers and interpreters

Compiler Author Target Windows Unix-like Other OSs License type IDE?
Bigloo Manuel Serrano native, bytecode Yes Yes ? GPL (compiler) and LGPL (runtime) No
Chez Scheme R. Kent Dybvig native Yes Yes No Apache 2.0 No
Chicken The Chicken Team C Yes Yes ? BSD No
Gambit Marc Feeley C Yes Yes ? LGPL No
GNU Guile GNU Project bytecode Yes Yes ? LGPL No
Ikarus Abdulaziz Ghuloum native Yes Yes ? GPL No
IronScheme Llewellyn Pritchard CLI Yes Yes Yes (Common Language Infrastructure) Ms-PL No
JScheme Ken Anderson, Tim Hickey, Peter Norvig bytecode Yes Yes Yes (JVM) zlib License No
Kawa Per Bothner bytecode Yes Yes Yes (JVM) MIT No
MIT/GNU Scheme GNU Project native Yes Yes ? GPL No
Racket PLT Inc. bytecode + JIT Yes Yes macOS, Microsoft Windows LGPL DrRacket
Scheme 48 Richard Kelsey, Jonathan Rees C, bytecode Yes Yes ? BSD No
SCM Aubrey Jaffer C Yes Yes AmigaOS, Atari ST, Classic Mac OS, DOS, OS/2, NOS/VE, OpenVMS LGPL No
SISC Scott G. Miller, Matthias Radestock bytecode Yes Yes Yes (JVM) GPL and MPL No
Stalin Jeffrey Mark Siskind C ? Yes ? LGPL No
STklos Erick Gallesio bytecode ? Yes ? GPL No
Interpreter Author Windows Unix-like Other OSs License type IDE?
Gauche Shiro Kawai Yes Yes ? BSD No
Petite Chez Scheme R. Kent Dybvig Yes Yes No Apache 2.0 No
TinyScheme ? ? ? Yes BSD No

Smalltalk compilers

Compiler Author Target Windows Unix-like Other OSs License type IDE?
Pharo Pharo Team VM Yes Yes Yes MIT License Yes
GNU Smalltalk GNU Smalltalk project bytecode + JIT Yes Yes No GPL No
VisualWorks Cincom Systems ? Yes Yes Yes Proprietary Yes
Smalltalk MT ObjectConnect native Yes No No Proprietary Yes

Tcl interpreters

Interpreter Author Windows Unix-like Other OSs License type
ActiveTcl ActiveState Yes Yes Yes Noncommercial or Proprietary
Tclsh MKS and many others Yes Yes Yes Proprietary and/or free
Wish Yes Yes Yes

Command language interpreters

[edit]
Interpreter Author Windows Unix-like Other OSs License type
DCL (DIGITAL Command Language) Digital No No OpenVMS, RSX-11M, RSTS/E Proprietary
TACL (Tandem Advanced Command Language) Tandem Computers No No Guardian, NonStop Kernel, NonStop OS Proprietary

Rexx interpreters

[edit]
Interpreter Author Windows Unix-like Other OSs License type
Amiga ARexx Commodore No No Yes Proprietary
ObjectRexx IBM Yes ? Yes Proprietary
Open Object Rexx OO Organisation Yes Yes No CPL

CLI compilers

[edit]
Compiler Author Working state Windows Unix-like Other OSs License type IDE?
Visual Studio Microsoft Current Yes No No Proprietary Yes
Mono Mono Current Yes Yes No MIT Yes
Delphi Prism RemObjects Current Yes Yes Yes Proprietary Yes
Portable.NET DotGNU Inactive Yes Yes No GPL Unknown

Source-to-source compilers

[edit]

This list is incomplete; more extensive lists of source-to-source compilers are maintained elsewhere.

Compiler Author Target Input Target Output Auto-Parallelizer Windows Unix-like Other OSs License type Framework?
DMS Software Reengineering Toolkit Semantic Designs C/C++, COBOL, PL/I, many others Arbitrary languages No Yes Yes Yes Proprietary Yes
ROSE Lawrence Livermore National Laboratory C, Fortran, and more C/C++, Fortran, and more Yes No Yes Yes BSD Yes

Free/libre and open source compilers

[edit]

Production quality, free/libre and open source compilers.

Research compilers

[edit]

Research compilers are mostly not robust or complete enough to handle real, large applications. They are used mainly for rapid prototyping of new language features and new optimizations in research areas.

See also

[edit]

Footnotes

[edit]

References

[edit]
from Grokipedia
A list of compilers is a catalog of software tools that translate source code written in high-level programming languages, such as C, Java, FORTRAN, or COBOL, into machine code or other executable forms that computers can directly run. Such lists also include interpreters and other translation tools for certain languages, and they serve as references for programmers, researchers, and educators, typically organizing compilers by the programming language they target, reflecting the specialized nature of translation for procedural, functional, logic-based, or other language paradigms. Compilers perform this translation through structured phases: lexical analysis to break code into tokens, syntax analysis to verify structure against grammar rules, semantic analysis for meaning validation, code generation to produce target output, and optimization to improve efficiency, such as by eliminating redundant operations or evaluating constants at compile time. This process bridges the gap between human-readable code and hardware-specific instructions, enabling efficient program execution across diverse systems.

The development of compilers traces back to the early days of computing, with the first compiler, A-0, created by Grace Hopper in 1951 for the UNIVAC I to automate the conversion of mathematical expressions into machine code. Subsequent milestones include the FORTRAN compiler in the late 1950s, which marked the first high-level language to achieve widespread practical use, and the evolution of multi-language collections like the GNU Compiler Collection (GCC), released in 1987, which supports C, C++, Fortran, Ada, and others through modular front ends and back ends. Today, lists of compilers highlight a vast ecosystem, from open-source projects like GCC and Clang (part of the LLVM project) to commercial tools tailored for specific platforms or performance needs, underscoring compilers' role in advancing software development, optimization, and innovation in computer science.
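The phases described above can be sketched in miniature. The following Python toy compiler front end (illustrative only, not drawn from any compiler listed here) tokenizes an arithmetic expression, parses it by recursive descent, and performs constant folding, i.e. evaluating constant subexpressions at compile time:

```python
import re

def tokenize(src):
    # Lexical analysis: break source text into tokens.
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    # Syntax analysis: recursive descent into a nested-tuple AST.
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("+", node, rhs)
        return node, i
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = atom(i + 1)
            node = ("*", node, rhs)
        return node, i
    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1  # skip the closing ")"
        return ("num", int(tokens[i])), i + 1
    node, _ = expr(0)
    return node

def fold(node):
    # Optimization: evaluate constant subexpressions at "compile time".
    if node[0] == "num":
        return node
    op, a, b = node[0], fold(node[1]), fold(node[2])
    if a[0] == "num" and b[0] == "num":
        return ("num", a[1] + b[1] if op == "+" else a[1] * b[1])
    return (op, a, b)

# The whole expression is constant, so folding reduces it to one literal.
folded = fold(parse(tokenize("2*(3+4)")))
```

A code-generation phase would then emit target instructions from the folded tree; here the entire expression collapses to a single constant.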

Early and Historical Languages

ALGOL 60 compilers

ALGOL 60, formalized in 1960, marked a pivotal advancement in programming languages through its introduction of block-structured syntax and recursive procedures, necessitating innovative compilation strategies that influenced subsequent language implementations. The language's Backus-Naur Form (BNF) specification enabled precise syntactic descriptions, facilitating the development of early compilers that parsed and translated code into machine instructions while preserving lexical scope and dynamic storage allocation. One of the earliest implementations was the X1 ALGOL 60 compiler, developed by Edsger W. Dijkstra and Jaap A. Zonneveld in the summer of 1960 for the Electrologica X1, a Dutch computer with ferrite-core memory. This compiler, operational by August 1960, was the first full realization of the ALGOL 60 standard, handling recursive calls and block structures through a recursive descent parser that generated code for the X1's architecture; it remains discontinued but has been restored for emulation. Another pioneering effort was the Elliott ALGOL compiler, created by C.A.R. Hoare, J. Hoare, and J.S. Hillmore at Elliott Brothers (London) Ltd. between 1960 and 1962 for the Elliott 803 transistorized computer. Targeted at scientific computing on a machine with 8K words of core memory, this nearly complete ALGOL 60 implementation supported block scoping via a syntax-directed approach, achieving efficient single-pass compilation; it is now historical and preserved in archival simulations. The Whetstone ALGOL compiler, developed by the English Electric Company's Data Processing and Control Systems Division for the KDF9 computer, represented a sophisticated early production implementation released in 1964.
Designed for the 48-bit KDF9 mainframe used in scientific applications, it employed a table-driven parser to manage ALGOL 60's complex syntax, including own variables and call-by-name evaluation, with optimizations for the system's multiprogramming environment; this compiler is discontinued but documented in detail for historical study. Similarly, the Burroughs ALGOL compiler for the B5000, introduced in 1961 by the Burroughs Corporation, was tailored to the machine's stack-based architecture, which natively supported block structures through hardware-managed descriptor-based memory allocation. This single-pass compiler enabled efficient execution of nested blocks and procedures without explicit stack management in code, standardizing ALGOL's scoping model in enterprise computing; it evolved into extended variants, but the original implementation is obsolete. ALGOL 60's formal BNF notation profoundly shaped syntax-directed compilation techniques, as exemplified by E. T. Irons' 1961 design for an ALGOL 60 parser that integrated parsing with semantic actions during a single pass. This approach, using priority-directed parsing to resolve ambiguities in the language's recursive grammar, influenced early parser generators like those derived from BNF variants, enabling modular construction for block-structured languages. By 1962, surveys indicated over 20 implementations worldwide, underscoring the standard's rapid adoption and its role in advancing compiler design through rigorous syntactic specifications.
Compiler Name | Developers | Target Platform | Year | Status
X1 ALGOL 60 | Edsger W. Dijkstra, Jaap A. Zonneveld | Electrologica X1 | 1960 | Discontinued, emulated
Elliott ALGOL | C.A.R. Hoare et al., Elliott Brothers | Elliott 803 | 1960-1962 | Discontinued, archived
Whetstone ALGOL | English Electric Co. | KDF9 | 1964 | Discontinued, documented
Burroughs ALGOL 60 | Burroughs Corporation | B5000 | 1961 | Discontinued, evolved
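As a toy illustration of the syntax-directed, single-pass technique described above (not code from any historical compiler), the following Python recursive-descent parser recognizes nested ALGOL-style begin/end blocks and computes their lexical nesting depth as a semantic action performed while parsing:

```python
def parse_block(tokens, i=0):
    # Parses: block ::= "begin" { block | ident } "end"
    # Returns (nesting_depth, next_index); the depth computation is the
    # "semantic action" carried out during the single parsing pass.
    assert tokens[i] == "begin"
    i += 1
    inner = 0
    while tokens[i] != "end":
        if tokens[i] == "begin":
            d, i = parse_block(tokens, i)  # recurse into a nested block
            inner = max(inner, d)
        else:
            i += 1  # a simple statement (identifier)
    return inner + 1, i + 1  # consume the matching "end"

# Two sibling inner blocks inside one outer block: depth 2.
depth, _ = parse_block("begin x begin y end begin z end end".split())
```

Real ALGOL 60 compilers additionally managed symbol tables, display registers, and code emission in the same pass; this sketch keeps only the parsing skeleton.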

ALGOL 68 compilers

ALGOL 68, whose defining report was finalized in 1968 and later revised, introduced advanced features such as strict mode-based typing and dynamic arrays, which allowed for flexible data structures with runtime dimensioning, but its implementation required compilers to navigate the intricate van Wijngaarden two-level grammar for parsing the language's context-sensitive syntax. This grammar, developed by Adriaan van Wijngaarden, used a metalinguistic layer to define productions, enabling precise specification of orthogonality (where language constructs could be combined freely without arbitrary restrictions) but posed significant challenges for compiler writers due to its non-standard formalism and the need for specialized generators to produce the effective syntax rules. Compilers for ALGOL 68 emphasized robust mode checking to enforce type safety across its orthogonal design, often implementing parallel processing hints and dynamic allocation for arrays to support scientific computing applications. Handling the van Wijngaarden grammar typically involved preprocessors or dedicated syntax analyzers to expand the metasyntax into a conventional context-free form, allowing standard parsing techniques while preserving the language's expressiveness for general-purpose programming. Despite these innovations, the language's complexity, stemming from its elaborate report and features like strong typing without exceptions, limited widespread adoption to academic and specialized environments, such as research labs and some European universities, where it influenced modular language design but was overshadowed by simpler alternatives like Pascal. Several notable compilers emerged in the 1970s, targeting mainframes and early minicomputers, with varying degrees of full-language support; by 2025, most historical implementations are archived, though open-source efforts keep the language viable for legacy and educational use.
Compiler Name | Developer/Institution | Initial Release Year | Target Systems | Key Features and Notes
Algol 68-R | Royal Radar Establishment (RRE), Malvern | 1970 | ICL 1900 series mainframes | Supported core orthogonality and dynamic arrays with runtime checks; included partial mode checking for efficiency, omitting some advanced parallel features; widely used in UK academia for teaching and research.
ALGOL 68RS | Royal Signals and Radar Establishment (RSRE), Malvern | 1970s | PDP-11, VAX, Unix systems (via C backend) | Full strict mode checking and dynamic array handling; integrated with the ELLA hardware description language for VLSI design, translating ALGOL 68 to C for portability; emphasized orthogonality in scientific simulations.
Algol 68 Genie (a68g) | Marcel van der Veer (independent) | 2005 (ongoing updates) | Linux, Windows, macOS, Unix variants | Hybrid compiler-interpreter with complete support for van Wijngaarden-defined features, including strict typing, dynamic arrays, and parallelism; active open-source implementation as of 2025, used for modern experimentation and porting legacy code.
GNU Algol 68 | Jose E. Marchesi (Oracle, GNU project) | 2025 (in development) | Platforms supported by GCC | Front end for GCC integrating full ALGOL 68 semantics, with emphasis on mode safety and dynamic structures; aims to revive the language for contemporary systems, building on historical orthogonality.
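The two-level grammar mechanism discussed above can be sketched as follows. This Python fragment (with invented metanotions and rules, not the Revised Report's actual productions) expands a hyper-rule into ordinary context-free productions by consistently substituting a metanotion, the kind of preprocessing step ALGOL 68 compilers used before applying standard parsing techniques:

```python
# Metanotions map a metavariable to the terminal notions it can denote.
METANOTIONS = {"MODE": ["integral", "real", "boolean"]}

# A hyper-rule: one rule template parameterized over the metanotion.
HYPER_RULES = [
    ("MODE assignation",
     ["MODE destination", "becomes symbol", "MODE source"]),
]

def expand(metanotions, hyper_rules):
    # Generate context-free productions: every occurrence of a metanotion
    # in one rule must be replaced by the SAME value (consistent
    # substitution), which is what gives the grammar its two-level power.
    productions = []
    for lhs, rhs in hyper_rules:
        for metanotion, values in metanotions.items():
            if metanotion in lhs:
                for value in values:
                    productions.append((
                        lhs.replace(metanotion, value),
                        [part.replace(metanotion, value) for part in rhs],
                    ))
    return productions

rules = expand(METANOTIONS, HYPER_RULES)
# One hyper-rule yields three ordinary productions, e.g.
# "real assignation" -> "real destination" "becomes symbol" "real source".
```

The real grammar's metanotions generate unbounded (even infinite) production sets, which is why practical compilers restricted or preprocessed them rather than implementing the formalism directly.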

Fortran compilers

The Fortran programming language, originally designed for scientific and numerical computation, has seen a progression of compilers since its inception, evolving to support increasingly complex high-performance computing (HPC) workloads such as large-scale simulations. The earliest compilers focused on translating mathematical formulas into machine code for mainframe systems, with subsequent versions incorporating optimizations for vector processing and parallel execution to handle large-scale scientific applications. Modern Fortran compilers emphasize compliance with evolving ISO standards, enabling features like array operations and intrinsic functions tailored for numerical precision and efficiency in fields like physics and engineering. The inaugural compiler, FORTRAN I, was developed by an IBM team under John Backus and released in April 1957 for the IBM 704 computer, marking the first successful high-level language implementation for formula translation in scientific programming. This compiler processed fixed-format source code and generated optimized assembly for arithmetic expressions, significantly reducing programming time for numerical tasks compared to hand-written assembly. Subsequent iterations, FORTRAN II (1958) and FORTRAN III (1959), introduced subroutines, independent compilation, and input/output enhancements, both still targeted at IBM mainframes and laying the groundwork for portable scientific software. In the 1960s, IBM's FTN compiler for the System/360 series extended FORTRAN IV support, incorporating optimizations for multitasking environments, and became a staple for early HPC installations in research labs. Concurrently, the WATFOR compiler, developed in 1965 at the University of Waterloo for the IBM 7040/7090, prioritized rapid compilation for educational use, processing student programs in seconds (a vast improvement over slower contemporaries) and influencing quick-turnaround tools in academia. Its successor, WATFIV (introduced around 1967), added debugging aids and extensions such as character handling, remaining in use through the mid-1980s before discontinuation as standards advanced beyond its FORTRAN IV base.
Fortran 77 compilers, such as early versions of IBM's VS FORTRAN, enforced fixed-form source with columns 1-5 for labels and 7-72 for statements, standardizing block IF and character data types for reliable numerical simulations. In contrast, Fortran 2018 compilers support free-form source (statements anywhere on a line) and advanced parallel extensions like Coarray Fortran (CAF) for distributed-memory programming and OpenMP directives for shared-memory parallelism, enabling scalable HPC applications on clusters. These features facilitate one-sided communication in CAF and loop-level parallelism in OpenMP, crucial for scientific modeling. Among free and open-source options, the GNU Fortran compiler (gfortran), integrated into the GNU Compiler Collection since 2004, provides cross-platform support for Windows, Linux, and macOS, with full compliance to Fortran 95/2003 and substantial Fortran 2018 features, including OpenMP 5.0. The GCC 15.1 release in April 2025 enhanced runtime library optimizations for numerical intrinsics, making it a go-to for academic and open-source HPC projects without licensing costs. Proprietary compilers dominate high-end HPC environments for their specialized optimizations. The Intel Fortran Compiler, part of Intel oneAPI, offers full Fortran 2018 support plus select Fortran 2023 features like improved team concurrency in CAF; its 2025.0.0 update discontinued the legacy ifort edition in favor of the LLVM-based ifx, incorporating AI acceleration via vectorized tensor operations on Intel CPUs and GPUs for workloads in scientific computing. Similarly, the HPE Cray Fortran compiler (ftn wrapper) targets supercomputers like the Cray XC series, supporting Fortran 2018 with vendor extensions for vectorization and achieving up to 90% parallel efficiency on AMD/Intel nodes, optimized for exascale simulations.
HPC-focused optimizations in these compilers include automatic vectorization, where loops over arrays are transformed into SIMD instructions for processors like x86 with AVX extensions, boosting throughput in numerical kernels by factors of 4-16x depending on data alignment. For instance, Intel's -xHost flag enables architecture-specific vectorization, while Cray's pragmas guide loop transformations, ensuring minimal overhead in large-scale array computations central to Fortran's scientific legacy. Cross-platform tools like gfortran bridge free access with broad compatibility, whereas proprietary ones like Intel's and Cray's excel in tuned performance for dedicated HPC hardware.
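The effect of automatic vectorization can be illustrated with a pure-Python simulation (a sketch only, no real SIMD): a scalar reduction loop is strip-mined into four-wide "lanes" plus a scalar remainder, which is the structural transformation vectorizing compilers apply to array loops:

```python
def scalar_sum(xs):
    # Original scalar loop: one element per "instruction".
    total = 0
    for x in xs:
        total += x
    return total

def vectorized_sum(xs, width=4):
    # Strip-mined loop: each main-loop iteration models ONE 4-wide SIMD add.
    lanes = [0] * width
    i = 0
    while i + width <= len(xs):
        for lane in range(width):       # conceptually a single SIMD add
            lanes[lane] += xs[i + lane]
        i += width
    total = sum(lanes)                  # horizontal reduction of the lanes
    for x in xs[i:]:                    # scalar remainder loop
        total += x
    return total

data = list(range(10))
result = vectorized_sum(data)
# Both forms must compute the same value; the vector form simply does
# fewer loop iterations, which is where the hardware speedup comes from.
```

On real hardware the lane additions execute as one instruction, so the main loop runs roughly `width` times fewer iterations, matching the 4-16x figures cited above for well-aligned data.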

COBOL compilers

COBOL compilers are specialized tools designed to translate code written in the Common Business-Oriented Language (COBOL), a programming language developed in 1959 for business data processing and widely used in enterprise systems for financial transactions, inventory management, and administrative applications. These compilers emphasize precision in decimal arithmetic and support for large-scale, record-oriented data handling, making them essential for legacy mainframe environments while adapting to modern distributed systems. Key implementations have evolved to maintain compatibility with evolving COBOL standards, such as COBOL 85, COBOL 2002, COBOL 2014, and the 2023 revision, which introduce enhancements like improved interoperability and cloud-native capabilities. Prominent compilers include IBM Enterprise COBOL, which traces its origins to the early 1960s as part of IBM's support for the initial standard and remains a cornerstone for mainframe development. This compiler, currently at version 6.5 for z/OS as of May 2025, fully supports the z17 architecture for optimized performance and includes features for modernizing applications, such as integration with cloud environments and AI-assisted code analysis tools like watsonx. Visual COBOL, developed by Micro Focus, a company founded in 1976 to port COBOL to minicomputers and microprocessors, supports the latest 2023 standard and enables development across Windows, Linux, and Unix platforms with native integration for the .NET and Java ecosystems. GnuCOBOL, an open-source implementation initiated in 2001 as OpenCOBOL and integrated into the GNU Project, provides free access to COBOL 2014 and partial 2023 compliance, supporting 19 dialects and running on Linux, Windows, and macOS without licensing costs.
COBOL compilers incorporate core language features tailored for business data processing, including the hierarchical DATA DIVISION, which organizes data items using level numbers (e.g., 01 for groups, 05 for subgroups) to define record structures for files and working storage, facilitating the complex, nested representations common in enterprise records. The PERFORM statement serves as a fundamental control mechanism for executing procedures, loops, and conditional blocks, allowing structured control flow without reliance on GO TO statements and supporting variations like PERFORM VARYING for iterative processing. SQL embedding is a widely supported capability, enabling direct integration of database queries within COBOL code using EXEC SQL blocks delimited by END-EXEC, which preprocesses SQL statements for execution on systems like Db2, thus bridging procedural code with relational database access. Historically, compilers prioritized fixed-point decimal arithmetic to ensure exact representations of monetary values, avoiding the rounding errors inherent in floating-point operations, a design choice rooted in the language's 1960 origins for precise business calculations. Over time, standards evolved to include floating-point support starting with COBOL 85, allowing compilers to handle scientific and mixed-precision computations while retaining fixed-point as the default for financial integrity, with modern implementations like Enterprise COBOL optimizing both modes. As of 2025, active COBOL compilers continue to thrive in mainframe environments like IBM z/OS for high-volume transaction processing, while ports to Linux and Windows support hybrid cloud migrations and developer accessibility. For instance, IBM Enterprise COBOL targets z/OS and extends to Linux on IBM Z and x86, Micro Focus Visual COBOL runs natively on Windows and Linux for cross-platform deployment, and GnuCOBOL offers portable compilation on open systems.
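The fixed-point-versus-floating-point distinction behind COBOL's design can be demonstrated with Python's decimal module (an analogy, not COBOL itself): binary floating point cannot represent 0.10 exactly, while decimal arithmetic keeps monetary values exact.

```python
from decimal import Decimal

# IEEE 754 binary floating point: 0.10 and 0.20 have no exact binary
# representation, so the sum carries a tiny rounding error.
binary = 0.10 + 0.20
float_is_exact = (binary == 0.30)          # False on IEEE 754 doubles

# COBOL-style fixed-point decimal arithmetic: every cent is represented
# exactly, so the comparison holds.
exact = Decimal("0.10") + Decimal("0.20")
decimal_is_exact = (exact == Decimal("0.30"))  # True
```

This is why COBOL's PICTURE clauses (e.g., PIC 9(7)V99) default to exact decimal representations for currency, with floating point reserved for scientific computation.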
Discontinued compilers, such as RMS COBOL associated with DEC VAX and OpenVMS systems from the 1980s, have been phased out following the end of VMS support, prompting migrations to active alternatives like VSI COBOL for remaining legacy OpenVMS sites.

PL/I compilers

PL/I compilers implement the Programming Language/I (PL/I), a high-level procedural language developed by IBM in the mid-1960s to serve scientific, business, and systems programming needs, combining features for data processing, numerical computation, and low-level control. The language specification emerged in 1964, with the first production compiler, the PL/I F compiler, shipped by IBM in 1966 as part of the OS/360 operating system for the System/360 mainframe, enabling early adoption in enterprise environments. This initial implementation supported a broad subset of the language, focusing on portability across IBM hardware, and laid the foundation for PL/I's role in mission-critical applications despite initial delays in full optimization. Over time, PL/I evolved through ANSI X3.74-1987 standardization, which defined the full language, including enhancements for precision and concurrency, allowing compilers to mature in handling complex, multitasking programs. Key historical and modern PL/I compilers include IBM's foundational offerings and specialized implementations. The IBM PL/I Optimizing Compiler, introduced in the 1970s, succeeded the PL/I F version and became the primary tool for OS/360 and subsequent systems, optimizing code for performance in large-scale data processing until the 1990s. The Multics PL/I compiler, developed for the Multics operating system and operational by late 1969, was pioneering as the first full PL/I compiler written in PL/I itself, generating high-speed object code and influencing compiler design through its self-hosting approach. In modern contexts, IBM Enterprise PL/I for z/OS Version 6.2, generally available since June 2025, provides updates for IBM z16 hardware, including improved multithreading and integration with z/OS tools for scalable web-based applications.
PL/I compilers emphasize features for robust execution in demanding environments, such as exception handling via ON-conditions, which respond to conditions like arithmetic errors or input/output failures through predefined or user-defined routines, enabling resilient program flow without halting execution. Precision arithmetic is supported through FIXED declarations, allowing fixed-point decimal operations with specified scale and precision (e.g., FIXED DECIMAL(10,2) for two decimal places), which ensures accurate financial and scientific calculations by avoiding the representation issues inherent in binary floating point. Storage management in PL/I includes BASED variables for dynamic allocation at arbitrary addresses and CONTROLLED variables for automatic reuse across procedure calls, reducing manual memory handling while supporting tasks like pointer manipulation. PL/I found niche applications in avionics for its reliability in real-time and embedded systems, where custom compiler variants were evaluated for high-level language execution on flight computers, prioritizing verifiable precision and maintainability over low-level assembly. Discontinued compilers, such as the original PL/I F from the 1960s and various vendor-specific implementations for legacy minicomputers (e.g., early Data General ports), highlight the language's contraction outside IBM ecosystems, though remnants persist in archived mainframe environments for maintenance of historical codebases.
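A minimal sketch of the ON-condition style of exception handling, transplanted into Python (names and behavior are illustrative and do not capture full PL/I semantics): a handler is established for a named condition such as ZERODIVIDE, and execution resumes with the handler's fixup instead of the program halting.

```python
# Registry of established ON-units, keyed by condition name.
handlers = {}

def on(condition, handler):
    # Analogue of "ON ZERODIVIDE <on-unit>;": establish a handler.
    handlers[condition] = handler

def signal(condition, default):
    # Invoke the established ON-unit if present, else a default fixup.
    return handlers.get(condition, lambda: default)()

def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Raise the ZERODIVIDE condition; execution continues with the
        # handler's result rather than terminating.
        return signal("ZERODIVIDE", float("inf"))

on("ZERODIVIDE", lambda: 0)   # establish an ON-unit that substitutes 0
result = safe_divide(10, 0)   # program flow resumes normally
```

In real PL/I, ON-units can also resume at the point of interruption or transfer control nonlocally; this sketch shows only the substitute-a-value pattern.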

Procedural and Imperative Languages

C compilers

C compilers are essential tools for translating C source code into machine-executable binaries, supporting the language's evolution from the original K&R specification in 1978 to the ISO/IEC 9899:2024 (C23) standard. Designed for systems programming, C compilers emphasize low-level control over hardware, enabling efficient code generation for operating systems, embedded devices, and performance-critical applications while prioritizing portability across diverse architectures. Their role in Unix development underscored C's influence, with compilers facilitating direct memory manipulation while achieving cross-platform compatibility. The Portable C Compiler (PCC), developed by Stephen C. Johnson at Bell Labs in the late 1970s, was one of the earliest efforts to standardize compilation beyond the initial PDP-11 implementation. Released with Unix Version 7 in 1979, PCC introduced a retargetable front-end and back-end design, allowing adaptation to multiple hardware platforms such as the VAX and the Interdata 8/32, which enhanced C's portability for early Unix ports. Though superseded by more advanced tools, modern forks of PCC continue limited development for niche systems, maintaining its legacy in compiler design principles. Among contemporary C compilers, the GNU Compiler Collection (GCC) stands as a cornerstone, initiated by Richard Stallman in 1987 to provide a free alternative for Unix-like systems. GCC's C front end supports standards from K&R C through C23, with substantial C23 support in version 15.1 released in 2025, including features like the #embed directive for binary inclusion. As of November 2025, GCC 15 defaults to C23 with near-complete support. Widely used in Linux distributions and embedded development, GCC excels in cross-compilation, generating code for over 20 architectures via configurable back ends. Clang, part of the LLVM project, emerged in 2007 under Apple's sponsorship as a modular, high-performance alternative to GCC, leveraging LLVM's intermediate representation for optimized code generation.
Integrated into Apple's Xcode since 2012, Clang powers iOS and macOS development, offering rapid compilation and superior diagnostics for C code up to the C23 standard, with near-complete support by 2025 and further advances in version 19. Its design enables seamless interoperability with Apple's ecosystems, including bitcode generation for app thinning, while supporting cross-compilation to ARM and x86 targets. Microsoft Visual C++ (MSVC), originating from Microsoft's C compiler of the early 1980s (initially based on Lattice C and released as Microsoft C 1.0 in 1983), has evolved into a comprehensive toolchain for Windows development. By 2025, MSVC supports core C standards up to C17 with partial C23 features like typeof and attributes, focusing on integration with the Visual Studio IDE. It includes tools for debugging pointer-heavy code and optimizations tailored to x86/x64 Windows environments, though full C23 adoption lags behind open-source alternatives. For embedded systems, the Small Device C Compiler (SDCC) targets resource-constrained 8- and 16-bit microcontrollers, such as the 8051 and STM8, with support for ANSI C89 through C23 standards as of its 4.5 release in 2025. SDCC features a retargetable optimizer that handles limited memory efficiently, generating compact code for devices with under 64 KB RAM, and includes standard libraries suited to embedded portability. C compilers universally process preprocessor directives (such as #include, #define, and conditional compilation with #ifdef) to expand macros and manage source dependencies before translation, ensuring modular code organization across projects. Pointer arithmetic, a hallmark of C's systems-level capabilities, allows compilers to treat pointers as offsets into memory arrays, enabling efficient data structure traversal but requiring careful bounds checking to avoid undefined behavior (UB).
UB, including dereferencing null pointers or signed integer overflow, permits compilers to assume non-occurrence for aggressive optimizations; for instance, GCC and Clang eliminate redundant checks under -O2, potentially improving performance by 20-30% in pointer-intensive loops. Modern C compilers incorporate optimizations like function inlining, which substitutes callee code at call sites to reduce overhead, and loop unrolling, which replicates loop bodies to minimize branch instructions; GCC's -finline-functions and Clang's equivalent flags can yield up to 15% speedup in numerical computations. These techniques, combined with vectorization for SIMD instructions, underscore C compilers' role in high-performance computing. Cross-compilation support in GCC and Clang, via target specifications like --target=arm-linux-gnueabi, facilitates building for remote architectures without native hardware, essential for portable embedded and server applications.
Compiler | Developer | First Release | Standards Supported (as of 2025) | Key Notes
PCC | Bell Labs (Stephen C. Johnson) | 1979 | K&R C, ANSI C89 | Retargetable design for early Unix ports; historical influence on portability.
GCC | GNU Project | 1987 | K&R to C23 (near-full) | Open-source, multi-target; defaults to C23 in GCC 15.
Clang/LLVM | LLVM Project | 2007 | ANSI C89 to C23 (near-full) | Modular, fast diagnostics; Apple ecosystem integration.
MSVC | Microsoft | 1983 | K&R to C17 (partial C23) | Windows-focused; strong debugger for pointer arithmetic.
SDCC | Open-source community | 1999 | ANSI C89 to C23 | Embedded optimization for 8/16-bit MCUs; compact code generation.
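The loop-unrolling transformation discussed above can be sketched in Python (illustrative only; real compilers perform it on machine-level loops): the unrolled version executes four copies of the body per loop test, reducing branch overhead, with a remainder loop for leftover iterations, and must compute exactly the same result as the original.

```python
def rolled_dot(a, b):
    # Original loop: one body, one loop test per element.
    acc = 0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

def unrolled_dot(a, b):
    # Unrolled by 4: four body copies per loop test/branch.
    acc = 0
    n = len(a)
    i = 0
    while i + 4 <= n:
        acc += a[i]     * b[i]
        acc += a[i + 1] * b[i + 1]
        acc += a[i + 2] * b[i + 2]
        acc += a[i + 3] * b[i + 3]
        i += 4
    while i < n:            # remainder loop for n not divisible by 4
        acc += a[i] * b[i]
        i += 1
    return acc

xs = list(range(7))         # length 7: exercises the remainder loop
ys = [2] * 7
same = rolled_dot(xs, ys) == unrolled_dot(xs, ys) == 42
```

A compiler applying this transformation must preserve exactly this equivalence, which is why UB assumptions (no overflow, no aliasing surprises) matter: they let the optimizer prove the two forms identical.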

Pascal compilers

Pascal compilers implement the Pascal programming language, designed by Niklaus Wirth in 1970 to promote structured programming principles such as modularity, strong typing, and the use of variant records for data abstraction, making it particularly suitable for educational purposes. Wirth's design emphasized readability and portability, influencing its adoption in university curricula worldwide during the 1970s and 1980s as a tool for teaching algorithmic thinking and software engineering basics. Early implementations focused on mainframes but quickly expanded to microcomputers, enabling broader accessibility for students and hobbyists. One of the earliest notable Pascal compilers was UCSD Pascal, released in 1978 by the University of California, San Diego, as part of the p-System operating environment. This portable implementation used p-code for a virtual machine, allowing it to run on diverse hardware such as the Apple II and the DEC PDP-11, and highlighted Pascal's strong typing and unit system for reliable code development. Its educational impact was significant, with the system distributed to over 100,000 users by the early 1980s, fostering programming instruction in academic settings. Turbo Pascal, introduced by Borland in 1983, revolutionized Pascal compilation with its fast, integrated compiler and editor for CP/M and MS-DOS systems. Developed from Anders Hejlsberg's earlier work, it supported Pascal's core features like procedures, records, and variant records while adding extensions for efficiency, becoming a staple for teaching and hobbyist development. Its affordability and speed, compiling programs in seconds, made it popular in education and small-scale development until the 1990s. For adherence to the formal ISO 7185 standard, which defines Pascal's syntax and semantics including strong type checking, several compilers emerged, such as the GNU Pascal Compiler (GPC) integrated with GCC. GPC, active from the 1990s, provided a free, standards-compliant implementation targeting multiple platforms and emphasizing Pascal's portability for educational and portable code.
In modern contexts, Free Pascal, initiated in 1993 by Florian Klämpfl, offers an open-source compiler compatible with Turbo Pascal, Delphi, and ISO 7185 dialects, supporting strong typing, units, and cross-compilation to architectures like x86, ARM, and PowerPC. It integrates with the Lazarus IDE as of 2025, enabling graphical application development while preserving Pascal's structured approach for teaching and legacy maintenance. Object Pascal extends the language with object-oriented features, powering Delphi since 1995 (originally from Borland, now Embarcadero), which builds on Turbo Pascal's lineage for native Windows, macOS, and mobile apps. This evolution maintains Pascal's emphasis on strong typing and modularity, with variants enhanced for runtime polymorphism, and remains used in education for introducing OOP concepts.

Ada compilers

Ada compilers implement the Ada programming language, standardized by ISO/IEC 8652 since 1983, with revisions in 1995, 2005, 2012, and 2022, emphasizing reliability for safety-critical and embedded systems in domains like aerospace and defense. These compilers support Ada's strong typing, modularity, and concurrency features, enabling development of verifiable software that meets stringent certification requirements. Early development of Ada compilers began with the NYU Ada/Ed translator, the first prototype released in 1983 and written in the SETL language, which provided foundational translation capabilities for the initial Ada 83 standard. A prominent modern compiler is GNAT, developed by AdaCore since 1992 and based on the GNU Compiler Collection (GCC), offering full support for Ada standards up to Ada 2022 across multiple platforms including embedded targets. GNAT integrates with the SPARK toolset for formal verification, allowing static analysis of Ada code to prove absence of runtime errors and adherence to specifications. Another key compiler is Green Hills AdaMULTI, an integrated development environment from Green Hills Software that supports Ada alongside C, C++, and Fortran for embedded applications, featuring optimizing compilers that enhance code efficiency and reliability in resource-constrained environments. Ada's tasking model, introduced in Ada 83, uses rendezvous for synchronous communication between tasks, where a calling task waits at an entry call until the called task accepts it, ensuring controlled concurrency in real-time systems. Generics in Ada enable parameterized packages and subprograms for type-safe reuse, compiled at instantiation time without runtime overhead, supporting abstraction in large-scale software. Many Ada compilers support DO-178C certification, the aviation standard for software assurance levels A through E, with GNAT Pro providing qualified runtimes and toolsets that facilitate traceability, coverage analysis, and verification for airborne systems.
The language has evolved to Ada 2022, incorporating enhancements to contract-based programming such as improved preconditions, postconditions, and invariants for subprograms and types, enabling compile-time checks that strengthen software correctness. In 2025, Ada toolchains such as GNAT are advancing support for verifiable software in automotive and autonomous systems through partnerships, such as AdaCore's collaboration with NVIDIA to apply Ada and SPARK to security-critical firmware, aligning with functional-safety guidelines. Ada's structured roots trace back to Pascal, influencing its emphasis on safe, readable, strongly typed code.
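Ada's rendezvous has no direct equivalent outside the language, but the synchronous handshake it describes — a caller blocking at an entry until the task accepts and completes it — can be sketched in Python with threads and queues. This is a conceptual illustration only; the Server class and call_entry helper are invented for the sketch, not part of any Ada tooling.

```python
import queue
import threading

class Server(threading.Thread):
    """Accepts 'entry calls' one at a time, like an Ada task accepting a rendezvous."""
    def __init__(self):
        super().__init__(daemon=True)
        self.entries = queue.Queue()

    def run(self):
        while True:
            args, reply = self.entries.get()  # 'accept' the entry call
            reply.put(args * 2)               # work performed during the rendezvous

def call_entry(server, args):
    """The caller blocks here until the server accepts the call and replies."""
    reply = queue.Queue(maxsize=1)
    server.entries.put((args, reply))
    return reply.get()                        # wait for the rendezvous to complete

server = Server()
server.start()
print(call_entry(server, 21))  # prints 42
```

Because the server accepts one entry at a time, concurrent callers are serialized exactly as Ada's rendezvous serializes entry calls.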

D compilers

The D programming language, designed for systems programming with a focus on performance and productivity, is supported by three primary compilers: DMD, the reference implementation; LDC, an LLVM-based optimizer; and GDC, a GCC frontend. These compilers enable D's advanced metaprogramming capabilities, allowing code generation and execution at compile time to produce efficient binaries suitable for high-performance applications. DMD, developed by Walter Bright at Digital Mars, serves as the official reference compiler and was first released in 2001 as the initial implementation of the language. It features fast compilation speeds and direct code generation, making it ideal for development and testing, while supporting all language features including seamless interoperability with C and C++ code through direct linking and mangling compatibility. LDC, initiated in 2007, leverages the LLVM backend for superior optimization and cross-platform code generation, often achieving better runtime performance than DMD in compute-intensive tasks due to advanced vectorization and inlining. GDC, started in 2004 by David Friedman and integrated into GCC since version 9 in 2018, uses the GCC infrastructure for broad target support, including embedded systems, and emphasizes mature optimization passes shared with other GCC languages. A hallmark of D compilers is support for compile-time function execution (CTFE), which interprets and executes functions during compilation to enable code generation without runtime overhead. Template mixins further enhance this by allowing entire blocks of code, including templates, to be expanded and inserted at compile time, facilitating reusable patterns akin to macros but type-safe. Garbage collection in D is optional and configurable, with compilers providing runtime libraries that can be disabled or customized for real-time systems; the @nogc attribute enforces garbage-collection-free code, ensuring deterministic memory behavior for safety-critical applications.
C++ interoperability is facilitated via D's C-compatible ABI and extern(C++) declarations, allowing direct calls to C++ libraries with minimal wrapping. In 2025 releases, such as D 2.111.0, compilers introduced enhancements for parallelism, including improvements to std.parallelism task management and placement new expressions compatible with @nogc, enabling efficient allocation in concurrent contexts without GC pauses. These updates, along with improved extern(C++) support in DMD, bolster D's suitability for parallel and concurrent systems programming, with LDC particularly benefiting from LLVM's ongoing optimization work.
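D's template mixins inject generated declarations into a scope at compile time; Python has no compile stage, but the flavor of mixing generated members into a type can be sketched at class-creation time. This is a loose cross-language analogy — the make_accessors decorator below is an invented helper, far weaker than D's type-checked mixins.

```python
def make_accessors(*fields):
    """Generate a getter method for each field name and mix it into the class."""
    def add(cls):
        for name in fields:
            def getter(self, _name=name):  # default arg avoids late binding
                return getattr(self, "_" + _name)
            setattr(cls, "get_" + name, getter)
        return cls
    return add

@make_accessors("x", "y")
class Point:
    def __init__(self, x, y):
        self._x, self._y = x, y

p = Point(3, 4)
print(p.get_x(), p.get_y())  # prints 3 4
```

In D the equivalent expansion happens in the compiler, so the generated members are fully typed and carry no runtime generation cost.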

BASIC compilers

BASIC compilers translate dialects of the BASIC programming language into machine code, producing efficient, standalone executables rather than relying solely on interpretation. Originating from the need to accelerate performance in early personal computing, these compilers evolved alongside BASIC variants, starting with simple structured dialects and progressing to graphical, event-driven environments. Key examples include Microsoft's QuickBASIC, released in 1985 as an IDE and compiler for MS-DOS, which built on the earlier interpreted dialects by adding compilation to executable files and features like subroutines for modular programming. Early BASIC compilers often limited support to integer arithmetic to optimize speed and memory usage on resource-constrained hardware. By the late 1980s and early 1990s, dialects progressed from text-based systems to more advanced forms, culminating in Microsoft Visual Basic 1.0 in 1991, which introduced event-driven compilation for Windows GUI applications, where code responds to user interactions like button clicks via compiled event handlers. Visual Basic 6.0, released in 1998, refined this model with native code generation options, p-code intermediates, and form-based development, making it a staple for quick prototyping of desktop software. Open-source and specialized compilers continue this legacy for modern hobbyist and embedded applications. FreeBASIC, initially released in 2004, is a multi-platform compiler compatible with QuickBASIC syntax, incorporating pointers, object-oriented extensions, and native code output for Windows, Linux, and DOS, ideal for game development and legacy code maintenance among enthusiasts. For embedded systems, BASCOM-AVR, developed starting in 1995 for the Atmel AVR microcontroller family, provides a Windows-based IDE with BASIC compilation to optimized assembly, supporting peripherals like I/O pins and timers for rapid prototyping in hobbyist and educational projects. While interpreter variants handle runtime evaluation for interactive scripting, BASIC compilers emphasize performance through ahead-of-time native code generation.

Object-Oriented Languages

C++ compilers

C++ compilers implement the ISO/IEC 14882 standard, initially published in 1998, which formalized features like templates and the Standard Template Library (STL) for generic algorithms and container abstractions. Subsequent revisions through C++23 have expanded support for modern idioms, including concurrency and modules, enabling efficient compilation of complex object-oriented and generic code across diverse hardware. These compilers must handle C++'s manual memory management, multiple inheritance, and zero-overhead abstractions, distinguishing them from garbage-collected languages like Java. The evolution of C++ standards began with the Annotated C++ Reference Manual (ARM) in 1990, which guided early implementations before the first ISO standard (C++98) codified core features such as exception handling for robust error propagation and resource acquisition is initialization (RAII) for deterministic resource management via constructors and destructors. Lambda expressions, introduced in C++11, allow inline function objects for concise functional-style programming, with compilers optimizing them to avoid runtime overhead. Since C++20, the standard includes modules for faster builds and better encapsulation, reducing reliance on header files and improving compilation scalability. Major C++ compilers include the GNU Compiler Collection (GCC), originating in 1987 and providing full support for C++23 features like coroutines and concepts, with experimental C++26 capabilities since GCC 14. Clang, developed by the LLVM project since 2007, offers robust modules support in C++20 and beyond, enabling incremental compilation and diagnostics for large codebases. Microsoft Visual C++ (MSVC), with roots in Microsoft's C compilers of the 1980s but mature C++ support from the 1990s, implements C++20 concepts for constrained templates in its 2025 updates, aiding type-safe generic code. Intel's oneAPI DPC++ Compiler, based on Clang, extends C++ with SYCL for heterogeneous computing, supporting STL extensions and lambda offloading to accelerators.
These compilers facilitate cross-platform development, with GCC and Clang targeting Windows, Linux, macOS, and embedded systems via portable backends. For GPU compilation, oneAPI DPC++ enables SYCL-based parallelism on GPUs and FPGAs, while Clang supports CUDA interoperability through its NVPTX backend for NVIDIA hardware, allowing host-device code integration within a single toolchain. Exception handling in these tools follows the Itanium C++ ABI's unwind semantics in GCC and Clang, while MSVC uses its own Windows unwinding model; in all cases, RAII destructors run during stack unwinding.
Compiler | Origin year | Key C++ standard support | Notable features
GCC | 1987 | Full C++23, experimental C++26 | Templates/STL optimization, cross-platform portability
Clang | 2007 | C++20 with modules | Fast diagnostics, LLVM backend for GPUs
MSVC | 1980s (C++ from the 1990s) | C++20 concepts (2025 updates) | Windows integration, RAII in debug tools
oneAPI DPC++ | 2019 (based on Clang) | C++17 + SYCL extensions | GPU/FPGA offload, heterogeneous parallelism
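RAII ties resource lifetime to scope: acquisition in the constructor, release in the destructor, even when an exception unwinds the stack. Python's context managers express the same deterministic-cleanup idea at runtime; the Resource class below is an invented cross-language analogy, not C++ semantics.

```python
class Resource:
    """Cleanup runs deterministically on scope exit, echoing a RAII destructor."""
    def __init__(self, name):
        self.name = name
        self.open = True          # 'constructor' acquires the resource

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.open = False         # 'destructor' releases it, even on exceptions
        return False              # do not swallow exceptions

with Resource("file") as r:
    assert r.open                 # resource is live inside the scope
assert not r.open                 # released the moment the block exits
```

The key shared property is that release is tied to control flow leaving the scope, not to a garbage collector running at some later time.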

Java compilers

The javac compiler, developed by Sun Microsystems and now maintained by Oracle as part of the OpenJDK project, serves as the reference implementation for compiling Java source code into bytecode for the Java Virtual Machine (JVM). Originally evolving from the Oak programming language project initiated in 1991, Java and its compiler were publicly released in 1995, with javac included in the first stable JDK 1.0 in 1996. The compiler has undergone continuous evolution, incorporating support for major language features such as annotations and generics in Java SE 5 (2004), which introduced type erasure to maintain backward compatibility by removing generic type information at runtime while enforcing type safety during compilation. By Java SE 8 (2014), javac added support for lambda expressions, enabling functional programming constructs that compile to invokedynamic bytecode instructions for efficient runtime invocation. Further advancements in Java SE 9 (2017) integrated the Java Platform Module System (JPMS), allowing javac to enforce modular boundaries during compilation, promoting better encapsulation and reducing classpath complexities in large-scale applications. As of JDK 25, released in September 2025, javac supports preview features for upcoming enhancements while maintaining compatibility with Java SE standards. The Eclipse Compiler for Java (ECJ), developed by the Eclipse Foundation as part of the JDT Core project, provides an independent, incremental compiler alternative to javac, optimized for IDE integration and fast rebuilds. First integrated into the Eclipse IDE around 2001, ECJ compiles Java source directly to bytecode without runtime dependencies, offering faster incremental builds for development workflows and full compliance with Java language specifications up to the latest versions. It excels in handling large codebases by recompiling only changed files, making it a popular choice for build tools like Maven and Gradle plugins.
GraalVM, an advanced toolkit from Oracle Labs, extends compilation capabilities by enabling ahead-of-time (AOT) native image generation alongside traditional bytecode execution, reducing startup times and memory footprint for cloud-native and serverless applications. Introduced in 2018, its Native Image tool analyzes applications at build time to compile them into standalone executables, embedding only necessary classes and libraries while supporting JPMS for modular apps. This approach contrasts with just-in-time (JIT) compilation in standard JVMs, providing peak performance from startup without warm-up phases, though it requires static analysis to handle dynamic features like reflection. In enterprise environments, IBM's J9 (now Eclipse OpenJ9) integrates a high-performance JIT compiler with standard bytecode execution, typically consuming javac- or ECJ-compiled class files before optimizing hot paths to native code at runtime. Originally released by IBM in the early 2000s for mission-critical systems, it emphasizes low-latency garbage collection and AOT compilation options for faster initialization. For Android development, the Jack toolchain—Google's experimental compiler for translating Java 8 features directly to Dalvik bytecode—was introduced in 2015 but deprecated in 2017 in favor of desugaring via the standard javac and D8/R8 tools.

C# compilers

C# compilers are integral to the .NET ecosystem, enabling the compilation of C# code versions from 1.0 (released in 2002) to the latest C# 14 (introduced with .NET 10 in November 2025). These compilers produce intermediate language (IL) code that runs on the Common Language Runtime (CLR), supporting managed execution across Windows, Linux, and macOS through the unified .NET platform. The primary compiler for C# is Roslyn, Microsoft's open-source .NET Compiler Platform, first announced in 2011 and fully open-sourced in 2014 under the Apache License 2.0. Roslyn replaced the legacy C# compiler (csc.exe, originally written in C++ and used from C# 1.0 through 5.0) starting with Visual Studio 2015 and C# 6.0, providing a modular API for code analysis, syntax trees, and semantic models that facilitates tools like Visual Studio's IntelliSense. The modern csc.exe executable is now powered by Roslyn, supporting all C# features up to version 14, including async/await (introduced in C# 5.0 for asynchronous programming), pattern matching (enhanced in C# 7.0 and later for expressive data deconstruction), and records (added in C# 9.0 for immutable data types). Roslyn's evolution reflects C#'s roots in addressing limitations of prior efforts like J#, a transitional language for Java migration to .NET, by emphasizing a cleaner syntax influenced by Java while incorporating C++-style features for enterprise development. Its cross-platform capabilities were bolstered with .NET Core (now .NET 5+), allowing compilation for non-Windows environments, and extended to web scenarios via Blazor, which uses Roslyn to compile C# for WebAssembly-based browser execution or ahead-of-time (AOT) native code. An alternative is the Mono compiler (mcs), part of the open-source Mono project initiated in 2001 to provide a cross-platform .NET implementation. Mono's mcs supports C# up to version 8.0 fully and partial features of later versions, compiling to IL for the Mono runtime, which remains relevant for legacy applications and mobile development via Xamarin (acquired by Microsoft in 2016 and integrated into .NET).
While Roslyn has become the standard for new .NET development, Mono's compiler offers compatibility for scenarios requiring independent .NET runtimes outside Microsoft's ecosystem.

Eiffel compilers

Eiffel compilers support the Eiffel programming language, which emphasizes object-oriented programming through design by contract, enabling developers to specify preconditions, postconditions, and class invariants to ensure software reliability and correctness. These compilers facilitate features like multiple inheritance for code reuse and void safety to prevent null pointer dereferences at runtime, aligning with the language's goal of producing robust, maintainable systems. The language adheres to international standards, including ECMA-367 (equivalent to ISO/IEC 25436:2006), which defines its syntax, semantics, and implementation requirements for analysis, design, and programming. The primary commercial implementation is EiffelStudio, developed by Eiffel Software since the 1980s and continuously updated to the present, including version 25.02 released in March 2025. EiffelStudio integrates a full compiler with an IDE that supports the entire Eiffel language, including advanced contract enforcement during compilation and runtime checks to verify preconditions before routine execution and postconditions afterward. It generates optimized code for multiple platforms, incorporating void safety mechanisms that require explicit annotations for potentially void references, thus eliminating common runtime errors. Gobo Eiffel is an open-source compiler project initiated in the late 1990s, providing a portable command-line compiler called GEC that compiles Eiffel code to C and works across various environments. Released under a BSD-like license, it emphasizes library portability and supports core Eiffel features such as inheritance hierarchies and contract-based assertions, with the latest version 25.10 issued in October 2025. Gobo Eiffel prioritizes standards compliance, enabling the creation of reusable components without dependence on a single vendor's tools.
SmallEiffel, originally released in the 1990s as the GNU Eiffel compiler, was a free implementation that bootstrapped itself and optimized Eiffel code to efficient C output, supporting design by contract through integrated assertion checking. Development slowed in the early 2000s and was largely discontinued by the mid-2000s, with its last stable release at version 2.3, though it influenced subsequent open-source efforts like SmartEiffel and Liberty Eiffel. Eiffel compilers hold a niche in education, where they are used to teach principles of reliable object-oriented design, programming, and verification techniques in academic settings. Universities often incorporate EiffelStudio in courses to demonstrate how preconditions and postconditions promote modular, error-free development, fostering skills in building extensible systems with strong guarantees.
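Design by contract can be approximated in other languages to show the idea behind Eiffel's native checks. A minimal Python sketch, where the contract decorator is an invented helper (Eiffel enforces require/ensure clauses in the compiler and runtime):

```python
import functools

def contract(pre=None, post=None):
    """Check a precondition on the arguments and a postcondition on the result."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args):
            if pre is not None:
                assert pre(*args), "precondition violated"
            result = fn(*args)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def square_root(x):
    return x ** 0.5

print(square_root(9.0))  # prints 3.0
```

Calling square_root(-1.0) fails the precondition before the body runs — the same blame assignment Eiffel contracts provide: the caller violated the routine's requirement.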

Smalltalk compilers

Smalltalk compilers primarily target the language's unique image-based execution model, where the entire runtime environment—including classes, methods, and objects—is persisted in an image file that can be incrementally modified and reloaded. Originating from the pioneering work at Xerox PARC in the 1970s, Smalltalk's compilation process involves translating source code into platform-independent bytecode executed by a virtual machine (VM), often augmented by just-in-time (JIT) compilation in contemporary systems to optimize performance during runtime. This approach supports Smalltalk's reflective nature, allowing code to be compiled, executed, and modified live without restarting the system. A hallmark of Smalltalk's design is its uniform object model, featuring metaclasses that treat classes themselves as objects; each class is an instance of its metaclass, which inherits from a root metaclass hierarchy, enabling meta-programming capabilities such as dynamic class modification. Blocks, known as first-class closures, are delimited by square brackets and capture lexical scope, allowing deferred execution or iteration patterns like collection methods (e.g., select: or collect:) that pass blocks as arguments for functional-style operations. These elements underpin Smalltalk's pure object-oriented paradigm, where everything, including control structures, is achieved through message passing. VisualWorks, originally released in the late 1980s by ParcPlace Systems and now maintained by Cincom, represents a traditional commercial Smalltalk compiler emphasizing enterprise-grade tools for application development. It compiles Smalltalk code to bytecode within its image-based system, supporting cross-platform deployment and integration with external databases and web services through extensions like the Store versioning system. VisualWorks has been used in high-stakes domains such as financial systems, leveraging its stable VM for reliable, long-running applications.
Squeak, introduced in 1996 as an open-source derivative of Smalltalk-80 by a team including Dan Ingalls and Alan Kay, focuses on educational and exploratory programming with a lightweight compiler embedded in its VM. Its image-based architecture facilitates live programming, and the system includes the Morphic interface for direct manipulation of objects, making it ideal for teaching object-oriented concepts. Squeak's design supports metacircular evaluation, where the compiler itself is implemented in Smalltalk, promoting accessibility for learners. Pharo, forked from Squeak in 2008, advances Smalltalk compilation with open-source innovations, including JIT capabilities through the Cog VM, which dynamically translates hot bytecode methods to native machine code for improved speed in compute-intensive tasks. Pharo's toolchain integrates with modern development workflows, such as Git-based version control via Iceberg, while retaining image-based tools for refactoring and debugging. It is actively maintained for both research and production use, with recent versions enhancing performance through advanced optimizations in the JIT pipeline. Evolution from Xerox PARC's foundational systems has led to modern web adaptations, where Smalltalk compilers enable browser-native execution; for instance, Amber Smalltalk transpiles code to JavaScript, allowing full image-based development directly in web environments with access to DOM APIs via seamless interoperability. Similarly, SqueakJS ports the Squeak VM to JavaScript, running unmodified images in browsers for educational demos and interactive applications. These ports extend Smalltalk's live environment to web platforms without compromising its core principles.
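Smalltalk's select: and collect: pass blocks over collections; the same pattern maps directly onto filter/map-style closures in Python. This is a cross-language analogy, not Smalltalk syntax:

```python
numbers = [1, 2, 3, 4, 5]

# Smalltalk: numbers select: [:n | n even] — keep elements satisfying the block
evens = [n for n in numbers if n % 2 == 0]

# Smalltalk: numbers collect: [:n | n * n] — transform each element with the block
squares = [n * n for n in numbers]

print(evens)    # prints [2, 4]
print(squares)  # prints [1, 4, 9, 16, 25]
```

In Smalltalk the block is itself an object sent as a message argument, so even iteration is ordinary message passing rather than special-case syntax.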

Functional and Logic Languages

Common Lisp compilers

Common Lisp compilers implement the ANSI Common Lisp standard, established on December 8, 1994, which defines a portable dialect emphasizing extensibility through macros that allow users to define new syntactic constructs at compile time. This portability enables code to run across diverse implementations with minimal changes, supporting both interactive development and production deployment. The standard builds on the 1984 specification in "Common Lisp: The Language" by Guy L. Steele Jr., providing a robust foundation for general-purpose programming. Prominent open-source compilers include Steel Bank Common Lisp (SBCL), forked in 1999 from CMU Common Lisp, known for its optimizing native code generation and advanced features like type inference for performance. Clozure Common Lisp (CCL), a free implementation with roots in Macintosh Common Lisp, excels in multi-platform support and fast compilation, including precise garbage collection. Embeddable Common Lisp (ECL) compiles Lisp to C for seamless integration with C/C++ codebases, making it ideal for embedded systems and libraries. These compilers fully support core language features such as the Common Lisp Object System (CLOS) for object-oriented programming with multimethods, the condition system for structured error handling and restarts, and tail-call optimization in many cases to prevent stack overflows in recursive code. In modern applications, particularly AI as of 2025, Common Lisp compilers power symbolic reasoning and machine learning via libraries like MGL, which supports neural networks and probabilistic models on SBCL and other implementations. This continued relevance stems from the language's dynamic typing and macro system, facilitating rapid prototyping in AI research while maintaining efficiency through compiled execution.
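CLOS multimethods dispatch on the classes of all arguments, not just the receiver. A toy Python registry conveys the idea; defmethod and dispatch here are invented helpers, far simpler than CLOS's generic functions with class-hierarchy-aware method selection:

```python
_registry = {}

def defmethod(name, *types):
    """Register an implementation keyed on the exact argument types."""
    def wrap(fn):
        _registry[(name,) + types] = fn
        return fn
    return wrap

def dispatch(name, *args):
    """Look up the implementation matching all the arguments' classes."""
    fn = _registry[(name,) + tuple(type(a) for a in args)]
    return fn(*args)

@defmethod("combine", int, int)
def _(a, b):
    return a + b

@defmethod("combine", str, str)
def _(a, b):
    return a + " " + b

print(dispatch("combine", 2, 3))           # prints 5
print(dispatch("combine", "hi", "there"))  # prints hi there
```

Real CLOS additionally walks the class precedence list to find applicable methods and supports :before/:after/:around method combination; this sketch matches exact types only.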

Haskell compilers

Haskell compilers implement the Haskell programming language, a standardized purely functional language defined by the Haskell 2010 report and subsequent revisions, emphasizing lazy evaluation—where expressions are evaluated only when needed—and a sophisticated type-class system for polymorphism and overloading. This design enables advanced features like monads for handling side effects in a pure context, type families for type-level programming, and parallel evaluation strategies, distinguishing Haskell from more minimalist functional languages by supporting type-level computation and theorem-proving styles through its strong static type system. Compilers for Haskell 2010 and later focus on optimizing non-strict semantics, evolving from early implementations that prioritized correctness over performance to modern systems incorporating extensions like those in the latest GHC versions for enhanced expressiveness. The Glasgow Haskell Compiler (GHC), first released in 1990, remains the reference implementation for Haskell, supporting the Haskell 2010 standard and GHC extensions, including linear types (available since GHC 9.0 in 2021) for explicit resource management and affine typing to prevent resource leaks in performance-critical applications. Since GHC 9.6 (2023), a native JavaScript backend is integrated into GHC, enabling direct compilation to JavaScript for web applications while preserving Haskell features. GHC's core innovations include its implementation of monads via the Monad type class, allowing composable abstractions for input/output and state, and type families that enable associated types for modular design in libraries. For parallelism, GHC provides strategies like par and pseq in the Control.Parallel module, enabling declarative parallel programming on multicore systems without explicit thread management, with optimizations in its runtime system achieving scalable performance on large-scale computations.
GHC's evolution traces back to its origins in non-strict evaluation using graph reduction techniques, refined over decades to support interactive bytecode interpretation in GHCi and native code generation for multiple backends including x86, ARM, and JavaScript. GHCJS, a Haskell-to-JavaScript compiler built on GHC's frontend, targets web development by compiling Haskell code to efficient JavaScript, preserving laziness and type classes while integrating with browser APIs through foreign function interfaces. It supports Haskell 2010 features like monads and type families, allowing full Haskell libraries to run client-side, and has been used in production for interactive web applications since around 2016.
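Haskell's non-strict evaluation computes a value only when it is demanded, which is what makes infinite structures usable. Python generators give a rough runtime analogy of the idea (a cross-language sketch, not GHC's graph reduction):

```python
import itertools

def naturals():
    """An 'infinite list' — nothing is computed until a value is demanded."""
    n = 0
    while True:
        yield n
        n += 1

# Haskell: take 5 (map (^2) naturals) — only five squares are ever computed
squares = (n * n for n in naturals())
first_five = list(itertools.islice(squares, 5))
print(first_five)  # prints [0, 1, 4, 9, 16]
```

The generator pipeline stays suspended between demands, so composing it with islice terminates even though naturals itself never does — the same reason take on an infinite Haskell list is well-defined.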

Scheme compilers and interpreters

Scheme, a dialect of Lisp, originated in 1975 as a minimalist programming language designed by Gerald Jay Sussman and Guy L. Steele Jr. at the MIT Artificial Intelligence Laboratory to explore concepts in actor-based computation and lexical scoping. Its early implementations focused on simplicity, with the first formal description appearing in a 1975 MIT AI Memo. Over time, Scheme evolved through standards like R5RS (1998), which formalized core features, and R7RS (2013), which emphasized modularity and compatibility while maintaining a small core language. These standards balance minimalism—requiring only essential constructs—with allowances for extensions in implementations, enabling both pure functional programming and practical applications. A hallmark of Scheme is its support for advanced control structures, including proper tail calls, first-class continuations, and hygienic macros via syntax-rules. Proper tail calls, mandated by R5RS, ensure that recursive calls in tail position do not consume stack space, supporting efficient unbounded recursion. Continuations, captured via call-with-current-continuation, allow programs to capture and manipulate the execution context as reified procedures, enabling coroutines and non-local exits. The syntax-rules macro system, introduced in R5RS, provides hygienic expansion to prevent variable capture, facilitating domain-specific languages without runtime overhead. These features, rooted in Scheme's 1975 design, distinguish it from other Lisp dialects by prioritizing lexical scoping and minimalism. Scheme implementations typically include both compilers for native code generation and interpreters for interactive development, often supporting R5RS or R7RS as baselines while adding extensions for libraries, foreign function interfaces, and embedding. Compilers like Chez Scheme, first released in 1985 by R. Kent Dybvig, emphasize high-performance native code via an incremental optimizing compiler, achieving near-C speeds on benchmarks while conforming to R7RS.
Its evolution influenced the R6RS standard, and it supports cross-module optimizations for large programs. Chibi Scheme, a lightweight R7RS-compliant implementation, compiles to bytecode for a virtual machine, prioritizing embeddability in C applications with a minimal footprint under 1 MB, suitable for scripting and extensions without heavy dependencies. Interpreters such as Guile, the GNU project's Scheme implementation first released in 1995, provide a dynamic environment with an integrated compiler to bytecode, supporting extensions for Unix integration and module systems beyond R5RS. Guile's design facilitates embedding in GNU software, such as GNU Guix, for scripting, configuration, and automation. Racket, evolving from PLT Scheme since 1995, serves as a versatile host for compilers and language experimentation, embedding Scheme runtimes into applications for scripting and rapid prototyping of domain-specific languages. Its macro system extends syntax-rules for multi-language support, enabling embedded uses in education and tool-building. These implementations highlight Scheme's adaptability, from minimalist interpreters to optimizing compilers, while preserving core standards for portability.
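Proper tail calls let Scheme express loops as recursion without stack growth. Python does not eliminate tail calls, but a trampoline — a standard workaround — simulates the technique by returning thunks instead of recursing:

```python
def trampoline(fn, *args):
    """Repeatedly call returned thunks until a non-callable result appears."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def countdown(n, acc=0):
    # A tail-recursive accumulation: instead of calling itself directly
    # (which would overflow Python's stack), it returns a thunk.
    if n == 0:
        return acc
    return lambda: countdown(n - 1, acc + n)

print(trampoline(countdown, 100000))  # sums 1..100000 without stack overflow
```

A Scheme compiler performs this transformation implicitly: the same definition written with a direct tail call runs in constant stack space because R5RS requires it.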

ISLISP compilers and interpreters

ISLISP, standardized by ISO/IEC as ISO/IEC 13816 in 1997, defines a portable dialect of Lisp intended for industrial and embedded applications, emphasizing simplicity, extensibility, and international compatibility. Unlike Common Lisp, which is an ANSI standard with a larger feature set, ISLISP adopts a more minimalist core language while incorporating key elements like lexical scoping and a condition system for error handling. The standard supports modules for code organization, a fixed set of predefined packages (such as USER and KEYWORD) to promote portability without dynamic package creation, and an object-oriented system, ILOS, which provides CLOS-like functionality but with simplified method dispatch and fewer metaclasses. Several implementations of ISLISP have been developed, though adoption has remained limited compared to more popular Lisp dialects. OpenLisp, created by Christian Jullien under Eligis, is a commercial interpreter (free for noncommercial use) that fully conforms to the ISO standard and includes extensions for Common Lisp compatibility, such as hashtables and sequences; it was the first complete ISLISP processor, released in 1997, and features high performance due to its C-based kernel. Easy-ISLisp (EISL), developed by Kenichi Sasagawa, is an open-source interpreter and compiler that translates ISLISP code to C for execution via GCC, targeting portability across platforms like Unix, Windows, and embedded systems; it remains actively maintained as of 2025 with enhancements for parallelism. TISL, an experimental system by Masato Izumi and Takayasu Ito at Tohoku University, provides both an interpreter and an interpreter-based compiler, built around an ISLISP kernel derived from Japanese proposals during standardization. Other notable implementations include a compiler by Kim Taegyun written to highlight syntactic differences from Common Lisp, and Iris Lisp, a modern runtime built with Go for cross-platform use including web environments.
OKI Lisp and Prime-Lisp offer additional free options, with Prime-Lisp implemented in C++ and extending the standard with vector and matrix operations for numerical computation. These tools demonstrate ISLISP's embeddability and efficiency, though dynamic variables are handled via dedicated forms such as DEFDYNAMIC rather than Common Lisp's SPECIAL declarations, and reader macros are absent from the core to avoid implementation complexity. Post-1990s, ISLISP saw limited industrial uptake due to the dominance of Common Lisp and Scheme in AI and research, with efforts shifting toward archival preservation and hobbyist projects by the 2010s. As of 2025, active development persists in open-source repositories like EISL, supporting educational and experimental uses, while older systems like OpenLisp are maintained in archival forms for historical reference.

Lisp dialects compilers (other)

Clojure, released in 2007 by Rich Hickey, is a dynamic Lisp dialect designed to run on the Java Virtual Machine (JVM), where its compiler generates JVM bytecode for seamless integration with Java libraries and ecosystems. This interop allows direct invocation of Java methods and classes from Clojure code, leveraging the JVM's JIT compilation, garbage collection, and threading model without additional bindings. Clojure emphasizes immutable data structures and functional programming, with unique concurrency primitives like agents—reactive entities that handle asynchronous updates to state via dispatched functions, enabling robust multithreaded applications. Racket, developed in the 2000s as an evolution of PLT Scheme, serves as a full-spectrum dialect supporting language-oriented programming through its #lang declaration system, which enables the creation and use of domain-specific languages (DSLs) within a single runtime. Its macro expander lowers macros to core Racket forms, facilitating extensible syntax and modular language definitions, as seen in dialects like Typed Racket for static typing. Racket pioneered higher-order contracts, a system for enforcing behavioral interfaces at module boundaries, which promotes safer composition and error detection in large-scale software. Arc, introduced in 2001 by Paul Graham and Robert Morris, is a minimalist dialect optimized for exploratory and concise programming, running atop Racket while prioritizing brevity in syntax and semantics. It builds on Lisp's macro system to allow radical customization, such as reader macros for streamlined expression, fostering a "programmable programming language" ideal for rapid prototyping and web applications like Hacker News. Arc's design philosophy emphasizes fun and malleability, though it remains unfinished for production-scale systems. Hy, a Lisp dialect embedded directly in Python since its inception around 2013, compiles s-expressions into Python's abstract syntax tree (AST), enabling full interoperability with Python's standard library and third-party packages without foreign function interfaces.
This homoiconic approach allows Lisp-style macros and higher-order functions to coexist with Python's object-oriented and imperative paradigms, making Hy suitable for metaprogramming and scripting tasks. In 2025, Hy reached version 1.1.0, enhancing stability and integration amid growing in-browser Python efforts like Pyodide, which indirectly support Hy code execution in web environments.
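Clojure agents serialize asynchronous state updates: functions are dispatched to the agent and applied to its state one at a time on a background thread. A minimal Python sketch of the idea, where the Agent class is invented for illustration (Clojure's real agents also offer error handlers, validators, and thread-pool variants):

```python
import queue
import threading

class Agent:
    """Holds state updated only by functions dispatched to its queue."""
    def __init__(self, state):
        self.state = state
        self._actions = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn = self._actions.get()
            self.state = fn(self.state)   # updates applied strictly one at a time
            self._actions.task_done()

    def send(self, fn):
        self._actions.put(fn)             # asynchronous: returns immediately

    def await_(self):
        self._actions.join()              # block until all dispatched updates ran

counter = Agent(0)
for _ in range(1000):
    counter.send(lambda s: s + 1)         # like (send counter inc)
counter.await_()
print(counter.state)  # prints 1000
```

Because every update flows through one queue, no locks are needed around the state even though senders run on many threads.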

Scripting and Dynamic Languages

Python compilers and interpreters

CPython, the reference implementation of the Python programming language, was created by Guido van Rossum and first released in 1991 as an interpreter written primarily in C. It serves as the default and most widely used Python interpreter, executing Python bytecode through a virtual machine while providing core features such as dynamic typing, object-oriented programming, and extensive standard library support. A key architectural element of CPython is the Global Interpreter Lock (GIL), a mutex that prevents multiple native threads from executing Python bytecodes simultaneously, simplifying memory management and avoiding race conditions but limiting multi-threaded performance on multi-core systems. Python's evolution includes significant language features like decorators, introduced in Python 2.4 for modifying functions or classes without altering their source code, and the asyncio module, added in Python 3.4 (with async/await syntax following in 3.5) to enable asynchronous programming via coroutines and an event loop for handling I/O-bound tasks efficiently. The transition from Python 2.x to 3.x, beginning with Python 3.0 in 2008, involved breaking changes such as print becoming a function and Unicode strings as the default, prompting a gradual migration aided by tools like 2to3 for automated code translation; Python 2 reached end-of-life in 2020, solidifying Python 3 as the standard. Alternative implementations address CPython's limitations, particularly in performance. PyPy, an interpreter and JIT compiler developed starting in 2003 and shipping a production-ready JIT since 2010, translates Python code to machine code at runtime, often achieving 2-10x speedups for compute-intensive tasks while maintaining compatibility with C extensions via cpyext. Cython, an optimizing static compiler based on Pyrex, translates Python code (and its extended syntax) into C modules that can be compiled into efficient extensions for CPython, enabling seamless integration of performance-critical sections with C libraries.
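The decorator and asyncio features described above can be shown in a few lines. The sketch below is a minimal illustration, not any implementation's internals: a decorator adds behavior to square without editing its source, and two coroutines run concurrently on one event loop, which is how CPython handles I/O-bound tasks under the GIL.

```python
import asyncio
import functools

calls = []

def traced(func):
    """Decorator (Python 2.4+): wrap a function without altering its source."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)  # behavior added from the outside
        return func(*args, **kwargs)
    return wrapper

@traced
def square(x):
    return x * x

async def fetch(delay, value):
    """Coroutine: suspends at await, letting the event loop run other tasks."""
    await asyncio.sleep(delay)
    return value

async def main():
    # gather drives both coroutines concurrently on a single thread.
    return await asyncio.gather(fetch(0.01, "a"), fetch(0.01, "b"))

print(square(4))            # 16
print(asyncio.run(main()))  # ['a', 'b']
```

Both features are purely library/syntax level: neither requires changes to the interpreter loop itself.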
Nuitka, a compiler written in Python, translates Python scripts into C and builds them into standalone executables, supporting Python 3.4 through 3.13 with compile-time optimizations and offering distribution without requiring a separate Python installation. Other implementations include Jython for the JVM and IronPython for .NET. Recent advancements in CPython focus on performance under the Faster CPython project, with Python 3.11 introducing the adaptive specializing interpreter with quickening for bytecode optimization, and Python 3.13, released in October 2024, delivering further gains through enhanced peephole optimizations, an experimental JIT for select code paths, and reduced interpreter overhead, resulting in up to 10-60% faster execution for various workloads compared to prior versions. These efforts, including experimental free-threading builds without the GIL in 3.13, aim to improve concurrency and speed while preserving backward compatibility.
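The bytecode that CPython's virtual machine executes, and that the specializing interpreter rewrites at runtime, is directly inspectable with the standard dis module:

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function body to bytecode stored in add.__code__;
# dis renders it as readable instructions (exact opnames vary by version,
# e.g. BINARY_ADD before 3.11, BINARY_OP after).
instructions = [i.opname for i in dis.get_instructions(add)]
print(instructions)
```

On 3.11+, running a function many times and disassembling with `dis.dis(add, adaptive=True)` shows the specialized (quickened) forms the adaptive interpreter substitutes for the generic opcodes.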

Ruby compilers and interpreters

The primary implementation of the Ruby programming language is Matz's Ruby Interpreter (MRI), also known as CRuby, which serves as the reference implementation written primarily in C. Introduced in the mid-1990s and evolving through the 2000s, MRI adopted the Yet Another Ruby Virtual Machine (YARV) bytecode interpreter starting with Ruby 1.9 in 2007, replacing the earlier tree-walking interpreter to improve performance while maintaining compatibility with Ruby's dynamic features. YARV compiles Ruby source code into bytecode for execution on a stack-based virtual machine, enabling efficient handling of Ruby's metaprogramming capabilities, such as dynamic method definition and code generation at runtime. Ruby's design emphasizes object-oriented purity and elegance, drawing influences from Perl's scripting expressiveness but prioritizing a consistent, readable syntax for both scripting and everyday use. Key language aspects supported across implementations include blocks, which act as closures for functional-style iteration and callbacks (e.g., in methods like each or map); mixins via modules, allowing reusable behavior composition without traditional multiple inheritance; and fibers, lightweight coroutines introduced in Ruby 1.9 for cooperative concurrency and non-blocking I/O. These features facilitate Ruby's metaprogramming prowess, where developers can introspect and modify classes, methods, and objects dynamically, as seen in frameworks like Ruby on Rails. For Ruby 3.1 and later versions, MRI continues to lead with full support, including enhancements to pattern matching—a feature first stabilized in Ruby 3.0 (2020) after an experimental introduction in 2.7 (2019), evolving through refinements in 3.1 and 3.2 for deeper destructuring of arrays, hashes, and objects. Pattern matching builds on Ruby's historical progression from Perl-inspired text processing to modern structural data handling, reducing boilerplate in conditional logic and parsing.
By 2025, MRI's YARV has optimized these features for production scripting and web applications, where Ruby excels in rapid prototyping and server-side development via tools like Sinatra or the dominant Ruby on Rails framework. JRuby, an implementation targeting the Java Virtual Machine (JVM) since its inception in 2001, provides seamless integration with Java libraries and leverages the JVM's just-in-time (JIT) compilation for enhanced performance in enterprise environments. It achieves full compatibility with Ruby 3.1+, including metaprogramming constructs like method_missing hooks and eval-based code generation, while supporting blocks, mixins, and fibers through JVM threading models. JRuby's evolution has focused on performance and Java interoperability, making it ideal for hybrid Ruby-Java web services and large-scale scripting tasks, such as data-processing pipelines. TruffleRuby, built on the GraalVM platform since 2014, offers a high-performance alternative by interpreting Ruby abstract syntax trees (ASTs) and applying partial evaluation for optimization, often outperforming MRI in parallel workloads. It fully implements Ruby 3.1+ features, including advanced metaprogramming for runtime reflection and the complete suite of blocks, mixins, fibers, and pattern matching, with added support for running C extensions in a managed environment. TruffleRuby's design enables polyglot applications, extending Ruby's scripting utility to embedded systems and high-throughput web backends alongside languages like JavaScript or Python.

Perl interpreters

Perl interpreters primarily execute Perl 5 code, the stable version of the language developed since its initial release in October 1994 by Larry Wall and maintained by the Perl 5 Porters group. The core interpreter, often referred to simply as perl, is a bytecode-based virtual machine that compiles Perl source into an internal op tree for execution, supporting dynamic typing with optimizations in areas like regex processing. This interpreter handles Perl's distinctive features, including sigils—prefix symbols like $ for scalars, @ for arrays, and % for hashes—that indicate variable types and enable contextual behavior, where the same operator or function can return scalars in scalar context or lists in list context, enhancing expressiveness in text processing tasks. A key strength of the Perl 5 interpreter lies in its integrated regular-expression engine, which provides powerful text-matching capabilities native to the language and served as the inspiration for the Perl Compatible Regular Expressions (PCRE) library, a widely adopted C implementation that mirrors Perl's syntax and semantics for use in other systems. The Comprehensive Perl Archive Network (CPAN), established in 1995, extends the interpreter's functionality through over 224,000 modules as of 2025, allowing seamless integration of extensions for a wide range of tasks. For object-oriented programming, the Moose module offers a modern, meta-object protocol-based system that simplifies class definition, inheritance, and roles, drawing influences from Perl 6 and CLOS to reduce boilerplate while maintaining Perl's flexibility. In addition to the core interpreter, Inline::C provides a mechanism for embedding and compiling C code directly within Perl scripts, automatically generating XS bindings to create hybrid extensions without manual compilation steps, which is particularly useful for performance-critical sections. Perl 6, first prototyped in 2000 as a redesign of Perl, evolved into the separate Raku language, with its primary interpreter Rakudo implementing the specification on the MoarVM backend since 2015.
Raku interpreters support gradual typing, grammars, and advanced concurrency primitives like Promises for asynchronous operations, Supplies for reactive streams, and Channels for thread-safe communication, enabling efficient parallel processing. In 2025, Rakudo's release cycle, including version 2025.08, continued enhancements to concurrency support by improving MoarVM's thread scheduling and integrating better async I/O handling, aligning with Raku's 6.d specification for more robust hyperparallelism in data-intensive applications.
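The Perl-derived regex syntax that PCRE standardized—character classes, greedy quantifiers, named captures—was also adopted by Python's re module, which makes a convenient neutral place to sketch it (the log line and field names below are invented for illustration):

```python
import re

# Perl-derived syntax: \d classes, {n} quantifiers, named captures (?P<name>...).
log_line = "2025-11-04 ERROR disk full"
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+(?P<level>[A-Z]+)\s+(?P<msg>.+)"
)
m = pattern.match(log_line)
print(m.group("level"), "->", m.group("msg"))  # ERROR -> disk full
```

In Perl itself the same pattern works with `(?<name>...)` captures accessible via the `%+` hash, which is precisely the portability across systems that the PCRE lineage provides.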

PHP compilers

PHP compilers primarily facilitate the execution of PHP scripts for web applications, transforming PHP source into opcodes or native code to enhance performance in dynamic environments like HTTP request handling. The standard implementation, the Zend Engine, serves as the core runtime for most PHP deployments, compiling scripts into opcodes for interpretation within a virtual machine optimized for web-server integration. Introduced in mid-1999 alongside PHP 4.0, the Zend Engine marked a significant advancement over prior interpreters by providing a more efficient and extensible runtime, enabling features such as superglobals—predefined variables like $_SERVER and $_SESSION available across all scopes without explicit declaration—which streamlined web development by simplifying access to HTTP data and session state starting from PHP 4.1 in 2001. For performance-critical applications, alternative compilers like HHVM (HipHop Virtual Machine) offer just-in-time (JIT) compilation to accelerate execution. Developed by Meta (formerly Facebook) and first released in 2013 as an evolution of the 2010 HipHop for PHP project, HHVM uses JIT compilation to translate PHP bytecode into optimized machine code at runtime, achieving substantial speedups for high-traffic web servers by reducing interpretation overhead. It supports PHP compatibility while extending to the Hack language, focusing on web acceleration through techniques such as inline caching tailored to server-side workloads. In PHP 8.3 and later versions (released November 2023 onward), the Zend Engine incorporates JIT compilation via the OPcache extension, which caches precompiled opcodes and optionally compiles them to native code for repeated executions, boosting throughput in web scenarios by up to 20-30% in benchmarks for compute-intensive scripts. OPcache, introduced in PHP 5.5 in 2013, stores these opcodes in shared memory to avoid reparsing, while the JIT mode—available since PHP 8.0 in 2020—targets hot code paths common in web applications; the engine likewise supports trait-based inheritance, introduced in PHP 5.4 (2012) for code reuse across classes.
This evolution supports modern constructs like enums in PHP 8.1 (2021), which provide type-safe named constants for cleaner server-side logic without runtime overhead. Another notable compiler, PeachPie, targets .NET integration for cross-platform web deployment. Launched in 2017 as an open-source project based on Microsoft's Roslyn platform and evolving from the earlier Phalanger project, PeachPie compiles PHP to .NET assemblies, allowing execution on the .NET runtime with PHP 8.3-level compatibility, including traits and enums, while leveraging .NET's garbage collection and web hosting features like ASP.NET Core for accelerated server-side rendering. This approach enables legacy web applications to migrate to .NET ecosystems, preserving superglobals and JIT-like optimizations through .NET's RyuJIT compiler.

Tcl interpreters

Tcl, or Tool Command Language, is a dynamic scripting language originally developed in 1988 by John Ousterhout at the University of California, Berkeley, as an embeddable command interpreter for software tools. The core Tcl interpreter, known as tclsh, forms the reference implementation maintained by the Tcl Core Team, with ongoing development from the late 1980s to the present; the latest stable release is Tcl 9.0.3 (November 2025), from the 9.0 line first introduced in September 2024 to support 64-bit data structures and full Unicode handling. ActiveTcl, distributed by ActiveState since 1998, provides a commercially supported version of the core interpreter with additional packages and enterprise features for multi-platform deployment. For resource-constrained environments, Jim Tcl offers a compact, open-source reimplementation of the language, emphasizing a small footprint of under 200 KB while covering a large subset of Tcl's syntax and commands. A hallmark of Tcl's design is its "everything-as-string" philosophy, where all data types—numbers, lists, and objects—are represented internally as strings, facilitating seamless substitution and manipulation without type-conversion issues. The upvar command enables efficient variable linking across procedure scopes, allowing shared access without copying, which enhances modularity in script organization. Coroutines, added in Tcl 8.6 and refined in subsequent versions, provide lightweight concurrency by suspending and resuming execution contexts, ideal for asynchronous operations like event handling. These features, combined with Tcl's C-based extensibility, make it particularly suited for embedding scripts within larger applications, as demonstrated by its original intent as a library package linkable into C programs for defining custom commands. Historically, Tcl gained prominence through its integration with the Tk toolkit, released in 1991, which provided a cross-platform widget set for building graphical user interfaces; by the early 1990s, Tcl/Tk became the de facto standard for rapid GUI prototyping on Unix systems due to its simplicity and avoidance of proprietary alternatives.
This pairing enabled developers to create interactive tools, such as browsers and debuggers, with minimal overhead, influencing GUI scripting in domains like electronic design automation. Tcl's embeddability remains a core strength, allowing seamless integration into host applications via APIs like Tcl_CreateInterp for interpreter creation and Tcl_Eval for script execution, supporting use cases from configuration scripting to runtime extensions in larger software systems. In 2025, Tcl interpreters continue to support Internet of Things (IoT) integrations, leveraging variants like Jim Tcl for embedded devices in automation and control systems, where its low resource demands enable on-device scripting for sensor data processing and protocol handling.
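Both of Tcl's defining traits—embeddability and the everything-as-string model—can be observed from Python, whose standard tkinter module links a real Tcl interpreter into the host process (assuming a standard Python build with tkinter available; no display is needed because Tk is never loaded):

```python
from tkinter import Tcl

# Tcl() embeds a bare Tcl interpreter in this process (no Tk, no GUI),
# the same pattern as Tcl_CreateInterp/Tcl_Eval in a C host.
tcl = Tcl()

# Everything is a string: expr returns the string "12", not the integer 12.
result = tcl.eval("set x 7; expr {$x + 5}")
print(result, type(result).__name__)  # 12 str

# A proc defined in Tcl, then invoked from the embedding host.
tcl.eval('proc greet {name} { return "hello, $name" }')
print(tcl.eval("greet world"))  # hello, world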

ECMAScript interpreters

ECMAScript interpreters, also known as JavaScript engines, are runtime environments that execute code conforming to the ECMAScript standard defined in ECMA-262 by Ecma International. These engines facilitate dynamic, event-driven scripting primarily in web browsers and server-side applications, supporting core language features such as prototype-based object inheritance, asynchronous programming with async/await syntax, and modular code organization via import and export statements. The dominant ECMAScript interpreters include V8, SpiderMonkey, and JavaScriptCore, each optimized for high performance through just-in-time (JIT) compilation and garbage collection. V8, developed by Google and released in 2008, is an open-source engine written in C++ that powers Google Chrome and Node.js. It implements prototype-based inheritance by linking objects to constructor prototypes for property sharing and method delegation, enables async/await for non-blocking operations on Promises as introduced in ECMAScript 2017, and supports ES6 modules for encapsulating dependencies. V8 aligns with ECMAScript 2025 enhancements, including iterator helper methods and new Set operations for improved iterable handling. SpiderMonkey, Mozilla's open-source engine written in C++ and Rust, drives Firefox and other projects like Servo. It provides full support for JavaScript's prototype chains, allowing dynamic property resolution through delegation; async/await for simplifying asynchronous code flows; and native module loading since ECMAScript 2015. SpiderMonkey incorporates ECMAScript 2025 features, such as new Set methods for enhanced data structure operations. JavaScriptCore, the WebKit project's engine used in Safari, implements ECMAScript with multi-tier compilation for efficient execution. It handles prototype-based inheritance via internal prototype links for object extensibility, supports async/await within async functions for Promise resolution, and enables module imports for code reusability. As of 2025, it integrates ECMAScript 2025 updates, including regular-expression escaping improvements (RegExp.escape).
Server-side ECMAScript execution contrasts with client-side browser environments by emphasizing secure, standalone runtimes for tasks like API development. Deno, a V8-based runtime written in Rust and launched in 2018, offers secure-by-default execution with built-in TypeScript support, natively handling prototypes, async/await, and modules without external dependencies. It supports ECMAScript 2025 features, bridging web standards to server applications. A significant advancement in recent interpreters is the Temporal API, proposed for standardization and implemented experimentally in major engines by 2025, providing immutable, time-zone-aware date and time objects to supersede the mutable Date constructor. This API enhances precision in global applications, supporting calendar systems and durations while maintaining compatibility with async operations and modular imports.

Rexx interpreters

REXX, originally developed beginning in 1979 by Mike Cowlishaw at IBM and codified in his language definition The Rexx Language (TRL-2), is an interpretive programming language designed for system automation and scripting tasks. Its interpreters execute code clause by clause without compilation, enabling rapid development of scripts for tasks like file manipulation and process control. Key features include stems, which function as dynamic compound variables or arrays (e.g., mytable.1 = "value"), the SAY instruction for simple output of strings to the console, and the PARSE instruction for breaking down input strings into variables based on delimiters or templates. Regina, an open-source REXX interpreter first released in the early 1990s by Anders Christensen, provides a portable implementation across Unix-like systems (including Linux, macOS, and Solaris), Windows, OS/2, and DOS. It fully supports ANSI-standard features, including stems for array-like data handling, SAY for output, and PARSE for input processing, while adding extensions like the RexxUtil libraries for platform-specific functions such as file I/O and dialog management. Regina's interpretive execution model allows seamless integration with shell environments for automation scripts, making it a popular choice for cross-platform system administration. ooRexx (Open Object Rexx), released in March 2005 by the Rexx Language Association (RexxLA) as an open-source extension of IBM's proprietary Object REXX, introduces object-oriented capabilities while maintaining upward compatibility with classic REXX. It supports core interpretive features like clause-by-clause execution, stems for extensible data structures, SAY, and PARSE, augmented by classes, inheritance, and methods for more modular automation tasks. Available for Windows, Linux, and macOS, ooRexx is used in environments requiring both procedural scripting and object-based extensions, such as GUI automation and application scripting.
IBM REXX, integrated into the z/VM operating system since its introduction with VM/SP Release 3 in 1983, serves as a native interpretive processor for mainframe scripting, coexisting with the earlier CMS EXEC processors. It implements TRL-derived features, including interpretive clause execution, stems for handling variable arrays in memory-constrained environments, SAY for logging output, and PARSE for parsing command-line arguments or data streams. This interpreter excels in batch and interactive scripting on IBM mainframe hardware, supporting routine system management tasks. The legacy of REXX in OS/400 (now IBM i) dates to the system's early releases in the late 1980s, where it was implemented as an interpretive language for integrated system automation on midrange servers. REXX on OS/400 retains classic features such as stems for database-like variable grouping, SAY for user feedback, and PARSE for dissecting CL command inputs or spool files, facilitating tasks like job scheduling and report generation. Despite the platform's evolution to IBM i, this implementation remains a cornerstone for legacy scripting, akin to command-language interpreters in its focus on procedural system control.
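The PARSE instruction's template behavior—each variable taking one blank-delimited word, with the last variable absorbing the remainder—is simple enough to mimic directly. The sketch below is a rough Python analogue of `PARSE VAR text a b rest` (it covers only blank-delimited templates, not REXX's positional or literal patterns):

```python
def parse_var(text, *names):
    """Rough analogue of REXX's  PARSE VAR text a b rest :
    each variable takes one blank-delimited word, and the last
    variable receives the remainder of the string."""
    parts = text.split(None, len(names) - 1)
    parts += [""] * (len(names) - len(parts))   # unmatched variables get ""
    return dict(zip(names, parts))

fields = parse_var("GET /index.html HTTP/1.1", "verb", "path", "rest")
print(fields)  # {'verb': 'GET', 'path': '/index.html', 'rest': 'HTTP/1.1'}
```

This remainder-capturing rule is what makes PARSE so compact for dissecting command lines and record-oriented data streams.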

Systems and Low-Level Languages

Rust compilers

The Rust compiler, rustc, serves as the primary toolchain for compiling the Rust programming language, a systems programming language emphasizing memory safety, concurrency, and performance without a garbage collector. Development of rustc began in 2010 as part of the initial Rust project initiated by Graydon Hoare at Mozilla, with the compiler initially implemented in OCaml before becoming self-hosting in Rust. Rust achieved its version 1.0 stable release on May 15, 2015, marking the language's commitment to API stability and introducing core features like ownership and borrowing rules enforced at compile time to prevent common errors such as null pointer dereferences and data races. The compiler integrates with Cargo, Rust's official build system and package manager, which handles dependency resolution, building, and testing, streamlining development workflows since its inclusion in the 1.0 release. Central to rustc's design are its optimizations on the Mid-level Intermediate Representation (MIR), a structured intermediate language introduced in 2016 to facilitate borrow checking, type verification, and code generation after high-level syntax desugaring. MIR enables aggressive optimizations such as inlining, dead code elimination, and loop unrolling, which contribute to Rust's zero-cost abstractions—high-level features that compile to efficient machine code without runtime overhead. Key language features compiled by rustc include the ownership and borrowing system, which uses lifetimes to track variable scopes and ensure thread safety; traits, providing abstraction and polymorphism similar to interfaces; and asynchronous programming support via async/await syntax, stabilized in Rust 1.39 (November 2019) to simplify concurrent I/O operations. Const generics, allowing parameterization of types with compile-time constants, were stabilized in Rust 1.51 (March 2021), enabling more expressive generic programming for fixed-size data structures and algorithms.
These features have evolved through Rust's edition system, with editions 2015 (initial stable), 2018 (productivity enhancements like non-lexical lifetimes), 2021 (improved trait and pattern matching consistency), and 2024 (introducing async closures and diagnostic improvements, released February 2025 with version 1.85.0). rustc supports a wide range of compilation targets, including WebAssembly (WASM) via targets like wasm32-unknown-unknown, enabling high-performance modules that integrate seamlessly with JavaScript ecosystems for web applications, with optimizations for small binary sizes and no runtime pauses. For embedded systems, rustc compiles to bare-metal targets such as ARM Cortex-M (e.g., thumbv7m-none-eabi) using the no_std subset of the standard library, providing static memory-safety guarantees, fearless concurrency, and interoperability with C codebases for resource-constrained devices like microcontrollers. These capabilities position Rust compilers alongside those for languages like Go in modern systems programming, but with a focus on lifetimes and zero-cost abstractions for fine-grained control over memory and performance.

Go compilers

The primary compiler for the Go programming language is gc, the standard toolchain developed by Google since the language's inception in 2009. Originally written in C, gc has since been rewritten in Go itself (completed in Go 1.5), enabling fast compilation times that typically complete in seconds even for large codebases. As of Go 1.25, released in August 2025, gc supports advanced optimizations such as profile-guided optimization (PGO) enhancements introduced in Go 1.22, which improve runtime performance by 2–14% through better devirtualization of interface calls. It fully implements Go's concurrency model, including goroutines—lightweight threads managed by the runtime scheduler—and channels for safe communication between them, allowing developers to write scalable concurrent programs without explicit locks. An alternative to gc is gccgo, a Go frontend for the GNU Compiler Collection (GCC) originally developed by Ian Taylor and integrated into GCC 4.7.1 in 2012. Maintained by the community, gccgo adheres to the Go specification and leverages GCC's mature optimization infrastructure, often producing faster execution for CPU-intensive workloads despite longer compilation times compared to gc. Like gc, it supports core Go features such as interfaces, which enable structural typing and polymorphism without explicit inheritance, facilitating modular code design. Gccgo remains officially supported for scenarios requiring GCC-specific integrations, such as certain cross-compilation targets not fully optimized in gc. For embedded systems and WebAssembly targets, TinyGo provides a specialized compiler that uses the LLVM backend to generate compact binaries, often significantly smaller than those from gc or gccgo. Initiated around 2017 to address limitations in standard Go tools for resource-constrained environments, TinyGo supports a subset of Go's standard library while preserving key language features like goroutines and channels, though with adaptations for no-garbage-collection modes in tiny environments.
It excels in producing code for microcontrollers and WebAssembly, where binary size and memory footprint are critical. Go compilers emphasize ease of cross-compilation, allowing binaries for different operating systems and architectures to be built from a single host machine by setting environment variables like GOOS and GOARCH before invoking the build command—no additional toolchains required for most targets. This feature, present since early releases, stems from Go's design philosophy of simplicity and portability. Additionally, generics—introduced in Go 1.18 in March 2022—enable parameterized types and functions, reducing code duplication for algorithms working across multiple types while maintaining Go's type safety. By November 2025, generics are fully mature in gc, gccgo, and TinyGo, supporting common use cases like generic data structures and utilities.
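The GOOS/GOARCH convention described above amounts to nothing more than setting two environment variables before running `go build`. A minimal sketch, driving the build from Python for illustration (the helper name `cross_build_env` is invented here, not a Go tool; the real work is done entirely by the env vars):

```python
import os

def cross_build_env(goos, goarch):
    """Environment for `go build` targeting another OS/architecture via the
    GOOS/GOARCH convention. CGO is disabled so the cross build needs no
    target-specific C toolchain."""
    env = dict(os.environ)
    env.update({"GOOS": goos, "GOARCH": goarch, "CGO_ENABLED": "0"})
    return env

env = cross_build_env("linux", "arm64")
print(env["GOOS"], env["GOARCH"])  # linux arm64
# usage (requires a Go toolchain on the host):
#   subprocess.run(["go", "build", "-o", "app-linux-arm64", "."], env=env)
```

The same host binary of the gc toolchain contains code generators for all supported targets, which is why no extra cross-toolchain installation is needed.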

Forth compilers and interpreters

Forth compilers and interpreters implement the Forth programming language, a stack-oriented, interactive system originally developed by Charles H. "Chuck" Moore in 1970 for controlling telescope hardware at the National Radio Astronomy Observatory, emphasizing efficiency in resource-limited environments. Moore's design replaced complex software hierarchies with a single-layer approach, using a dictionary-based vocabulary that users can extend at runtime, making it ideal for embedded and real-time applications. The language's core execution model relies on two stacks—a data stack for operands and a return stack for control flow—enabling immediate compilation and execution without traditional source-to-object separation. The ANSI Forth standard, ratified in 1994, formalized the language's core words, threading mechanisms, and extensibility features, ensuring portability across diverse hardware while supporting interactive development. ANS Forth implementations typically employ threading models to represent code as linked sequences of executable primitives, optimizing for speed and memory in constrained systems. Indirect threading, the most common model, stores addresses of code blocks in a thread that the interpreter fetches sequentially via a pointer, allowing compact representations suitable for microcontrollers. Direct threading enhances performance by embedding code addresses directly in the thread, reducing overhead, while subroutine threading uses native call instructions for each primitive, balancing simplicity and efficiency. Central to Forth's extensibility are colon definitions, initiated with the : word and terminated by ;, which compile sequences of existing words into new vocabulary entries, allowing users to build domain-specific languages incrementally.
For more advanced customization, the DOES> word enables defining words that create instances with runtime-specific behavior; after a CREATE-like definition, DOES> appends execution semantics to child words, such as parameterizing constants or structures without repetitive code. This mechanism, part of the ANS core extensions, supports Forth's self-hosting nature, where the language can define its own primitives. Prominent ANS Forth implementations include Gforth, an open-source system developed in the 1990s by Anton Ertl and others at the Vienna University of Technology, featuring a fast indirect-threaded engine, cross-platform portability, and integration with tools like Emacs for interactive debugging. SwiftForth, from FORTH, Inc., provides a native-code compiler for 32- and 64-bit architectures on Windows, Linux, and macOS, incorporating an object-oriented programming package (SWOOP) and eliminating the need for external assemblers or linkers, targeting professional embedded development. JonesForth, authored by Richard W. M. Jones in 2007 as an x86 assembly tutorial, demonstrates a self-hosting Forth kernel that bootstraps its own interpreter and compiler, illustrating threading and dictionary management in under 3,000 lines for educational purposes. Forth's deterministic, low-latency execution has long suited real-time embedded systems, such as instrumentation control and embedded devices, due to its avoidance of garbage collection and predictable memory handling. In 2025, recent developments emphasize extensions for modern real-time needs, including enhanced multi-tasking with priority inheritance in implementations like Mecrisp-Stellaris for microcontrollers, enabling safer concurrent operations in safety-critical applications without compromising Forth's frugality. These extensions build on ANS Forth by adding words for deadline scheduling and atomic operations, supporting industries like automotive and IoT where real-time responsiveness is paramount.
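The dictionary-plus-data-stack model and colon definitions can be captured in a toy outer interpreter. The sketch below is a deliberately minimal Python model (no return stack, no compile-time semantics, no DOES>), meant only to show how `: name ... ;` extends the word dictionary at runtime:

```python
# Toy model of Forth's outer interpreter: a data stack plus a
# user-extensible dictionary, with ':' ... ';' defining new words
# in terms of existing ones.
stack = []
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
    ".":   lambda: print(stack.pop()),
}

def interpret(source):
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":                       # colon definition: collect the body
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            # The new word simply re-interprets its recorded body.
            words[name] = lambda b=body: interpret(" ".join(b))
        elif tok in words:
            words[tok]()                     # execute a known word
        else:
            stack.append(int(tok))           # numbers are pushed on the stack

interpret(": square dup * ;  5 square .")    # prints 25
```

A real Forth instead compiles the body once into a threaded representation (the indirect/direct/subroutine threading described above) rather than re-parsing text, but the dictionary-extension mechanism is the same.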

Assemblers (Intel x86)

Assemblers for the x86 architecture translate human-readable assembly code into machine instructions executable on x86 and x86-64 processors, a lineage originating with the 8086 released by Intel in 1978. This complex instruction set computing (CISC) design, characterized by variable-length instructions and extensive addressing modes, has evolved through generations including the 80286, 80386, and subsequent models up to modern implementations. Early assemblers, such as Intel's own 8086 Macro Assembler distributed with development kits, established the foundational Intel syntax, where operands follow the instruction mnemonic and source precedes destination, as seen in historical documentation from the late 1970s. Prominent assemblers for x86 include the Microsoft Macro Assembler (MASM), first introduced in 1981 as part of Microsoft's development tools for MS-DOS and Windows environments. MASM supports Intel syntax natively, features a powerful macro language for looping, arithmetic, and string processing, and has been updated to handle 16-bit, 32-bit, and 64-bit code, with ML64.exe dedicated to x86-64 assembly. The GNU Assembler (GAS), developed as part of the GNU Binutils project since the 1980s, defaults to AT&T syntax for x86, where instructions include operand size suffixes (e.g., "l" for long), dollar signs for immediates, and percent signs for registers, ensuring compatibility with Unix-like systems and GCC output. The Netwide Assembler (NASM), released in 1996, is a free, portable tool targeting Intel syntax and designed for cross-platform use, generating flat binaries, ELF, or COFF objects while supporting multi-pass assembly for optimizations. YASM, a modular rewrite of NASM initiated in the early 2000s under a BSD license, extends portability by accepting both NASM (Intel-like) and GAS (AT&T) syntaxes, outputting formats like ELF32/64, Win32/64, and Mach-O for diverse operating systems.
x86 assembly revolves around general-purpose registers such as EAX, the 32-bit accumulator register (extended to RAX in x86-64), which historically traces back to the 8086's AX register for arithmetic and data movement operations. The MOV instruction, fundamental since the 8086 era, transfers data between registers, memory, or immediates without altering flags, exemplified in Intel syntax as "mov eax, 5" to load the immediate value 5 into EAX, or in AT&T syntax as "movl $5, %eax". This instruction supports various operand sizes and addressing modes, from direct register-to-register copies to complex memory indirection, enabling low-level control over the processor's 16 general-purpose registers in 64-bit mode. Modern x86 assemblers provide comprehensive support for vector extensions, including Advanced Vector Extensions (AVX) introduced by Intel in 2011, which expand SIMD capabilities to 256-bit YMM registers for parallel floating-point and integer operations on up to eight single-precision values per instruction. AVX instructions, such as VADDPS for packed single-precision addition, integrate seamlessly with existing x86 assemblers; for instance, NASM and GAS encode them using VEX prefixes (and EVEX prefixes for AVX-512), allowing developers to leverage enhanced throughput in applications like scientific computing. As of 2025, assemblers like NASM version 3.00 fully support AVX-512, the 512-bit extension enabling operations on 16 single-precision floats via ZMM registers, alongside emerging AVX10 features for broader vector widths and masking, as integrated into recent updates for upcoming Intel processors. MASM and YASM similarly handle AVX-512 through Microsoft's toolchains and modular backends, respectively, ensuring optimized code generation for vectorized workloads.

Assemblers (Motorola 68k and other architectures)

Assemblers for the Motorola 68000 (68k) family of processors translate assembly code into machine code, supporting the architecture's rich set of addressing modes and instructions. Key examples include AS68, a classic assembler developed for 68k systems, and vasm, a modern portable and retargetable tool that generates linkable objects or absolute code for multiple formats. Vasm's flexibility allows it to handle 68k syntax while supporting cross-compilation for legacy platforms. The 68k architecture features 14 addressing modes, including data register direct (Dn), address register direct (An), and indirect modes such as (An), along with (An)+ for post-increment and -(An) for pre-decrement addressing, enabling efficient memory access patterns. Opcodes cover arithmetic operations like ADD and MULS, as well as control instructions such as BSR for subroutine calls, with operand sizes ranging from byte to longword. Historically, 68k assemblers were integral to development for personal computers like the Amiga and Atari ST, where they facilitated low-level programming for graphics and system tasks in the 1980s and 1990s. For the Zilog Z80, a popular 8-bit microprocessor, assemblers like z80asm provide robust support for official and unofficial mnemonics, complex expressions, and macro expansion, making them suitable for embedded and retro projects. SjASMPlus, an enhanced cross-assembler based on earlier SjASM code, offers advanced features including module support, conditional assembly, and output to various formats such as raw binary or snapshot images, with compatibility for Windows, Linux, and macOS. These tools emphasize the Z80's register-based instructions and indexed addressing, aiding development for systems like the ZX Spectrum and MSX computers. Beyond 68k and Z80, the GNU Assembler (GAS) serves as a versatile tool for architectures including ARM and RISC-V, with command-line options for architecture-specific directives and object generation in formats like ELF. For RISC-V, GAS handles 32-bit and 64-bit modes, supporting vector extensions for modern embedded applications.
In 2025, GAS's integration in GNU Binutils 2.45 has expanded RISC-V support, including enhancements for the ratified vector extension (RVV) and related instructions, aligning with the growing emphasis on open hardware designs. This focus on RISC-V assemblers underscores the architecture's role in scalable, royalty-free embedded systems, with ongoing advancements for AI and edge workloads.

Specialized and Niche Languages

DIBOL/DBL compilers

DIBOL, or Digital Business Oriented Language, is a language developed by Digital Equipment Corporation (DEC) in the late 1960s for business applications on PDP systems. It features a structure divided into a data division for defining records and fields, and a procedure division for executable statements, supporting records of up to 16,383 characters and decimal arithmetic up to 18 digits. The language emphasizes record-oriented operations, such as READS for sequential file access, WRITES for relative files, and STORE for indexed files, enabling efficient handling of business records like customer or inventory data. DIBOL's screen handling capabilities facilitate interactive terminal-based applications through commands like DISPLAY for outputting text and control codes to specific positions (e.g., DISPLAY (1, 'HELLO', 13, 10)) and ACCEPT for capturing user input, often integrated with terminal devices via OPEN statements. These features, combined with its COBOL-like separation of data definitions from procedural logic and support for field-level manipulations, make it suitable for database-centric business applications, including BCD arithmetic and handling of up to 8191 fields. Early implementations targeted DEC's minicomputers, with compilers generating object code compatible with operating systems like RT-11 and RSTS/E. The primary compiler for DIBOL on DEC VMS systems is the VAX DIBOL compiler, introduced in the 1980s as part of the VMS Language and Tools environment, which compiles source files into linkable object modules while detecting syntax errors and producing optional listing files with symbol tables and cross-references. This compiler integrates with VMS services such as Record Management Services (RMS) for file I/O and the Run-Time Library (RTL) for memory and screen management, alongside tools such as the DIBOL Debugging Technique (DDT) for troubleshooting and utility programs for interprogram communication.
It supports modular development with subroutine libraries (e.g., Universal External Library for general routines and Operating System Specific Library for VMS functions), enabling efficient resource use in multi-user environments like manufacturing and accounting applications on VAX processors. Legacy DIBOL systems remain prevalent on OpenVMS platforms, where they handle interactive data processing but face challenges from hardware obsolescence. DBL variants emerged as compatible extensions, with Synergy DBL—developed by Digital Information Systems Corporation (DISC, later Synergex) in the late 1970s—offering a DIBOL-compatible compiler initially for PDP-11 systems running RT-11 and RSTS/E, evolving into a full business language by 1979. Synergex's modern ports extend Synergy DBL to platforms including Windows, .NET, Unix, Linux, and OpenVMS, preserving DIBOL syntax while adding support for object-oriented features, web services, and database connectivity. These ports use tools like xfServerPlus to expose legacy DIBOL routines from OpenVMS to cloud or web clients without full rewrites, facilitating hybrid deployments. As of 2025, migrations of DIBOL/DBL applications to cloud environments remain limited, primarily due to the entrenched nature of VMS-based legacy systems, though Synergex solutions enable incremental modernization for select industries like finance and government.

Lisaac compiler

The Lisaac compiler serves as the primary and official implementation for the Lisaac programming language, a prototype-based object-oriented system designed for simplicity and efficiency in systems programming. Originating in 1993 at the École Normale Supérieure (ENS) as part of research into effective programming paradigms, it was developed to support the Isaac operating system project, emphasizing high-level abstraction while achieving near-C performance levels. Central to its design is a bootstrapping process that enables the compiler to self-host, compiling its own source code written in Lisaac, which facilitates iterative development and portability across architectures via intermediate C code generation. It also incorporates a lightweight virtual machine (VM) for runtime execution, optimizing for low-level control without sacrificing high-level abstractions. These elements allow Lisaac to target platforms including Linux, macOS, Windows, and embedded systems. The compiler's core features revolve around slot-based inheritance, where objects inherit behavior and state through modifiable slots rather than rigid classes, drawing from prototype models for flexibility. Pervasive static typing is enforced through advanced type flow analysis, resolving over 90% of method calls at compile time to enhance performance and safety. Influences from Eiffel are evident in its adoption of design-by-contract principles, including pre- and postconditions, integrated seamlessly into the prototype framework. Despite its innovative approach, the Lisaac compiler supports a small, niche community primarily composed of academic researchers and enthusiasts interested in prototype-based systems. As of November 2025, the project is active, with the recent "Omega" release on November 13, 2025, introducing features such as native compilation support, unlimited-precision arithmetic, and a built-in IDE written in Lisaac, while preserving its role as an experimental tool rather than a widely adopted production compiler.

Command language interpreters

Command language interpreters, also known as command shells or shell interpreters, are programs that read and execute commands entered by users, often facilitating scripting, automation, and interaction with operating system resources. These interpreters process text-based input, supporting features like command chaining, file manipulation, and piping of data between processes, which enable efficient task automation in Unix-like and Windows environments. Unlike general-purpose language interpreters, command shells prioritize system-level operations, such as file manipulation and process control, while allowing users to write reusable scripts for repetitive tasks. The foundational command shell, the Bourne shell (sh), developed by Stephen Bourne at Bell Labs in 1977, introduced core concepts like scripting with variables (e.g., assignment via VAR=value) and basic control structures, setting the stage for modern interpreters. Its successor, Bash (Bourne-Again SHell), released in 1989 as part of the GNU Project by Brian Fox, extended these capabilities with enhancements including command-line editing via the Readline library, unlimited command history storage, job control for managing background processes, and shell functions for modular scripting. Bash's piping mechanism, using the | operator, allows output from one command (e.g., ls) to serve as input for another (e.g., grep), streamlining workflows like ls | grep .txt to filter file listings. Variable expansion with $VAR enables dynamic scripting, such as echo $PATH to display the executable search path, making Bash a staple for Unix-like systems. Building on the Bourne shell lineage, Zsh (Z shell), created by Paul Falstad in 1990, emphasizes interactivity and customization while maintaining compatibility with Bash scripts. Zsh incorporates advanced features like programmable autocompletion, themeable prompts, and enhanced globbing for pattern matching, alongside piping and variable support similar to Bash (e.g., $VAR for expansion).
Its scripting capabilities extend Bourne-style syntax with influences from the Korn shell (ksh) and C shell (csh), allowing complex scripts for tasks like array handling and plugin management via frameworks like Oh My Zsh. Zsh has gained popularity for its user-friendly defaults and has served as the default shell in macOS since version 10.15 (Catalina) in 2019. PowerShell, introduced by Microsoft in 2006 and built on the .NET Framework, represents an evolution tailored for Windows but now cross-platform, offering object-oriented pipelines where data flows as .NET objects rather than text streams. This enables richer scripting, such as Get-Process | Where-Object {$_.CPU -gt 100}, which pipes objects and filters by CPU usage, surpassing text-based pipelines in expressiveness. Variables use $var syntax, supporting complex types like arrays and hashtables, and scripts (.ps1 files) integrate with modules for automation. Initially Windows-exclusive, PowerShell Core (now PowerShell 7+) became open-source in 2016, supporting Linux and macOS via .NET Core/.NET 5+. By 2025, cross-platform execution has unified these interpreters, with Bash and Zsh readily available on Windows via the Windows Subsystem for Linux (WSL), which integrates Linux environments into Windows 10/11 and Windows Server 2025, allowing seamless piping and scripting across OS boundaries without dual-booting. This setup supports hybrid workflows, such as running Bash scripts on Windows for development tasks.
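The contrast between text pipelines (Bash) and object pipelines (PowerShell) can be sketched with plain Python data as a stand-in; the file names and process records below are hypothetical, and the list comprehensions merely model what each pipeline stage does:

```python
# Text pipeline: each stage consumes and produces lines of text,
# in the spirit of `ls | grep .txt` in Bash.
listing = "notes.txt\nreport.pdf\ntodo.txt"
text_result = [line for line in listing.splitlines() if ".txt" in line]

# Object pipeline: stages pass structured records, in the spirit of
# `Get-Process | Where-Object {$_.CPU -gt 100}` in PowerShell.
processes = [{"name": "idle", "cpu": 2.0}, {"name": "build", "cpu": 250.0}]
object_result = [p for p in processes if p["cpu"] > 100]

print(text_result)    # ['notes.txt', 'todo.txt']
print(object_result)  # [{'name': 'build', 'cpu': 250.0}]
```

The object pipeline filters on a typed field rather than re-parsing text, which is the expressiveness gain the PowerShell example above illustrates.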

BASIC interpreters

BASIC interpreters execute code line by line in real time, offering an interactive environment that allows immediate feedback and experimentation, which made them ideal for educational and hobbyist programming on early personal computers. Unlike compilers that produce standalone executables, these interpreters prioritize accessibility, enabling users to type commands directly into a prompt and see results instantly. This design stemmed from the need for simple, user-friendly tools in the resource-constrained systems of the 1970s and 1980s. A foundational example is GW-BASIC, introduced by Microsoft in 1983 as the default interpreter for MS-DOS on non-IBM PCs. Derived from earlier implementations, it supported immediate mode for direct command entry, along with core statements such as PRINT for displaying output and GOTO for unstructured program branching, which were hallmarks of the dialect's flexibility in that era. GW-BASIC's disk-based loading and 40-column text display catered to the limitations of early PCs, influencing countless introductory programming experiences. In the home microcomputer revolution, BASIC interpreters were often embedded in ROM to provide instant usability. The Commodore 64, launched in 1982, integrated Commodore BASIC V2.0 directly into its 8KB ROM, allowing users to boot straight into an interactive prompt where they could enter line-numbered programs featuring PRINT for text and graphics output, and GOTO for navigation, features that powered simple games and utilities on the machine's 64KB RAM. This setup democratized programming, as the interpreter handled tokenization and execution without requiring additional software. Similarly, Sinclair BASIC on the ZX Spectrum (1982) used a 16KB ROM-based interpreter with tokenized keywords for efficient storage, supporting immediate mode operations like PRINT and GOTO to leverage the system's 48KB RAM for colorful, sound-enabled programs. Modern iterations continue this tradition of interactive runtime execution.
Microsoft's Small Basic, released in 2008, is a lightweight interpreter designed for beginners, featuring a streamlined IDE with auto-completion and error highlighting, while retaining essential elements like PRINT for console output and structured alternatives to GOTO for flow control. It runs on Windows and emphasizes graphical extensions for engaging tutorials. QB64, an open-source project evolving from QBasic, has included an interpreter mode since its inception around 2006, permitting on-the-fly command execution and file loading without compilation, thus bridging legacy BASIC interactivity to 64-bit platforms with enhanced multimedia support.
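The line-numbered PRINT/GOTO model described above can be captured in a minimal Python sketch; this toy interpreter implements only PRINT, GOTO, and END, and is not a model of any real BASIC dialect:

```python
# Toy line-numbered BASIC interpreter supporting PRINT, GOTO, and END,
# in the spirit of GW-BASIC or Commodore BASIC (illustrative only).
def run_basic(program: str) -> list[str]:
    lines = {}
    for raw in program.strip().splitlines():
        number, stmt = raw.split(maxsplit=1)
        lines[int(number)] = stmt
    order = sorted(lines)            # execute in line-number order
    output, pc, steps = [], 0, 0
    while pc < len(order) and steps < 100:   # step cap guards against loops
        steps += 1
        stmt = lines[order[pc]]
        if stmt.startswith("PRINT"):
            output.append(stmt[5:].strip().strip('"'))
            pc += 1
        elif stmt.startswith("GOTO"):        # jump to the named line number
            pc = order.index(int(stmt[4:].strip()))
        elif stmt.startswith("END"):
            break
    return output

print(run_basic('10 PRINT "HELLO"\n20 GOTO 40\n30 PRINT "SKIPPED"\n40 END'))
# ['HELLO']
```

Real interpreters tokenized keywords to save memory and interleaved this execution loop with an immediate-mode prompt, but the dispatch-on-statement structure is the same.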

Cross-Cutting Compiler Categories

CLI compilers

CLI compilers target the Common Language Infrastructure (CLI), an open specification standardized by Ecma International that defines a runtime environment for executing managed code across multiple programming languages. These compilers generate Common Intermediate Language (CIL) code, formerly known as Microsoft Intermediate Language (MSIL), which is then executed by the Common Language Runtime (CLR) or compatible implementations like Mono. The CLI enables seamless interoperability between languages by compiling to a common format, facilitating cross-language development in the .NET ecosystem. Key examples include the Visual Basic .NET (VB.NET) compiler, which translates VB.NET source code into CIL assemblies using the command-line tool vbc.exe. Introduced with the .NET Framework in 2002, the VB.NET compiler supports features like conditional compilation directives and produces executable files compatible with the CLI runtime. Similarly, the F# compiler, first released in 2005 by Microsoft Research, compiles functional-first F# code to CIL, enabling integration with other .NET languages while preserving type safety and performance optimizations. F# originated from research into strongly typed functional programming languages like OCaml and ML, adapted for the .NET platform. CIL assemblers, such as ilasm.exe, allow direct authoring of CIL code in textual form, assembling it into portable executable (PE) files for the CLI. This tool is essential for low-level manipulation of managed code, debugging, or creating custom assemblies without a high-level language frontend; it processes .il source files and outputs .exe or .dll binaries verifiable by the runtime. For cross-language scenarios, the C++/CLI compiler extension in Microsoft Visual C++ (MSVC) enables mixing native C++ with managed code, compiling to CIL via the /clr option for interoperability with other CLI languages like C# or VB.NET.
The execution of CIL code typically involves just-in-time (JIT) compilation by the RyuJIT compiler, a high-performance next-generation JIT introduced in .NET Framework 4.6 for x64 architectures and later extended to .NET Core. RyuJIT optimizes CIL to native machine code at runtime, achieving up to twice the compilation speed of its predecessor while producing efficient code for modern hardware; it supports tiered compilation for faster startup and better throughput. In .NET 9, released in November 2024, Native Ahead-of-Time (AOT) compilation matured further, allowing CLI compilers to produce self-contained native executables without runtime JIT overhead, reducing startup time and memory usage for deployed applications while maintaining CLI compatibility. This AOT mode, enabled via publishing options, trims unused code and supports trimming-aware libraries for smaller binaries.

Source-to-source compilers

Source-to-source compilers, commonly referred to as transpilers, are specialized tools that translate source code written in one high-level programming language into source code in another high-level language, typically preserving the same level of abstraction without generating machine code. This process enables developers to leverage modern language features in environments that do not natively support them, such as older browsers or legacy systems, by producing readable, maintainable output code rather than binary executables. Unlike traditional compilers that target low-level instructions, transpilers focus on syntactic and semantic transformations to ensure compatibility and portability across similar abstraction layers. Prominent examples include Babel, which transpiles modern JavaScript (ES6+) code into backward-compatible ES5 to support older runtime environments. TypeScript, a typed superset of JavaScript, uses its compiler (tsc) to transpile code with static types, interfaces, and modules into plain JavaScript, stripping type annotations for execution in any runtime. Emscripten compiles C and C++ code into WebAssembly (or asm.js), allowing high-performance applications originally designed for native platforms to run in web browsers by mapping low-level operations to browser APIs like WebGL and Web Audio. Other notable instances, such as CoffeeScript, convert its concise, Ruby-inspired syntax into equivalent JavaScript, facilitating cleaner code while relying on the JavaScript ecosystem. The core mechanism of source-to-source compilers involves parsing the input into an abstract syntax tree (AST), a hierarchical representation of the code's structure that abstracts away details like whitespace and comments. Plugins or transformation passes then rewrite the AST, replacing nodes for unsupported syntax, optimizing expressions, or injecting compatibility shims, before code generation traverses the modified AST to emit the target source code.
This approach incurs no additional runtime overhead, as the output is plain source code executable by the target environment's interpreter or compiler, distinguishing it from approaches that interpose a virtual machine or emulation layer. Transpilers are frequently employed in conjunction with polyfills, which provide implementations for missing APIs (e.g., Array.prototype.includes in older browsers); together, they ensure comprehensive backward compatibility by addressing both syntactic differences and absent features. In 2025, advancements in AI-assisted transpilation have emerged, leveraging large language models (LLMs) to automate and refine the translation process, particularly for complex inter-language migrations. For instance, frameworks like SafeTrans use LLMs to transpile C code to Rust, iteratively detecting and correcting compilation or runtime errors through prompt-based refinements, achieving higher accuracy in type-safe conversions than rule-based methods alone. These AI-driven tools reduce manual intervention in legacy code modernization, with applications in performance-critical domains like systems programming, though they still require human oversight for edge cases and verification.
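The parse-transform-emit mechanism can be demonstrated with Python's own `ast` module: parse to an AST, rewrite a node with a transformer pass, then emit source again, analogous to how a Babel plugin replaces unsupported syntax. The rewrite chosen here (power expressions `a ** b` into `pow(a, b)` calls) is purely illustrative:

```python
import ast

class PowerToCall(ast.NodeTransformer):
    """Transformation pass: rewrite every `a ** b` into `pow(a, b)`."""
    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)          # rewrite nested expressions first
        if isinstance(node.op, ast.Pow):
            return ast.Call(
                func=ast.Name(id="pow", ctx=ast.Load()),
                args=[node.left, node.right],
                keywords=[],
            )
        return node

tree = ast.parse("y = x ** 2 + 1")                        # parse to AST
tree = ast.fix_missing_locations(PowerToCall().visit(tree))  # transform
print(ast.unparse(tree))                                  # emit source again
# y = pow(x, 2) + 1
```

The emitted text is ordinary source code for the target environment, which is why no extra runtime support is needed, exactly the property the paragraph above attributes to transpilers.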

Research and experimental compilers

Research and experimental compilers encompass prototype systems developed primarily in academic and industrial settings to explore novel compilation techniques, intermediate representations, and optimizations for emerging paradigms such as domain-specific languages and heterogeneous hardware. These compilers often prioritize innovation over production stability, testing ideas like advanced loop transformations and runtime adaptations that may not yet be integrated into mainstream toolchains. Many originate from high-impact papers and remain in active development as open-source projects, enabling community experimentation while overlapping with free and open-source ecosystems for broader adoption. A key concept in this domain is polyhedral optimization, which models program loops as polyhedra to enable sophisticated transformations for parallelism and locality, particularly in scientific computing and machine-learning workloads. This approach, formalized in foundational work, allows compilers to automatically schedule affine computations for multi-core and GPU targets, achieving significant speedups (up to 10x in benchmarks) on dense linear algebra kernels without manual tuning. Tools like Polly, an LLVM extension, demonstrate its practical application in research prototypes. Another prominent technique is just-in-time (JIT) specialization, where compilers generate customized code at runtime based on observed values or traces, reducing overhead in dynamic languages and improving performance for data-dependent code paths. For instance, trace-based systems can specialize hot loops by inlining runtime constants, yielding 2-5x gains in dynamic workloads like simulations, as shown in empirical studies of dynamic-language interpreters. Prominent examples include Terra, a multi-stage language embedded in Lua for generating high-performance, domain-specific code that compiles to native binaries while leveraging Lua's metaprogramming facilities for building DSLs.
Developed at Stanford, Terra enables seamless interoperability for low-level systems programming, with applications in image processing and physical simulation, and remains an active research vehicle for generative programming techniques. Halide, originating from MIT and Adobe, is a language and compiler for image processing pipelines, decoupling algorithm specification from optimization schedules to automatically produce portable, vectorized code for CPUs, GPUs, and FPGAs. It has demonstrated up to 4x performance improvements over hand-optimized implementations in image-processing tasks, influencing production tools while staying experimental in its core research extensions. MLIR (Multi-Level Intermediate Representation), a subproject of LLVM, provides a modular IR framework for composing domain-specific dialects, facilitating progressive lowering from high-level tensor operations to machine code across heterogeneous accelerators like TPUs and NPUs. Introduced by Google and collaborators, MLIR supports over 50 dialects as of 2025, enabling scalable optimizations in machine-learning compilers and reducing development time for new hardware backends by 50% in case studies. In quantum computing, Microsoft's Q# compiler, part of the Azure Quantum Development Kit, translates high-level quantum algorithms to executable circuits for simulators and hardware, incorporating error correction and hybrid classical-quantum optimizations.
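The idea of runtime specialization, generating code with observed constants baked in, can be sketched in a few lines of Python; this toy uses source generation and `exec` as a stand-in for a JIT's native code emission, and `make_power` is a hypothetical name for illustration:

```python
# Toy sketch of runtime specialization: once a hot call site always
# sees the same exponent, generate a specialized function with that
# constant inlined and the loop fully unrolled.
def make_power(exponent: int):
    # Build source with the observed constant baked in, then compile
    # it, loosely imitating how a tracing JIT inlines runtime values.
    body = " * ".join(["x"] * exponent) or "1"
    src = f"def power_fn(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)          # stand-in for emitting native code
    return namespace["power_fn"]

cube = make_power(3)              # specialized for exponent == 3
print(cube(5))                    # 125
```

The specialized function contains no loop and no exponent variable at all, which is the overhead-removal effect trace-based JITs achieve on data-dependent hot paths.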

Free and open-source compilers

Free and open-source compilers are software tools that translate source code into executable programs or intermediate representations, distributed under licenses that permit free use, modification, and redistribution, such as the GNU General Public License (GPL), the MIT License, and the Apache License 2.0. These compilers play a central role in free and open-source software (FOSS) development, enabling collaborative contributions from global communities while ensuring accessibility for diverse platforms and languages. Unlike proprietary alternatives, FOSS compilers emphasize transparency, allowing users to inspect, audit, and extend the codebase, which fosters innovation in areas like security auditing and performance optimization. The GNU Compiler Collection (GCC) is a cornerstone of FOSS compilers, supporting languages including C, C++, Fortran, and Ada, under the GNU GPL version 3 license, which requires derivative works to adopt the same terms. Maintained by the Free Software Foundation (FSF) with contributions from a vast international community, GCC's development has evolved to accept code without mandatory copyright assignment to the FSF since 2021, broadening participation from individual developers and organizations. Its modular design supports extensive optimizations and targets numerous architectures, making it a default choice in Linux distributions. Forks like those integrating GCC with alternative backends, such as the now-archived DragonEgg project, demonstrate its extensibility. LLVM (Low Level Virtual Machine) and its frontend Clang form another pivotal FOSS ecosystem, licensed under the Apache License 2.0 with the LLVM Exception to permit linking with proprietary code without imposing copyleft restrictions. Developed as a collection of reusable compiler technologies, LLVM powers Clang's compilation of C, C++, Objective-C, and more, emphasizing fast compilation, detailed diagnostics, and modular optimizations.
The project, hosted on GitHub with over 20,000 contributors as of 2025, operates under a clear developer policy that encourages upstreaming improvements and maintains a vibrant community through mailing lists, forums, and annual conferences like the LLVM Developers' Meeting. Widely adopted in ecosystems like Apple's Xcode and Android's toolchain, LLVM's influence extends to numerous language implementations. Rust's compiler, rustc, is dual-licensed under the MIT and Apache 2.0 licenses, allowing flexible integration into both open and closed-source projects. As part of the Rust programming language ecosystem managed by the Rust Foundation, rustc focuses on safe systems programming with features like borrow checking and zero-cost abstractions, compiling to native code via an LLVM backend. The project's governance, overseen by the Rust Leadership Council since its formation in 2023, saw updates in September 2025 emphasizing community representation and performance enhancements, with ongoing surveys addressing compiler speed and memory usage. Contributions from thousands of developers worldwide are coordinated through GitHub, with the 2025 compiler performance survey highlighting improvements in build times for large codebases. More recent entrants like the Zig compiler, initiated in 2016, operate under the MIT License, promoting a simple, explicit approach to systems programming with built-in cross-compilation and no hidden control flow. Its decentralized community, facilitated through GitHub and forums like Ziggit, encourages contributions without formal barriers, resulting in rapid iterations such as the 0.13 release in 2024 focusing on stage2 compiler bootstrapping. Similarly, the Nim compiler, also MIT-licensed, targets efficient native code generation for applications from web to embedded systems, with its standard library and compiler codebase maintained by a core team and open to pull requests. Nim has inspired forks like NimSkull (2022), which refactors the compiler for better maintainability while preserving language compatibility.
These compilers are readily distributed through package managers like Homebrew on macOS and distribution repositories on Linux, simplifying installation; for instance, Homebrew formulae for GCC, LLVM, Rust, Zig, and Nim enable one-command setup with version pinning and dependency resolution, supporting cross-compilation to targets like Windows and embedded ARM. This accessibility has accelerated adoption in production environments, from embedded devices to infrastructure.

References
