| curl | |
|---|---|
| Original author | Daniel Stenberg[1] |
| Developer | Contributors to the curl project |
| Initial release | 1996[2] |
| Stable release | 8.17.0[3] |
| Repository | |
| Written in | C |
| Platform | 29 platforms (see § libcurl for details) |
| Type | Web client (supports e.g. HTTPS and FTP) |
| License | curl license[4][5] (inspired by the MIT License[5]), with a small portion under the ISC license |
| Website | curl.se |
cURL (pronounced like "curl",[6] /kɜːrl/) is a free and open-source command-line program for uploading and downloading individual files. It can download a URL from a web server over HTTP, and it supports a variety of other network protocols, URI schemes, multiple versions of HTTP, and proxying. The project consists of a library (libcurl) and a command-line tool (curl), which have been widely ported to different computing platforms. It was created by Daniel Stenberg, who remains the lead developer of the project.
History
The software was first released in 1996,[7] originally named httpget and later urlget, before adopting its current name, curl.[8][9] The name stands for "Client for URL".[10] The original author and lead developer is the Swedish developer Daniel Stenberg, who created curl to power part of an IRC bot, because he wanted to automatically provide currency exchange rates, fetched from a website, to users in an IRC chat room.[2]
Components
libcurl
libcurl is a client-side URL transfer library that powers curl.[11] It supports numerous internet protocols including DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.
libcurl supports HTTP versions 0.9, 1.0, and 1.1, as well as HTTP/2 and HTTP/3, including h2c, prior-knowledge, and dual-connect modes, and QUIC with 0-RTT handshakes.
The library provides features such as cookie handling, standard HTTP request methods (GET, POST, PUT, HEAD, multipart form uploads), and authentication mechanisms including Basic, Digest, NTLM, Negotiate, CRAM-MD5, SCRAM-SHA, Kerberos, Bearer tokens, AWS Sigv4, SASL, and reading credentials from .netrc.
libcurl supports a variety of security and transport features, including TLS 1.0-1.3, mutual authentication, STARTTLS, OCSP stapling, Encrypted Client Hello (ECH), False Start, key pinning, post-quantum readiness, session resumption, early data, session import/export, HSTS, Alt-Svc, Public Suffix List (PSL), entity tags (ETags), range requests, transfer compression (gzip, Brotli, zstd), custom headers, custom methods, and redirect following.
It also offers proxy and networking support, including SOCKS4, SOCKS5, HAProxy, and HTTP proxies with chaining and Unix domain sockets, as well as user-plus-password authentication.[12] Advanced name-resolution features include DNS-over-HTTPS, custom DNS servers, host/port mappings, and DNS caching.
Additional functionality includes file transfer resume, FTP uploading, form-based HTTP upload, HTTPS certificates, and mechanisms for controlling and monitoring transfers such as configurable timeouts, automatic retries, rate limiting, and detection of stalled connections. The library also provides enhanced reporting features, including JSON-formatted metadata, content-disposition handling, IDN hostname display, and customizable transfer information.
The libcurl library is portable, as it builds and works identically on most platforms, including:[13][14][15]
- AIX
- AmigaOS
- Android[citation needed]
- Azure Sphere OS
- BeOS
- BlackBerry Tablet OS and BlackBerry 10[16]
- Cesium
- Darwin
- DOS
- Deos
- FreeBSD
- FreeRTOS
- HP-UX
- HURD
- iOS
- IRIX
- Linux
- macOS
- NetBSD
- NetWare
- OpenBSD
- OpenHarmony
- OpenVMS
- OS/2
- QNX
- QNX Neutrino
- RISC OS
- RTEMS
- Solaris
- Symbian
- Tru64
- Ultrix
- UnixWare
- Windows
- VxWorks
- Zephyr
The libcurl library is thread-safe and IPv6 compatible. Bindings are available for more than 50 languages, including C, C++, Java, Julia (with which it is bundled), PHP and Python.[17]
The libcurl library supports SSL/TLS through GnuTLS, mbedTLS, SChannel (on Windows), OpenSSL, BoringSSL, AWS-LC, QuicTLS, LibreSSL, AmiSSL, wolfSSL and rustls.[18]
curl
curl is a command-line tool for getting or sending data, including files, using URL syntax. curl provides an interface to the libcurl library; it supports every protocol libcurl supports.[12]
curl supports HTTPS and performs SSL/TLS certificate verification by default. When curl connects to a remote server via HTTPS, it obtains the remote server's certificate and checks it against its CA certificate store to verify that the remote server is the one it claims to be. Some curl packages bundle a CA certificate store file. There are several options for specifying a CA certificate, such as --cacert and --capath. The --cacert option can be used to specify the location of the CA certificate store file.
Starting with Windows 10 version 1803, Windows ships with curl.exe.[15] On Microsoft Windows, if a CA certificate file is not specified, curl will look for the curl-ca-bundle.crt file in the following locations, in the order given:[19]
- The application's folder (where curl.exe is located)
- The current working directory
- The C:\Windows\System32 directory
- The C:\Windows directory
- The directories specified in the PATH environment variable
curl will return an error message if the remote server is using a self-signed certificate, or if the remote server certificate is not signed by a CA listed in the CA cert file. The -k or --insecure option can be used to skip certificate verification. Alternatively, if the remote server is trusted, the remote server's CA certificate can be added to the CA certificate store file.
tiny-curl
tiny-curl is a lightweight version of libcurl developed by wolfSSL Inc. for embedded and resource-constrained devices. It implements HTTPS functionality in roughly 100 KB of code on typical 32-bit architectures.
Licensing
curl and libcurl are distributed under the curl license, a permissive license derived from the MIT License. tiny-curl, a version of curl optimized for embedded systems and supported by wolfSSL, is available under both the GNU GPLv3-or-later and commercial licensing.
Rock-solid curl,[20] the long-term support (LTS) edition, uses the same curl license by default, with an option for commercial licensing for organizations that require contractual support or warranty coverage.
See also
- curl-loader – an open-source testing tool based on curl
- libwww – an early library that comes with a command line interface
- PowerShell – the Invoke-WebRequest (iwr) cmdlet in Windows PowerShell offers functionality similar to curl, as does the .NET WebClient class.[21]
- Web crawler – an internet bot that can crawl the web
- Wget – similar command-line tool with no associated library but capable of recursive downloading
References
- ^ Stenberg, Daniel (20 March 2015). "curl, 17 years old today". daniel.haxx.se. Retrieved 20 March 2015.
- ^ a b "History of curl - How curl Became Like This". curl. Archived from the original on September 30, 2017. Retrieved November 17, 2016.
Daniel simply adopted an existing command-line open-source tool, httpget, that Brazilian Rafael Sagula had written and recently release version 0.1 of. After a few minor adjustments, it did just what he needed. [...] HttpGet 1.0 was released on April 8th 1997 with brand new HTTP proxy support. [...] Stenberg was spending time writing an IRC bot for an Amiga related channel on EFnet. He then came up with the idea to make currency-exchange calculations available to Internet Relay Chat (IRC) users.
- ^ Daniel Stenberg (5 November 2025). "curl 8.17.0". Retrieved 5 November 2025.
- ^ "curl License". spdx.org.
- ^ a b "curl - copyright". curl.se. Archived from the original on 2024-01-15. Retrieved 2024-01-17.
- ^ "curl - Frequently Asked Questions". curl.se.
- ^ "History of curl". fossies.org. Archived from the original on September 17, 2021. Retrieved May 11, 2021.
- ^ "Changelog". 4 January 2020. Retrieved 4 January 2020.
The first curl release. The tool was named urlget before this. And httpget before that.
- ^ Stenberg, Daniel (4 January 2020). "Restored complete curl changelog". daniel.haxx.se. Retrieved 2 January 2020.
- ^ Stenberg, Daniel. "Origin of the name". curl.se. Retrieved 2021-03-27.
- ^ Jones, M. Tim (8 September 2009). "Conversing through the Internet with cURL and libcurl - Using libcurl with C and Python". IBM Developerworks. Archived from the original on 14 April 2015. Retrieved 12 September 2018.
- ^ a b "curl - How To Use". curl.se.
- ^ "Third-party open-source software Curl". Gitee. OpenAtom OpenHarmony. Retrieved 17 March 2024.
- ^ "Third-party open-source software Curl". GitHub. OpenAtom OpenHarmony. Retrieved 17 March 2024.
- ^ a b Turner, Rich (18 January 2018). "Tar and Curl Come to Windows!". Windows Command Line. Microsoft.
- ^ "Open Source Components for the Native SDK for BlackBerry Tablet OS". Archived from the original on 2013-01-27. Retrieved 2017-09-19.
- ^ "libcurl bindings". curl.se.
- ^ "curl supports rustls | daniel.haxx.se". 9 February 2021. Retrieved 2022-01-01.
- ^ "curl - SSL CA Certificates". curl.se.
- ^ "Rock-solid curl".
- ^ Del, Ryan (2 March 2018). "Comandi equivalenti a cURL e Wget per Windows command-line con Powershell" [cURL and Wget equivalent commands for Windows command-line with Powershell]. Ryadel (in Italian). Retrieved 4 January 2020.
To emulate the behavior of the Linux cURL command, simply create a cURL.ps1 file containing the following line of code.
Introduction
Definition and Purpose
cURL is an open-source project that develops the curl command-line tool and the libcurl multiprotocol file transfer library, both focused on facilitating data transfers using URL syntax.[7] The primary purpose of cURL is to simplify the process of transferring data over networks, enabling tasks such as downloading files from remote servers, interacting with web APIs, and testing connectivity in scripts and applications.[2] By providing a straightforward interface for URL-based operations, cURL serves as a versatile utility for developers and system administrators handling network communications.[7] The name "cURL," coined in 1998, stands for "Client for URLs," with early documentation playfully referring to it as "see URL" to highlight its URL-centric design; it can also be interpreted as an abbreviation for "Client URL Request Library" or the recursive "cURL URL Request Library."[7] This etymology underscores its role as a client-side tool dedicated to URL requests. cURL has become ubiquitous in computing, powering network requests in command-line scripts, desktop and mobile applications, and embedded systems across devices like routers, smart TVs, and medical equipment, estimated to run in many billions of installations worldwide as of 2025.[8] Its reliability and portability make it a staple for everyday internet users and professionals alike.[2] As of November 2025, the latest stable release is version 8.17.0, issued on November 5, 2025, reflecting the project's commitment to regular monthly updates to address evolving network standards and security needs.[2]
High-Level Architecture
cURL operates as a client-side URL transfer tool, centered on libcurl as its core engine—a portable library that handles the underlying network communications—and the curl command-line tool serving as a user-facing wrapper that leverages libcurl for direct interactions.[5] This modular structure allows libcurl to be embedded in diverse applications, while curl provides a straightforward interface for scripting and automation without requiring custom programming.[9] The design emphasizes portability across platforms such as Windows, Linux, macOS, and embedded systems, ensuring consistent behavior wherever it compiles, achieved through C89 compliance and minimal assumptions beyond basic POSIX features.[10] It supports both synchronous operations via the easy interface, suitable for simple sequential transfers, and asynchronous modes through the multi interface, enabling concurrent handling of multiple connections for improved efficiency in multi-threaded or event-driven environments.[9] Extensibility is facilitated by a flexible API that allows customization via callbacks for data processing, progress monitoring, and error handling, promoting integration into larger systems without tight coupling.[11] A typical request begins with URL parsing to identify the scheme, host, path, and parameters, followed by protocol selection based on the scheme to determine the appropriate backend handler.[9] Connection establishment then occurs, potentially involving DNS resolution, socket creation, and TLS negotiation if required, before data transfer proceeds in chunks via read/write callbacks.[11] Finally, resources are cleaned up, including connection closure and handle release, ensuring no lingering state.[9] Dependencies are integrated selectively to maintain a lightweight footprint; for instance, libcurl interfaces with system libraries like OpenSSL for TLS/SSL support, but users can configure builds to use alternatives or disable features entirely for minimalism.[12] This configurable approach contrasts with more specialized tools, as cURL prioritizes broad multi-protocol support—encompassing over 20 protocols including HTTP, FTP, and SMTP—for versatile, non-interactive batch processing in automation pipelines, rather than focusing solely on file retrieval like wget.[13]
History
Origins and Early Development
cURL was conceived in late 1996 by Daniel Stenberg, a Swedish programmer, as a command-line tool to facilitate file transfers over the internet during his work on an IRC bot for an Amiga-related channel on EFnet.[14] Stenberg needed a simple way to automate the daily fetching of currency exchange rates from web pages to enhance the bot's services for chat room users, addressing the limitations of existing tools like httpget, which lacked sufficient flexibility for his requirements.[15] The tool focused on supporting HTTP and FTP protocols to handle URL-based downloads efficiently.[3] The first public release of cURL, version 4.0, occurred on March 20, 1998, comprising approximately 2,200 lines of code and marking its evolution from earlier prototypes named httpget and urlget.[14] This version emphasized portability and scriptability, positioning it as a lightweight alternative to contemporaries like wget by prioritizing single-shot URL transfers over recursive downloading.[16] Early adoption was driven by its open-source nature; released under the GNU General Public License initially, it transitioned to the Mozilla Public License (MPL) later in 1998, encouraging community involvement.[14] By late 1998, key enhancements included the addition of basic SSL support using the SSLeay library, enabling secure HTTPS transfers, and TELNET protocol compatibility.[3] Porting efforts quickly expanded its reach, with users creating Linux RPM packages and adaptations for Unix-like systems, fostering initial cross-platform use and contributions from early adopters.[3] These developments in 1998 and 1999 laid the groundwork for cURL's growth, with community feedback driving refinements before the turn of the millennium.[17]
Major Releases and Milestones
In August 2000, with the release of version 7.1, cURL introduced libcurl as a standalone library, enabling its reuse in diverse applications beyond the command-line tool and marking a pivotal step toward broader ecosystem integration.[3] This separation facilitated programmatic access to cURL's transfer capabilities, contributing to its adoption in embedded systems and software libraries worldwide. In January 2001, the project adopted the permissive MIT license, further encouraging widespread adoption.[3] Key enhancements followed in subsequent years, including experimental HTTP/2 support introduced in version 7.33.0 on October 14, 2013, which enabled multiplexing multiple requests over a single connection to improve efficiency for modern web traffic. TLS 1.3 integration arrived in version 7.52.0, released December 21, 2016, offering faster handshakes and enhanced security without compatibility trade-offs when paired with supporting backends like OpenSSL 1.1.1. A major leap occurred in December 2020 with version 7.74.0, which added experimental support for HTTP/3 over QUIC, leveraging UDP for lower-latency transfers and better resilience to packet loss compared to traditional TCP-based protocols.[18] This milestone aligned cURL with emerging internet standards, paving the way for its use in high-performance environments like content delivery networks. The curl project, maintained by a global community under the leadership of Daniel Stenberg and hosted at curl.se, follows a rigorous release schedule with multiple updates annually, prioritizing security patches alongside feature additions. Governance emphasizes open-source collaboration via GitHub, ensuring transparency and rapid response to evolving web technologies. Up to 2025, developments have emphasized performance refinements, such as optimized handling of multiplexed connections in HTTP/2 and HTTP/3, alongside initial explorations into post-quantum cryptography integrations using hybrid algorithms to mitigate future quantum threats.[19] The latest stable release, version 8.17.0 on November 5, 2025, incorporates these ongoing improvements while maintaining backward compatibility. These milestones have solidified cURL's role in critical infrastructure, with libcurl embedded in operating systems like Linux distributions, macOS utilities, and even browser engines, facilitating billions of daily data transfers across global networks.
Components
libcurl Library
libcurl is a powerful, portable, client-side URL transfer library written in the C programming language, designed for embedding network transfer capabilities directly into applications. It provides a straightforward API for performing transfers using various protocols, allowing developers to integrate features like HTTP requests, file uploads, and data retrieval without building low-level networking code from scratch. As the core engine powering the curl command-line tool, libcurl handles the complexities of protocol implementations, error management, and data formatting internally.[5]

The library offers three primary interfaces to accommodate different use cases. The Easy interface is the simplest, enabling synchronous, single-transfer operations through a handle-based approach: developers initialize a handle with curl_easy_init(), configure options using curl_easy_setopt(), execute the transfer via curl_easy_perform(), and clean up with curl_easy_cleanup(). This interface suits straightforward, blocking transfers in sequential applications. The Multi interface extends this for asynchronous and parallel operations, allowing multiple Easy handles to be managed within a single multi-handle context using functions like curl_multi_init(), curl_multi_add_handle(), and curl_multi_perform(); it supports non-blocking I/O via integration with select() or polling mechanisms, making it ideal for handling concurrent transfers in a single thread. Additionally, the Share interface facilitates resource sharing across multiple handles, such as DNS resolution caches or cookie data, via curl_share_init() and related options, optimizing performance in scenarios with repeated connections to similar hosts.[20][21][9]
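The following is a minimal single-threaded sketch of the Share interface; the URL is a placeholder, error checking is trimmed for brevity, and a multi-threaded program would additionally register CURLSHOPT_LOCKFUNC/CURLSHOPT_UNLOCKFUNC callbacks:

```c
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* create a share object holding DNS and TLS session data */
  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_SSL_SESSION);

  for(int i = 0; i < 2; i++) {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SHARE, share); /* attach the shared state */
    curl_easy_perform(curl); /* the second transfer reuses cached DNS/TLS data */
    curl_easy_cleanup(curl);
  }

  curl_share_cleanup(share);
  curl_global_cleanup();
  return 0;
}
```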
libcurl emphasizes portability and ease of integration across diverse environments, compiling and operating consistently on a wide range of platforms including Unix-like systems (e.g., Linux, FreeBSD, Solaris), Windows, macOS, embedded systems, and even legacy architectures, thanks to its adherence to C89 standards and avoidance of platform-specific dependencies. Builds are configurable using tools like Autoconf for Unix environments or CMake for cross-platform development, with options such as --with-ssl to enable cryptographic support via libraries like OpenSSL or GnuTLS, allowing customization based on target system requirements. This flexibility ensures libcurl can be compiled for resource-constrained devices or high-performance servers alike, with minimal code changes needed for porting.[5][22][10]
Performance optimizations in libcurl include built-in connection pooling for reusing established TCP connections across transfers, reducing latency from repeated handshakes; support for multiplexed transfers via HTTP/2 and HTTP/3; and configurable proxy handling for routing traffic efficiently. The library is thread-safe provided that easy handles are not used simultaneously by multiple threads and shared resources are protected with appropriate locking mechanisms. These features collectively minimize overhead, making libcurl suitable for high-throughput scenarios like web scraping or API interactions.[9][11]
libcurl is distributed under the curl license, a permissive open-source license derived from the MIT/X11 license, which grants users broad rights to use, modify, and redistribute the code in both open-source and proprietary software without requiring disclosure of modifications. This licensing model promotes widespread adoption, and libcurl is commonly packaged in development repositories such as curl-devel in Linux distributions (e.g., via yum or apt) for easy installation and linking into projects.[23][7]
curl Command-Line Tool
The curl command-line tool is a standalone executable program designed for transferring data to and from servers using various URL-based protocols, serving as an accessible interface that encapsulates the capabilities of the underlying libcurl library for users who are not developing custom applications.[1] It operates as a binary file named curl on Unix-like systems and curl.exe on Windows, enabling direct network interactions without the need for programming knowledge.[6] This tool is particularly valued for its simplicity and portability across operating systems, including Linux, macOS, Windows 10 version 1803 and later, and others such as Solaris and AIX.[1]
Invocation of the curl tool follows the basic syntax curl [options] [URL], where options and one or more URLs can be specified in any order, allowing flexible command construction.[1] By default, transferred data is output to standard output (stdout), facilitating easy piping to other commands or redirection to files; options like --output or --remote-name enable saving responses directly to specified or inferred filenames.[1] The tool supports sequential processing of multiple URLs unless parallel execution is explicitly enabled, making it suitable for batch operations.
Key built-in utilities enhance usability for diagnostic and interactive purposes, including the -v or --verbose option, which provides detailed logs of the connection process, request headers, and responses for troubleshooting.[1] Progress monitoring is available through default status displays or the --progress-bar option, which renders a graphical bar showing transfer advancement without verbose details.[1] For data submission, the --data option allows sending raw or URL-encoded payloads, such as in POST requests, while --form handles multipart form data uploads, supporting common web interactions.[1]
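As an illustration combining these options (the URL and field names are hypothetical), a verbose multipart upload might look like:

```sh
curl -v --form "file=@report.pdf" --form "note=quarterly" https://example.com/upload
```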
On various platforms, the curl executable is readily available through standard package managers, such as apt on Debian and Ubuntu-based Linux distributions (sudo apt install curl) and brew on macOS (brew install curl), simplifying installation and maintenance. For Windows, precompiled binaries are provided directly by the curl project. Its non-interactive nature, with options like --silent to suppress output, positions curl as an essential component for shell scripting, cron-scheduled tasks, and automated workflows that operate independently of graphical environments.[4]
Features
Supported Protocols
cURL supports the following protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS, and WSS.[1][2] cURL primarily supports HTTP and HTTPS as its core protocols, enabling versatile data transfers over the web with built-in handling for secure connections via TLS.[1] These protocols form the backbone of most cURL usage, allowing downloads, uploads, and API interactions. Additionally, cURL handles file transfer protocols like FTP, FTPS, SFTP, SCP, SMB, and SMBS for anonymous or authenticated file operations on remote servers, supporting both active and passive modes where applicable.[1] For email-related tasks, cURL provides support for SMTP, IMAP, and POP3, including their secure variants (SMTPS, IMAPS, POP3S), facilitating client-side email sending, retrieval, and management without needing a full mail client.[1] TFTP is also supported for simple, lightweight file transfers in network booting scenarios, though it lacks authentication and is UDP-based.[1]

Advanced protocol support extends cURL's utility to include WebDAV for collaborative web authoring and file management over HTTP, LDAP and LDAPS for directory queries, MQTT for lightweight messaging in IoT applications, RTSP for streaming media control, RTMP and RTMPS for real-time messaging protocol transfers, TELNET for remote terminal access, GOPHER and GOPHERS for accessing gopher menus, DICT for dictionary server queries, and FILE for local file operations.[1] Official support for WebSockets via WS and WSS protocols, enabling bidirectional communication over HTTP/HTTPS, was added in cURL 8.11 in November 2024.[24] Emerging protocols like HTTP/3 over QUIC are handled when built with compatible backends, offering improved performance and multiplexing.[25]

Protocol selection occurs automatically based on the URL scheme provided, such as http:// for unencrypted HTTP or https:// for TLS-secured HTTPS, with cURL detecting and applying the appropriate backend.[1] Fallback mechanisms ensure compatibility, for instance, negotiating down from HTTP/3 to HTTP/2 or HTTP/1.1 if the server does not support the preferred version.[25]
cURL's extensibility allows integration of custom protocols through libcurl's URL API, where developers can implement backends or plugins to add support without modifying the core library.[26] However, cURL operates strictly as a client-side tool, lacking server-mode capabilities, and focuses on efficient data transfer rather than implementing complete protocol stacks or advanced server interactions.[5]
Key Options and Configurations
cURL provides a wide array of command-line options to customize data transfers, allowing users to control output, authentication, proxies, and more. These options are specified using short flags (e.g., -o) or long forms (e.g., --output), and can be combined in any order with URLs on the command line.[1]

Among the common options, -o or --output directs the transfer output to a specified file rather than standard output, enabling users to save responses locally without displaying them in the terminal; for instance, it writes the server's response body to the named file.[27] The -H or --header option appends custom HTTP headers to the request, such as User-Agent or Authorization, which is essential for mimicking browser behavior or meeting API requirements.[28] For authentication, -u or --user supplies a username and optional password for basic HTTP or other protocol authentication, prompting for the password if omitted to avoid exposure in command history.[29] Additionally, --proxy establishes a connection through an intermediary proxy server, specified by host and port, supporting protocols like HTTP, HTTPS, or SOCKS for routing traffic.[30]

Advanced configurations offer finer control over transfer behavior. The --limit-rate option throttles the upload or download speed to a specified rate (e.g., in bytes per second), useful for testing or bandwidth management without affecting the server's response.[31] --connect-timeout sets a maximum time limit for establishing the initial connection, preventing indefinite hangs on unresponsive hosts by aborting after the given seconds.[32] For secure connections, --cacert specifies a custom CA certificate file to verify the peer's certificate, overriding the system's default bundle to use a specific set of trusted authorities.[33]

Handling payloads for requests like POST or PUT involves options such as --data-raw, which sends the provided data exactly as-is without adding newline characters or percent-encoding, ideal for raw JSON or binary content.[34] The -X or --request option overrides the default HTTP method (typically GET), allowing specification of methods like POST, PUT, or DELETE to perform the desired action on the resource.[35]

cURL also respects environment variables for global settings. CURL_CA_BUNDLE defines the path to a CA certificate bundle file, which cURL uses for SSL/TLS verification if no other certificate option is provided.[1] CURL_HOME sets the user's home directory for locating configuration files, influencing where cURL searches for defaults.[1]

Configuration files further streamline usage by storing default options. The .curlrc file, typically located in the user's home directory, contains lines of options that cURL reads and applies automatically unless overridden by command-line arguments, supporting persistent settings like proxy usage or verbose output.[1]
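As a brief sketch, a user-level .curlrc might hold a handful of persistent defaults (the values below are illustrative, not project recommendations):

```
# ~/.curlrc - read on every invocation unless -q/--disable is given
silent
show-error
connect-timeout = 10
max-redirs = 5
user-agent = "my-script/1.0"
```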
Usage
Command-Line Examples
cURL's command-line tool offers versatile options for performing various network transfers directly from the shell. This section demonstrates common usage scenarios through practical examples, illustrating how to leverage key options for everyday tasks such as downloading files, interacting with APIs, handling authentication, managing proxies and redirects, and implementing basic error handling. Each example includes the command syntax and a brief explanation of its functionality. To manually check if a subdirectory exists using cURL, you can send a HEAD request with the command curl -I https://example.com/admin/. A 200 OK response indicates the subdirectory exists, while a 404 Not Found response indicates it does not. This method is useful for scripting and automation without downloading the full content.[1]
Basic File Download
To download a remote file and save it locally with its original filename, use the -O or --remote-name option. This instructs cURL to write the output to a file named like the remote resource. For instance, the following command retrieves file.txt from the specified URL and saves it as file.txt in the current directory:
```sh
curl -O https://example.com/file.txt
```
Even if the URL contains a longer path, such as https://example.com/path/to/file.txt, the file is still saved as file.txt in the current directory; adding -J (--remote-header-name) makes curl use a server-supplied Content-Disposition filename instead. This approach is efficient for simple retrievals without needing to specify a local filename manually.[1]
API Interaction
cURL excels at sending HTTP requests to APIs, such as POST requests with JSON payloads. To perform a POST request, specify the method with -X POST, provide data using -d or --data, and set headers with -H or --header. The following example sends a JSON object to an API endpoint, setting the Content-Type header to application/json:
```sh
curl -X POST -d '{"key":"value"}' -H "Content-Type: application/json" https://api.example.com/endpoint
```
Here, -d passes the JSON as the request body, and the header ensures the server interprets it correctly. For more complex data, the payload can be read from a file using -d @filename.json. This method is widely used for RESTful API testing and automation.[1][4]
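For instance, with the same endpoint and a local payload.json file (both placeholders), the body can be read from disk:

```sh
curl -X POST -H "Content-Type: application/json" -d @payload.json https://api.example.com/endpoint
```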
Authentication
For accessing protected resources, cURL supports basic authentication via the --user option, which supplies a username and password. The command prompts for the password if not provided inline, but for scripting, include both separated by a colon. An example to fetch a protected page is:
```sh
curl --user username:password https://protected.site/resource
```
This sends an Authorization: Basic header with the base64-encoded credentials. Note that for security, avoid embedding passwords in commands visible in process lists; consider using --netrc for file-based credentials instead.[1]
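A minimal ~/.netrc sketch for the placeholder host above; with --netrc (or --netrc-file), curl looks up matching credentials there rather than taking them from the command line:

```
machine protected.site
login username
password s3cr3t
```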
Proxy and Redirect Handling
To route traffic through a proxy server, use --proxy followed by the proxy URL, and combine it with -L or --location to follow HTTP redirects automatically. For a SOCKS5 proxy, the command might look like:
```sh
curl --proxy socks5://proxy:1080 -L https://redirecting.url
```
The -L option enables automatic redirection, following up to 50 redirects by default to prevent infinite loops. Specify the proxy protocol (e.g., HTTP, SOCKS5) if it is not the default. This setup is useful in environments requiring intermediary servers or when dealing with shortened URLs.[1]
Error Handling
To make scripts robust against server errors, employ --fail, which causes cURL to exit with a non-zero status code for HTTP response codes of 400 or above, without outputting the error page. Combine it with other options for conditional success checks. For example:
```sh
curl --fail https://example.com/status
```

On an HTTP error, the command suppresses the error page and exits with code 22, which scripts can test to branch on failure.[1]
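A short shell sketch of that pattern (the URL is hypothetical):

```sh
if curl --fail --silent --output /dev/null https://example.com/status; then
  echo "endpoint healthy"
else
  echo "request failed with curl exit code $?"
fi
```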
Programmatic Integration
libcurl, the core library behind cURL, enables programmatic integration into applications by providing a C API for URL transfers, which can be directly used in C and C++ programs or wrapped via bindings in other languages. This allows developers to embed robust network functionality without relying on external processes, supporting features like protocol handling, authentication, and data streaming directly within application code.[5] In C and C++, integration typically involves initializing a handle with curl_easy_init(), configuring options via curl_easy_setopt(), executing the transfer with curl_easy_perform(), and cleaning up resources with curl_easy_cleanup(). For example, a basic synchronous HTTP GET request might look like this:
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();           /* allocate an easy handle */
  if(curl) {
    /* the response body is written to stdout by default */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    res = curl_easy_perform(curl);   /* run the blocking transfer */
    if(res != CURLE_OK) {
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));
    }
    curl_easy_cleanup(curl);         /* free the handle */
  }
  return 0;
}
```
Error handling relies on the CURLcode values these calls return, with curl_easy_strerror() available for decoding return codes.[36][11]
Language bindings extend libcurl's reach to higher-level environments. In Python, PycURL provides a direct interface, allowing URL fetches with similar option-setting patterns; a simple example retrieves content into a buffer:
```python
import pycurl
from io import BytesIO

buffer = BytesIO()             # collect the response body in memory
c = pycurl.Curl()
c.setopt(c.URL, 'http://example.com')
c.setopt(c.WRITEDATA, buffer)  # write received data into the buffer
c.perform()                    # run the blocking transfer
c.close()
body = buffer.getvalue().decode('utf-8')
```
In PHP, the bundled cURL extension wraps libcurl with functions such as curl_init(), curl_setopt(), curl_exec(), and curl_close() to perform transfers seamlessly within scripts. For instance:
```php
$ch = curl_init('http://example.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
if (curl_error($ch)) {
    echo 'Error: ' . curl_error($ch);
}
curl_close($ch);
echo $response;
```
For Node.js, the node-libcurl binding exposes libcurl handles through an event-driven API; a basic GET that follows redirects looks like:
```js
const { Curl } = require('node-libcurl');
const curl = new Curl();
curl.setOpt('URL', 'http://example.com');
curl.setOpt('FOLLOWLOCATION', true);
curl.on('end', function (statusCode, data, headers) {
  console.log(data);
  this.close();
});
curl.on('error', curl.close.bind(curl));
curl.perform();
```
In Rust, the curl crate binds libcurl through its curl-sys dependency, with the Easy struct for blocking requests. An example fetches and prints content:
```rust
use std::io::Write;
use curl::easy::Easy;

fn main() {
    let mut easy = Easy::new();
    easy.url("https://www.rust-lang.org/").unwrap();
    // stream each received chunk straight to stdout
    easy.write_function(|data| {
        std::io::stdout().write_all(data).unwrap();
        Ok(data.len())
    }).unwrap();
    easy.perform().unwrap();
}
```
Robust integrations check the CURLcode returned by curl_easy_setopt(), curl_easy_perform(), and other functions, using curl_easy_strerror() to interpret codes like CURLE_OK or network failures, and always invoke curl_easy_cleanup() to free handles and prevent memory leaks, even in error paths. However, mainstream practice in open-source projects, including official curl examples, Git, and Transmission, is typically not to check the return value of each curl_easy_setopt() call, focusing instead on curl_easy_init() and curl_easy_perform(). For rigor in production code, it is recommended to check these returns using macros, goto statements, or asserts in debug builds to detect potential failures. Additionally, global initialization via curl_global_init() and cleanup with curl_global_cleanup() should bookend application use of libcurl to manage shared resources like DNS caches.[11][41][42][36][43][44]
For asynchronous scenarios, libcurl's multi interface enables concurrent requests in a single thread, ideal for event-driven applications. Developers create multiple easy handles, add them to a multi stack with curl_multi_add_handle(), poll for activity using curl_multi_fdset() or sockets with select(), and process completions via curl_multi_perform() and CURLMSG_DONE checks. This avoids blocking while handling multiple transfers efficiently.[21]
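A compact sketch of that flow follows; it drives two placeholder transfers and uses curl_multi_poll() (available since curl 7.66.0) as a simpler stand-in for the curl_multi_fdset()/select() approach described above:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  CURLM *multi = curl_multi_init();
  CURL *h1 = curl_easy_init();
  CURL *h2 = curl_easy_init();
  curl_easy_setopt(h1, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(h2, CURLOPT_URL, "https://example.org/");
  curl_multi_add_handle(multi, h1);
  curl_multi_add_handle(multi, h2);

  int running = 1;
  while(running) {
    curl_multi_perform(multi, &running);           /* drive both transfers */
    if(running)
      curl_multi_poll(multi, NULL, 0, 1000, NULL); /* wait for socket activity */
  }

  /* harvest per-transfer results */
  CURLMsg *msg;
  int queued;
  while((msg = curl_multi_info_read(multi, &queued))) {
    if(msg->msg == CURLMSG_DONE)
      fprintf(stderr, "transfer finished: %s\n",
              curl_easy_strerror(msg->data.result));
  }

  curl_multi_remove_handle(multi, h1);
  curl_multi_remove_handle(multi, h2);
  curl_easy_cleanup(h1);
  curl_easy_cleanup(h2);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```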
Notable integrations include Git, which uses libcurl for HTTP and HTTPS cloning operations to fetch repositories over the network. Similarly, certain web browsers, such as Lightpanda, embed libcurl for resource fetching to leverage its protocol versatility in rendering web content.[45][46]
Security Considerations
Known Vulnerabilities
cURL and its underlying library libcurl have accumulated 170 published Common Vulnerabilities and Exposures (CVEs) since 2000 as of November 2025, with the majority classified as low to medium severity due to the project's proactive security auditing and maintenance practices.[47] These vulnerabilities span various aspects of network protocol handling, but the development team has consistently addressed them through timely security releases, minimizing long-term exposure.[48] Common vulnerability types in cURL include buffer overflows, improper validation of certificates, and denial-of-service (DoS) conditions triggered by malformed inputs. Buffer overflows, often heap-based, arise from inadequate bounds checking in protocol handshakes or data parsing, potentially leading to crashes or remote code execution under specific conditions.[47] Improper certificate validation flaws can bypass security checks in TLS implementations, while DoS issues typically involve resource exhaustion from oversized or crafted inputs, such as excessively long hostnames or invalid WebSocket masks.[49] Credential leaks represent another frequent category, where sensitive authentication data is inadvertently exposed during redirects or file-based credential loading.[50] Notable vulnerabilities illustrate these patterns. In 2016, CVE-2016-8615 involved a cookie injection flaw in libcurl's cookie jar handling, allowing a malicious HTTP server to inject cookies for arbitrary domains if the jar file was read back for subsequent requests; this affected curl versions 7.19.0 through 7.51.0 and was fixed in curl 7.52.0.[51] More recently, CVE-2023-38545 was a high-severity heap buffer overflow in libcurl's SOCKS5 proxy handshake, exploitable when processing long hostnames during slow connections, impacting versions 7.69.0 to 8.3.0 and patched in curl 8.4.0.[52] CVE-2023-38546, also addressed in the same release, allowed cookie injection in libcurl when duplicating easy handles with cookies enabled and no cookie file specified, potentially loading cookies from a file named "none" if it exists, affecting libcurl versions since curl_easy_duphandle() was introduced.[53] In the TLS domain, vulnerabilities like CVE-2024-2466 have caused certificate check bypasses in certain backends such as mbedTLS when connecting via IP addresses, allowing potential man-in-the-middle attacks; it impacted versions 8.5.0 to 8.6.0 and was resolved in curl 8.7.1.[54] For 2024-2025, CVE-2024-11053 exposed a credential leak in libcurl when using .netrc files during HTTP redirects, sending passwords from the initial host to subsequent ones, fixed in curl 8.11.1 across versions 7.76.0 to 8.11.0.[47] Similarly, CVE-2025-0167 involved a default credential leak in the curl command-line tool when following redirects with .netrc authentication using a "default" entry, affecting versions prior to 8.12.0 and patched in curl 8.12.0.[50] More recently in 2025, CVE-2025-10966 addressed missing SFTP host verification with the wolfSSH backend, potentially allowing MITM attacks, fixed in the latest release.[55] These issues primarily affect libcurl, the core library used in applications, though some, like credential leaks, also impact the curl command-line tool due to its direct handling of user inputs and files.[48] The patch history demonstrates rapid response, with security advisories published on the official curl.se security page detailing affected versions, exploitation conditions, and fixes; for instance, multiple 2023 flaws were bundled into the curl 8.4.0 
release on October 11, 2023, and 2025 issues like CVE-2025-0167 prompted immediate updates in subsequent versions.[47] This advisory process ensures transparency and encourages upstream vendors to apply patches promptly.[56]
Best Practices for Secure Use
When using cURL for secure network operations, proper certificate handling is essential to prevent man-in-the-middle attacks. Always specify a trusted certificate authority bundle using the --cacert option to provide a custom CA certificate file or --capath for a directory of hashed CA certificates, ensuring that cURL verifies the server's certificate against a known set of trusted authorities rather than relying on system defaults, which may be outdated or compromised. For enhanced security in scenarios requiring strict verification, such as pinning to a specific server's public key, employ the --pinnedpubkey option to match the expected public key hash (e.g., SHA-256) of the server's certificate, mimicking HTTP Strict Transport Security (HSTS) pinning and mitigating risks from compromised certificate authorities.[57] Disabling certificate verification with --insecure or CURLOPT_SSL_VERIFYPEER set to false should never be used in production, as it exposes connections to interception and forgery.[58]
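For instance (the paths and pin value below are placeholders; --pinnedpubkey takes the base64-encoded SHA-256 hash of the server's public key):

```sh
# verify the server against an explicit CA bundle
curl --cacert /etc/ssl/my-ca-bundle.pem https://example.com/
# additionally require a specific public key
curl --pinnedpubkey 'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' https://example.com/
```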
To mitigate server-side request forgery (SSRF) attacks, where malicious input could trick cURL into accessing internal or unauthorized resources, rigorously sanitize and validate all user-supplied URLs before passing them to cURL, restricting them to whitelisted domains or protocols and rejecting suspicious patterns like localhost or private IP addresses.[59] Additionally, limit the risk of redirect-based exploits by setting --max-redirs to a low threshold (e.g., 5) to cap the number of HTTP redirects followed, preventing infinite loops or unintended resource access through chained redirects.
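A hedged one-liner combining redirect following with a capped redirect budget (URL illustrative):

```sh
curl --location --max-redirs 5 https://example.com/short-link
```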
For authentication, favor modern token-based mechanisms over legacy methods to reduce exposure of credentials. Use --oauth2-bearer to supply OAuth 2.0 bearer tokens, which provide short-lived access without transmitting usernames and passwords, aligning with secure authorization frameworks that avoid credential reuse.[4] Basic authentication via --user should be avoided where possible, as it encodes credentials in Base64 (easily reversible) and transmits them in every request unless combined with HTTPS, opting instead for Digest, NTLM, or Negotiate when HTTP authentication is necessary; never hardcode credentials in scripts or command lines, using environment variables or secure vaults for storage.[7]
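For example, reading a token from an environment variable keeps it out of the script source, though the expanded value still appears in the process arguments (the variable name and endpoint are hypothetical):

```sh
curl --oauth2-bearer "$API_TOKEN" https://api.example.com/v1/me
```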
In logging and auditing, enable verbose output with -v only during debugging, as it may disclose sensitive information like authentication tokens or response bodies in plain text.[58] For production monitoring, leverage --write-out (or -w) to extract non-sensitive metadata such as HTTP status codes (%{http_code}), response time (%{time_total}), or redirect count without dumping full request/response details, facilitating audits while minimizing data leakage.
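For example, this hypothetical probe prints only the status code, total time, and redirect count (num_redirects is the corresponding --write-out variable):

```sh
curl --silent --output /dev/null --write-out '%{http_code} %{time_total}s %{num_redirects} redirects\n' https://example.com/
```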
Maintaining security requires regular updates to the latest cURL version to address known vulnerabilities, such as buffer overflows or improper certificate handling patched in recent releases; check the official security advisories for CVEs and upgrade promptly using package managers or direct builds.[47] For backend testing and auditing, utilize tools like --test-event in event-based modes to simulate and trace transfer events during development, helping identify potential security flaws before deployment.[60]