File Transfer Protocol
| Communication protocol | |
|---|---|
| Purpose | File transfer |
| Developer(s) | Abhay Bhushan for RFC 114 |
| Introduction | April 16, 1971 |
| OSI layer | Application layer |
| Port(s) | 21 for control, 20 for data transfer |
| RFC(s) | 959 |
The File Transfer Protocol (FTP) is a standard communication protocol used for the transfer of computer files from a server to a client on a computer network. FTP is built on a client–server model architecture using separate control and data connections between the client and the server.[1] FTP users may authenticate themselves with a plain-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password, and encrypts the content, FTP is often secured with SSL/TLS (FTPS) or replaced with SSH File Transfer Protocol (SFTP).
The first FTP client applications were command-line programs developed before operating systems had graphical user interfaces, and are still shipped with most Windows, Unix, and Linux operating systems.[2][3] Many dedicated FTP clients and automation utilities have since been developed for desktops, servers, mobile devices, and hardware, and FTP has been incorporated into productivity applications such as HTML editors and file managers.
An FTP client used to be commonly integrated in web browsers, where file servers are browsed with the URI prefix "ftp://". In 2021, FTP support was dropped by Google Chrome and Firefox,[4][5] two major web browser vendors, on the grounds that it had been superseded by the more secure SFTP and FTPS, although neither browser has implemented those protocols in its place.[6][7]
History of FTP servers
The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. Until 1980, FTP ran on NCP, the predecessor of TCP/IP.[2] The protocol was later replaced by a TCP/IP version, RFC 765 (June 1980) and RFC 959 (October 1985), the current specification. Several proposed standards amend RFC 959; for example, RFC 1579 (February 1994) enables Firewall-Friendly FTP (passive mode), RFC 2228 (October 1997) proposes security extensions, and RFC 2428 (September 1998) adds support for IPv6 and defines a new type of passive mode.[8]
Protocol overview
Communication and data transfer
FTP may run in active or passive mode, which determines how the data connection is established.[9] (This sense of "mode" is different from that of the MODE command in the FTP protocol.)
- In active mode, the client starts listening for incoming data connections from the server on port M. It sends the FTP command PORT M to inform the server on which port it is listening. The server then initiates a data channel to the client from its port 20, the FTP server data port.
- In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used. In this mode, the client uses the control connection to send a PASV command to the server and then receives a server IP address and server port number from the server,[9] which the client then uses to open a data connection from an arbitrary client port to the server IP address and server port number received.[10]
Both modes were updated in September 1998 to support IPv6. Further changes were introduced to the passive mode at that time, updating it to extended passive mode.[11]
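As an illustration of the two modes, the sketch below uses Python's standard ftplib client; the host name is a placeholder, and set_pasv() simply controls whether the library issues PASV or PORT before each transfer.

```python
from ftplib import FTP

# Hypothetical host; replace with a real server before running.
HOST = "ftp.example.com"

ftp = FTP(HOST, timeout=30)        # opens the control connection on port 21
ftp.login()                        # anonymous login unless credentials are given

# Passive mode (ftplib's default): the client sends PASV and then opens
# the data connection to the address/port the server reports.
ftp.set_pasv(True)
listing = ftp.nlst()               # NLST runs over a fresh data connection

# Active mode: the client listens locally and sends PORT; the server
# connects back from its data port (20). Often blocked by NAT/firewalls.
ftp.set_pasv(False)
listing_active = ftp.nlst()

ftp.quit()
```

Passive mode is the library's default precisely because a client-initiated data connection survives NAT and client-side firewalls.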
The server responds over the control connection with three-digit status codes in ASCII with an optional text message. For example, "200" (or "200 OK") means that the last command was successful. The numbers represent the code for the response and the optional text represents a human-readable explanation or request (e.g. <Need account for storing file>).[1] An ongoing transfer of file data over the data connection can be aborted using an interrupt message sent over the control connection.
FTP needs two ports (one for sending and one for receiving) because it was originally designed to operate on top of Network Control Protocol (NCP), which was a simplex protocol that utilized two port addresses, establishing two connections, for two-way communications. An odd and an even port were reserved for each application layer application or protocol. The standardization of TCP and UDP reduced the need for the use of two simplex ports for each application down to one duplex port,[12]: 15 but the FTP protocol was never altered to only use one port, and continued using two for backwards compatibility.
NAT and firewall traversal
[edit]FTP normally transfers data by having the server connect back to the client, after the PORT command is sent by the client. This is problematic for both NATs and firewalls, which do not allow connections from the Internet towards internal hosts.[13] For NATs, an additional complication is that the representation of the IP addresses and port number in the PORT command refer to the internal host's IP address and port, rather than the public IP address and port of the NAT.
There are two approaches to solve this problem. One is that the FTP client and FTP server use the PASV command, which causes the data connection to be established from the FTP client to the server.[13] This is widely used by modern FTP clients. Another approach is for the NAT to alter the values of the PORT command, using an application-level gateway for this purpose.[13]
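To make the PASV exchange concrete, this minimal sketch drives the control connection over a raw socket and parses the server's 227 reply, whose six comma-separated numbers encode the data-connection address and port (port = p1 × 256 + p2). The host and credentials are placeholders and error handling is omitted.

```python
import re
import socket

HOST = "ftp.example.com"                       # placeholder server

ctrl = socket.create_connection((HOST, 21), timeout=30)
f = ctrl.makefile("rw", encoding="ascii", newline="\r\n")

def reply():
    """Read one FTP reply; multi-line replies run from "xyz-" to "xyz "."""
    line = f.readline().rstrip()
    lines = [line]
    if len(line) > 3 and line[3] == "-":
        code = line[:3]
        while not line.startswith(code + " "):
            line = f.readline().rstrip()
            lines.append(line)
    return "\n".join(lines)

def send(cmd):
    f.write(cmd + "\r\n")
    f.flush()
    return reply()

print(reply())                                 # 220 service-ready banner
send("USER anonymous")                         # 331 need password
send("PASS guest@example.com")                 # 230 logged in (anonymous convention)
pasv = send("PASV")                            # e.g. 227 Entering Passive Mode (192,0,2,10,197,143)

nums = list(map(int, re.search(r"\(([\d,]+)\)", pasv).group(1).split(",")))
addr = (".".join(map(str, nums[:4])), nums[4] * 256 + nums[5])
print("data connection endpoint:", addr)

data = socket.create_connection(addr, timeout=30)
send("NLST")                                   # 150: listing follows on the data connection
print(data.recv(4096).decode("ascii", "replace"))
data.close()
print(reply())                                 # 226 transfer complete
send("QUIT")
ctrl.close()
```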

Data types
While transferring data over the network, five data types are defined:[2][3][8]
- ASCII (TYPE A): Used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation, including newlines. As a consequence, this mode is inappropriate for files that contain data other than ASCII.
- Image (TYPE I, commonly called Binary mode): The sending machine sends each file byte by byte, and the recipient stores the bytestream as it receives it. (Image mode support has been recommended for all implementations of FTP).
- EBCDIC (TYPE E): Used for plain text between hosts using the EBCDIC character set.
- Local (TYPE L n): Designed to support file transfer between machines which do not use 8-bit bytes, e.g. 36-bit systems such as DEC PDP-10s. For example, "TYPE L 9" would be used to transfer data in 9-bit bytes, or "TYPE L 36" to transfer 36-bit words. Most contemporary FTP clients/servers only support L 8, which is equivalent to I.
- Unicode text files using UTF-8 (TYPE U): defined in an expired Internet Draft[14] which never became an RFC, though it has been implemented by several FTP clients/servers.
Note these data types are commonly called "modes", although ambiguously that word is also used to refer to active-vs-passive communication mode (see above), and the modes set by the FTP protocol MODE command (see below).
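In practice, a client selects the type before each transfer; with Python's ftplib, retrlines() issues TYPE A and retrbinary() issues TYPE I, as in this sketch (host and file names are placeholders):

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")       # placeholder host
ftp.login()

# TYPE A: the library sends "TYPE A" and transfers line by line,
# letting the endpoints normalize line endings. Suitable for text only.
with open("readme.txt", "w", encoding="ascii", errors="replace") as out:
    ftp.retrlines("RETR README.TXT", lambda line: out.write(line + "\n"))

# TYPE I (Image/binary): the byte stream is stored exactly as received,
# which is what you want for archives, images, and executables.
with open("archive.zip", "wb") as out:
    ftp.retrbinary("RETR archive.zip", out.write)

ftp.quit()
```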
For text files (TYPE A and TYPE E), three different format control options are provided, to control how the file would be printed:
- Non-print (TYPE A N and TYPE E N) – the file does not contain any carriage control characters intended for a printer
- Telnet (TYPE A T and TYPE E T) – the file contains Telnet (or in other words, ASCII C0) carriage control characters (CR, LF, etc.)
- ASA (TYPE A A and TYPE E A) – the file contains ASA carriage control characters
These formats were mainly relevant to line printers; most contemporary FTP clients/servers only support the default format control of N.
File structures
File organization is specified using the STRU command. The following file structures are defined in section 3.1.1 of RFC 959:
- F or FILE structure (stream-oriented). Files are viewed as an arbitrary sequence of bytes, characters or words. This is the usual file structure on Unix systems and other systems such as CP/M, MS-DOS and Microsoft Windows. (Section 3.1.1.1)
- R or RECORD structure (record-oriented). Files are viewed as divided into records, which may be fixed or variable length. This file organization is common on mainframe and midrange systems, such as MVS, VM/CMS, OS/400 and VMS, which support record-oriented filesystems.
- P or PAGE structure (page-oriented). Files are divided into pages, which may either contain data or metadata; each page may also have a header giving various attributes. This file structure was specifically designed for TENEX systems, and is generally not supported on other platforms. RFC1123 section 4.1.2.3 recommends that this structure not be implemented.
Most contemporary FTP clients and servers only support STRU F. STRU R is still in use in mainframe and minicomputer file transfer applications.
Data transfer modes
Data transfer can be done in any of three modes:[1][2]
- Stream mode (MODE S): Data is sent as a continuous stream, relieving FTP from doing any processing. Rather, all processing is left up to TCP. No End-of-file indicator is needed, unless the data is divided into records.
- Block mode (MODE B): Designed primarily for transferring record-oriented files (STRU R), although it can also be used to transfer stream-oriented (STRU F) text files. FTP puts each record (or line) of data into several blocks (block header, byte count, and data field) and then passes it on to TCP.[8]
- Compressed mode (MODE C): Extends MODE B with data compression using run-length encoding.
Most contemporary FTP clients and servers do not implement MODE B or MODE C; FTP clients and servers for mainframe and minicomputer operating systems are the exception to that.
Some FTP software also implements a DEFLATE-based compressed mode, sometimes called "Mode Z" after the command that enables it. This mode was described in an Internet Draft, but not standardized.[15]
GridFTP defines additional modes, MODE E[16] and MODE X,[17] as extensions of MODE B.
Additional commands
More recent implementations of FTP support the Modify Fact: Modification Time (MFMT) command, which allows a client to adjust that file attribute remotely, enabling the preservation of that attribute when uploading files.[18][19]
To retrieve a remote file's timestamp, there is the MDTM command. Some servers (and clients) support a nonstandard two-argument form of MDTM that works the same way as MFMT.[20]
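A hedged sketch of how a client might use these commands through ftplib's raw-command interface; MFMT is an extension, so the sketch checks the FEAT response first (host, credentials and file name are placeholders):

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")           # placeholder host
ftp.login("user", "password")          # placeholder credentials

# MDTM: ask the server for a file's modification time.
# A typical reply looks like "213 20240101120000" (YYYYMMDDHHMMSS).
reply = ftp.sendcmd("MDTM backup.tar")
print("remote mtime:", reply.split()[1])

# MFMT (extension): set the modification time after an upload so the
# remote timestamp matches the local one. Only attempt it if the server
# advertises MFMT in its FEAT response.
if "MFMT" in ftp.sendcmd("FEAT"):
    ftp.voidcmd("MFMT 20240101120000 backup.tar")

ftp.quit()
```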
Login
FTP login uses a normal username and password scheme for granting access.[2] The username is sent to the server using the USER command, and the password is sent using the PASS command.[2] This sequence is unencrypted "on the wire", so it may be vulnerable to a network sniffing attack.[21] If the information provided by the client is accepted by the server, the server will send a greeting to the client and the session will commence.[2] If the server supports it, users may log in without providing login credentials, but the same server may authorize only limited access for such sessions.[2]
Anonymous FTP
A host that provides an FTP service may provide anonymous FTP access.[2] Users typically log into the service with an 'anonymous' (lower-case and case-sensitive in some FTP servers) account when prompted for a user name. Although users are commonly asked to send their email address instead of a password,[3] no verification is actually performed on the supplied data.[22] Many FTP hosts whose purpose is to provide software updates will allow anonymous logins.[3]
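A minimal sketch of an anonymous session using ftplib, which by default sends USER anonymous and an email-style password (the host and the /pub directory are assumptions):

```python
from ftplib import FTP

# Placeholder host; many public mirrors still accept anonymous FTP.
with FTP("ftp.example.com", timeout=30) as ftp:
    # login() with no arguments sends USER anonymous / PASS anonymous@
    ftp.login()
    print(ftp.getwelcome())    # the server's 220 banner
    ftp.cwd("/pub")            # assumes a conventional /pub directory
    for name in ftp.nlst():    # NLST over a passive data connection
        print(name)
```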
Software support
File managers
Many file managers offer FTP access, such as File Explorer (formerly Windows Explorer) on Microsoft Windows. This client is only recommended for small file transfers from a server, due to limitations compared to dedicated client software.[23] It does not support SFTP.[24]
Both the native file managers for KDE on Linux (Dolphin and Konqueror) support FTP as well as SFTP.[25][26]

On Android, the My Files file manager on Samsung Galaxy has a built-in FTP and SFTP client.[27]
Web browser
For a long time, most common web browsers were able to retrieve files hosted on FTP servers, although not all of them had support for protocol extensions such as FTPS.[3][28] When an FTP—rather than an HTTP—URL is supplied, the accessible contents on the remote server are presented in a manner that is similar to that used for other web content.
Google Chrome removed FTP support entirely in Chrome 88, also affecting other Chromium-based browsers such as Microsoft Edge.[29] Firefox 88 disabled FTP support by default, with Firefox 90 dropping support entirely.[30][4]
FireFTP is a discontinued browser extension that was designed as a full-featured FTP client to be run within Firefox, but when Firefox dropped support for FTP the extension developer recommended using Waterfox.[31] Some browsers, such as the text-based Lynx, still support FTP.[32]
Syntax
FTP URL syntax is described in RFC 1738, taking the form: ftp://user:password@host:port/path. Only the host is required.
More details on specifying a username and password may be found in the browsers' documentation (e.g., Firefox[33] and Internet Explorer[34]). By default, most web browsers use passive (PASV) mode, which more easily traverses end-user firewalls.
Some variation has existed in how different browsers treat path resolution in cases where there is a non-root home directory for a user.[35]
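The URL form maps directly onto the fields an FTP client needs; a small sketch using Python's urllib.parse with an illustrative URL:

```python
from urllib.parse import urlparse

# Illustrative URL; only the host part is mandatory in the ftp:// scheme.
url = urlparse("ftp://alice:secret@ftp.example.com:2121/pub/notes.txt")

print(url.username)   # alice
print(url.password)   # secret
print(url.hostname)   # ftp.example.com
print(url.port)       # 2121 (None if omitted, implying port 21)
print(url.path)       # /pub/notes.txt
```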
Download manager
Most common download managers can retrieve files hosted on FTP servers, and some also provide an interface for browsing such servers. DownloadStudio, for example, allows not only downloading a file from an FTP server but also viewing the list of files on it.[36]
Other
LibreOffice deprecated its FTP support in release 7.4 and removed it in release 24.2.[37][38] Apache OpenOffice, another descendant of OpenOffice.org, still supports FTP.[39][40][41]
Security
FTP was not designed to be a secure protocol, and has many security weaknesses.[42] In May 1999, the authors of RFC 2577 listed its vulnerability to the following problems:
- Brute-force attack
- FTP bounce attack
- Packet capture
- Port stealing (guessing the next open port and usurping a legitimate connection)
- Spoofing attack
- Username enumeration
- DoS or DDoS
FTP does not encrypt its traffic; all transmissions are in clear text, and usernames, passwords, commands and data can be read by anyone able to perform packet capture (sniffing) on the network.[2][42] This problem is common to many of the Internet Protocol specifications (such as SMTP, Telnet, POP and IMAP) that were designed prior to the creation of encryption mechanisms such as TLS or SSL.[8]
Common solutions to this problem include:
- Using the secure versions of the insecure protocols, e.g., FTPS instead of FTP and TelnetS instead of Telnet.
- Using a different, more secure protocol that can handle the job, e.g. SSH File Transfer Protocol or Secure Copy Protocol.
- Using a secure tunnel such as Secure Shell (SSH) or virtual private network (VPN).
FTP over SSH
FTP over SSH is the practice of tunneling a normal FTP session over a Secure Shell connection.[42] Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end sets up new TCP connections (data channels), which thus have no confidentiality or integrity protection.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP protocol, to monitor and rewrite FTP control channel messages and autonomously open new packet forwardings for FTP data channels. Software packages that support this mode include:
- Tectia ConnectSecure (Win/Linux/Unix)[43] of SSH Communications Security's software suite
FTP over SSH should not be confused with SSH File Transfer Protocol (SFTP).
Derivatives
[edit]FTPS
Explicit FTPS is an extension to the FTP standard that allows clients to request that FTP sessions be encrypted. This is done by sending the "AUTH TLS" command. The server has the option of allowing or denying connections that do not request TLS. This protocol extension is defined in RFC 4217. Implicit FTPS is an outdated standard for FTP that required the use of an SSL or TLS connection. It was specified to use different ports than plain FTP.
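A hedged sketch of an explicit FTPS session with Python's ftplib.FTP_TLS, which sends AUTH TLS as part of login and can then protect the data channel with PROT P (host and credentials are placeholders, and the server must support RFC 4217):

```python
from ftplib import FTP_TLS

# Placeholder host/credentials; the server must support AUTH TLS (RFC 4217).
ftps = FTP_TLS("ftps.example.com")
ftps.login("user", "password")   # sends AUTH TLS, then USER/PASS over TLS
ftps.prot_p()                    # PROT P: encrypt the data connections too

for name in ftps.nlst():         # directory listing over an encrypted channel
    print(name)

ftps.quit()
```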
SSH File Transfer Protocol
The SSH file transfer protocol (chronologically the second of the two protocols abbreviated SFTP) transfers files and has a similar command set for users, but uses the Secure Shell protocol (SSH) to transfer files. Unlike FTP, it encrypts both commands and data, preventing passwords and sensitive information from being transmitted openly over the network. It cannot interoperate with FTP software, though some FTP client software offers support for the SSH file transfer protocol as well.
Trivial File Transfer Protocol
Trivial File Transfer Protocol (TFTP) is a simple, lock-step FTP that allows a client to get a file from or put a file onto a remote host. One of its primary uses is in the early stages of booting from a local area network, because TFTP is very simple to implement. TFTP lacks security and most of the advanced features offered by more robust file transfer protocols such as File Transfer Protocol. TFTP was first standardized in 1981 and the current specification for the protocol can be found in RFC 1350.
Simple File Transfer Protocol
Simple File Transfer Protocol (the first protocol abbreviated SFTP), as defined by RFC 913, was proposed as an (unsecured) file transfer protocol with a level of complexity intermediate between TFTP and FTP. It was never widely accepted on the Internet, and is now assigned Historic status by the IETF. It runs over port 115, and often receives the initialism SFTP. It has a command set of 11 commands and supports three types of data transmission: ASCII, binary and continuous. For systems with a word size that is a multiple of 8 bits, the implementation of binary and continuous is the same. The protocol also supports login with user ID and password, hierarchical folders and file management (including rename, delete, upload, download, download with overwrite, and download with append).
FTP commands
FTP reply codes
Below is a summary of FTP reply codes that may be returned by an FTP server. These codes have been standardized in RFC 959 by the IETF. The reply code is a three-digit value. The first digit indicates one of three possible outcomes: success, failure, or an error or incomplete reply:
- 2yz – Success reply
- 4yz or 5yz – Failure reply
- 1yz or 3yz – Error or Incomplete reply
The second digit defines the kind of error:
- x0z – Syntax. These replies refer to syntax errors.
- x1z – Information. Replies to requests for information.
- x2z – Connections. Replies referring to the control and data connections.
- x3z – Authentication and accounting. Replies for the login process and accounting procedures.
- x4z – Not defined.
- x5z – File system. These replies relay status codes from the server file system.
The third digit of the reply code is used to provide additional detail for each of the categories defined by the second digit.
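The digit positions can be decoded mechanically; the following illustrative helper applies the rules summarized above:

```python
# Illustrative decoder for the three-digit FTP reply codes described above.
OUTCOME = {"1": "error or incomplete (positive preliminary)",
           "2": "success",
           "3": "error or incomplete (positive intermediate)",
           "4": "failure (transient)",
           "5": "failure (permanent)"}

GROUP = {"0": "syntax", "1": "information", "2": "connections",
         "3": "authentication and accounting", "4": "not defined",
         "5": "file system"}

def describe(code: str) -> str:
    """Explain a reply such as '331' or '550' using the digit rules."""
    first, second, third = code[0], code[1], code[2]
    return (f"{code}: {OUTCOME.get(first, 'unknown')}, "
            f"{GROUP.get(second, 'unknown')} group, detail {third}")

print(describe("331"))   # login sequence: user name okay, need password
print(describe("550"))   # permanent failure in the file system group
```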
See also
- Comparison of FTP client software
- Comparison of FTP server software packages
- Comparison of file transfer protocols
- Curl-loader – FTP/S loading/testing open-source software
- DTXT
- File eXchange Protocol (FXP)
- File Service Protocol (FSP)
- FTAM
- FTPFS
- List of FTP commands
- List of FTP server return codes
- Managed File Transfer
- OBEX
- Shared file access
- TCP Wrapper
References
- ^ a b c Forouzan, B.A. (2000). TCP/IP: Protocol Suite (1st ed.). New Delhi, India: Tata McGraw-Hill Publishing Company Limited.
- ^ a b c d e f g h i j Kozierok, Charles M. (2005). "The TCP/IP Guide v3.0". Tcpipguide.com.
- ^ a b c d e Dean, Tamara (2010). Network+ Guide to Networks. Delmar. pp. 168–171.
- ^ a b Vonau, Manuel (7 July 2021). "Firefox follows in Chrome's footsteps and drops FTP support (APK Download)". Android Police. Retrieved 12 July 2021.
- ^ "Remove FTP support - Chrome Platform Status". www.chromestatus.com. Retrieved 2 September 2021.
- ^ "Firefox is dropping FTP support". Sophos News. 23 March 2020. Retrieved 13 October 2023.
- ^ Edwards, Benj (14 July 2022). "Chrome and Firefox Killed FTP Support: Here's an Easy Alternative". How-To Geek. Retrieved 13 October 2023.
- ^ a b c d Clark, M.P. (2003). Data Networks IP and the Internet (1st ed.). West Sussex, England: John Wiley & Sons Ltd.
- ^ a b "Active FTP vs. Passive FTP, a Definitive Explanation". Slacksite.com.
- ^ RFC 959 (Standard) File Transfer Protocol (FTP). Postel, J. & Reynolds, J. (October 1985).
- ^ RFC 2428 (Proposed Standard) Extensions for IPv6, NAT, and Extended Passive Mode. Allman, M. & Metz, C. & Ostermann, S. (September 1998).
- ^ Stevens, W. Richard (1994). TCP/IP Illustrated Volume I. Vol. 1. Reading, Massachusetts, USA: Addison-Wesley Publishing Company. ISBN 0-201-63346-9.
- ^ a b c Gleason, Mike (2005). "The File Transfer Protocol and Your Firewall/NAT". Ncftp.com.
- ^ Klensin, John. FTP TYPE Extension for Internationalized Text. I-D draft-klensin-ftpext-typeu-00. Retrieved 9 June 2020.
- ^ Preston, J. (January 2005). Deflate transmission mode for FTP. IETF. I-D draft-preston-ftpext-deflate-03. Retrieved 27 January 2016.
- ^ Allcock, W. (April 2003). "GridFTP: Protocol Extensions to FTP for the Grid" (PDF).
- ^ Mandrichenko, I. (4 May 2005). "GridFTP v2 Protocol Description" (PDF).
- ^ "MFMT FTP command". support.solarwinds.com. 11 October 2018.
- ^ "FTP Commands: DSIZ, MFCT, MFMT, AVBL, PASS, XPWD, XMKD | Serv-U". www.serv-u.com.
- ^ "MDTM FTP command". support.solarwinds.com. 11 October 2018.
- ^ Prince, Brian (24 January 2012). "Should Organizations Retire FTP for Security?". Security Week. Retrieved 14 September 2017.
- ^ RFC 1635 (Informational) How to Use Anonymous FTP. Deutsch, P. & Emtage, A. & Marine, A. (May 1994).
- ^ FTP Access through Windows Explorer
- ^ "CSC373/406: SSH [2011/03/27-29]". fpl.cs.depaul.edu. Retrieved 13 October 2023.
- ^ "FTP". docs.kde.org. Retrieved 13 October 2023.
- ^ Cohen, Brent (26 July 2023). "How To Connect to FTP/SFTP in Dolphin | DeviceTests". Archived from the original on 27 September 2023. Retrieved 13 October 2023.
- ^ Moyens Staff (28 February 2022). "Samsung My Files vs Google Files: Which File Manager is Better on Galaxy Phones". Moyens I/O. Retrieved 13 October 2023.
- ^ Matthews, J. (2005). Computer Networking: Internet Protocols in Action (1st ed.). Danvers, MA: John Wiley & Sons Inc.
- ^ Sneddon, Joey (26 January 2021). "Linux Release Roundup: GParted, Lightworks, Google Chrome + More". omgubuntu.co.uk. Retrieved 30 January 2021.
- ^ "See what's new in Firefox: 88.0 Firefox Release". mozilla.org. 19 April 2021. Retrieved 20 April 2021.
- ^ "FireFTP - The Free FTP Client for Waterfox". FireFTP.net. Archived from the original on 1 March 2022.
- ^ "URL Schemes Supported in Lynx". Lynx website. Retrieved 6 July 2023.
- ^ "Accessing FTP servers | How to | Firefox Help". Support.mozilla.com. 5 September 2012. Retrieved 16 January 2013.
- ^ "How to Enter FTP Site Password in Internet Explorer". Archived from the original on 2 July 2015. Retrieved 13 February 2020. Written for IE versions 6 and earlier. Might work with newer versions.
- ^ Jukka "Yucca" Korpela (18 September 1997). "FTP URLs". "IT and communication" (jkorpela.fi). Retrieved 26 January 2020.
- ^ "DownloadStudio - Internet Download Manager And Download Accelerator - Features". Conceiva. Archived from the original on 8 September 2021. Retrieved 19 October 2021.
- ^ "LibreOffice 7.4: Release Notes". The Document Foundation's Wiki. Retrieved 10 September 2022.
- ^ "ReleaseNotes/24.2". The Document Foundation's Wiki. Retrieved 24 March 2024.
- ^ "The FTP Content Provider". Apache OpenOffice Wiki. Retrieved 23 July 2025.
- ^ "Path Settings". Apache OpenOffice Wiki. Retrieved 23 July 2025.
- ^ "API/Samples/Java/Office/DocumentHandling". Apache OpenOffice Wiki. Retrieved 23 July 2025.
- ^ a b c "Securing FTP using SSH". Nurdletech.com.
- ^ "Components of the Information Assurance Platform (section Tectia ConnectSecure)". ssh.com. Archived from the original on 31 July 2020.
Further reading
- RFC 697 – CWD Command of FTP. July 1975.
- RFC 959 – (Standard) File Transfer Protocol (FTP). J. Postel, J. Reynolds. October 1985.
- RFC 1579 – (Informational) Firewall-Friendly FTP. February 1994.
- RFC 1635 – (Informational) How to Use Anonymous FTP. May 1994.
- RFC 1639 – FTP Operation Over Big Address Records (FOOBAR). June 1994.
- RFC 1738 – Uniform Resource Locators (URL). December 1994.
- RFC 2228 – (Proposed Standard) FTP Security Extensions. October 1997.
- RFC 2389 – (Proposed Standard) Feature negotiation mechanism for the File Transfer Protocol. August 1998.
- RFC 2428 – (Proposed Standard) Extensions for IPv6, NAT, and Extended passive mode. September 1998.
- RFC 2577 – (Informational) FTP Security Considerations. May 1999.
- RFC 2640 – (Proposed Standard) Internationalization of the File Transfer Protocol. July 1999.
- RFC 3659 – (Proposed Standard) Extensions to FTP. P. Hethmon. March 2007.
- RFC 5797 – (Proposed Standard) FTP Command and Extension Registry. March 2010.
- RFC 7151 – (Proposed Standard) File Transfer Protocol HOST Command for Virtual Hosts. March 2014.
- IANA FTP Commands and Extensions registry – The official registry of FTP Commands and Extensions
External links
- Communication Networks/File Transfer Protocol at Wikibooks
- FTP Server Online Tester – Authentication, encryption, mode and connectivity.
- Anonymous FTP Servers by Country Code TLD (2012): "Offbeat Internet - Public Access - FTP". www.jumpjet.info. 2012. Archived from the original on 28 March 2023. Retrieved 16 January 2020.
File Transfer Protocol
History
Origins and Development
The File Transfer Protocol (FTP) was initially developed in 1971 by Abhay Bhushan at MIT's Project MAC to enable standardized file sharing across the ARPANET.[2] This early version operated over the Network Control Protocol (NCP), the ARPANET's initial host-to-host communication standard, allowing users on diverse systems to access and manipulate remote file systems without custom adaptations for each host.[2] Bhushan's design drew from the need for a uniform interface amid the network's heterogeneous hardware, including systems with varying word sizes and data representations.[2] The protocol's creation addressed the inefficiencies of prior ad-hoc file transfer methods on the ARPANET, such as using TELNET for remote logins to manually copy files, which lacked reliability and portability across incompatible operating systems.[3] FTP provided a dedicated mechanism for efficient, reliable transfers, supporting both ASCII and binary data while shielding users from host-specific file representations.[2] Its development was influenced by earlier file handling concepts in systems like Multics, where Bhushan targeted initial implementations, building on that OS's hierarchical file structures to generalize access for network use.[2]

Key early milestones included the first implementations on the TENEX operating system for PDP-10 computers, which facilitated practical testing on ARPANET hosts like those at MIT and BBN.[3] The protocol evolved through revisions such as RFC 354 (1972) and RFC 542 (1973), which refined commands and data handling. By the early 1980s, as the ARPANET transitioned from NCP to TCP/IP, FTP was adapted to the new stack, with RFC 765 in 1980 outlining the port assignments and connection handling necessary for TCP compatibility, paving the way for broader adoption.[4]

Standardization and Evolution
The File Transfer Protocol (FTP) achieved its formal standardization with the publication of RFC 959 in October 1985, authored by Jon Postel and Joyce Reynolds of the University of Southern California's Information Sciences Institute. This document defined the core architecture, commands, and operational procedures for FTP, establishing it as the definitive specification and obsoleting earlier experimental and proposed standards, including RFC 765 from June 1980. RFC 959 emphasized reliability in heterogeneous network environments, mandating features like active and passive data connection modes to accommodate diverse implementations across ARPANET hosts.

Subsequent evolutions addressed limitations in security, scalability, and compatibility. In October 1997, RFC 2228 introduced FTP security extensions, enabling authentication mechanisms such as Kerberos and the use of the AUTH command for protected sessions, marking a shift toward integrating cryptographic protections without altering the base protocol. File size and metadata constraints were mitigated in March 2007 through RFC 3659, which defined extensions for handling large files, including the MLST and MLSD commands for standardized machine-readable listings and metadata exchange and the MDTM command for retrieving modification timestamps, thereby supporting modern storage demands. Further consolidation came in March 2010 with RFC 5797, which established an IANA registry of FTP commands and extensions. Adaptations for evolving internet infrastructure included support for IPv6 addressing in September 1998 via RFC 2428, which extended the PORT and PASV commands to handle 128-bit addresses through the EPRT and EPSV commands, ensuring FTP's viability in dual-stack environments. Internationalization efforts advanced in July 1999 with RFC 2640, specifying UTF-8 encoding for filenames and paths via the FEAT, OPTS, and LANG commands, allowing seamless handling of non-ASCII characters across global systems.

Despite these iterative improvements, FTP's usage has declined since the early 2000s, supplanted by secure web-based alternatives like HTTP/HTTPS for file distribution, though it persists in enterprise automation, legacy industrial systems, and specialized applications requiring batch transfers.

Protocol Overview
Connection and Session Management
The File Transfer Protocol (FTP) employs a dual-channel architecture to separate command exchanges from data transfers, ensuring reliable communication over TCP connections. The control connection operates on TCP port 21 by default, where the client initiates a full-duplex session to the server for sending commands and receiving responses. In parallel, a separate data connection handles the actual file transfers; in active mode, the server initiates this from TCP port 20 to a client-specified port, while in passive mode, the client initiates it to a server-selected port to accommodate network configurations like firewalls.[5][6]

Session initiation begins when the client establishes the control connection to the server's port 21, followed by authentication commands to negotiate session parameters. To prepare for data transfer, the client issues the PORT command in active mode to inform the server of its listening port for incoming data connections, or the PASV command in passive mode, prompting the server to open and report a dynamic port for the client to connect to. These mechanisms allow the protocol to adapt to different network topologies while maintaining the separation of control and data flows.[7][8]

FTP sessions operate in a non-persistent state by default, where the data connection is established on demand and automatically closed upon completion of a transfer to free resources. The ABOR command enables abrupt abortion of an ongoing data transfer by closing the data connection and restoring the control connection to its prior state, providing a mechanism for interruption without fully terminating the session. Session teardown occurs via the QUIT command, which prompts the server to close the control connection and end the session gracefully.[7][9]

Servers supporting FTP are designed to handle multiple concurrent sessions, each managed through independent control connections from different clients, subject to implementation-defined resource limits such as maximum user connections to prevent overload. This concurrency allows efficient resource sharing among users while maintaining isolation between sessions.[10]

Control and Data Channels
The File Transfer Protocol (FTP) employs a dual-channel architecture to separate session management from data transfer operations. The control channel serves as a bidirectional communication pathway for exchanging commands and responses between the client and server, utilizing the Telnet protocol over TCP port 21 by default.[4] This channel handles session control functions, such as authentication and directory navigation, but does not carry file data; commands are issued as ASCII text strings terminated by carriage return and line feed (CRLF).

Transfer Modes and Mechanisms
The File Transfer Protocol (FTP) supports three primary transfer modes to handle data transmission over the data connection, allowing flexibility in how files are structured and sent between client and server. The default and most commonly used mode is Stream mode, in which data is transmitted as a continuous sequence of bytes without explicit boundaries between records or files.[4] In this mode, end-of-record (EOR) and end-of-file (EOF) markers are indicated by specific two-byte control codes if needed, though EOF is typically signaled by closing the data connection. Stream mode is suitable for most modern transfers due to its simplicity and efficiency, supporting any representation type without imposing record structures.[4]

Block mode structures data into blocks, each preceded by a three-byte header containing an 8-bit descriptor code and a 16-bit byte count.[4] The descriptor provides metadata such as EOR (code 128), EOF (code 64), or restart markers (code 16), enabling better handling of record-oriented files and error recovery. This mode is useful for systems requiring explicit block boundaries but is less common today than Stream mode due to added overhead.

Compressed mode, the least utilized of the three, transmits data in blocks similar to Block mode but incorporates compression techniques to reduce filler bytes and repetitions, using escape sequences for control information and a filler byte (such as space for text types or zero for binary).[4] It aims to optimize bandwidth for repetitive data but is rarely implemented in contemporary FTP clients and servers because of complexity and limited gains over modern compression alternatives. The transfer mode is negotiated using the MODE command, with Stream as the default.[4]

Transfer mechanisms in FTP define how data is represented and converted during transmission, primarily through the TYPE command, which specifies the format to ensure compatibility between heterogeneous systems. ASCII mode, the default, transfers text files using 7-bit Network Virtual Terminal (NVT) ASCII characters, converting line endings to the standard CRLF sequence.

Data and File Representation
Supported Data Types
The File Transfer Protocol (FTP) supports several representation types for data transfer, specified via the TYPE command, which defines how data is interpreted and transmitted between client and server systems. The TYPE command uses a single-character code to select the type, optionally followed by a format or byte-size parameter, ensuring compatibility across diverse host environments. All FTP implementations must support the ASCII (A) and Image (binary, I) types, while EBCDIC (E) and Local byte (L) serve specialized or legacy needs.[15]

The ASCII type (A) handles textual data using the Network Virtual Terminal (NVT-ASCII) standard, a 7-bit subset of ASCII extended to 8 bits for transmission. In this mode, end-of-line sequences are standardized to carriage return followed by line feed (CR-LF), with the sending host converting its internal representation to NVT-ASCII and the receiving host performing the reverse transformation to maintain portability. Non-printable characters, such as control codes, are transmitted without alteration in ASCII mode but are typically handled more robustly in binary mode to avoid corruption. This type is ideal for human-readable files like source code or configuration scripts, where line-ending consistency is crucial.[16][17]

In contrast, the Image type (I), also known as binary mode, transfers data as a stream of contiguous 8-bit bytes without any modification, preserving the exact bit pattern of the original file. Padding with null bytes may occur to align byte boundaries, but the core content remains unchanged, making this mode suitable for non-textual files such as executables, compressed archives, images, and multimedia. Unlike ASCII mode, no character set conversions or line-ending adjustments are applied, which prevents issues like truncation or alteration of binary structures.[18][17]

The EBCDIC type (E) provides support for systems using the Extended Binary Coded Decimal Interchange Code, primarily IBM mainframes, where data is transmitted in 8-bit EBCDIC characters and end-of-line is denoted by a newline (NL) character. This legacy type allows direct transfer without conversion for EBCDIC-native environments, though modern implementations often prefer binary mode for cross-platform compatibility. An optional format code, such as "N" for non-printable, can be specified with both A and E types to include control characters.[19][17]

For non-standard byte sizes, the Local byte type (L) enables transfer in logical bytes of a specified length, given as a decimal integer parameter (e.g., "L 8" for 8-bit bytes or "L 36" for systems like TOPS-20). Data is packed contiguously into these bytes, with padding as needed, accommodating legacy or specialized hardware where standard 8-bit bytes do not apply. This type is rarely used today but remains part of the protocol for backward compatibility.[20][17][21]

| Type Code | Description | Parameters | Primary Use Cases |
|---|---|---|---|
| A | ASCII (NVT-ASCII) | Optional: F (form, e.g., N for non-print) | Text files with line-ending normalization |
| I | Image (Binary) | None | Executables, images, archives (exact preservation) |
| E | EBCDIC | Optional: F (form, e.g., N for non-print) | IBM mainframe text data |
| L | Local byte size | Required: Byte size (e.g., 8, 36) | Non-8-bit systems, legacy hardware |
File and Directory Structures
In the File Transfer Protocol (FTP), files are represented through three primary structures defined to accommodate different access patterns and storage conventions across host systems. The file structure treats the file as a continuous sequence of data bytes, suitable for sequential access, and serves as the default mode for most transfers.[4] The record structure organizes data into discrete records of either fixed or variable length, enabling random access within the file, particularly for text-based or structured data formats.[4] Additionally, the page structure supports discontinuous access by dividing the file into independent, indexed pages, each with a header containing fields such as page length, index, data length, and type (e.g., last page or simple page), which was originally designed for systems like TOPS-20.[4]

Directories in FTP are handled implicitly through navigation and manipulation commands rather than via an explicit directory structure command, allowing servers to manage hierarchical file systems in a system-dependent manner. Pathnames serve as the fundamental identifier for both files and directories, consisting of character strings that may include hierarchical elements like slashes to denote parent-child relationships, though the exact syntax varies by host operating system.[4] For instance, pathnames can be absolute (starting from the root) or relative to the current working directory, enabling operations on nested directory trees without requiring a standardized format beyond basic pathname conventions.[4]

Navigation within the directory hierarchy is facilitated by core commands that adjust the client's perspective of the remote file system. The Change Working Directory (CWD) command shifts the current working directory to the specified pathname, while the Change to Parent Directory (CDUP) command moves up one level to the parent directory.[4] The Print Working Directory (PWD) command returns the absolute pathname of the current working directory, providing clients with a clear reference point for subsequent operations.[4] These commands support efficient traversal of hierarchical paths, with servers interpreting pathnames according to their local file system rules.[4]

To enhance the representation of file and directory attributes beyond simple names, FTP extensions introduced in RFC 3659 provide machine-readable listings. The MLST command retrieves structured facts about a single file or directory, such as type (file or directory), size in octets, last modification time in YYYYMMDDHHMMSS format, and permissions (e.g., read, write, delete).[14] Similarly, the MLSD command lists all entries in a directory, returning each with the same set of facts over a data connection, allowing clients to obtain detailed metadata like type=dir;size=0;perm=adfr for directories or type=file;size=1024990;modify=19970214165800;perm=r for files.[14] These mechanisms standardize attribute reporting, improving interoperability by specifying facts in a semicolon-separated, extensible format that supports UTF-8 pathnames.[14]
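Python's ftplib exposes the MLSD extension directly; a short sketch listing a directory with its standardized facts (the host and the /pub path are placeholders, and the server must implement RFC 3659):

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")            # placeholder host
ftp.login()

# MLSD returns machine-readable facts for each entry in a directory,
# e.g. type=dir or type=file, size, modify=YYYYMMDDHHMMSS, perm=...
for name, facts in ftp.mlsd("/pub", facts=["type", "size", "modify", "perm"]):
    print(f"{name}: {facts}")

ftp.quit()
```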
Encoding and Formatting
The File Transfer Protocol (FTP) originally specifies the use of 7-bit US-ASCII as the default character encoding for commands, responses, and pathnames on the control connection, ensuring compatibility with the Network Virtual Terminal (NVT) standard from Telnet.[4] This 7-bit encoding limits support to basic English characters, with the most significant bit set to zero, and applies to text-based transfers in ASCII mode where end-of-line sequences are normalized to carriage return followed by line feed (CRLF).[4]

To address internationalization needs, FTP was extended in 1999 to support Unicode through UTF-8 encoding, particularly for pathnames and filenames containing non-ASCII characters.[22] The OPTS UTF8 command enables this feature, allowing clients and servers to negotiate UTF-8 usage while maintaining backward compatibility with ASCII-only systems, as UTF-8 is a superset of US-ASCII.[22] Servers can advertise UTF-8 support via the FEAT command, which lists available protocol extensions, facilitating client detection of internationalization capabilities.[23] In binary (IMAGE) mode, 8-bit characters are transferred unaltered as a stream of bytes, preserving multibyte sequences without interpretation, which supports UTF-8 data files effectively once the control connection is UTF-8 enabled.[4]

Directory listings returned by the LIST command exhibit varying formatting conventions across implementations, lacking a standardized structure in the core protocol.[4] Common formats include Unix-style listings with columns for permissions, owner, size, and timestamp (e.g., "-rw-r--r-- 1 user group 1024 Jan 1 12:00 file.txt"), while Windows-based servers often mimic MS-DOS styles with short filenames and basic attributes. This non-standardization poses parsing challenges for clients, requiring heuristic detection or server-specific logic to interpret fields like file sizes or dates reliably.[4] UTF-8 adoption since the early 2000s has been driven by the need for robust, synchronization-safe encoding in global file transfers.[22]

Commands and Responses
Core Commands
The File Transfer Protocol (FTP) employs a set of core commands to facilitate basic file operations, authentication, and session management between client and server. These commands are transmitted over the control connection as case-insensitive ASCII strings, consisting of a three- or four-character alphabetic command code followed optionally by a space-separated argument, and terminated by a carriage return and line feed (CRLF).[4] This format ensures reliable parsing, with the server responding via three-digit reply codes to indicate success, errors, or required follow-up actions.[4]

Connection Management Commands
Core commands for establishing and terminating sessions include USER, which specifies the username to initiate login; it must typically be one of the first commands issued after connection and is followed by a server reply prompting for credentials.[4] PASS provides the corresponding password, completing the authentication if valid, and is handled sensitively by clients to avoid exposure in logs or displays.[4] ACCT supplies additional account information, such as billing details, which may be required after USER and PASS for certain systems or to grant specific access levels.[4] PORT specifies the client's address and port for the data connection in active mode, using a comma-separated list of six numbers (host IP bytes and port bytes), allowing the server to connect back to the client for transfers.[4] PASV requests the server to open a port for passive mode data connections, replying with the server's IP and port for the client to connect to, facilitating firewall traversal.[4] REIN reinitializes the connection, logging out the user and resetting the state without closing the control connection.[4] QUIT terminates the user session gracefully, prompting the server to close the control connection after sending a completion reply, though it does not interrupt ongoing data transfers.[4]

File Transfer Commands
Commands for transferring and manipulating files form the protocol's primary function. RETR retrieves a specified file from the server, initiating a data connection to send the file contents to the client without altering the original on the server.[4] STOR uploads a file to the server, replacing any existing file with the same pathname or creating a new one, with the client pushing data over the established connection.[4] APPE appends data to an existing file at the specified pathname or creates a new file if none exists, allowing incremental updates without full replacement.[4] REST enables restarting interrupted transfers by setting a byte marker, after which a subsequent RETR, STOR, or APPE command resumes from that point to support reliable large-file handling.[4] DELE deletes the specified file from the server, removing it permanently if permissions allow.[4] For renaming, RNFR identifies the source pathname of the file or directory to rename, requiring an immediate follow-up RNTO command with the destination pathname to complete the operation atomically.[4]

Directory Management Commands
Directory operations are handled by commands that navigate and modify the server's file system structure. CWD changes the current working directory to the specified pathname, enabling operations relative to that location without affecting the overall login context.[4] CDUP simplifies navigation by changing to the parent directory of the current one, using the same reply codes as CWD for consistency.[4] MKD creates a new directory at the given pathname, which can be absolute or relative to the current working directory, and returns the full pathname in its reply.[4] RMD removes an empty directory at the specified pathname, again supporting absolute or relative paths.[4] PWD queries the server for the current working directory pathname, which is returned in a dedicated reply format for client reference.[4] Listing commands include LIST, which sends a detailed server-specific listing of files and directories (optionally for a given pathname) over the data connection in the current transfer type, and NLST, which provides a simpler name-only list in the same manner, both defaulting to the current directory if no argument is supplied.[4]

Other Core Commands
Additional essential commands configure the transfer environment. TYPE sets the data representation type, such as ASCII (A) for text, EBCDIC (E) for legacy systems, Image (I) for binary, or Local (L) with a byte size, defaulting to ASCII non-printable format to ensure accurate interpretation across systems.[4] MODE defines the transfer mode, with Stream (S) as the default for continuous byte streams, Block (B) for structured blocks with headers, or Compressed (C) for efficiency, influencing how data is packaged during transfers.[4] STRU specifies the file structure, defaulting to File (F) for unstructured streams, or alternatives like Record (R) or Page (P) for systems requiring delimited content.[4] SYST queries the server's operating system type, eliciting a reply with the system name (e.g., UNIX or TOPS-20) to allow clients to adapt to host-specific behaviors.[4] ABOR aborts the previously issued command, interrupting any ongoing data transfer and closing the data connection if active.[4]

Reply Codes and Error Handling
The File Transfer Protocol (FTP) employs a three-digit numeric reply code system to communicate server responses to client commands, as defined in the protocol specification. Each reply code consists of three digits, where the first digit indicates the response category: 1xx for positive preliminary replies (signaling further action is needed), 2xx for positive completion (command accepted and action performed), 3xx for positive intermediate (command accepted but additional information required), 4xx for transient negative completion (temporary failure, action not taken but may succeed later), and 5xx for permanent negative completion (failure, action not taken and unlikely to succeed without change). The second digit specifies the functional group, such as x0x for syntax errors, x2x for connection management, x3x for authentication and accounting, and x5x for file system status. The third digit provides finer granularity within the group, allowing for specific error subtypes.[24]

These codes enable structured communication over the control channel, with the server transmitting the code followed by a human-readable text explanation. For instance, code 220 ("Service ready for new user") is sent upon successful connection establishment to indicate the server is prepared to receive commands. Similarly, 331 ("User name okay, need password") confirms valid username input and prompts for credentials during login. If the provided password is correct and access is authorized, 230 ("User logged in, proceed") is returned; however, if authentication or authorization fails, 530 ("Not logged in") is commonly returned. In data transfer scenarios, 426 ("Connection closed; transfer aborted") signals an interruption, often due to network issues, while 550 ("Requested action not taken. File unavailable (e.g., file not found, no access)") denotes permanent failures like missing files or permission denials. These examples illustrate how codes guide client interpretation of server states across operations.[25]

The reply code 530 is frequently encountered in authentication failures, particularly during anonymous login attempts. Anonymous logins conventionally use the username "anonymous" with either a blank password or an arbitrary string such as an email address. The 530 code indicates that the login was rejected. On Windows servers running Internet Information Services (IIS), common causes for this error during anonymous logins include an inaccessible home directory for the anonymous user account, misconfigured FTP user isolation, incorrect authorization rules, or insufficient permissions on the FTP root directory or related resources. In some configurations, particularly those involving Active Directory User Isolation, anonymous access is explicitly not supported, leading to 530 errors. In cybersecurity training labs and educational environments (such as those on platforms like Hack The Box or TryHackMe), participants commonly encounter 530 errors when attempting anonymous FTP logins to simulated Windows targets (e.g., using commands like ftp targetwindows01 or the target's IP address with username anonymous and a blank password), often due to these server-side configuration issues.[24][26]
Error handling in FTP relies on the reply code categories to facilitate recovery. Clients are expected to retry operations upon receiving 4xx transient errors, such as 421 ("Service not available, closing control connection") or 425 ("Can't open data connection"), as these indicate temporary conditions like resource unavailability that may resolve quickly. Permanent 5xx errors, like 500 ("Syntax error, command unrecognized") or the aforementioned 530 and 550, prompt clients to log the issue and cease retries for that specific action, escalating to user notification or session termination if persistent. For interrupted transfers, the REST (Restart) command allows resumption from a specified byte offset, with the server replying 350 ("Restarting at n. Send STORE or RETRIEVE to initiate transfer") to confirm the marker; this mechanism supports partial file recovery in stream mode without restarting from the beginning.[9][27]
Subsequent RFCs have extended the reply code framework while maintaining compatibility with RFC 959. For example, RFC 3659 introduces refined uses of existing codes for new commands like MDTM (modification time) and SIZE, where 213 returns numerical values on success, and 550 indicates unavailability; it also specifies 501 ("Syntax error in parameters or arguments") for invalid options in machine-readable listings (MLST/MLSD). Some FTP implementations incorporate additional reply codes beyond the standard, such as negative variants or vendor-specific subtypes (e.g., 5xx extensions for detailed diagnostics), but these must adhere to the core three-digit structure to ensure interoperability. Updates in later RFCs, including RFC 2228 for security extensions, refine error signaling without altering the foundational categories.[28]
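A hedged sketch of client-side handling that follows these categories with Python's ftplib: transient 4xx replies (error_temp) are retried, permanent 5xx replies (error_perm) are not, and an interrupted download is resumed by passing a REST offset. Host and file names are placeholders.

```python
import os
import time
from ftplib import FTP, error_perm, error_temp

HOST, REMOTE, LOCAL = "ftp.example.com", "big.iso", "big.iso"   # placeholders

for attempt in range(3):
    try:
        with FTP(HOST, timeout=60) as ftp:
            ftp.login()
            # Resume from however many bytes we already have: retrbinary's
            # rest= argument makes ftplib send "REST <offset>" before RETR.
            offset = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0
            with open(LOCAL, "ab") as out:
                ftp.retrbinary(f"RETR {REMOTE}", out.write, rest=offset)
        break                          # finished without error
    except error_temp as exc:          # 4xx: transient, worth retrying
        print("transient failure, retrying:", exc)
        time.sleep(5)
    except error_perm as exc:          # 5xx: permanent, e.g. 550 file unavailable
        print("permanent failure, giving up:", exc)
        break
```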
Authentication and Access Control
Login Procedures
The login process in FTP commences upon establishment of the control connection, typically on TCP port 21. The server immediately issues a 220 "Service ready for new user" reply code to signal readiness for authentication.[27] The client responds by sending the USER command, specifying the username as a Telnet string. The server validates the username and replies with 331 "User name okay, need password" if acceptable, or 530 "Not logged in" if invalid or unauthorized.[29] Following a 331 response, the client transmits the PASS command with the corresponding password, also as a Telnet string. Successful verification yields 230 "User logged in, proceed", granting session access; failure results in 530 "Not logged in", while 332 "Need account for login" indicates a requirement for additional accounting details.[29] In cases of a 332 reply, the client may then send the optional ACCT command providing accounting information, such as billing data, after which the server issues 230 upon completion or 530/532 if unsuccessful.[29]

Usernames and passwords are sent in plaintext over the unencrypted control channel, exposing them to potential eavesdropping.[30] Server-side validation occurs against local user databases like /etc/passwd or via Pluggable Authentication Modules (PAM), which support integration with external systems such as SQL databases or LDAP for credential checks.[31][32] The 230 response confirms authentication success and initializes the user session, enabling subsequent commands for file operations. To enhance security, many servers apply post-login restrictions, such as chroot jails that confine the user to their home directory or a virtual root, preventing access to the broader filesystem.[27][32] FTP servers commonly implement configurable idle timeouts to terminate inactive sessions and conserve resources; for instance, a default of 300 seconds without commands often triggers disconnection.[32][33]

Anonymous and Restricted Access
Anonymous FTP provides a mechanism for public access to files without requiring authenticated user credentials, allowing general users to retrieve resources from archive sites. It operates by permitting login with the username "anonymous" or "ftp", followed by a password that is typically an email address, though some implementations accept "guest" or any string.[34] This setup grants read-only access to designated public directories, enabling users to list contents and download files but prohibiting uploads or modifications unless explicitly configured otherwise.[34] Since the 1980s, anonymous FTP has been widely used for software distribution and sharing public information across the early Internet, such as GNU project releases.[35]

In cybersecurity training environments, such as those offered by platforms including Hack The Box, TryHackMe, or educational programs like PLTW, anonymous FTP access is commonly demonstrated on target systems to illustrate configuration and potential issues. A typical example involves connecting from a client workstation to a Windows-based FTP server (e.g., named TargetWindows01) using the command-line client with ftp targetwindows01 (or the server's IP address if hostname resolution fails). The username is entered as anonymous, and the password field is left blank or filled with any string (conventionally an email address). Upon successful authentication, the user gains read-only access to permitted directories.
Common issues encountered in such setups include hostname resolution failures, which can be resolved by using the IP address directly; the 530 "Not logged in" reply code (indicating login failure) often caused by an inaccessible home directory for the anonymous user, misconfigured FTP user isolation, restrictive authorization rules, or incorrect permissions on the directory—particularly in Microsoft Internet Information Services (IIS) FTP implementations; improper password handling; or firewalls blocking incoming connections on port 21.[4][26] These pitfalls emphasize the need for careful configuration to ensure secure anonymous access.
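The walk-through above can also be scripted with Python's ftplib; this sketch assumes the lab hostname and a placeholder fallback IP address, and shows how the 530 replies and connection failures discussed here surface to a client.

```python
from ftplib import FTP, error_perm

# "targetwindows01" and 192.0.2.10 are placeholders for the training-lab server.
for host in ("targetwindows01", "192.0.2.10"):      # fall back to the IP if name resolution fails
    try:
        with FTP(host, timeout=10) as ftp:
            ftp.login(user="anonymous", passwd="user@example.com")  # e-mail-style password by convention
            ftp.retrlines("LIST")                    # read-only directory listing
        break
    except error_perm as exc:                        # e.g. "530 User cannot log in" from a misconfigured IIS site
        print(host, "login refused:", exc)
    except OSError as exc:                           # DNS failure, refused connection, or a firewall blocking port 21
        print(host, "connection failed:", exc)
```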
Restricted access in FTP implementations limits user privileges to enhance security and prevent unauthorized system exploration. Chroot jails confine users to a specific subdirectory by changing the root directory during login, effectively isolating them from the broader filesystem; for example, in vsftpd, the chroot_local_user=YES directive applies this to local users by defaulting to their home directories. Virtual users operate without corresponding system accounts in /etc/passwd, authenticating via separate databases like PAM modules, and can be assigned privileges akin to anonymous or local users through options like virtual_use_local_privs=YES.[36] Guest accounts map non-anonymous logins to a fixed system user, such as "ftp", providing predefined privileges without granting full user access; this is enabled via guest_enable=YES and guest_username=ftp.
Server configuration for these features involves specific directives to balance accessibility and restriction. For anonymous FTP, anonymous_enable=YES permits logins, while anon_upload_enable=NO (default) blocks uploads to maintain read-only status, though enabling it requires careful permission setup on the anon_root directory.[36] Misconfiguration, such as allowing writable chroot directories without proper isolation, can enable privilege escalation or escapes from the jail, underscoring the need for non-writable roots in chroot setups.[36] Lists like /etc/vsftpd/chroot_list allow selective application of restrictions to specific users.
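As an illustration, the directives named above might be combined in a vsftpd.conf fragment along the following lines; the values and paths are examples rather than recommended defaults for every site.

```
# Illustrative vsftpd.conf fragment (example values only)
anonymous_enable=YES          # allow "anonymous"/"ftp" logins
anon_upload_enable=NO         # keep the anonymous area read-only
local_enable=YES
chroot_local_user=YES         # jail local users in their home directories
chroot_list_enable=YES        # users exempted from the jail are listed below
chroot_list_file=/etc/vsftpd/chroot_list
guest_enable=YES              # map non-anonymous logins to one system user
guest_username=ftp
virtual_use_local_privs=YES   # give virtual users local-user privileges
```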
The usage of anonymous FTP for public file distribution has declined since the 1990s, largely replaced by HTTP-based web servers, which offer simpler integration with browsers and better support for diverse content types without dedicated FTP clients.[37]
Security Issues
Common Vulnerabilities
The File Transfer Protocol (FTP) transmits usernames, passwords, and file data in plaintext, exposing them to eavesdropping attacks where network traffic can be intercepted and analyzed using tools like Wireshark.[38] This vulnerability stems from the original protocol design in RFC 959, which lacks any encryption mechanisms for control or data connections. As a result, attackers on the same network segment or those performing man-in-the-middle intercepts can capture sensitive credentials and content without detection.[38]

In active mode, FTP's use of port 20 for data connections enables risks such as port scanning for backdoors, where attackers probe for open services on the client side.[38] A more severe issue is the FTP bounce attack, exploited via the PORT command, which allows an attacker to instruct the FTP server to connect to arbitrary hosts and ports on behalf of the client, potentially bypassing firewalls or scanning internal networks.[38] This protocol flaw, identified in CVE-1999-0017, turns the FTP server into an unwitting proxy for reconnaissance or denial-of-service attempts.[39]

Directory traversal vulnerabilities arise from path manipulation in FTP commands like CWD or RETR, where insufficient input validation in servers allows attackers to access files outside the intended root directory using sequences like "../".[40] This risk is inherent to the protocol's flexible path handling but is exacerbated in implementations that fail to enforce strict boundaries.[41]

Buffer overflows in legacy FTP servers, such as those in public domain daemons from the late 1990s, enable remote code execution when processing oversized inputs in commands like USER or PASS.[42] These flaws, common in older software like wu-ftpd versions prior to 2.6.1, allowed attackers to overflow stack buffers and inject malicious code.[42] Standard FTP provides no built-in mechanisms for verifying data integrity during transfer, leaving files susceptible to undetected tampering or corruption en route.[43]

Misconfigurations in FTP server setups, particularly in Microsoft Internet Information Services (IIS) implementations, represent a significant source of security risks. Enabling anonymous FTP access without proper restrictions can expose files to unauthorized reading or writing, allowing potential data leakage or malicious file uploads. Conversely, overly restrictive configurations (improper user isolation, inaccessible home directories, incorrect authorization rules, or insufficient file system permissions) frequently result in login failures with 530 "User cannot log in" errors, even when using the standard anonymous username "anonymous" with a blank or arbitrary password (e.g., an email address). These issues are commonly encountered in Windows-based FTP deployments and in cybersecurity training or penetration testing environments where anonymous access is intentionally enabled for demonstration.[26][44]

Historical exploits targeting FTP servers proliferated from the 1980s through the 2000s, with attackers using buffer overflows in daemons like wu-ftpd to gain persistent access on Unix systems.[42] In modern contexts, legacy FTP implementations continue to support malware persistence, as unpatched servers remain common in various systems.[41] As of 2025, recent vulnerabilities in FTP server software, such as authentication bypass and remote code execution in CrushFTP (CVE-2024-4040, CVE-2025-54309) and post-authentication RCE in Wing FTP Server (CVE-2025-47812), have been actively exploited, underscoring ongoing risks in contemporary deployments.[45][46][47]
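The bounce attack described above hinges on the PORT command's address encoding, which any client can fill with a third party's address. A short Python sketch (using documentation addresses only) shows how the six-number argument maps to an IP address and port.

```python
# PORT arguments have the form "h1,h2,h3,h4,p1,p2": four address bytes and a
# 16-bit port split as p1*256 + p2. Nothing in the syntax ties the address to
# the client that sent it, which is what a bounce attack exploits.

def encode_port_argument(ip: str, port: int) -> str:
    h1, h2, h3, h4 = ip.split(".")
    return f"{h1},{h2},{h3},{h4},{port // 256},{port % 256}"

def decode_port_argument(arg: str) -> tuple[str, int]:
    h1, h2, h3, h4, p1, p2 = (int(x) for x in arg.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(encode_port_argument("192.0.2.10", 8021))   # "192,0,2,10,31,85"
print(decode_port_argument("192,0,2,10,31,85"))   # ("192.0.2.10", 8021)
```

Many servers therefore refuse PORT targets that differ from the control connection's peer address, which closes off this misuse.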
Mitigation Strategies
To mitigate the inherent security risks of FTP, such as unencrypted transmissions and susceptibility to eavesdropping or brute-force attacks, organizations can implement network-level controls to limit exposure. One effective approach is to restrict FTP access to trusted IP addresses or networks using TCP Wrappers, which integrate with servers like vsftpd to deny connections from unauthorized sources based on host lists in files like /etc/hosts.allow and /etc/hosts.deny.[48] Additionally, tunneling FTP traffic over a VPN encrypts the entire session, preventing interception on untrusted networks, as recommended for protecting legacy protocols in storage infrastructures.[49] For deployments requiring passive mode to facilitate data connections through firewalls, configure a narrow range of high ports (e.g., 49152–65534) on the server and explicitly allow only those ports in firewall rules, while blocking active mode to avoid inbound connection risks from clients.[50]

Server hardening focuses on minimizing the attack surface through configuration and maintenance. Disable anonymous access by default in vsftpd.conf with settings like anonymous_enable=NO, unless explicitly needed for public file distribution, and restrict uploads in any anonymous directories to write-only mode (e.g., chmod 730 on /var/ftp/pub/upload) to prevent reading or execution of malicious files.[48] Enable detailed logging of connections, transfers, and authentication attempts via vsftpd's xferlog_enable=YES and log_ftp_protocol=YES options, directing output to a secure, centralized log server for analysis, and implement rate limiting on login attempts using iptables rules to throttle excessive connections from single IPs.[51] Regularly apply security patches and updates to the FTP software, such as those addressing buffer overflows in vsftpd from vendors like Red Hat, and test configurations in a non-production environment before deployment.[51][48]

Ongoing monitoring enhances detection and response to potential compromises. Deploy host-based intrusion detection systems to scan FTP logs for anomalies, such as repeated failed logins or unusual transfer patterns, with automated alerts configured for thresholds like five invalid attempts within a minute.[51] For adding encryption without altering the core protocol, use TLS wrappers like stunnel to proxy FTP connections over SSL/TLS, ensuring certificates are valid and renewed periodically.[49]

As a broader best practice, avoid deploying traditional FTP for sensitive data transfers, as it transmits passwords and files without encryption, exposing credentials and content to interception; instead, prefer FTPS (FTP over SSL/TLS), which can use free certificates from authorities like Let's Encrypt, or replace FTP outright with SFTP over SSH.[53] In legacy environments requiring compatibility, TLS proxy setups via tools like stunnel provide a transitional layer of protection while maintaining FTP syntax.[49] Disable anonymous access entirely if not in use, as it poses a high risk of unauthorized file access and should be confined to read-only directories with strict permissions.[54]
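Several of these hardening measures map directly onto vsftpd configuration directives; the fragment below is an illustrative sketch, with the port range and other values chosen as examples rather than prescribed settings.

```
# Illustrative vsftpd.conf hardening fragment (example values only)
anonymous_enable=NO           # no anonymous logins unless explicitly required
pasv_enable=YES
pasv_min_port=49152           # narrow passive-mode port range to allow through the firewall
pasv_max_port=65534
xferlog_enable=YES            # record uploads and downloads
log_ftp_protocol=YES          # record the full command/reply dialogue for auditing
ssl_enable=YES                # offer FTPS (TLS) to clients that support it
```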
Implementations and Software
Client Applications
Client applications for the File Transfer Protocol (FTP) enable users to initiate connections to remote servers, authenticate, and manage file transfers through intuitive interfaces or command-line tools. These applications handle the FTP control and data channels, supporting operations such as uploading, downloading, renaming, and deleting files across local and remote systems. Built-in and third-party clients vary in complexity, from basic interactive shells to feature-rich graphical user interfaces (GUIs) that incorporate drag-and-drop functionality and multi-protocol support, including extensions like FTPS and SFTP for enhanced security.

Command-line FTP clients provide a lightweight, scriptable means for file transfers, often integrated directly into operating systems. The ftp command, built into Unix-like systems such as Linux and macOS, as well as Windows, allows interactive sessions for connecting to servers, navigating directories with commands like ls and cd, and transferring files using get and put in either ASCII or binary modes.[55][56] It supports batch mode for automated transfers via scripts, making it suitable for simple, unattended operations without additional installations. For more advanced scripting, lftp offers enhanced reliability with features like automatic retries, segmented downloads for resuming interrupted transfers, and parallel file handling across multiple connections.[57] Its built-in mirror command facilitates directory synchronization by recursively copying files and subdirectories, while bookmarks and queuing support complex workflows, such as bandwidth-limited transfers in shell scripts.[58]
Graphical FTP clients prioritize user-friendliness with visual file explorers and streamlined workflows. FileZilla, a cross-platform open-source application for Windows, Linux, and macOS, features a dual-pane interface for simultaneous local and remote file browsing, enabling drag-and-drop transfers and directory comparison for easy synchronization.[59] It includes a site manager to store connection profiles with credentials and settings, transfer queues for managing multiple uploads/downloads sequentially or in parallel, and filters to exclude specific file types during operations. WinSCP, tailored for Windows users, integrates SFTP alongside FTP for secure transfers and provides scripting capabilities through its .NET assembly for automation.[60] Its synchronization tools allow one-way or two-way mirroring of directories, while an integrated text editor supports in-place file modifications without separate applications. Cyberduck, optimized for macOS with Windows support, extends FTP functionality to cloud services like Amazon S3 and Backblaze B2 via a bookmark-based connection system.[61] It offers drag-and-drop uploads, queue management for batched transfers, and synchronization options that detect changes for efficient updates across remote storage.
Common features across modern FTP clients enhance usability and efficiency. Site managers in tools like FileZilla and Cyberduck allow saving multiple server configurations, including host details, port numbers, and authentication methods, reducing setup time for frequent connections.[62] Queueing systems, as implemented in WinSCP and lftp, permit scheduling and prioritizing transfers, with progress tracking and pause/resume capabilities to handle large datasets without interruption.[63] Synchronization tools, such as directory mirroring in lftp and WinSCP, compare timestamps and sizes to transfer only modified files, minimizing bandwidth usage in repetitive tasks like backups.
Open-source FTP clients dominate the landscape due to their accessibility, community-driven updates, and compatibility with diverse protocols, with applications like FileZilla and WinSCP consistently ranking among the most downloaded options.[64] Mobile adaptations extend this trend; for instance, AndFTP on Android supports FTP, FTPS, SFTP, and SCP with resume-enabled uploads/downloads and folder synchronization, allowing on-the-go file management via touch interfaces.[65]
Server Implementations
FTP server implementations vary widely, encompassing both open-source and proprietary software designed to handle file transfers efficiently and securely. These servers typically operate as daemons listening on TCP port 21 for control connections and dynamically assigned ports for data transfers, supporting concurrent sessions through various architectural models. Popular implementations are chosen based on factors like operating system compatibility, performance requirements, and administrative ease, with many integrating into broader hosting environments.

Among open-source options, vsftpd (Very Secure FTP Daemon) stands out for its lightweight design, emphasizing speed, stability, and security on Linux and other UNIX-like systems. It is particularly favored in enterprise Linux distributions due to its minimal resource footprint and built-in protections against common exploits, such as chroot jails for user isolation.[66][67] ProFTPD offers a modular architecture inspired by Apache's configuration model, allowing administrators to extend functionality through loadable modules for features like virtual hosting and authentication backends. Its configuration files use a directive-based syntax similar to httpd.conf, enabling fine-grained control over server behavior without recompilation.[68][69] Pure-FTPd provides a simple, single-process implementation optimized for ease of setup and support for virtual users, which map to non-system accounts stored in a Berkeley DB or PAM for isolated access management. This approach simplifies administration in multi-tenant environments by avoiding direct ties to host user databases.[70]

On the proprietary side, Microsoft's IIS FTP service integrates natively with Windows Server, leveraging the Internet Information Services (IIS) framework for seamless management within the Windows ecosystem. It supports site isolation and integration with Active Directory for authentication, making it suitable for enterprise Windows deployments.[71] Serv-U, developed by SolarWinds, is a commercial FTP server with robust auditing capabilities, including detailed logging of transfers, user actions, and access attempts that can be archived for compliance purposes. It caters to businesses needing advanced reporting and integration with external monitoring tools.[72][73]

FTP server architectures commonly employ forking or preforking models to manage concurrency. In the forking model, the parent process spawns a new child process for each incoming connection, which handles the session independently but incurs overhead from repeated process creation. Preforking, by contrast, pre-creates a pool of worker processes at startup, with the parent dispatching connections to idle workers, reducing latency under high load at the cost of idle resource usage.[74][75]

IPv6 support has been standardized in FTP servers since the late 1990s, with RFC 2428 defining extensions for IPv6 addresses and NAT traversal, enabling dual-stack operation without protocol modifications. By the 2000s, major implementations like vsftpd and ProFTPD incorporated these features, ensuring compatibility with modern networks.[76][77]

Deployment of FTP servers is prevalent in web hosting scenarios, where they facilitate file uploads for website management alongside HTTP services. For scalability, containerization with Docker has become common, allowing isolated FTP instances via images like those based on vsftpd, which can be orchestrated in multi-container setups for high-availability hosting.[78][79]
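The forking and preforking models described above can be illustrated with a short, Unix-only Python sketch of one common preforking arrangement, in which a fixed pool of workers accepts connections directly from the shared listening socket; the port, pool size, and greeting are placeholders, and a real daemon would implement the full FTP dialogue.

```python
import os
import socket

POOL_SIZE = 4   # number of pre-created workers (illustrative)

def worker(listener: socket.socket) -> None:
    # Each worker blocks in accept() on the shared listening socket and
    # handles one connection at a time, with no per-connection fork cost.
    while True:
        conn, addr = listener.accept()
        with conn:
            conn.sendall(b"220 example preforking server ready\r\n")

def main() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 2121))   # unprivileged port for the sketch
    listener.listen(64)

    for _ in range(POOL_SIZE):
        if os.fork() == 0:               # child process becomes a long-lived worker
            worker(listener)
            os._exit(0)

    for _ in range(POOL_SIZE):           # parent waits for its workers
        os.wait()

if __name__ == "__main__":
    main()
```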
Integration in Browsers and Tools
Web browsers historically provided built-in support for accessing FTP servers through the ftp:// URL scheme, allowing users to browse and download files directly from the address bar. For example, Google Chrome disabled FTP support by default in version 88 (January 2021) and removed it entirely in version 95 later that year, citing the feature's lack of encryption (FTPS) and proxy support as well as declining usage.[80] Similarly, Mozilla Firefox removed FTP support in version 90 in July 2021.[81] The standard FTP URL syntax, as defined in RFC 1738, follows the format ftp://[user:password@]host[:port]/path, enabling direct authentication within the URL, but this approach exposes credentials in plain text, exacerbating security risks.
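Outside the browser, the same RFC 1738 URL form remains usable programmatically; the sketch below uses Python's standard urllib with a placeholder host and path, and performs an anonymous login because no user:password part is given.

```python
import urllib.request

# ftp://[user:password@]host[:port]/path; credentials embedded this way
# travel in plain text, as noted above, so the anonymous form is shown here.
url = "ftp://ftp.example.com/pub/README"
with urllib.request.urlopen(url) as response:
    data = response.read()
print(len(data), "bytes retrieved from", url)
```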
Download managers and command-line utilities have long integrated FTP capabilities for efficient file retrieval, often extending beyond basic browser functionality. GNU Wget, a non-interactive download tool, supports FTP protocol for both single-file and recursive downloads, allowing users to mirror entire directory hierarchies from remote servers.[82] Similarly, curl provides FTP support for transfers, including features like connection reuse and active mode, though it requires scripting for recursive operations unlike Wget.[83] Graphical download managers like Internet Download Manager (IDM) incorporate FTP handling with advanced features such as dynamic segmentation for acceleration and seamless resume of interrupted transfers, supporting protocols including HTTP, HTTPS, and FTP.[84]
FTP integration extends to integrated development environments (IDEs) and operating system file managers, enabling seamless file operations within productivity workflows. In Eclipse IDE, FTP access is facilitated through plugins like the Target Management project's Remote System Explorer (RSE), which supports FTP alongside SSH and Telnet for remote file browsing, editing, and synchronization. The GNOME file manager Nautilus offers native FTP connectivity via its "Connect to Server" feature, where users enter an ftp:// URL to mount remote directories as virtual file systems, supporting drag-and-drop transfers without additional software.[85]
Due to inherent security vulnerabilities in FTP, such as unencrypted data transmission, its integration in browsers and tools is increasingly phased out in favor of secure alternatives like WebDAV, which provides HTTP-based file management with built-in authentication and encryption options.[86] This shift reflects broader industry trends toward protocols that align with modern web security standards, reducing exposure to interception and credential theft.[43]
Variants and Derivatives
Secure FTP Extensions
File Transfer Protocol Secure (FTPS) extends the standard FTP by integrating Transport Layer Security (TLS) or its predecessor Secure Sockets Layer (SSL) to encrypt both control and data channels, thereby protecting against eavesdropping and tampering inherent in FTP's plaintext transmission.[87] This addresses core FTP vulnerabilities such as unencrypted credentials and data exposure during transfer.[88]

FTPS operates in two primary modes: explicit and implicit. In explicit mode, as standardized in RFC 4217, the connection begins on the default FTP port 21 in an unencrypted state, after which the client issues the AUTH TLS command to negotiate TLS encryption; the server responds with code 234 to confirm, upgrading the session to a protected state.[87] Implicit mode, while not formally defined in the same RFC, assumes encryption from the outset without negotiation commands, typically using port 990 for the control channel and port 989 for data, making it suitable for environments requiring immediate security but less flexible for mixed connections.[89]

Key features of FTPS include configurable channel protection levels via the PROT command, inherited from FTP security extensions in RFC 2228: Clear (C) for unprotected transmission, Safe (S) for integrity protection without confidentiality, and Private (P) for full confidentiality and integrity using TLS encryption.[88] Authentication supports X.509 certificates for both server verification and optional client authentication, enabling mutual trust without relying solely on usernames and passwords.[87]

The foundations of FTPS trace to RFC 2228 in 1997, which introduced general FTP security mechanisms like protection buffers, and were later specialized for TLS in RFC 4217, published in 2005.[88][87] Adoption surged in enterprise settings during the 2000s, driven by regulatory demands for data protection in sectors like finance and healthcare, where FTPS servers became standard for secure bulk transfers.[90]

In contrast to vanilla FTP, FTPS mandates encrypted channels post-negotiation in explicit mode or from connection start in implicit mode, eliminating plaintext fallbacks that could expose sessions.[87] It also introduces challenges with intermediary proxies, where re-encryption for inspection requires custom proxy certificates, often complicating deployment in firewalled networks.[91]
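The explicit-mode upgrade can be exercised from Python's standard ftplib, whose FTP_TLS class issues the AUTH TLS and PROT P commands described above; the host and credentials in this sketch are placeholders.

```python
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")   # control connection starts in clear text on port 21
ftps.auth()                         # AUTH TLS; a 234 reply upgrades the control channel
ftps.login("alice", "secret")
ftps.prot_p()                       # PBSZ 0 + PROT P: data connections now use TLS ("Private")
ftps.retrlines("LIST")
ftps.quit()
```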
Lightweight Alternatives
Lightweight alternatives to the full File Transfer Protocol (FTP) emerged to address scenarios requiring minimal overhead, such as resource-constrained environments or automated booting processes, where the complexity of FTP's features like extensive directory navigation and authentication were unnecessary.[92] These protocols strip down core file transfer mechanics, often omitting security and advanced operations to prioritize simplicity and speed, though they inherit FTP's vulnerabilities to interception due to lack of encryption.[93]

The Trivial File Transfer Protocol (TFTP), specified in its current revision by RFC 1350 in 1992, exemplifies this approach as a UDP-based protocol designed for basic, unauthenticated file transfers without session management or error correction beyond UDP's checksums.[93] It supports only essential operations: reading a file from a server via a Read Request (RRQ), writing a file to a server via a Write Request (WRQ), and acknowledging data blocks in a lock-step manner using fixed-size packets, typically 512 bytes.[93] Lacking user authentication, directory listings, or rename capabilities in its core specification, TFTP relies only on its lock-step acknowledgments and timeout-based retransmission for reliability, making it poorly suited to lossy or high-latency links.[93]

TFTP found primary use in diskless workstation booting and network device configuration, where clients download boot images or firmware over local networks without needing persistent connections.[94] A key application is in Preboot Execution Environment (PXE) booting, where TFTP serves as the transport for initial bootloaders and operating system images after DHCP discovery, enabling automated OS deployments in enterprise environments like data centers.[95] Network devices, such as routers and switches, also leverage TFTP for lightweight firmware updates due to its low resource footprint, often in trusted LANs where security is handled separately. However, its insecurity (no encryption or access controls) limits it to isolated networks, and its simple recovery mechanism can stall or abandon transfers on unreliable links.[93]

To enhance flexibility without overcomplicating the protocol, RFC 2347 in 1998 introduced an option negotiation extension for TFTP, allowing clients and servers to agree on parameters like block size before transfer begins, potentially increasing throughput by supporting larger packets up to 65464 bytes via RFC 2348. This evolution addressed scalability for larger files in booting scenarios but did not add security features, preserving TFTP's lightweight nature while mitigating some performance bottlenecks.
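The lock-step packet formats are compact enough to build by hand; the following Python sketch constructs the RFC 1350 request and acknowledgment packets (the filename is a placeholder) to illustrate the exchange described above.

```python
import struct

# Opcodes from RFC 1350: 1 = RRQ, 2 = WRQ, 3 = DATA, 4 = ACK, 5 = ERROR.

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    # RRQ = 2-byte opcode, NUL-terminated filename, NUL-terminated transfer mode.
    return struct.pack("!H", 1) + filename.encode("ascii") + b"\0" + mode.encode("ascii") + b"\0"

def parse_data(packet: bytes) -> tuple[int, bytes]:
    # DATA = opcode 3, 2-byte block number, then up to 512 bytes of payload;
    # a payload shorter than 512 bytes marks the final block.
    opcode, block = struct.unpack("!HH", packet[:4])
    assert opcode == 3
    return block, packet[4:]

def build_ack(block: int) -> bytes:
    # Each DATA block is acknowledged before the next one is sent (lock-step).
    return struct.pack("!HH", 4, block)

print(build_rrq("pxelinux.0"))   # b'\x00\x01pxelinux.0\x00octet\x00'
print(build_ack(1))              # b'\x00\x04\x00\x01'
```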
Another early lightweight variant is the Simple File Transfer Protocol (SFTP), outlined in RFC 913 from 1984, which provides a minimal superset of TFTP functionalities while remaining easier to implement than full FTP.[92] Operating over TCP for reliable delivery, it includes basic user authentication via USER and PASS commands, along with limited file operations such as retrieval (RETR), sending (STOR), renaming (RNFR/RNTO), and deletion (DELE), but omits advanced directory management beyond simple listing (LIST) and changing (CWD).[92] Designed for environments needing more utility than TFTP, such as basic access control, without FTP's full command set, it avoids complex features like account management and structured replies, keeping implementation complexity well below that of full FTP.[92]

SFTP's use cases centered on constrained systems requiring straightforward, authenticated transfers, such as early embedded devices or simple client-server setups where full FTP overhead was prohibitive, though its adoption waned with the rise of more robust protocols.[92] Like TFTP, it lacks encryption, relying on TCP for integrity but exposing transfers to eavesdropping, and its minimal error handling suits only stable networks.[92]
Modern Replacements
The SSH File Transfer Protocol (SFTP) has emerged as a primary modern replacement for FTP, providing secure file operations over an SSH connection on port 22 using a single channel for both commands and data. Defined as part of the SSH protocol suite in draft-ietf-secsh-filexfer (initially published in 2001 and widely adopted by the mid-2000s), SFTP supports comprehensive file access, transfer, and management capabilities, including authentication via public-key cryptography or passwords, directory navigation, and permission handling. It operates as a subsystem within SSH, leveraging the transport layer for encryption and integrity protection, which addresses FTP's vulnerabilities such as unencrypted transmissions.[96] Popular implementations include the OpenSSH client, which provides the sftp command for interactive and batch file transfers.
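From a script, the same subsystem is commonly driven through an SSH library; the sketch below uses the third-party Python package paramiko with a placeholder host and credentials (OpenSSH's sftp command offers the equivalent operations interactively or in batch mode).

```python
import paramiko  # third-party SSH/SFTP library

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # acceptable for a sketch; verify host keys in practice
client.connect("sftp.example.com", port=22, username="alice", password="secret")

sftp = client.open_sftp()                      # starts the SFTP subsystem over the existing SSH session
print(sftp.listdir("."))                       # directory listing
sftp.get("reports/2024.csv", "2024.csv")       # download
sftp.put("upload.txt", "incoming/upload.txt")  # upload

sftp.close()
client.close()
```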
Other notable replacements include the Secure Copy Protocol (SCP), which uses SSH to copy files between hosts with built-in encryption and authentication, though it lacks SFTP's full interactive file management features.[97] SCP, integrated into tools like OpenSSH, enables simple, secure one-way transfers via commands such as scp source destination. Web Distributed Authoring and Versioning (WebDAV), specified in RFC 4918 (2007), extends HTTP to support collaborative file editing, locking, and versioning over standard web ports (80 or 443), facilitating web-integrated transfers without dedicated FTP infrastructure.[98]
These protocols offer key advantages over FTP, including end-to-end encryption of both commands and data (using algorithms like AES) and integrity verification through mechanisms such as checksums and message authentication codes, ensuring files remain unaltered during transit.[96] SFTP's integration as an SSH subsystem further enhances security by reusing established SSH sessions for multiple operations, reducing overhead while maintaining firewall compatibility via a single port.[96]
The transition to these replacements reflects FTP's deprecation in modern systems; for instance, Google Chrome removed native FTP support in version 95 (2021) due to low usage and security concerns, prompting reliance on secure alternatives like SFTP for browser-integrated or programmatic file operations.[99] SFTP has become the de facto standard for secure file transfers since the early 2000s, widely adopted in enterprise environments and operating systems for its robustness.[96]

References
