Usenet

from Wikipedia

A 2004 discussion in the Usenet group comp.text.tex
A diagram of Usenet servers and clients. The coloured dots on the servers represent the newsgroups they carry. Coloured arrows between servers indicate newsgroup content exchanges (news feeds). Arrows between clients and servers indicate that a user is subscribed to a certain newsgroup and reads or submits articles there.

Notably, clients never connect to each other, yet they have access to each other's posts even when they never connect to the same server.

Usenet (/ˈjuːznɛt/),[1] a portmanteau of User's Network,[1] is a worldwide distributed discussion system available on computers. It was developed from the general-purpose Unix-to-Unix Copy (UUCP) dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980.[2] Users read and post messages (called articles or posts, and collectively termed news) to one or more topic categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that have become widely used. Discussions are threaded, as with web forums and BBSes, though posts are stored on the server sequentially.[3][4]

A major difference between a BBS or web message board and Usenet is the absence of a central server and dedicated administrator or hosting provider. Usenet is distributed among a large, constantly changing set of news servers that store and forward messages to one another via "news feeds". Individual users may read messages from and post to a local (or simply preferred) news server, which can be operated by anyone, and those posts will automatically be forwarded to any other news servers peered with the local one, while the local server will receive any news its peers have that it currently lacks. This results in the automatic proliferation of content posted by any user on any server to any other user subscribed to the same newsgroups on other servers.

As with BBSes and message boards, individual news servers or service providers are under no obligation to carry any specific content, and may refuse to do so for many reasons: a news server might attempt to control the spread of spam by refusing to accept or forward any posts that trigger spam filters, or a server without high-capacity data storage may refuse to carry any newsgroups used primarily for file sharing, limiting itself to discussion-oriented groups. However, unlike BBSes and web forums, the dispersed nature of Usenet usually permits users who are interested in receiving some content to access it simply by choosing to connect to news servers that carry the feeds they want.

Usenet is culturally and historically significant in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", "sockpuppet", and "spam".[5] In the early 1990s, shortly before access to the Internet became commonly affordable, Usenet connections via FidoNet's dial-up BBS networks made long-distance or worldwide discussions and other communication widespread.[6]

The name Usenet comes from the term "users' network".[3] The first Usenet group was NET.general, which quickly became net.general.[7] The first commercial spam on Usenet was from immigration attorneys Canter and Siegel advertising green card services.[7]

On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on Transmission Control Protocol (TCP) port 119 for standard, unprotected connections, and on TCP port 563 for Secure Sockets Layer (SSL) encrypted connections.

Introduction


Usenet was conceived in 1979 and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University,[8][2] over a decade before the World Wide Web went online (and thus before the general public received access to the Internet), making it one of the oldest computer network communications systems still in widespread use. It was originally built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name "Usenet" emphasizes its creators' hope that the USENIX organization would take an active role in its operation.[9]

The articles that users post to Usenet are organized into topical categories known as newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci.* hierarchy. Or, talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read.[10]

In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. For example, in the wine-making newsgroup rec.crafts.winemaking, someone might start a thread called "What's the best yeast?", and that conversation might grow to dozens of replies by perhaps six or eight different authors. Over several days, the conversation about different wine yeasts might branch into several sub-threads in a tree-like form.
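Newsreaders reconstruct these trees from each article's References header, which lists the Message-IDs of the articles it replies to. The following minimal sketch, in Python, shows one way a client might group articles into threads; the article dictionaries and field handling are illustrative, not taken from any particular newsreader.

    # Minimal thread reconstruction from References headers (RFC 1036);
    # the article dictionaries here are illustrative.
    def build_threads(articles):
        by_id = {a["Message-ID"]: a for a in articles}
        children = {}                              # parent id -> replies
        roots = []                                 # articles with no known parent
        for a in articles:
            refs = a.get("References", "").split()
            parent = refs[-1] if refs else None    # last reference is the direct parent
            if parent in by_id:
                children.setdefault(parent, []).append(a)
            else:
                roots.append(a)                    # non-reply: starts a thread
        return roots, children

    def print_thread(article, children, depth=0):
        print("  " * depth + article["Subject"])
        for reply in children.get(article["Message-ID"], []):
            print_thread(reply, children, depth + 1)

    posts = [
        {"Message-ID": "<1@x>", "Subject": "What's the best yeast?", "References": ""},
        {"Message-ID": "<2@x>", "Subject": "Re: What's the best yeast?", "References": "<1@x>"},
    ]
    roots, children = build_threads(posts)
    for r in roots:
        print_thread(r, children)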

When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them. In this fashion, the article is copied from server to server and should eventually reach every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.[11] This is largely because the POTS network was typically used for transfers, and phone charges were lower at night.

The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.[12]

Today, Usenet has diminished in importance with respect to Internet forums, blogs, mailing lists and social media. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages does not require a mail or web client, but a news client. However, it is now possible to read and participate in Usenet newsgroups to a large degree using ordinary web browsers since most newsgroups are now copied to several web sites.[13] The groups in alt.binaries are still widely used for data transfer.

ISPs, news servers, and newsfeeds

Usenet Provider Map

Many Internet service providers, and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system. Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead.[14]
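As a rough sketch of what such a client does on the wire, the fragment below speaks a few of the basic NNTP commands from RFC 3977 over a plain TCP connection to port 119. The server name is a placeholder, and a real newsreader would add error handling, authentication, and encryption.

    import socket

    # Minimal NNTP reader session (RFC 3977). "news.example.com" is a
    # placeholder; real clients add error handling, auth, and TLS.
    def nntp_peek(server, group, port=119):
        sock = socket.create_connection((server, port))
        f = sock.makefile("rwb")

        def cmd(line):
            f.write(line.encode("ascii") + b"\r\n")
            f.flush()
            return f.readline().decode("ascii").rstrip()

        print(f.readline().decode("ascii").rstrip())  # greeting: 200 or 201
        print(cmd("MODE READER"))                     # switch to reader mode
        reply = cmd("GROUP " + group)                 # "211 count first last group"
        print(reply)
        _, count, first, last, _ = reply.split()
        print(cmd("STAT " + last))                    # "223 n <message-id>": article exists
        cmd("QUIT")
        sock.close()

    # nntp_peek("news.example.com", "comp.lang.python")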

Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer because of the large amount of data involved, small customer base (compared to mainstream Internet service), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles). Some ISPs outsource news operations to specialist sites, which will usually appear to a user as though the ISP itself runs the server. Many of these sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy which largely carries software, music, videos and images, and accounts for over 99 percent of article data.[citation needed]

There are also Usenet providers that offer a full unrestricted service to users whose ISPs do not carry news, or that carry a restricted feed.[citation needed]

Newsreaders


Newsgroups are typically accessed with newsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act as clients to one or more news servers. Historically, Usenet was associated with the Unix operating system developed at AT&T, but newsreaders were soon available for all major operating systems.[15] Email client programs and Internet suites of the late 1990s and 2000s often included an integrated newsreader. Newsgroup enthusiasts often criticized these as inferior to standalone newsreaders that made correct use of Usenet protocols, standards and conventions.[16]

With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front-ends have lowered the technical entry barrier to a single application and no Usenet NNTP server account. There are numerous websites now offering web-based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another.[17][18] Google Groups[19] is one such web-based front end, and some web browsers can access Google Groups via news: protocol links directly.[20]

Moderated and unmoderated newsgroups


A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator is to receive submitted articles, review them, and inject approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines.[21] Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan.[22]
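Mechanically, approval is little more than adding that header before injection. A hypothetical sketch of the step, using Python's standard email parser (Usenet articles are close enough to RFC 822 messages for illustration):

    from email import message_from_string

    # Hypothetical moderation step: add the Approved: header that news
    # servers require before a post to a moderated group will propagate.
    def approve(raw_article, moderator_address):
        msg = message_from_string(raw_article)
        if "Approved" not in msg:
            msg["Approved"] = moderator_address
        return msg.as_string()

    submission = "\n".join([
        "From: poster@example.org",
        "Newsgroups: comp.example.moderated",
        "Subject: A question",
        "Message-ID: <abc123@example.org>",
        "",
        "Body text here.",
    ])
    print(approve(submission, "mod@example.net"))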

Historically, a mod.* hierarchy existed before Usenet reorganization.[23] Now, moderated newsgroups may appear in any hierarchy, typically with .moderated added to the group name.

Usenet newsgroups in the Big-8 hierarchy are created by proposals called a Request for Discussion, or RFD. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided. Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated.[24] Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup.

Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The tension between minimal editorial filtering and fast propagation is one crux of the Usenet community. One little-used defense against unwanted propagation is the cancel message, but few Usenet users use this command, and some news readers do not offer cancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups tend to receive large amounts of spam.[25][26][27]

Technical details


Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use a flooding algorithm which propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article. Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certain peer-to-peer character in that they share resources by exchanging them. However, the granularity of exchange is on a different scale than in a modern peer-to-peer system, and the exchange excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader.
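A minimal sketch of that flooding step, assuming each server keeps a history of Message-IDs it has already accepted (the class and method names are invented for illustration):

    # Illustrative flood-fill propagation with duplicate suppression.
    class NewsServer:
        def __init__(self, name):
            self.name = name
            self.peers = []        # neighboring servers (news feeds)
            self.seen = set()      # Message-IDs already accepted

        def receive(self, message_id, article):
            if message_id in self.seen:
                return             # duplicate: this copy has been seen already
            self.seen.add(message_id)
            for peer in self.peers:
                peer.receive(message_id, article)   # offer to every neighbor

    a, b, c = NewsServer("a"), NewsServer("b"), NewsServer("c")
    a.peers, b.peers, c.peers = [b], [a, c], [b]    # topology: a <-> b <-> c
    a.receive("<post1@example.org>", "Hello, Usenet")
    assert "<post1@example.org>" in c.seen          # reached c by way of b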

RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537.
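As a concrete illustration of the format those RFCs describe, a minimal article is a set of RFC 822-style header lines, a blank line, and a body. The sketch below assembles one; the addresses and Message-ID are made up:

    # A minimal, illustrative Usenet article in RFC 1036 / RFC 5536 form.
    # On the wire, lines are CRLF-terminated; "\n" is used here for brevity.
    article = "\n".join([
        "From: jane@example.org (Jane Doe)",
        "Newsgroups: rec.crafts.winemaking",
        "Subject: What's the best yeast?",
        "Message-ID: <840101.1234@example.org>",  # must be globally unique
        "Date: Sun, 1 Jan 1984 12:00:00 GMT",
        "Path: example.org!not-for-mail",         # sites this copy has passed through
        "",                                       # blank line ends the headers
        "Has anyone compared champagne yeast with bread yeast?",
    ])
    print(article)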

In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions of World Intellectual Property Organization treaty implementations, such as the United States Online Copyright Infringement Liability Limitation Act, but this would require giving notice to each individual news server administrator.


Organization

The "Big Nine" hierarchies of Usenet

The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The current Big Eight are:

  • comp.* – computer-related discussions (comp.software, comp.sys.amiga)
  • humanities.* – fine arts, literature, and philosophy (humanities.classics, humanities.design.misc)
  • misc.* – miscellaneous topics (misc.education, misc.forsale, misc.kids)
  • news.* – discussions and announcements about news (meaning Usenet, not current events) (news.groups, news.admin)
  • rec.* – recreation and entertainment (rec.music, rec.arts.movies)
  • sci.* – science-related discussions (sci.psychology, sci.research)
  • soc.* – social discussions (soc.college.org, soc.culture.african)
  • talk.* – talk about various controversial topics (talk.religion, talk.politics, talk.origins)

The alt.* hierarchy is not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized. Groups in the alt.* hierarchy tend to be more specialized or specific—for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt hierarchy may be dedicated to one specific author of children's books. Binaries are posted in alt.binaries.*, making it the largest of all the hierarchies.

Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such as japan.*, malta.* and ne.* serve specific countries and regions such as Japan, Malta and New England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historical gnu.* hierarchy from the Free Software Foundation. Microsoft closed its news server in June 2010 and now provides support for its products through web forums.[28] Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others include alt.* as well. The more general term "netnews" incorporates the entire medium, including private organizational news systems.

Informal sub-hierarchy conventions also exist. *.answers groups are typically moderated repositories for cross-posted FAQs: an FAQ is posted in its home newsgroup and cross-posted to the *.answers group at the head of the hierarchy, which some see as a distillation of the information in that newsgroup. Some subgroups are recursive, to the point of some silliness in alt.*.[citation needed]

Binary content

A visual example of the many complex steps required to prepare data to be uploaded to Usenet newsgroups. These steps must be done again in reverse to download data from Usenet.

Usenet was originally created to distribute text content encoded in the 7-bit ASCII character set. With the help of programs that encode 8-bit values into ASCII, it became practical to distribute binary files as content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic.

The oldest widely used encoding method for binary content is uuencode, from the Unix UUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader.
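For illustration, Python's standard binascii module still implements the uuencode line format; the sketch below encodes a payload and splits the encoded lines into parts the way a multipart binary post would be. The per-part line count is arbitrary, not a protocol constant:

    import binascii

    # Illustrative uuencode-and-split: 45 input bytes per encoded line is
    # the uuencode maximum.
    def uuencode_lines(data):
        return [binascii.b2a_uu(data[i:i + 45]).decode("ascii").rstrip("\n")
                for i in range(0, len(data), 45)]

    def split_into_parts(lines, lines_per_part=500):
        return [lines[i:i + lines_per_part]
                for i in range(0, len(lines), lines_per_part)]

    payload = bytes(range(256)) * 64          # 16 KiB of sample binary data
    parts = split_into_parts(uuencode_lines(payload))
    print(len(parts), "article part(s)")      # small file: 1 part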

With MIME header extensions and the Base64 and Quoted-Printable encodings, there was a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems with metadata attached to files use specialized encoding formats. For Mac OS, both BinHex and special MIME types are used. Other lesser-known encoding systems that may have been used at one time were BTOA, XX encoding, BOO, and USR encoding.

In an attempt to reduce file transfer times, an informal file encoding known as yEnc was introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space. The most common method of uploading large binary posts to Usenet is to convert the files into RAR archives and create Parchive files for them. Parity files are used to recreate missing data when not every part of the files reaches a server.
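The core yEnc transformation is small enough to sketch: every byte is shifted by 42 modulo 256, and only the few byte values that would break article transport (NUL, LF, CR, and the '=' escape character itself) are escaped. This simplified illustration omits the =ybegin/=yend framing lines and CRC-32 checksums of real yEnc posts:

    # Simplified yEnc encoding (framing lines and CRC-32 omitted).
    CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}       # NUL, LF, CR, '='

    def yenc(data):
        out = bytearray()
        for byte in data:
            c = (byte + 42) % 256             # the characteristic +42 shift
            if c in CRITICAL:
                out.append(0x3D)              # escape marker '='
                c = (c + 64) % 256
            out.append(c)
        return bytes(out)

    encoded = yenc(bytes(range(256)))
    print(len(encoded) / 256)                 # ~1.016: only 4 of 256 values escape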

Binary newsgroups can be used to distribute files, and, as of 2022, some remain popular as an alternative to BitTorrent to share and download files.[29]

Binary retention time

October 2020 screenshot showing 60 PB of Usenet group data.[30]

Each news server allocates a certain amount of storage space for content in each newsgroup. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content. If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it. The average length of time that posts are able to stay on the server before being deleted is commonly called the retention time.
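Under a steady incoming feed, retention is roughly the storage allocation divided by the daily feed volume. A back-of-the-envelope sketch with hypothetical figures:

    # Hypothetical retention estimate: storage allocation / daily feed volume.
    storage_tib = 1000       # space allocated to a set of newsgroups
    daily_feed_tib = 2.5     # new article volume arriving per day
    print(f"approximate retention: {storage_tib / daily_feed_tib:.0f} days")  # ~400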

Binary newsgroups are only able to function reliably if there is sufficient storage allocated to handle the amount of articles being added. Without sufficient retention time, a reader will be unable to download all parts of the binary before it is flushed out of the group's storage allocation. This was at one time how posting undesired content was countered; the newsgroup would be flooded with random garbage data posts, of sufficient quantity to push out all the content to be suppressed. This has been compensated by service providers allocating enough storage to retain everything posted each day, including spam floods, without deleting anything.

Modern Usenet news servers have enough capacity to archive years of binary content even when flooded with new data at the maximum daily speed available.

In part because of such long retention times, as well as growing Internet upload speeds, Usenet is also used by individual users to store backup data.[31] While commercial providers offer easier-to-use online backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the uploader to cede control over the distribution of the data; the files are automatically disseminated to all Usenet providers exchanging data for the newsgroup they are posted to. In general the user must manually select, prepare and upload the data. The data is typically encrypted because anyone can download the backup files. After the files are uploaded, having multiple copies spread across different geographical regions on different news servers decreases the chances of data loss.

Major Usenet service providers have a retention time of more than 12 years.[32] This results in more than 60 petabytes (60,000 terabytes) of storage (see image). When using Usenet for data storage, providers with longer retention times are preferred, to ensure the data survives longer than it would on services with shorter retention.

Legal issues

While binary newsgroups can be used to distribute completely legal user-created works, free software, and public domain material, some binary groups are used to illegally distribute proprietary software, copyrighted media, and pornographic material.

ISP-operated Usenet servers frequently block access to all alt.binaries.* groups to both reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under the DMCA Safe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders.[33]

Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others. It is possible for a special post cancellation message to be distributed to remove it from all servers, but many providers ignore cancel messages by standard policy, because they can be easily falsified and submitted by anyone.[34][35] For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content has been posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders.[36]

Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network. Like SMTP email, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source.[37] In this manner, Usenet is significantly different from modern P2P services; most P2P users distributing content are typically immediately identifiable to all other users by their network address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server.[38]

Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as the Recording Industry Association of America.[39][40][41] The existence of anonymising gateways to Usenet also complicates the tracing of a posting's true origin.

History

UUCP/Usenet Logical Map  —   June 1, 1981 / mods by S. McGeady November 19, 1981

            (ucbvax)
+=+===================================+==+
| |                                   |  |
| |                wivax              |  |
| |                  |                |  |
| |         microsoft| uiucdcs        |  |
| |  genradbo      | | |  |           |  |           (Tektronix)
| |     |          | | |  | purdue    |  |
| decvax+===+=+====+=+=+  | |         |  |
|       |   | |      |    | | pur-phy |  |                        tekmdp
|       |   | |      |    | |     |   |  |                           |
+@@@@@@cca  | |      |    | |     |   |  |                           |
|       |   | |  +=pur-ee=+=+=====+===+  |                           |
|    csin   | |  |   |                   |                           |
|           | +==o===+===================+==+========+=======+====teklabs=+
|           |    |                                                        |
|           |    |                    pdp phs   grumpy  wolfvax           |
|           |    |                     |   |      |        |              |
|           | cincy                unc=+===+======+========+              |
|           |   |        bio       |                                      |
|           |   |  (Misc) |        |            (Misc)                    |
|           |   | sii  reed        |    dukgeri duke34  utzoo             |
|           |   |  |    |          |         |   |       |                |
|      +====+=+=+==+====++======+==++===duke=+===+=======+==+=========+   |
|      |      |    |     |      |   |                       |         |   | u1100s
|    bmd70  ucf-cs ucf   | andiron  |                       |         |   |   |
|                        |          |                       |         |   |   |
|                  red   |          |                       |         |   | pyuxh
|                   |    |          |     zeppo             |         |   |   |
|       psupdp---psuvax  |          |       |               |         |   |   |
|                   |    |          | alice |   whuxlb      | utah-cs |   | houxf
|                allegra |          | |     |     |         |   |     |   |   |
|                     |  |          | |     |     |         |   |  +--chico---+
|                 +===+=mhtsa====research   |   /=+=======harpo=+==+     |    |
|                 |   |  |  |               |  /            |            |    |
|               hocsr |  |  +=+=============+=/           cbosg---+      |    |
|    ucbopt           |  |    |                             |     |   esquire |
|       :             |  |    |                           cbosgd  |           |
|       :             |  |    |                                   |           |
|    ucbcory          |  | eagle==+=====+=====+=====+=====+       |           |
|       :             |  |  |     |     |     |     |     |       |  +-uwvax--+
|       :             |  |  |   mhuxa mhuxh mhuxj mhuxm mhuxv     |  |
|       :             |  |  |                                     |  |
|       :             |  |  |        +----------------------------o--+
|       :             |  |  |        |                            |
|    ucbcad           |  |  |      ihpss    mh135a                |
|       :             |  |  |        |         |                  |
|       :             \--o--o------ihnss----vax135----cornell     |
|       :                |  |        |         |                  |
+=+==ucbvax==========+===+==+=+======+=======+=+========+=========+
  (UCB) :            |        |              |          | (Silicon Valley)
     ucbarpa      cmevax      |              |        menlo70--hao
        :                     |              |        |    |
     ucbonyx                  |              |        |   sri-unix
                              |           ucsfcgl     |
                              |              |        |
Legend:                       |              |      sytek====+========+
-------                       |              |               |        |
- | / \ + = Uucp           sdcsvax=+=======+=+======+     intelqa   zehntel
=           "Bus"                  |       |        |
o           jumps               sdcarl  phonlab  sdcattb
:           Berknet
@           Arpanet
UUCP/Usenet Logical Map, original by Steven McGeady.
Copyright© 1981, 1996

Bruce Jones, Henry Spencer, David Wiseman. Copied with permission from

The Usenet Oldnews Archive: Compilation.[42]

Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott.[8][43] In 1980, Usenet was connected to ARPANET through UC Berkeley, which had connections to both Usenet and ARPANET. Mary Ann Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET"[44]) identifier.[45] Usenet gained 50 member sites in its first year, including Reed College, University of Oklahoma, and Bell Labs,[8] and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.[46]

Network


UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984. More than 100 newsgroups existed, more than 20 devoted to Unix and other computer-related topics, and at least a third to recreation.[47][8] As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.[48] The name UUCPNET became the common name for the overall network.

In addition to UUCP, early Usenet traffic was also exchanged with FidoNet and other dial-up BBS networks. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world, with only local telephone service. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP, such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. In 1986, RFC 977 provided the Network News Transfer Protocol (NNTP) specification for distribution of Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution is over NNTP.[49]

Software


Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983).[8] With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.[50]

Public venue


Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web,[51] where Linus Torvalds announced the Linux project,[52] and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag,[53] which revolutionized the World Wide Web by turning it into a graphical medium.

Internet jargon and history


Many jargon terms now in common use on the Internet originated or were popularized on Usenet.[54] Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.[55]

"Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it."

— Gene Spafford, 1992

Decline


Sascha Segan of PC Magazine said in 2008 that "Usenet has been dying for years".[56] He argued that it was dying by the late 1990s, when large binary files became a significant proportion of Usenet traffic, and Internet service providers "sensibly started to wonder why they should be reserving big chunks of their own disk space for pirated movies and repetitive porn."

AOL discontinued Usenet access in 2005. In May 2010, Duke University, whose implementation had started Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs.[57][58] On February 4, 2011, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was retired after 32 years.[citation needed]

In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".[59] While there are still some active text newsgroups on Usenet, the system is now primarily used to share large files between users, and the underlying technology of Usenet remains unchanged.[60]

Usenet traffic changes


Over time, the amount of Usenet traffic has steadily increased. As of 2010 the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day.[61] However, these averages are minuscule in comparison to the traffic in the binary groups.[62] Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups[61] in which large files are often posted publicly. A small sampling of the change (measured in feed size per day) follows:

Source: altopia.com[63]
Daily volume     Daily posts   Date
4.5 GiB                        1996 Dec
9 GiB                          1997 Jul
12 GiB           554 k         1998 Jan
26 GiB           609 k         1999 Jan
82 GiB           858 k         2000 Jan
181 GiB          1.24 M        2001 Jan
257 GiB          1.48 M        2002 Jan
492 GiB          2.09 M        2003 Jan
969 GiB          3.30 M        2004 Jan
1.52 TiB         5.09 M        2005 Jan
2.27 TiB         7.54 M        2006 Jan
2.95 TiB         9.84 M        2007 Jan
3.07 TiB         10.13 M       2008 Jan
4.65 TiB         14.64 M       2009 Jan
5.42 TiB         15.66 M       2010 Jan
7.52 TiB         20.12 M       2011 Jan
9.29 TiB         23.91 M       2012 Jan
11.49 TiB        28.14 M       2013 Jan
14.61 TiB        37.56 M       2014 Jan
17.87 TiB        44.19 M       2015 Jan
23.87 TiB        55.59 M       2016 Jan
27.80 TiB        64.55 M       2017 Jan
37.35 TiB        73.95 M       2018 Jan
60.38 TiB        104.04 M      2019 Jan
62.40 TiB        107.49 M      2020 Jan
100.71 TiB       171.86 M      2021 Jan
220.00 TiB[64]   279.16 M      2023 Aug
274.49 TiB       400.24 M      2024 Feb

In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography.[65] Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008 AOL, which no longer offered Usenet access, and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; they had more than 50% of the U.S. ISP market share.[66] On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009.[67]

AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing.[68] The AOL community had a tremendous role in popularizing Usenet some 11 years earlier.[69]

In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009.[70][71] JANET announced it would discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative.[72] Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative.[73]

Primary reasons cited by general ISPs for discontinuing Usenet service include the decline in volume of actual readers due to competition from blogs, along with the cost and liability concerns of an increasing proportion of traffic devoted to file sharing and spam on unused or discontinued groups.[74][75]

Some ISPs did not include pressure from Cuomo's campaign against child pornography as one of their reasons for dropping Usenet feeds as part of their services.[76] ISPs Cox and Atlantic Communications resisted the 2008 trend but both did eventually drop their respective Usenet feeds in 2010.[77][78][79]

Archives


Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982.[80][81] Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."[82] Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?"[83] by stating that he was, "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18."[84] In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines."[85]

In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of the University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain.[86] The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem."[87] Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself." Also included was C code for the internal data structure of the system.[88]

The desire to have a full text search index of archived news articles is not new either, one such request having been made in April 1991 by Alex Martelli who sought to "build some sort of keyword index for [the news archive]."[89] In early May, Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources."[90]

The Alt Sex Stories Text Repository (ASSTR) site archived and indexed erotic and pornographic stories posted to the Usenet group alt.sex.stories.[91]

The archiving of Usenet has led to fears of loss of privacy.[92] An archive simplifies ways to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.[93]

Archives by Deja News and Google Groups


Web-based archiving of Usenet posts began in March 1995 at Deja News with a very large, searchable database. In February 2001, this database was acquired by Google;[94] Google had begun archiving Usenet posts for itself starting in the second week of August 2000.

Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others,[95] and were originally archived by Henry Spencer at the University of Toronto's Zoology department.[96] The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series[97] and Jürgen Christoffel from GMD.[98]

Google has been criticized by Vice and Wired contributors as well as former employees for its stewardship of the archive and for breaking its search functionality.[99][100][101]

As of January 2024, Google Groups carries a header notice, saying:

Effective from 22 February 2024, Google Groups will no longer support new Usenet content. Posting and subscribing will be disallowed, and new content from Usenet peers will not appear. Viewing and searching of historical data will still be supported as it is done today.

An explanatory page adds:[102]

In addition, Google’s Network News Transfer Protocol (NNTP) server and associated peering will no longer be available, meaning Google will not support serving new Usenet content or exchanging content with other NNTP servers. This change will not impact any non-Usenet content on Google Groups, including all user and organization-created groups.

See also


Usenet administrators


Usenet had administrators on a server-by-server basis, not as a whole.

from Grokipedia
Usenet is a decentralized, distributed system for asynchronous text-based discussions organized into hierarchical newsgroups, originally implemented over Unix-to-Unix Copy (UUCP) networks and later standardized via the Network News Transfer Protocol (NNTP). Conceived in 1979 by graduate students Tom Truscott and Jim Ellis at Duke University as a means to link Unix systems for posting and exchanging messages, it enabled early forms of online communities without central moderation. By the early 1980s, Usenet had expanded to hundreds of hosts, primarily universities and research institutions, fostering global conversations on diverse topics through threaded articles propagated server-to-server.

Its open architecture allowed unrestricted participation, which spurred innovations like moderated groups and binary distribution, though the latter transformed many newsgroups into de facto file-sharing repositories, contributing to legal controversies over copyrighted material. The system's resilience is evident in its continued operation, with modern providers offering article retention exceeding a decade, far surpassing typical web forum archives.

Usenet's cultural impact includes pioneering internet etiquette and facing seminal challenges like the 1993 "Eternal September", when mass influxes from commercial providers such as America Online overwhelmed traditional user norms, alongside rampant spam that necessitated cancellation mechanisms and policy debates. Despite competition from web-based forums and social media, Usenet persists as a high-retention platform for niche discussions and large-scale data exchange, underscoring its role as one of the Internet's foundational distributed networks.

Overview

Definition and Core Principles

Usenet, a portmanteau of "users' network," constitutes a worldwide distributed discussion system comprising hierarchically organized collections of newsgroups for exchanging threaded messages and files among participants. Initially implemented via UUCP on dial-up connections, it enabled asynchronous communication across interconnected Unix systems as an accessible means for posting and retrieving articles beyond the scope of ARPANET's email lists. Articles, the fundamental units of content, include headers specifying subjects, authors, dates, and references to prior messages, facilitating the formation of conversation threads that users navigate chronologically or topically.

At its core, Usenet embodies decentralization through a federated model of independent servers that exchange articles via newsfeeds, eschewing any central authority or single point of control over content dissemination. This mechanism, wherein servers forward incoming articles to their configured peers, ensures broad replication and resilience against individual server failures, as no central database or host dictates content availability universally. Newsgroups adhere to a hierarchical naming scheme, such as comp.sys.mac for topics in Macintosh computing, which partitions discussions by broad categories (e.g., comp for computers) into subtopics, promoting topical focus while allowing alternative hierarchies for specialized communities.

Empirically, this structure contrasts with centralized client-server paradigms, like those in web-based forums, where a single server manages persistence and access; in Usenet, article visibility depends on feed policies and retention durations across servers, yielding potential inconsistencies such as delayed or selective omissions by operators, yet fostering robustness through redundancy. Threading relies on explicit headers linking replies to antecedents, enabling readers to reconstruct discussions without reliance on server-side indexing, a design that underscores Usenet's emphasis on self-organizing, user-driven structure over administered curation.

Key Components and Decentralized Nature

Usenet's core components comprise news servers responsible for storing articles and forwarding them across the network, newsreaders that provide user interfaces for accessing and posting to newsgroups, and news feeds that enable the transfer of articles between interconnected servers. News servers operate independently, maintaining local repositories typically in directories like /var/spool/news, while newsreaders connect via protocols such as NNTP to retrieve content without direct server-to-server dependency for user access.

The decentralized nature of Usenet arises from its propagation model, where servers selectively subscribe to specific newsgroups and exchange articles through configured feeds rather than relying on a central hub. This lack of a global authority or unified index means that article availability varies, with servers forming partial mirrors of the full corpus, and users pulling content from their local server, which may not hold all posts. Feed policies, often defined in configuration files (such as the sys file in early implementations), dictate what articles are pushed to downstream peers, allowing operators to set feed content and scope autonomously.

This model has supported over 100,000 newsgroups historically, fostering resilience and autonomy, but it introduces challenges like inconsistent propagation delays, typically resolving within hours as articles disseminate via flooding algorithms, and variable retention periods determined by individual server storage policies. Propagation relies on queued or immediate feeds, with delays stemming from network topology and operator configurations rather than centralized scheduling.

Technical Architecture

Protocols and News Propagation

The Network News Transfer Protocol (NNTP), defined in RFC 3977, serves as the primary application-layer protocol for distributing Usenet articles between news servers and facilitating client-server interactions. Originally specified in RFC 977 in March 1986, NNTP enables efficient transmission over reliable full-duplex channels, supporting commands for posting articles, retrieving lists of newsgroups, and transferring articles via modes like IHAVE and SEND. Server-to-server feeds typically use NNTP over IP connections in modern implementations, replacing earlier UUCP-based batch transfers with real-time propagation.

Usenet employs a flood-fill mechanism where articles are injected into the network at a local server and then disseminated to peer servers. Upon receipt of an article, identified uniquely by its Message-ID header, a server checks for duplicates before forwarding it to its peers, ensuring network-wide distribution without central coordination. This process relies on peering agreements between servers, with articles pushed via NNTP feeds; delays depend on topology and peering density, often completing globally within hours. Retention periods vary by server policy and content type: text articles are typically held for days to weeks, while binary content on commercial providers can persist for years, with some offering over 16 years (approximately 5,800+ days) as of 2025 to support archival access.

Usenet articles conform to a structured format outlined in RFC 1036, comprising headers and a body separated by a blank line. Essential headers include From (author), Newsgroups (target hierarchy), Subject, Date, Message-ID (a unique identifier formatted as unique@domain), and Path (a propagation trace with site names separated by '!'). Threading is maintained through References and In-Reply-To headers, which list the Message-IDs of parent messages, allowing newsreaders to reconstruct discussions hierarchically.

To mitigate spam, Usenet supports cancellation control messages, which instruct servers to remove specified articles by referencing their Message-ID. These are processed locally if authenticated, originally via approved sender lists or later cancel locks (e.g., cryptographic hashes), and propagate similarly to regular articles, though adoption varies as some servers ignore unauthenticated cancels to prevent abuse. Early spam incidents, such as the 1994 Canter and Siegel "green card" posting, demonstrated the role of cancellations in rapid content removal, though incomplete propagation can leave remnants on distant servers.

Newsgroups: Structure and Moderation

Newsgroups in Usenet are organized into hierarchical categories prefixed by topical domains, facilitating structured navigation across diverse discussions. The primary hierarchies, known as the Big Eight, comprising comp. (computing), humanities. (arts and literature), misc. (miscellaneous), news. (Usenet administration), rec. (recreation), sci. (science), soc. (social issues), and talk. (debate), are managed by a volunteer Big-8 Management Board that oversees creation through formal proposals and community voting processes to ensure relevance and sustainability. In contrast, alternative hierarchies such as alt. permit vote-free creation via control messages issued by any user, enabling rapid proliferation without centralized approval and reflecting Usenet's decentralized ethos, though this often resulted in fragmented or short-lived groups. By the late 1990s, Usenet encompassed over 100,000 newsgroups across these and other hierarchies, driven by growth in user participation, though active groups numbered in the tens of thousands.

Moderation operates on a spectrum. The majority of newsgroups are unmoderated, allowing direct propagation of posts from users to servers without intermediary review, which promotes immediacy and unrestricted exchange but exposes groups to spam, off-topic content, and abuse. Moderated newsgroups, such as comp.risks (focused on computing safety incidents), route submissions via email to designated moderators who evaluate and approve posts for relevance and quality before propagation, aiming to maintain focused discussion and filter low-value contributions. This approach yields benefits like reduced noise and higher signal-to-noise ratios, as evidenced by sustained participation in long-standing moderated groups, but it introduces drawbacks including processing delays, sometimes days or weeks, and risks of moderator bias or overreach, potentially suppressing dissenting views under the guise of quality control. Empirical observations from Usenet operators indicate that moderation demands ongoing volunteer effort, with bottlenecks emerging in high-volume groups, while unmoderated forums rely on self-policing through norms like follow-ups and critiques.

Newsgroup lifecycle governance occurs via the control pseudo-newsgroup, where control messages propose creations, renamings, or deletions and are processed by server software as "newgroup" or "rmgroup" commands. For Big Eight hierarchies, these proposals undergo board-vetted voting requiring majority support from discussants, enforcing communal consensus without mandatory server compliance, as propagation depends on individual site policies. Alternative hierarchies bypass such votes, allowing forking through unilateral control messages, which proliferated groups but also sparked "newsgroup wars" over legitimacy and disputes among backbone providers. This model underscores Usenet's lack of central authority, with site administrators retaining discretion to carry or reject groups based on local resources and community input, preventing any single entity from dictating global structure.

Access Tools: Newsreaders and Servers

Usenet access requires specialized client software known as newsreaders, which connect to servers via the Network News Transfer Protocol (NNTP) to retrieve and post articles. Command-line newsreaders such as tin and nn provide efficient, text-based interfaces suitable for Unix-like systems, enabling local or remote reading of newsgroups with features like threaded article navigation and header caching for speed. Graphical user interface (GUI) newsreaders, including integrations in email clients like Mozilla Thunderbird, offer point-and-click usability for subscribing to newsgroups, viewing threads, and composing posts, making them accessible to users less familiar with terminal commands.

A key feature distinguishing traditional newsreaders from web-based alternatives is the implementation of scoring filters, which allow users to assign numerical scores to articles based on criteria such as author, subject keywords, or posting patterns, thereby personalizing feeds by promoting or hiding content algorithmically. This enables power users to manage high-volume discussions effectively, reducing noise in unmoderated groups, whereas web interfaces often prioritize simplicity over such granular control, potentially limiting customization for advanced filtering needs; a minimal scoring sketch appears at the end of this subsection.

Usenet servers store and propagate articles, with access historically provided through Internet service providers (ISPs) offering free NNTP feeds, though retention periods were typically limited to days or weeks. In the late 2000s, many ISPs discontinued complimentary Usenet services due to escalating bandwidth and storage costs driven by binary content proliferation, prompting a shift toward commercial providers. Paid servers, such as those from Newshosting, maintain binary retention exceeding 6,200 days as of 2025, ensuring availability of historical archives via subscription-based access with enhanced completion rates and speeds. Users connect to these servers using credentials, bypassing ISP limitations for reliable, high-retention Usenet interaction.
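A scoring filter of the kind described above can be sketched in a few lines: each rule matches a header against a pattern and adjusts the article's score, and articles scoring below a threshold are hidden. The rule format here is invented for illustration and does not follow any particular newsreader's scorefile syntax.

    import re

    # Illustrative article scoring; each article is a dict of header values.
    # Rules are (header, regex, score delta); the format is invented here.
    RULES = [
        ("From",    r"known-spammer@",  -1000),   # kill-file an author
        ("Subject", r"(?i)winemaking",    +50),   # promote a topic of interest
    ]

    def score(article):
        return sum(delta for header, pattern, delta in RULES
                   if re.search(pattern, article.get(header, "")))

    def visible(articles, threshold=0):
        return [a for a in articles if score(a) >= threshold]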

Handling Binary and Multimedia Content

Usenet, designed primarily for text-based articles, requires binary and multimedia files to be encoded into text form for transmission via the NNTP protocol. Early methods included uuencode, which converts binary data to printable ASCII characters but introduces approximately 35% overhead due to escaping non-ASCII bytes. This encoding ensures compatibility with text-only servers, though it increases transmission size and processing demands.

The yEnc scheme, introduced in 2001, became the dominant encoding for binaries by offering superior efficiency, with encoded data expanding to only 1-2% above the original binary size. yEnc achieves this through minimal escaping and CRC-32 checksums for error detection, reducing bandwidth usage and decoding time compared to uuencode or MIME Base64, which can add 33% overhead. Large files are typically split into multiple articles, each encoded separately and posted sequentially in binary newsgroups like those under the alt.binaries hierarchies.

To facilitate retrieval of multipart binaries scattered across articles, NZB index files, XML documents containing article pointers and metadata, enable newsreaders to automate downloading and reassembly (a parsing sketch appears at the end of this subsection). Users generate NZBs from indexers that scan newsgroup headers, allowing efficient fetching without manual header downloads.

Retention policies differ markedly between text and binary content due to storage and bandwidth constraints. Text articles, being smaller, often remain available for thousands of days on many servers, while binaries demand more resources, leading providers to prioritize shorter or tiered retention. In 2025, premium providers maintain binary retention exceeding 5,000 days (over 13 years), supported by extensive storage across backbones. Free or ISP servers typically offer days to weeks for binaries versus longer for text, reflecting cost-based trade-offs.

The high volume of binary traffic prompted server policies strictly segregating content: binaries are confined to designated groups to prevent flooding text discussions, as disguised binary posts inflate article sizes and enable spam proliferation by evading filters. This separation mitigates bandwidth overload, with many operators enforcing rules or automated removal of binaries in text hierarchies to preserve usability.
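Because an NZB file is plain XML, extracting the segment Message-IDs is straightforward. The sketch below uses the commonly published newzbin namespace and element layout, which should be treated as an assumption rather than a formal standard:

    import xml.etree.ElementTree as ET

    # Illustrative NZB parsing: list the segment Message-IDs for each file,
    # which a downloader would then fetch over NNTP and reassemble.
    NS = "{http://www.newzbin.com/DTD/2003/nzb}"   # commonly used namespace

    def segments_per_file(nzb_path):
        result = {}
        for f in ET.parse(nzb_path).getroot().iter(NS + "file"):
            result[f.get("subject")] = [seg.text for seg in f.iter(NS + "segment")]
        return result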

Historical Development

Origins in ARPANET-Era Experimentation (1979–1985)

Usenet originated as an experimental distributed discussion system designed to circumvent the bandwidth and policy restrictions of the ARPANET, which prohibited non-research communications and favored dedicated leased lines unsuitable for many academic sites. In late 1979, graduate students Tom Truscott and Jim Ellis at Duke University conceived the idea of leveraging the Unix-to-Unix Copy (UUCP) protocol—a store-and-forward mechanism using dial-up phone lines—to exchange files and messages between Unix systems, enabling asynchronous, low-cost information sharing among universities lacking ARPANET access. The initial implementation involved shell scripts written by Steve Bellovin at the University of North Carolina at Chapel Hill (UNC), which connected Duke and UNC for the first exchanges; the earliest documented article, posted in December 1979, discussed programming techniques. By early 1980, these scripts evolved into compiled software dubbed "A News," developed by Steve Daniel and distributed publicly to handle growing traffic on UUCP links. This volunteer-driven effort, without central funding or administration, relied on site operators manually configuring batch transfers via modems, typically nightly, to propagate articles across connected hosts. Early adoption was fueled by the proliferation of Unix systems in academia and research labs, expanding from the two initial sites (Duke and UNC) to about 15 by the end of 1980 and 150 by 1981, as additional universities joined via UUCP feeds. By 1982, participation reached around 400 sites, and empirical logs indicate hundreds more by 1985, sustained by organic propagation without formal governance—operators shared software updates and moderated content locally to manage volume. This decentralized model emphasized resilience over speed, with articles batched into files for transfer, reflecting first-principles adaptations to constrained telephony infrastructure rather than real-time networking.

Expansion and Institutional Adoption (1986–1993)

In 1983, B News software, developed by Mark Horton and Matt Glickman at the University of California, Berkeley, superseded the original A News implementation, introducing improved article threading, storage efficiency, and batching capabilities that facilitated larger-scale propagation over UUCP networks. This upgrade addressed limitations in handling growing volumes of posts, enabling Usenet to scale beyond the initial university sites. The introduction of the Network News Transfer Protocol (NNTP) in March 1986, as specified in RFC 977, marked a pivotal shift to TCP/IP-based transmission, allowing direct integration with the ARPANET and the emerging Internet infrastructure. NNTP supported client-server access to remote news servers, reducing reliance on UUCP batch transfers and enabling real-time querying, which accelerated adoption among academic institutions connected via NSFNET. By leveraging NSFNET's backbone, Usenet expanded from hundreds of sites in the early 1980s to widespread institutional use, with propagation efficiency improving connectivity across research networks. The alt.* hierarchy emerged in the late 1980s, initiated through alternative creation processes like those for alt.sex and alt.drugs, providing a decentralized counterpart to the formally managed hierarchies and fostering unmoderated discussions on diverse topics. Commercialization began with providers like UUNET offering paid Usenet feeds by 1990, including dial-up access that extended availability beyond academia. Concurrently, the practice of posting Frequently Asked Questions (FAQ) files gained traction in the late 1980s, standardizing information dissemination and reducing repetitive queries in high-traffic groups. These developments underscored Usenet's transition to a robust, multi-stakeholder system by 1993.

Peak Usage and "Eternal September" (1994–1999)

During the mid-1990s, Usenet experienced its zenith of participation, fueled by the expansion of commercial service providers that integrated gateways to the network. America Online (AOL), which began offering Usenet access in September 1993, saw its subscriber base surge from approximately 2 million in 1993 to over 5 million by 1995, channeling a massive wave of non-technical users into Usenet groups and amplifying traffic volumes. Similarly, Microsoft Network (MSN) and other dial-up services introduced gateways, broadening access beyond academic and technical enclaves to mainstream audiences seeking discussion forums on diverse topics. This era solidified the "Eternal September," a term originating from the perpetual influx of novices that eroded Usenet's self-policing culture, as the one-time annual onboarding of university freshmen—accustomed to learning via frequently asked questions (FAQs) and netiquette—gave way to unending arrivals lacking such preparation. By 1994–1995, the phenomenon persisted, with veterans reporting heightened disruption from off-topic posts, flame wars, and failure to adhere to group norms, transforming transient September overloads into a chronic state. The cooperative infrastructure of Usenet, reliant on voluntary server administration, buckled under this scale, as exponential message propagation strained bandwidth and encouraged excessive crossposting—early harbingers of spam—without centralized authority to curb abuse. Participation peaked with an estimated several million regular readers worldwide by the late 1990s, coinciding with the proliferation of over 40,000 newsgroups by mid-decade, many in the alt.* hierarchy spawned by unmoderated creation scripts. Tools like kill files, which allowed users to programmatically filter authors, subjects, or keywords, gained widespread adoption as a pragmatic response to noise from unskilled posters, enabling experienced users to curate feeds amid the deluge (see the sketch below). Web-based interfaces, such as Deja News launched in 1995, further democratized access by enabling browser-based searching of archives without native newsreader software, inadvertently commodifying discussions while exposing them to broader scrutiny and outside incursions. This accessibility, however, exacerbated cultural fractures, as commercial incentives prioritized volume over the meritocratic ethos that had sustained Usenet's earlier coherence.
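
A kill file was essentially a list of patterns matched against article headers before display. This toy Python sketch mirrors that idea; the two-field rule format here is invented for illustration and does not reproduce any particular newsreader's actual syntax (rn, trn, and slrn each had their own).

    # Toy kill file in the spirit of classic newsreaders; rule format is invented.
    import re

    KILLFILE = [
        ("from", re.compile(r"spammer@example\.com", re.I)),
        ("subject", re.compile(r"MAKE\.MONEY\.FAST", re.I)),
    ]

    def keep(headers: dict) -> bool:
        """Return False if any kill-file rule matches the article's headers."""
        for field, pattern in KILLFILE:
            if pattern.search(headers.get(field, "")):
                return False
        return True

    articles = [
        {"from": "alice@site.edu", "subject": "Re: threading in B News"},
        {"from": "spammer@example.com", "subject": "MAKE.MONEY.FAST"},
    ]
    print([a["subject"] for a in articles if keep(a)])  # only Alice's post survives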

Decline and Fragmentation (2000–2010)

During the 2000s, Usenet experienced a sharp reduction in mainstream usage, driven primarily by escalating spam volumes, the resource-intensive distribution of binary files, and the emergence of more user-friendly alternatives like web-based forums. Spam proliferation, which intensified after early incidents such as the 1994 "Green Card" advertisement cross-posted to thousands of newsgroups by lawyers Canter and Siegel, overwhelmed discussion hierarchies with off-topic commercial and abusive messages, eroding signal-to-noise ratios and deterring participants. This unmoderated chaos contrasted with the structured moderation of emerging platforms, contributing to user migration rather than any single commercial pivot. In response to deteriorating quality, the Usenet II initiative launched in 1998 as a peered network among select "sound" sites adhering to strict anti-spam policies, effectively fragmenting the ecosystem by excluding high-volume or unreliable peers to preserve discussion integrity. However, adoption remained limited, as the original Usenet backbone continued to propagate vast binary content via alt.binaries.* groups, which ballooned storage and bandwidth demands—often exceeding terabytes daily for full feeds—prompting many ISPs to curtail or eliminate free access. For instance, several major U.S. ISPs terminated or sharply curtailed Usenet service for customers in 2008, citing voluntary compliance with efforts to curb illegal content distribution amid New York Attorney General Andrew Cuomo's campaign against child exploitation material in binaries. The rise of peer-to-peer (P2P) networks like BitTorrent, gaining traction from the early 2000s, further eroded Usenet's role in binary sharing by offering decentralized, metadata-efficient file distribution without reliance on news servers. Concurrently, web forums, proliferating from the late 1990s onward, provided browser-accessible threading, built-in search, and reduced setup barriers compared to dedicated newsreaders, attracting users seeking convenience over Usenet's decentralized but cumbersome propagation model. These factors compounded, leading to a qualitative collapse in active readership; backbone operators reported sustained drops in text-based traffic as communities splintered or dissolved, though binary retention persisted in niche paid services.

Cultural and Social Impact

Community Norms and Usenet Jargon

Usenet participants established informal community norms, collectively termed netiquette, to promote civil and efficient communication amid the system's decentralized nature. These guidelines emphasized plain-text posting, trimming excessive quoted material from prior messages to reduce redundancy, and restricting cross-posting to relevant newsgroups to avoid cluttering unrelated discussions. Signatures, or sigs, were limited to brief blocks of four to six lines containing personal identifiers or disclaimers, appended automatically to posts to maintain readability. Such practices arose organically in the 1980s as user volumes grew, predating formal codification; the term "netiquette" itself first appeared in a Usenet posting. Enforcement relied on social mechanisms rather than technical controls, with violators often facing public rebuke through flames—heated, insulting rebuttals—or exclusion via user-configured killfiles that filtered unwanted content. Hierarchies like alt.* operated under implicit charters, where persistent off-topic posting or failure to heed frequently asked questions (FAQs) invited collective shunning, preserving group cohesion without centralized enforcement. Research on Usenet norms highlights both explicit rules (e.g., posted group guidelines) and implicit expectations (e.g., deference to expertise), socialized through observation and peer feedback, which sustained participation until external pressures like spam eroded adherence. Usenet jargon encapsulated these dynamics, with terms like flaming denoting the aggressive, insulting exchanges that emerged in the early 1980s as a response to perceived breaches of netiquette. The word troll, originating around 1990 in the alt.folklore.urban newsgroup, described deliberately provocative posts designed to "troll for newbies" by eliciting outraged replies, exploiting anonymity to test or disrupt community patience. Pseudonymous posting fostered candid, unfiltered debate but amplified vitriol, as the impunity of unverified identities enabled behaviors rarer on modern platforms requiring real-name verification. By the mid-1990s, RFC 1855 formalized select norms, advising against "heated messages" and urging conservatism in what users transmitted to mitigate such conflicts.

Contributions to Internet Culture and Innovation

Usenet facilitated early collaborative software development by providing a decentralized platform for technical discussions and code sharing. Linus Torvalds announced his Linux kernel project on August 25, 1991, via a posting to the comp.os.minix newsgroup, seeking feedback and contributors, which spurred global participation and evolved into dedicated forums like comp.os.linux for ongoing development and distribution. This model exemplified peer-driven innovation, where participants freely exchanged patches and ideas without central authority, laying groundwork for the open-source movement's emphasis on communal improvement over proprietary control. The threaded conversation format pioneered on Usenet directly influenced the architecture of subsequent online discussion systems, including web forums and comment sections. By organizing replies in hierarchical trees attached to original posts, Usenet enabled scalable, context-preserving debates that avoided the linear limitations of earlier systems. This structure promoted efficient information flow in technical and hobbyist communities, fostering norms of asynchronous, merit-based engagement that decentralized authority and prioritized substantive contributions—contrasting with later centralized platforms while serving as their conceptual precursor. Usenet exported key jargon into the broader internet lexicon, notably popularizing "spam" for abusive bulk postings. The term, drawn from a Monty Python sketch depicting repetitive intrusion, first gained traction on March 31, 1993, when a malfunctioning moderation script flooded a newsgroup with duplicate messages that users dubbed "spam," and it was cemented in April 1994 when the Canter and Siegel lawyers' advertisements, cross-posted to thousands of newsgroups, drew the same label, marking an early consensus on network etiquette violations. In parallel, groups like comp.lang.c hosted unfiltered exchanges on programming that reinforced a culture of open information sharing and rigorous critique, while alt.tasteless advanced irreverent, boundary-testing humor through antics such as the 1994 coordinated "invasion" of rec.pets.cats, which highlighted emergent norms of playful disruption in digital spaces. These elements underscored Usenet's role as an incubator for resilient, self-regulating cultural practices rather than mere anarchy.
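
Mechanically, that tree structure is recoverable from two headers every article carries: a unique Message-ID and a References list naming its ancestors. The Python sketch below, using invented sample articles, shows the simplest form of the reconstruction that threaded newsreaders perform; production algorithms, such as the one documented by Jamie Zawinski, also handle missing parents and subject grouping.

    # Rebuild a reply tree from Message-ID / References headers (RFC 1036 style).
    from collections import defaultdict

    articles = [
        {"id": "<1@duke>", "refs": [],                      "subject": "ANN: new project"},
        {"id": "<2@unc>",  "refs": ["<1@duke>"],            "subject": "Re: ANN: new project"},
        {"id": "<3@mit>",  "refs": ["<1@duke>", "<2@unc>"], "subject": "Re: Re: ANN: new project"},
    ]

    children = defaultdict(list)
    roots = []
    for a in articles:
        if a["refs"]:
            children[a["refs"][-1]].append(a)  # the last Reference is the direct parent
        else:
            roots.append(a)                    # no References: a thread root

    def show(node, depth=0):
        print("  " * depth + node["subject"])
        for child in children[node["id"]]:
            show(child, depth + 1)

    for root in roots:
        show(root)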

Social Dynamics: Collaboration vs. Conflict

Usenet's decentralized and largely unmoderated framework enabled collaborative achievements in specialized communities, particularly prior to the mid-1990s influx, when participant pools were small and expertise-driven. By the late 1980s, approximately 140,000 active users engaged in niche discussions across roughly 11,000 connected systems, yielding high signal-to-noise ratios in groups like those in the sci.* hierarchy, where scientists and researchers exchanged technical insights through threaded, iterative critiques resembling informal peer review. Moderated newsgroups further enhanced this quality by filtering submissions, as evidenced by empirical observations from 1987 showing consistently superior content relevance compared to unmoderated counterparts, which supported focused cross-disciplinary projects such as early software and protocol refinements shared across hierarchies. Conflicts arose inherently from the system's openness, manifesting in flame wars—intense, personal exchanges of insults—and floods that disrupted discourse. Unmoderated groups' immediacy allowed rapid idea propagation but invited escalations, with the Meow Wars (April 1996–circa 1998) exemplifying this: hundreds of users across over 80 newsgroups bombarded threads with repetitive "meow" posts and cultural references, sustaining chaos for 45 weeks and highlighting how anonymous provocation could hijack collective attention. Post-1994 influxes amplified such issues, as broader access introduced casual disruptions, eroding pre-existing norms in unmoderated spaces while moderated groups preserved coherence longer through gatekeeping. Pseudonymous posting facilitated raw, adversarial discourse, empowering users to challenge orthodoxies and expose flaws via unvarnished critique—a causal driver of breakthroughs absent in identity-enforcing platforms—while shielding participants from real-world backlash in contentious fields. However, this equally empowered disruptors, enabling sustained abuse by insulating bad actors from accountability and intensifying conflicts beyond productive contention. Unmoderated dynamics thus traded moderated stability for velocity in knowledge exchange, with empirical trade-offs evident in moderated groups' enduring focus versus unmoderated ones' volatility.

Controversies and Criticisms

Spam Proliferation and Network Overload

The proliferation of spam on Usenet originated from sporadic cross-postings in the early 1990s but escalated into systematic abuse with the advent of automated bulk messaging. On April 12, 1994, immigration lawyers Laurence Canter and Martha Siegel initiated the first large-scale commercial spam by posting advertisements for U.S. green-card lottery services to over 5,000 newsgroups, exploiting Usenet's decentralized propagation to reach millions without incurring marginal distribution costs. This "Green Card Spam" triggered immediate backlash, including server blacklisting of the perpetrators' sites, but demonstrated the vulnerability of Usenet's broadcast model to low-cost replication, paving the way for subsequent floods of advertisements, chain letters, and make-money-fast schemes. Spam volume surged through the mid-1990s, with automated scripts enabling rapid multiplication of identical or variant messages across hierarchies, overwhelming storage and bandwidth. Providers reported exponential growth in unwanted traffic, as each article propagated identically to all connected servers, amplifying the load from even modest posting volumes; by the late 1990s, operational costs for disk space and transit had escalated, prompting many institutions to curtail access. The decentralized architecture, lacking a central authority to enforce propagation rules, allowed spammers to target high-visibility groups while evading uniform filtering, resulting in network overload where legitimate discourse was drowned out and server maintenance became unsustainable for smaller operators. In response, Usenet administrators deployed cancel messages—control articles requesting deletion of spam—and automated cancelbots to detect and purge bulk postings based on criteria like crossposting thresholds or keyword patterns. Additional measures included the Usenet Death Penalty, whereby backbone providers severed feeds from egregious abusers, and informal blacklists coordinated via meta-groups. However, enforcement faltered due to Usenet's federated structure; site operators retained autonomy to honor or ignore cancels, often prioritizing local user demands over collective norms, which fragmented countermeasures and enabled spam resurgence from rogue servers. This dynamic exemplified a tragedy of the commons, wherein individual incentives to post freely eroded the shared resource's viability, as spammers externalized costs onto server operators and readers while unmoderated groups lacked scalable incentives for restraint. The absence of posting controls or mandatory authentication—unlike emerging web forums—accelerated degradation, with overload not solely attributable to external bad actors but to inherent flaws in voluntary cooperation among autonomous nodes, ultimately driving provider attrition and reduced participation by the early 2000s.
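
One concrete cancel criterion was the Breidbart Index (BI), which scored the copies of a substantively identical message by how widely they were posted. The sketch below computes it under the conventional rule of thumb that a BI of 20 or more, accumulated over a sliding window (typically 45 days), made excessive multi-posting cancelable.

    # Breidbart Index: sum over all copies of a message of the square root of
    # the number of newsgroups each copy was crossposted to; BI >= 20 => spam.
    from math import sqrt

    def breidbart_index(copies: list[list[str]]) -> float:
        """copies: one list of newsgroup names per posted copy of the message."""
        return sum(sqrt(len(groups)) for groups in copies)

    # Nine copies, each crossposted to nine groups: BI = 9 * sqrt(9) = 27.
    copies = [[f"misc.group.{i}.{j}" for j in range(9)] for i in range(9)]
    bi = breidbart_index(copies)
    print(f"BI = {bi:.1f}; cancelable: {bi >= 20}")

The square root penalizes many separate copies more heavily than a single widely crossposted article, reflecting the judgment that multi-posting wastes more aggregate resources than crossposting.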

Binary Distribution and Copyright Infringement

Binary files, such as software, images, and media, were distributed on Usenet by encoding them into text format suitable for transmission over text-only protocols, primarily within the alt.binaries.* newsgroup hierarchy that developed in the early 1990s. This hierarchy included subgroups like alt.binaries.warez.* dedicated to sharing cracked software and games, enabling users to upload and download large files split across multiple posts. Early encodings like uuencode incurred high overhead, but the yEnc scheme, introduced in 2001, optimized binary-to-text conversion by minimizing padding and escaping, cutting encoding overhead from roughly a third of the original file size to 1-2% and facilitating faster, more efficient transfers.

The surge in binary postings, particularly of copyrighted material, positioned Usenet as a key platform for digital piracy, with alt.binaries.* groups accounting for significantly higher data volumes than text-based discussions—often estimated at 10 times the rest of Usenet traffic by the late 1990s. Legal scrutiny intensified as binary distribution enabled unauthorized sharing of commercial software, music, films, and other protected works, prompting copyright holders to target both individual distributors and service providers. The Digital Millennium Copyright Act (DMCA) of 1998 allowed rights holders to issue takedown notices, requiring Usenet providers to remove specific infringing posts from their archives, though the decentralized propagation across servers complicated complete eradication. In the mid-2000s, the RIAA and MPAA pressured ISPs to block access to alt.binaries.* groups, citing liability risks; in 2007, this culminated in the RIAA's lawsuit against Usenet.com, alleging the provider induced infringement by offering unlimited access to copyrighted recordings, resulting in a 2009 court ruling against the service for failing to qualify for DMCA safe harbor protections due to inadequate repeat-infringer policies. Arrests of Usenet users involved in warez distribution occurred amid broader crackdowns on organized release groups that relied on the network for rapid releases; for instance, in 2003, a technology manager pleaded guilty to distributing pirated software, games, and media via online methods including Usenet postings, facing up to 10 years in prison under federal copyright law. While binaries occasionally preserved rare or otherwise unavailable files for archival purposes, empirical patterns showed predominant use for illegal copying, fueling a stigma that contributed to free ISPs dropping Usenet support. In paid provider ecosystems, binary retention sustains the system—binaries comprise the vast majority of stored data due to high-volume media uploads—but exposes operators to ongoing DMCA compliance burdens and potential secondary liability, contrasting with torrents' peer-to-peer model yet mirroring its challenges in policing decentralized distribution. This dynamic preserved Usenet's viability after the decline of text traffic by catering to file-sharing demands, albeit under heightened legal constraints that favored compliant, indexed services over anonymous free access.

Free Speech, Anonymity, and Abuse

Usenet's decentralized architecture permitted users to post messages under pseudonyms without requiring personal accounts or verification, enabling a high degree of anonymity that facilitated open discourse on sensitive topics. This feature positioned Usenet as a refuge for controversial expression, notably during the mid-1990s conflict in alt.religion.scientology, where participants leaked internal documents, including confidential upper-level scriptures, despite legal efforts by the organization to suppress them via copyright and trade-secret claims and pressure on site operators. The resulting "alt.scientology.war" highlighted Usenet's resistance to centralized censorship, as posts proliferated across servers even after targeted removals, disseminating critiques that challenged institutional control over information. However, anonymity also amplified abusive behaviors, including coordinated campaigns known as "flame wars," where pseudonymous users engaged in prolonged, vitriolic attacks without real-world accountability, a dynamic later termed the online disinhibition effect. Such incidents were exacerbated in unmoderated hierarchies like alt.*, which had emerged in the late 1980s as a parallel structure bypassing the Big Eight's formal creation guidelines, allowing rapid proliferation of off-topic, inflammatory, or rule-violating content that the structured hierarchies sought to contain through volunteer oversight. Usenet's lax controls further enabled the distribution of illegal materials, including child abuse imagery in certain alt.binaries.* subgroups during the 1990s and 2000s, prompting international efforts by organizations such as the Internet Watch Foundation to monitor and report such content to providers. By 2008, major ISPs such as Verizon and Sprint began filtering approximately 0.5% of active alt.* discussion groups to excise these materials, reflecting how growth in user-generated binaries—correlating with overall network expansion from thousands to millions of daily posts—outpaced voluntary self-regulation and led to systemic overload from unchecked harmful uploads. Empirical patterns indicate that while Usenet's model debunked assumptions of effective top-down control by sustaining unfiltered discourse, it underscored the limits of decentralized self-policing at scale: abuse metrics, including spam volume and illegal postings, escalated alongside participation surges, as documented in provider logs and forensic analyses, rather than stemming from any baseline "toxicity" in the medium itself. This tension revealed that anonymity's virtues in shielding dissent coexisted with vulnerabilities to exploitation, where low barriers to entry amplified both innovative discourse and predatory actions without inherent mechanisms for resolution.

Current Status and Legacy

Ongoing Usage and Provider Ecosystem (2010s–2025)

In the 2010s and continuing into 2025, Usenet has maintained a niche but persistent user base, with millions of messages posted daily across over 100,000 newsgroups, though activity is heavily skewed toward binary content rather than text discussions. Text-based hierarchies like the Big-8 sustain limited engagement, with the Big-8 Management Board tracking fewer than 300 actively moderated groups as of mid-2024, focusing on topics such as computing, the sciences, and recreation. Binary newsgroups, particularly in the alt.binaries.* hierarchy, dominate traffic due to their role in long-term file archiving and distribution, supported by commercial providers offering retention periods exceeding 6,000 days—equivalent to over 16 years of stored articles. The provider ecosystem has evolved into a commercial model reliant on dedicated backbones and resellers, compensating for the withdrawal of free access from most ISPs in the early 2010s. Major backbones, including those from Eweka and Newshosting's tier-1 infrastructure, peer directly to ensure high completion rates above 99% and unlimited bandwidth for subscribers. Providers like UsenetServer and Pure Usenet maintain petabyte-scale storage, with binary archives exceeding 5 PB, enabling reliable access for privacy-focused users who pair services with VPNs to mitigate logging risks. This paid structure has causally sustained binary viability by incentivizing infrastructure investment, contrasting with the fragmentation of free text feeds. Recent enhancements include widespread adoption of NNTP over TLS encryption for secure connections, standard across top providers since the mid-2010s, alongside community resources like Reddit's r/usenet for indexing tools and setup guides. Usenet thus endures for specialized applications in file-sharing niches, academic discussion, and anonymous communication, where its decentralized retention outperforms ephemeral web alternatives.
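
A minimal sketch of such an encrypted, authenticated connection follows, assuming a placeholder provider hostname and credentials: the client wraps the TCP socket in TLS on port 563 (the conventional NNTPS port) and logs in with the AUTHINFO exchange defined in RFC 4643.

    # NNTP over TLS (NNTPS, port 563); host and credentials are placeholders.
    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("news.example.com", 563), timeout=30) as raw:
        with context.wrap_socket(raw, server_hostname="news.example.com") as tls:
            f = tls.makefile("rb")

            def cmd(line: str) -> str:
                tls.sendall((line + "\r\n").encode("ascii"))
                return f.readline().decode("ascii", "replace").rstrip()

            print(f.readline().decode("ascii", "replace").rstrip())  # greeting over TLS
            print(cmd("AUTHINFO USER myuser"))      # expect "381" (password required)
            print(cmd("AUTHINFO PASS mypassword"))  # expect "281" (authenticated)
            print(cmd("QUIT"))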

Archival Efforts and Accessibility

Google Groups serves as the most extensive centralized archive of Usenet content, incorporating the Deja News collection acquired by Google in February 2001, which originated in March 1995 and extends back to postings from 1981. This archive enables full-text search across pre-1990s material, though Google discontinued support for new Usenet posting, subscription, and real-time viewing in February 2024, retaining only historical searchability. Alternatives include partial dumps on the Internet Archive, such as collections of alt.* hierarchies in mbox format, and community-hosted repositories like UsenetArchives.com, which index hundreds of millions of posts dating to the early 1980s. Preservation efforts emphasize text-based newsgroups, particularly the Big-8 hierarchies (comp., humanities., misc., news., rec., sci., soc., talk.), with initiatives like BlueWorld Hosting's public archive providing over 20 years of retention and the Free Usenet Text Archive offering ad-free access to approximately 300 million posts in about 300 GB. Community-driven projects, including those by Archive Team, involve scraping and mirroring to counter data-loss risks, though these remain fragmented and focused on non-binary content to avoid legal issues. Archiving faces inherent challenges from Usenet's distributed model via NNTP, where posts do not universally reach all servers, resulting in incomplete captures dependent on individual server logs and retention policies. Proliferation of spam, which escalated in the mid-1990s and constitutes a significant portion of later volumes—often over half in some groups—further pollutes datasets, complicating curation without native filtering tools. In 2025, historical access relies on Google Groups for search or dedicated web interfaces, supplemented by paid NNTP access from providers with extended retention (up to decades in some cases) or web proxies for browsing, yet these cannot fully replicate the original threading and context preserved in live newsreaders. This decentralization, while resilient for ongoing use, precludes a singular comprehensive archive akin to web crawls, rendering originals irreplaceable for scholarly or contextual analysis.
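
Such mbox dumps can be inspected with ordinary mail tooling, since Usenet articles carry RFC 822-style headers. This Python sketch, with a placeholder file path, lists the Message-ID and Subject of each archived article using the standard-library mailbox module.

    # Reading a Usenet archive dump in mbox format; the path is a placeholder.
    import mailbox

    box = mailbox.mbox("alt.folklore.computers.mbox")
    for msg in box:
        # Articles use the same header conventions as mail messages.
        print(msg.get("Message-ID", "?"), "|", msg.get("Subject", "(no subject)"))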

Comparisons to Modern Alternatives and Lessons Learned

Usenet's decentralized, server-federated model differs markedly from the centralized architectures of contemporary platforms such as Reddit and X (formerly Twitter). Reddit's subreddit system imposes hierarchical moderation by volunteer or appointed administrators, enabling targeted enforcement but concentrating power in few hands and facilitating algorithmic promotion of popular content over substantive depth. In contrast, Usenet's threading mechanism supported persistent, hierarchical discussions across independent servers, preserving context for complex technical topics, though without built-in moderation tools this exposed the network to unchecked spam and off-topic flooding absent in Reddit's upvote/downvote curation. X emphasizes ephemeral, real-time posting with character limits and verified accounts for visibility, prioritizing virality over Usenet's archival permanence, which allowed long-term reference but scaled poorly as traffic grew without proprietary algorithms to filter noise. These differences underscore decentralization's empirical trade-offs: Usenet demonstrated how protocol-based systems can drive innovation by enabling pseudonymous, borderless collaboration without corporate gatekeeping, as evidenced by its role in early open-source development prior to centralized repositories. Yet, causal factors in its marginalization include the absence of user retention incentives—like personalized feeds or notifications—coupled with spam's tragedy-of-the-commons dynamics, which overwhelmed voluntary self-policing and deterred mainstream adoption as the web offered frictionless alternatives by the mid-1990s. Modern platforms' algorithmic moderation, while mitigating such overload, often veers into over-correction via opaque content suppression, amplifying concerns over viewpoint bias in centralized systems. Key lessons for truth-seeking systems emphasize balancing unmoderated openness, which yielded Usenet's breakthroughs in technical discourse, against scalable defenses against abuse; pure openness falters at volume without hybrid incentives, as unfiltered discourse invites manipulation rivaling platform algorithms' flaws. In 2025, Usenet's architecture inspires the fediverse—networks like Mastodon—where instance-level policies approximate Usenet's server autonomy while federation protocols evade single-point control, though adoption lags due to usability hurdles mirroring Usenet's interface rigidity. This debunks uncritical nostalgia: raw access propelled early progress, but sustained viability demands evolved mechanisms beyond either extreme centralization or unchecked distribution.
