Usenet


Notably, clients never connect to each other directly, yet they can read each other's posts even when they never connect to the same server.
Usenet (/ˈjuːznɛt/),[1] a portmanteau of User's Network,[1] is a worldwide distributed discussion system available on computers. It was developed from the general-purpose Unix-to-Unix Copy (UUCP) dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980.[2] Users read and post messages (called articles or posts, and collectively termed news) to one or more topic categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that have become widely used. Discussions are threaded, as with web forums and BBSes, though posts are stored on the server sequentially.[3][4]
A major difference between a BBS or web message board and Usenet is the absence of a central server and dedicated administrator or hosting provider. Usenet is distributed among a large, constantly changing set of news servers that store and forward messages to one another via "news feeds". Individual users may read messages from and post to a local (or simply preferred) news server, which can be operated by anyone, and those posts will automatically be forwarded to any other news servers peered with the local one, while the local server will receive any news its peers have that it currently lacks. This results in the automatic proliferation of content posted by any user on any server to any other user subscribed to the same newsgroups on other servers.
As with BBSes and message boards, individual news servers or service providers are under no obligation to carry any specific content, and may refuse to do so for many reasons: a news server might attempt to control the spread of spam by refusing to accept or forward any posts that trigger spam filters, or a server without high-capacity data storage may refuse to carry any newsgroups used primarily for file sharing, limiting itself to discussion-oriented groups. However, unlike BBSes and web forums, the dispersed nature of Usenet usually permits users who are interested in receiving some content to access it simply by choosing to connect to news servers that carry the feeds they want.
Usenet is culturally and historically significant in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", "sockpuppet", and "spam".[5] In the early 1990s, shortly before access to the Internet became commonly affordable, Usenet connections via FidoNet's dial-up BBS networks made long-distance or worldwide discussions and other communication widespread.[6]
The name Usenet comes from the term "users' network".[3] The first Usenet group was NET.general, which quickly became net.general.[7] The first commercial spam on Usenet was from immigration attorneys Canter and Siegel advertising green card services.[7]
On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on Transmission Control Protocol (TCP) port 119 for standard, unprotected connections, and on TCP port 563 for Secure Sockets Layer (SSL) encrypted connections.
Introduction
Usenet was conceived in 1979 and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University,[8][2] over a decade before the World Wide Web went online (and thus before the general public received access to the Internet), making it one of the oldest computer network communications systems still in widespread use. It was originally built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name "Usenet" emphasizes its creators' hope that the USENIX organization would take an active role in its operation.[9]
The articles that users post to Usenet are organized into topical categories known as newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci.* hierarchy. Or, talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read.[10]
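The read-article bookkeeping mentioned above was classically kept in a plain-text .newsrc file, one line per newsgroup. The following parser is an illustrative sketch (the sample lines are invented, and real newsreaders handle more variations of the format):

```python
def parse_newsrc_line(line: str):
    """Parse one .newsrc line, e.g. "sci.math: 1-5,8", into
    (group, subscribed, read_article_numbers).  ":" marks a
    subscribed group and "!" an unsubscribed one."""
    if ":" in line:
        group, _, ranges = line.partition(":")
        subscribed = True
    else:
        group, _, ranges = line.partition("!")
        subscribed = False
    read = set()
    for part in ranges.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            read.update(range(int(lo), int(hi) + 1))
        else:
            read.add(int(part))
    return group.strip(), subscribed, read
```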
In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. For example, in the wine-making newsgroup rec.crafts.winemaking, someone might start a thread called "What's the best yeast?", and that thread might grow to dozens of replies from perhaps six or eight different authors. Over several days, that conversation about different wine yeasts might branch into several sub-threads in a tree-like form.
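Newsreaders reconstruct this tree from each article's References header, which lists the message-IDs of its ancestors. A minimal sketch (the message-IDs and subjects are invented):

```python
from collections import defaultdict

def build_threads(articles):
    """Arrange articles into threads: an article whose most recent
    Reference is present becomes a child of that article; everything
    else is treated as a thread root."""
    by_id = {a["id"]: a for a in articles}
    children = defaultdict(list)
    roots = []
    for a in articles:
        parent = a["refs"][-1] if a["refs"] else None
        if parent in by_id:
            children[parent].append(a["id"])
        else:
            roots.append(a["id"])
    return roots, dict(children)

# A hypothetical three-article thread in rec.crafts.winemaking:
arts = [
    {"id": "<1@a>", "subject": "What's the best yeast?", "refs": []},
    {"id": "<2@b>", "subject": "Re: What's the best yeast?", "refs": ["<1@a>"]},
    {"id": "<3@c>", "subject": "Re: What's the best yeast?", "refs": ["<1@a>", "<2@b>"]},
]
roots, children = build_threads(arts)
```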
When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them. In this fashion, the article is copied from server to server and should eventually reach every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.[11] This is largely because the POTS network was typically used for transfers, and phone charges were lower at night.
The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.[12]
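Because articles share the RFC 822-style format of mail, Python's standard email parser can read one; note the Newsgroups header where a mail message would carry To (the headers below are invented for illustration):

```python
from email import message_from_string

raw = """\
From: alice@example.invalid (Alice)
Newsgroups: rec.crafts.winemaking
Subject: What's the best yeast?
Message-ID: <1234@example.invalid>

Has anyone compared champagne and bread yeasts?
"""
article = message_from_string(raw)
print(article["Newsgroups"])   # the group(s) the article was posted to
print(article.get_payload())   # the article body
```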
Today, Usenet has diminished in importance with respect to Internet forums, blogs, mailing lists and social media. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages does not require a mail or web client, but a news client. However, it is now possible to read and participate in Usenet newsgroups to a large degree using ordinary web browsers since most newsgroups are now copied to several web sites.[13] The groups in alt.binaries are still widely used for data transfer.
ISPs, news servers, and newsfeeds
Many Internet service providers, and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system. Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead.[14]
Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer because of the large amount of data involved, small customer base (compared to mainstream Internet service), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles). Some ISPs outsource news operations to specialist sites, which will usually appear to a user as though the ISP itself runs the server. Many of these sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy which largely carries software, music, videos and images, and accounts for over 99 percent of article data.[citation needed]
There are also Usenet providers that offer a full unrestricted service to users whose ISPs do not carry news, or that carry a restricted feed.[citation needed]
Newsreaders
Newsgroups are typically accessed with newsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act as clients to one or more news servers. Historically, Usenet was associated with the Unix operating system developed at AT&T, but newsreaders were soon available for all major operating systems.[15] Email client programs and Internet suites of the late 1990s and 2000s often included an integrated newsreader. Newsgroup enthusiasts often criticized these as inferior to standalone newsreaders that made correct use of Usenet protocols, standards and conventions.[16]
With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front ends have lowered the technical entry barrier: a user needs only a web browser, not a dedicated newsreader or an NNTP server account. There are numerous websites now offering web-based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another.[17][18] Google Groups[19] is one such web-based front end, and some web browsers can access Google Groups via news: protocol links directly.[20]
Moderated and unmoderated newsgroups
A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator is to receive submitted articles, review them, and inject approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines.[21] Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan.[22]
Historically, a mod.* hierarchy existed before Usenet reorganization.[23] Now, moderated newsgroups may appear in any hierarchy, typically with .moderated added to the group name.
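The moderation gate can be summarized as a routing decision: a submission to a moderated group that lacks an Approved: header is mailed to the moderator rather than injected into the news stream. A simplified sketch (group names invented; a real server consults its active file for each group's moderation status):

```python
def route_submission(headers: dict, moderated_groups: set) -> str:
    """Return what a news server does with a submitted article:
    inject it into the news stream, or mail it to the moderator."""
    groups = [g.strip() for g in headers.get("Newsgroups", "").split(",")]
    needs_approval = any(g in moderated_groups for g in groups)
    if needs_approval and "Approved" not in headers:
        return "mail-to-moderator"
    return "inject"
```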
Usenet newsgroups in the Big-8 hierarchy are created by proposals called a Request for Discussion, or RFD. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided. Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated.[24] Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup.
Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The trade-off between minimal editorial filtering and rapid propagation is a perennial tension in the Usenet community. One rarely cited remedy is canceling a message after it has propagated, but few Usenet users issue cancels and some newsreaders do not offer cancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups tend to receive large amounts of spam.[25][26][27]
Technical details
Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use a flooding algorithm which propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article. Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certain peer-to-peer character in that they share resources by exchanging them. However, the granularity of exchange is on a different scale than in a modern peer-to-peer system, and the exchange excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader.
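The flooding behaviour can be simulated in a few lines: each server forwards an article only to neighbours that have not yet seen its message-ID, so every server ends up storing exactly one copy (the server names and peering topology below are invented):

```python
def flood(origin: str, message_id: str, peers: dict, seen: dict) -> None:
    """Propagate one article through a peering graph.  `peers` maps a
    server to its newsfeed neighbours; `seen` maps a server to the
    message-IDs it already stores."""
    queue = [origin]
    seen[origin].add(message_id)
    while queue:
        server = queue.pop()
        for neighbour in peers[server]:
            if message_id not in seen[neighbour]:
                seen[neighbour].add(message_id)
                queue.append(neighbour)

peers = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
seen = {s: set() for s in peers}
flood("A", "<42@a.example>", peers, seen)
# every server now holds the article exactly once
```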
RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537.
In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions of World Intellectual Property Organization treaty implementations, such as the United States Online Copyright Infringement Liability Limitation Act, but this would require giving notice to each individual news server administrator.
On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on TCP port 119 for standard, unprotected connections and on TCP port 563 for SSL encrypted connections.
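NNTP is a line-oriented text protocol: each command is a single CRLF-terminated ASCII line, and each response begins with a three-digit status code. The helpers below only frame and parse such lines; an actual client would send them over a TCP connection to port 119, or a TLS-wrapped one to port 563 (this is an illustrative sketch, not a full client):

```python
def nntp_command(verb: str, *args: str) -> bytes:
    """Frame one NNTP command as a CRLF-terminated ASCII line."""
    return " ".join((verb,) + args).encode("ascii") + b"\r\n"

def parse_status(line: bytes) -> tuple:
    """Split a response line such as b"200 server ready" into
    (status_code, text)."""
    code, _, text = line.decode("ascii").rstrip("\r\n").partition(" ")
    return int(code), text
```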
Organization
The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The current Big Eight are:
- comp.* – computer-related discussions (comp.software, comp.sys.amiga)
- humanities.* – fine arts, literature, and philosophy (humanities.classics, humanities.design.misc)
- misc.* – miscellaneous topics (misc.education, misc.forsale, misc.kids)
- news.* – discussions and announcements about news (meaning Usenet, not current events) (news.groups, news.admin)
- rec.* – recreation and entertainment (rec.music, rec.arts.movies)
- sci.* – science related discussions (sci.psychology, sci.research)
- soc.* – social discussions (soc.college.org, soc.culture.african)
- talk.* – talk about various controversial topics (talk.religion, talk.politics, talk.origins)
The alt.* hierarchy is not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized. Groups in the alt.* hierarchy tend to be more specialized or specific—for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt hierarchy may be dedicated to one specific author of children's books. Binaries are posted in alt.binaries.*, making it the largest of all the hierarchies.
Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such as japan.*, malta.* and ne.* serve specific countries and regions such as Japan, Malta and New England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historical gnu.* hierarchy from the Free Software Foundation. Microsoft closed its news server in June 2010 and now provides support for its products via web forums.[28] Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others include alt.* as well. The more general term "netnews" incorporates the entire medium, including private organizational news systems.
Informal sub-hierarchy conventions also exist. *.answers groups are typically moderated, cross-posted groups for FAQs: an FAQ is posted in its home group and cross-posted to the *.answers group at the head of the hierarchy, which some see as a distillation of that newsgroup's information. Some subgroups are recursive, to the point of some silliness in alt.*.[citation needed]
Binary content
Usenet was originally created to distribute text content encoded in the 7-bit ASCII character set. With the help of programs that encode 8-bit values into ASCII, it became practical to distribute binary files as content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic.
The oldest widely used encoding method for binary content is uuencode, from the Unix UUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader.
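The classic uuencode transformation is available in Python's binascii module, which encodes up to 45 input bytes per output line; packing the encoded lines into size-limited parts mimics how large binaries were split into multiple posts (the sizes below are illustrative, and the "begin"/"end" framing of a real uuencoded file is omitted):

```python
import binascii

def uuencode_lines(data: bytes):
    """Yield uuencoded lines, 45 input bytes per line, as the classic
    uuencode tool does."""
    for i in range(0, len(data), 45):
        yield binascii.b2a_uu(data[i:i + 45])

def split_into_parts(lines, max_chars: int):
    """Pack encoded lines into parts no larger than max_chars each."""
    parts, current, size = [], [], 0
    for line in lines:
        if size + len(line) > max_chars and current:
            parts.append(b"".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        parts.append(b"".join(current))
    return parts

payload = bytes(range(256)) * 4          # 1 KiB of sample binary data
parts = split_into_parts(uuencode_lines(payload), 300)
```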
MIME header extensions and the Base64 and Quoted-Printable encodings brought a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems that attach metadata to files use specialized encoding formats. For Mac OS, both BinHex and special MIME types are used. Other lesser known encoding systems that may have been used at one time were BTOA, XX encoding, BOO, and USR encoding.
In an attempt to reduce file transfer times, an informal file encoding known as yEnc was introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space. The most common method of uploading large binary posts to Usenet is to convert the files into RAR archives and create Parchive files for them. Parity files are used to recreate missing data when not every part of the files reaches a server.
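The core of yEnc is tiny: shift every byte by 42 and escape only the handful of values that would disturb NNTP transport, which is why its overhead is far lower than uuencode's or Base64's. A sketch of the transform, omitting line wrapping, the =ybegin/=yend framing, and CRC checks:

```python
CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}  # NUL, LF, CR and '=' itself

def yenc_encode(data: bytes) -> bytes:
    """Core yEnc transform: shift each byte by 42; escape the few
    byte values that would break NNTP transport."""
    out = bytearray()
    for b in data:
        c = (b + 42) % 256
        if c in CRITICAL:
            out.append(0x3D)          # escape marker '='
            c = (c + 64) % 256
        out.append(c)
    return bytes(out)

def yenc_decode(data: bytes) -> bytes:
    out = bytearray()
    escape = False
    for c in data:
        if escape:
            out.append((c - 64 - 42) % 256)
            escape = False
        elif c == 0x3D:
            escape = True
        else:
            out.append((c - 42) % 256)
    return bytes(out)
```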
Binary newsgroups can be used to distribute files, and, as of 2022, some remain popular as an alternative to BitTorrent to share and download files.[29]
Binary retention time
Each news server allocates a certain amount of storage space for content in each newsgroup. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content. If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it. The average length of time that posts are able to stay on the server before being deleted is commonly called the retention time.
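Under this first-in-first-out scheme, retention time is simply the storage allocation divided by the daily feed volume, so either more storage or a smaller feed lengthens it (the numbers below are invented for illustration):

```python
def retention_days(allocated_bytes: float, daily_feed_bytes: float) -> float:
    """How long an article survives before newer arrivals push it out
    of the group's storage allocation."""
    return allocated_bytes / daily_feed_bytes

TiB = 2 ** 40
# e.g. 500 TiB allocated against a 25 TiB/day incoming feed:
print(retention_days(500 * TiB, 25 * TiB))  # 20.0
```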
Binary newsgroups are only able to function reliably if there is sufficient storage allocated to handle the volume of articles being added. Without sufficient retention time, a reader will be unable to download all parts of a binary before it is flushed out of the group's storage allocation. Flooding a newsgroup with random garbage posts, in sufficient quantity to push out all the content to be suppressed, was at one time how the posting of undesired content was countered. Service providers have since compensated by allocating enough storage to retain everything posted each day, including spam floods, without deleting anything.
Modern Usenet news servers have enough capacity to archive years of binary content even when flooded with new data at the maximum daily speed available.
In part because of such long retention times, as well as growing Internet upload speeds, Usenet is also used by individual users to store backup data.[31] While commercial providers offer easier-to-use online backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the uploader to cede control over the distribution of the data; the files are automatically disseminated to all Usenet providers exchanging data for the newsgroup it is posted to. In general the user must manually select, prepare and upload the data. The data is typically encrypted because anyone can download the backup files. After the files are uploaded, having multiple copies spread to different geographical regions around the world on different news servers decreases the chances of data loss.
Major Usenet service providers have a retention time of more than 12 years.[32] This amounts to more than 60 petabytes (60,000 terabytes) of storage. When using Usenet for data storage, providers that offer longer retention times are preferred, since the data survives longer than on services with shorter retention.
Legal issues
While binary newsgroups can be used to distribute completely legal user-created works, free software, and public domain material, some binary groups are used to illegally distribute proprietary software, copyrighted media, and pornographic material.
ISP-operated Usenet servers frequently block access to all alt.binaries.* groups to both reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under the DMCA Safe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders.[33]
Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others. It is possible for a special post cancellation message to be distributed to remove it from all servers, but many providers ignore cancel messages by standard policy, because they can be easily falsified and submitted by anyone.[34][35] For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content has been posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders.[36]
Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network. Like SMTP email, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source.[37] In this manner, Usenet is significantly different from modern P2P services; most P2P users distributing content are typically immediately identifiable to all other users by their network address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server.[38]
Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as the Recording Industry Association of America.[39][40][41] The existence of anonymising gateways to Usenet further complicates tracing a posting's true origin.
History
Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott.[8][43] In 1980, Usenet was connected to ARPANET through UC Berkeley, which had connections to both Usenet and ARPANET. Mary Ann Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET"[44]) identifier.[45] Usenet gained 50 member sites in its first year, including Reed College, University of Oklahoma, and Bell Labs,[8] and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.[46]
Network
UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984. More than 100 newsgroups existed, more than 20 devoted to Unix and other computer-related topics, and at least a third to recreation.[47][8] As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.[48] The name UUCPNET became the common name for the overall network.
In addition to UUCP, early Usenet traffic was also exchanged with FidoNet and other dial-up BBS networks. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world, with only local telephone service. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP, such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. In 1986, RFC 977 provided the Network News Transfer Protocol (NNTP) specification for distribution of Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution is over NNTP.[49]
Software
Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983).[8] With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.[50]
Public venue
Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web,[51] where Linus Torvalds announced the Linux project,[52] and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag,[53] which revolutionized the World Wide Web by turning it into a graphical medium.
Internet jargon and history
Many jargon terms now in common use on the Internet originated or were popularized on Usenet.[54] Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.[55]
"Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it."
— Gene Spafford, 1992
Decline
Sascha Segan of PC Magazine said in 2008 that "Usenet has been dying for years".[56] He argued that it was dying by the late 1990s, when large binary files became a significant proportion of Usenet traffic, and Internet service providers "sensibly started to wonder why they should be reserving big chunks of their own disk space for pirated movies and repetitive porn."
AOL discontinued Usenet access in 2005. In May 2010, Duke University, whose implementation had started Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs.[57][58] On February 4, 2011, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was retired after 32 years.[citation needed]
In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".[59] While there are still some active text newsgroups on Usenet, the system is now primarily used to share large files between users, and the underlying technology of Usenet remains unchanged.[60]
Usenet traffic changes
Over time, the amount of Usenet traffic has steadily increased. As of 2010 the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day.[61] However, these averages are minuscule in comparison to the traffic in the binary groups.[62] Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups[61] in which large files are often posted publicly. A small sampling of the change (measured in feed size per day) follows:

| Daily volume | Daily posts | Date |
|---|---|---|
| 4.5 GiB | | 1996 Dec |
| 9 GiB | | 1997 Jul |
| 12 GiB | 554 k | 1998 Jan |
| 26 GiB | 609 k | 1999 Jan |
| 82 GiB | 858 k | 2000 Jan |
| 181 GiB | 1.24 M | 2001 Jan |
| 257 GiB | 1.48 M | 2002 Jan |
| 492 GiB | 2.09 M | 2003 Jan |
| 969 GiB | 3.30 M | 2004 Jan |
| 1.52 TiB | 5.09 M | 2005 Jan |
| 2.27 TiB | 7.54 M | 2006 Jan |
| 2.95 TiB | 9.84 M | 2007 Jan |
| 3.07 TiB | 10.13 M | 2008 Jan |
| 4.65 TiB | 14.64 M | 2009 Jan |
| 5.42 TiB | 15.66 M | 2010 Jan |
| 7.52 TiB | 20.12 M | 2011 Jan |
| 9.29 TiB | 23.91 M | 2012 Jan |
| 11.49 TiB | 28.14 M | 2013 Jan |
| 14.61 TiB | 37.56 M | 2014 Jan |
| 17.87 TiB | 44.19 M | 2015 Jan |
| 23.87 TiB | 55.59 M | 2016 Jan |
| 27.80 TiB | 64.55 M | 2017 Jan |
| 37.35 TiB | 73.95 M | 2018 Jan |
| 60.38 TiB | 104.04 M | 2019 Jan |
| 62.40 TiB | 107.49 M | 2020 Jan |
| 100.71 TiB | 171.86 M | 2021 Jan |
| 220.00 TiB[64] | 279.16 M | 2023 Aug |
| 274.49 TiB | 400.24 M | 2024 Feb |
In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography.[65] Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008 AOL, which no longer offered Usenet access, and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; they had more than 50% of the U.S. ISP market share.[66] On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009.[67]
AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing.[68] The AOL community had a tremendous role in popularizing Usenet some 11 years earlier.[69]
In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009.[70][71] JANET announced it would discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative.[72] Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative.[73]
Primary reasons cited for the discontinuance of Usenet service by general ISPs include the decline in volume of actual readers due to competition from blogs, along with cost and liability concerns of increasing proportion of traffic devoted to file-sharing and spam on unused or discontinued groups.[74][75]
Some ISPs did not cite pressure from Cuomo's campaign against child pornography among their reasons for dropping Usenet feeds.[76] ISPs Cox and Atlantic Broadband resisted the 2008 trend, but both eventually dropped their respective Usenet feeds in 2010.[77][78][79]
Archives
Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982.[80][81] Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."[82] Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?"[83] by stating that he was, "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18."[84] In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines."[85]
In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of the University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain.[86] The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem."[87] Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself." Also included was C code for the internal data structure of the system.[88]
The desire to have a full text search index of archived news articles is not new either, one such request having been made in April 1991 by Alex Martelli who sought to "build some sort of keyword index for [the news archive]."[89] In early May, Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources."[90]
The Alt Sex Stories Text Repository (ASSTR) site archived and indexed erotic and pornographic stories posted to the Usenet group alt.sex.stories.[91]
The archiving of Usenet has led to fears of loss of privacy.[92] An archive simplifies ways to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.[93]
Archives by Deja News and Google Groups
Web-based archiving of Usenet posts began in March 1995 at Deja News with a very large, searchable database. In February 2001, this database was acquired by Google;[94] Google had begun archiving Usenet posts for itself starting in the second week of August 2000.
Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others,[95] and were originally archived by Henry Spencer at the University of Toronto's Zoology department.[96] The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series[97] and Jürgen Christoffel from GMD.[98]
Google has been criticized by Vice and Wired contributors as well as former employees for its stewardship of the archive and for breaking its search functionality.[99][100][101]
As of January 2024, Google Groups carries a header notice, saying:
Effective from 22 February 2024, Google Groups will no longer support new Usenet content. Posting and subscribing will be disallowed, and new content from Usenet peers will not appear. Viewing and searching of historical data will still be supported as it is done today.
An explanatory page adds:[102]
In addition, Google’s Network News Transfer Protocol (NNTP) server and associated peering will no longer be available, meaning Google will not support serving new Usenet content or exchanging content with other NNTP servers. This change will not impact any non-Usenet content on Google Groups, including all user and organization-created groups.
See also
- Usenet newsreaders
- Usenet/newsgroup service providers
- Usenet history
Usenet administrators
Usenet had administrators on a server-by-server basis, not as a whole. A few notable administrators:
- Chris Lewis
- Gene Spafford, a.k.a. Spaf
- Henry Spencer
- Kai Puolamäki
- Mary Ann Horton
References
- ^ a b Hosch, William L.; Gregersen, Erik (May 17, 2021). "USENET". Encyclopædia Britannica. Archived from the original on March 25, 2023. Retrieved May 2, 2023.
- ^ a b From Usenet to CoWebs: interacting with social information spaces, Christopher Lueg, Danyel Fisher, Springer (2003), ISBN 1-85233-532-7, ISBN 978-1-85233-532-8
- ^ a b The jargon file v4.4.7 Archived January 5, 2016, at the Wayback Machine, Jargon File Archive.
- ^ Chapter 3 - The Social Forces Behind The Development of Usenet Archived August 4, 2016, at the Wayback Machine, Netizens Netbook by Ronda Hauben and Michael Hauben.
- ^ "USENET Newsgroup Terms – SPAM". Archived from the original on September 15, 2012.
- ^ Pre-Internet, Usenet access needing "just local telephone service" in most larger towns depended on local dial-up FidoNet nodes operated free of charge by hobbyist sysops (carrying FidoNet echomail variants of, or gateways to, the Usenet news hierarchy; this is virtual Usenet or newsgroup access, not true Usenet). Participating sysops typically carried 6–30 Usenet newsgroups each and would often add another on request. If a desired newsgroup was not available locally, a user would need to dial another city to download the desired news and upload their own posts. In all cases it was desirable to hang up as soon as possible and read/write offline, so "newsreader" software was commonly used to automate the process. Fidonet, bbscorner.com Archived February 7, 2022, at the Wayback Machine
fidonet.org, Randy_Bush.txt Archived December 3, 2003, at the Wayback Machine
- ^ a b Bonnett, Cara (May 17, 2010). "Duke to shut Usenet server, home to the first electronic newsgroups". Duke University. Archived from the original on June 3, 2020. Retrieved June 3, 2020.
- ^ a b c d e Emerson, Sandra L. (October 1983). "Usenet / A Bulletin Board for Unix Users". BYTE. pp. 219–236. Retrieved January 31, 2015.
- ^ "Invitation to a General Access UNIX Network Archived September 24, 2012, at the Wayback Machine", James Ellis and Tom Truscott, in First Official Announcement of USENET, NewsDemon (K&L Technologies, Inc), 1979
- ^ Lehnert, Wendy G.; Kopec, Richard (2007). Web 101. Addison Wesley. p. 291. ISBN 9780321424679
- ^ "Store And Forward Communication: UUCP and FidoNet". Carnegie Mellon School of Computer Science. Archived from the original on June 30, 2012.
- ^ Kozierok, Charles M. (2005). The TCP/IP guide: a comprehensive, illustrated Internet protocols reference. No Starch Press. p. 1401. ISBN 978-1-59327-047-6
- ^ One way to virtually read and participate in Usenet newsgroups using an ordinary Internet browser is to do an internet search on a known newsgroup, such as the high volume forum: "sci.physics". Retrieved April 28, 2019
- ^ "Best Usenet clients". UsenetReviewz. Archived from the original on August 8, 2020. Retrieved August 13, 2020.
- ^ "Open Directory Usenet Clients". Dmoz.org. October 9, 2008. Archived from the original on July 30, 2012. Retrieved December 14, 2010.
- ^ Jain, Dominik (July 30, 2006). "OE-QuoteFix Description". Archived from the original on September 21, 2012. Retrieved June 4, 2007.
- ^ "Improve-Usenet". October 13, 2008. Archived from the original on July 13, 2012.
- ^ "Improve-Usenet Comments". October 13, 2008. Archived from the original on April 26, 2008. Retrieved June 29, 2009.
- ^ "Google Groups". Archived from the original on May 25, 2012. Retrieved December 14, 2010.
- ^ "News: links to Google Groups". Archived from the original on July 12, 2012.
- ^ "Who can force the moderators to obey the group charter?". Big-8.org. Archived from the original on August 4, 2012. Retrieved December 14, 2010.
- ^ "How does a group change moderators?". Big-8.org. Archived from the original on July 19, 2012. Retrieved December 14, 2010.
- ^ "Early Usenet Newsgroup Hierarchies". Livinginternet.com. October 25, 1990. Archived from the original on September 21, 2012. Retrieved December 14, 2010.
- ^ "How to Create a New Big-8 Newsgroup". Big-8.org. July 7, 2010. Archived from the original on July 22, 2012. Retrieved December 14, 2010.
- ^ Donath, Judith (May 23, 2014). The Social Machine: Designs for Living Online. MIT Press. ISBN 9780262027014. Archived from the original on January 17, 2023. Retrieved November 7, 2020.
Today, Usenet still exists, but it is an unsociable morass of spam, porn, and pirated software
- ^ "Unraveling the Internet's oldest and weirdest mystery". The Kernel. March 22, 2015. Archived from the original on May 18, 2015. Retrieved May 7, 2015.
Groups filled with spam, massive fights took place against spammers and over what to do about the spam. People stopped using their email addresses in messages to avoid harvesting. People left the net.
- ^ "The American Way of Spam". Archived from the original on May 18, 2015. Retrieved May 7, 2015.
...many of the newsgroups have since been overrun with junk messages.
- ^ Microsoft Responds to the Evolution of Communities Archived September 18, 2012, at the Wayback Machine, Announcement, undated. "Microsoft hitting 'unsubscribe' on newsgroups". CNET. May 4, 2010. Archived from the original on July 12, 2012.
- ^ Gregersen, Erik; Hosch, William L. (February 17, 2022). "newsgroup". Encyclopedia Britannica. Archived from the original on April 28, 2023. Retrieved April 28, 2023.
- ^ "Usenet storage is more than 60 petabytes (60000 terabytes)". binsearch.info. Archived from the original on May 21, 2020. Retrieved October 20, 2020.
- ^ "usenet backup (uBackup)". wikiHow. Wikihow.com. Archived from the original on February 25, 2020. Retrieved October 20, 2020.
- ^ "Eweka 4446 Days Retention". Eweka.nl. Archived from the original on October 20, 2020. Retrieved October 20, 2020.
- ^ "Digital Millennium Copyright Act". Archived from the original on September 10, 2012.
- ^ "Cancel Messages FAQ". Archived from the original on February 15, 2008. Retrieved June 29, 2009.
...Until authenticated cancels catch on, there are no options to avoid forged cancels and allow unforged ones...
- ^ Microsoft knowledgebase article stating that many servers ignore cancel messages "Support.microsoft.com". Archived from the original on July 19, 2012.
- ^ "Microsoft Word - Surmacz.doc" (PDF). Archived (PDF) from the original on May 21, 2013. Retrieved December 14, 2010.
- ^ ...every part of a Usenet post may be forged apart from the left most portion of the "Path:" header... "By-users.co.uk". Archived from the original on July 23, 2012.
- ^ "Better living through forgery". Newsgroup: news.admin.misc. June 10, 1995. Usenet: StUPidfuk01@uunet.uu.net. Archived from the original on July 24, 2012. Retrieved December 5, 2014.
- ^ "Giganews Privacy Policy". Giganews.com. Archived from the original on July 31, 2012. Retrieved December 14, 2010.
- ^ "Privacy Policy UsenetServer". Usenetserver.com. Archived from the original on June 4, 2020. Retrieved July 8, 2020.
- ^ "Logging Policy". Aioe.org. June 9, 2005. Archived from the original on July 8, 2012. Retrieved December 14, 2010.
- ^ "Quux.org". Archived from the original on July 14, 2012. Retrieved December 14, 2010.
- ^ LaQuey, Tracy (1990). The User's directory of computer networks. Digital Press. p. 386. ISBN 978-1555580476
- ^ "And So It Begins". Archived from the original on July 15, 2010. Retrieved September 15, 2014.
- ^ "History of the Internet, Chapter Three: History of Electronic Mail". Archived from the original on August 12, 2014. Retrieved September 15, 2014.
- ^ Hauben, Michael and Hauben, Ronda. "Netizens: On the History and Impact of Usenet and the Internet, On the Early Days of Usenet: The Roots of the Cooperative Online Culture Archived June 10, 2015, at the Wayback Machine". First Monday, vol. 3, no. 8 (August 3, 1998).
- ^ Haddadi, H. (2006). "Network Traffic Inference Using Sampled Statistics Archived November 17, 2015, at the Wayback Machine". University College London.
- ^ Horton, Mark (December 11, 1990). "Arachnet". Archived from the original on September 21, 2012. Retrieved June 4, 2007.
- ^ Huston, Geoff (1999). ISP survival guide: strategies for running a competitive ISP. Wiley. p. 439.
- ^ "Unix/Linux news servers". Newsreaders.com. Archived from the original on September 5, 2012. Retrieved December 14, 2010.
- ^ Tim Berners-Lee (August 6, 1991). "WorldWideWeb: Summary". Newsgroup: alt.hypertext. Usenet: 6487@cernvax.cern.ch. Archived from the original on June 2, 2013. Retrieved June 4, 2007.
- ^ Torvalds, Linus. "What would you like to see most in minix?". Newsgroup: comp.os.minix. Usenet: 1991Aug25.205708.9541@klaava.Helsinki.FI. Archived from the original on October 27, 2006. Retrieved September 9, 2006.
- ^ Marc Andreessen (March 15, 1993). "NCSA Mosaic for X 0.10 available". Newsgroup: comp.windows.x. Usenet: MARCA.93Mar14225600@wintermute.ncsa.uiuc.edu. Archived from the original on June 16, 2006. Retrieved June 4, 2007.
- ^ Kaltenbach, Susan (December 2000). "The Evolution of the Online Discourse Community" (PDF). Archived (PDF) from the original on July 14, 2011. Retrieved May 26, 2010.
Verb Doubling: doubling a verb may change its semantics; Soundalike Slang: punning jargon; The -P convention: a LISPy way to form questions; Overgeneralization: standard abuses of grammar; Spoken Inarticulations: sighing and <*sigh*>ing; Anthropomorphization: online components were named "homunculi," "daemons," etc., and there were also "confused" programs; Comparatives: standard comparatives for design quality
- ^ Campbell, K. K. (October 1, 1994). "Chatting With Martha Siegel of the Internet's Infamous Canter & Siegel". Electronic Frontier Foundation. Archived from the original on November 25, 2007. Retrieved September 24, 2010.
- ^ Segan, Sascha (July 31, 2008). "R.I.P Usenet: 1980-2008". PC Magazine. Archived from the original on September 9, 2012. Retrieved November 30, 2022.
- ^ Cara Bonnett (May 17, 2010). "A Piece of Internet History". Duke Today. Archived from the original on July 11, 2012. Retrieved May 24, 2010.
- ^ Andrew Orlowski (May 20, 2010). "Usenet's home shuts down today". The Register. Archived from the original on September 21, 2012. Retrieved May 24, 2010.
- ^ "Reports of Usenet's Death Are Greatly Exaggerated". TechCrunch. August 1, 2008. Archived from the original on July 16, 2012. Retrieved on May 8, 2011.
- ^ "What Is Usenet". Top 10 Usenet. Archived from the original on February 8, 2021. Retrieved February 8, 2021.
- ^ a b "Top 100 text newsgroups by postings". NewsAdmin. Archived from the original on October 16, 2006. Retrieved December 14, 2010.
- ^ "Top 100 binary newsgroups by postings". NewsAdmin. Archived from the original on October 16, 2006. Retrieved December 14, 2010.
- ^ "Usenet Piracy". IP Arrow. Archived from the original on February 24, 2024. Retrieved February 24, 2024.
- ^ "Usenet Newsgroup Feed Size - NewsDemon Usenet Newsgroup Access".
- ^ Rosencrance, Lisa. "3 top ISPs to block access to sources of child porn". Computer World. June 8, 2008. Archived from the original on July 22, 2012. Retrieved on April 30, 2009.
- ^ DeJean, David. "Usenet: Not Dead Yet". PC World. October 7, 2008. Archived from the original on September 21, 2012. Retrieved September 17, 2017.
- ^ "ATT Announces Discontinuation of USENET Newsgroup Services". NewsDemon. June 9, 2009. Archived from the original on September 21, 2012. Retrieved June 18, 2009.
- ^ Hu, Jim. "AOL shutting down newsgroups". CNet. January 25, 2005. Archived from the original on July 23, 2012. Retrieved on May 1, 2009.
- ^ "AOL Pulls Plug on Newsgroup Service". Betanews.com. January 25, 2005. Archived from the original on July 22, 2012. Retrieved December 14, 2010.
- ^ Bode, Karl. "Verizon To Discontinue Newsgroups September 30". DSLReports. August 31, 2009. Archived from the original on July 31, 2012. Retrieved on October 24, 2009.
- ^ "Verizon Newsgroup Service Has Been Discontinued". Verizon Central Support. Archived from the original on September 21, 2012. Retrieved on October 24, 2009.
- ^ Ukerna.ac.uk[dead link]
- ^ "Microsoft Responds to the Evolution of Communities". microsoft.com. Archived from the original on June 22, 2003. Retrieved September 1, 2011.
- ^ "AOL shutting down newsgroups". cnet.com/. January 25, 2005. Archived from the original on August 29, 2008. Retrieved September 1, 2011.
- ^ "Verizon To Discontinue Newsgroups". dslreports.com. August 31, 2009. Archived from the original on March 6, 2012. Retrieved September 1, 2011.
- ^ "The Comcast Newsgroups Service Discontinued". dslreports.com. September 16, 2008. Archived from the original on December 6, 2014. Retrieved December 5, 2014.
- ^ "Cox to Drop Free Usenet Service June 30th". Zeropaid.com. April 22, 2010. Archived from the original on September 21, 2012. Retrieved September 3, 2011.
- ^ "Cox Discontinues Usenet, Starting In June". Geeknet, Inc. April 21, 2010. Archived from the original on September 21, 2012. Retrieved September 1, 2011.
- ^ "Cox Communications and Atlantic Broadband Discontinue Usenet Access". thundernews.com. April 27, 2010. Archived from the original on September 12, 2012. Retrieved September 1, 2011.
- ^ "How to obtain back news items". Archived from the original on July 10, 2012. Retrieved December 14, 2010.
- ^ "How to obtain back news items (second posting)". Newsgroup: net.general. December 21, 1982. Archived from the original on January 29, 2011. Retrieved December 5, 2014.
message-id:bnews.spanky.138
- ^ "Distributed archiving of netnews". Archived from the original on July 8, 2012. Retrieved December 14, 2010.
- ^ "Archive of netnews". Archived from the original on July 24, 2012. Retrieved December 14, 2010.
- ^ "Re: Archive of netnews". Archived from the original on July 15, 2012. Retrieved December 14, 2010.
- ^ "Automatic access to archives". Archived from the original on July 12, 2012. Retrieved December 14, 2010.
- ^ "keepnews – A Usenet news archival system". Archived from the original on July 17, 2012. Retrieved December 14, 2010.
- ^ "YA News Archiver". Archived from the original on July 9, 2012. Retrieved December 14, 2010.
- ^ "RFC usenet article archive program with keyword lookup". Archived from the original on July 15, 2012. Retrieved December 14, 2010.
- ^ "Looking for fulltext indexing software for archived news". Archived from the original on September 21, 2012. Retrieved December 14, 2010.
- ^ "Summary: search for fulltext indexing software for archived news". Archived from the original on July 8, 2012. Retrieved December 14, 2010.
- ^ "Asstr.org". Archived from the original on August 18, 2014. Retrieved August 13, 2014.
- ^ Segan, Sascha (July 31, 2008). "R.I.P Usenet: 1980–2008 – Usenet's Decline – Columns by PC Magazine". Pcmag.com. Archived from the original on September 9, 2012. Retrieved December 14, 2010.
- ^ Strawbridge, Matthew (2006). Netiquette: Internet Etiquette in the Age of the Blog. Software Reference. p. 53. ISBN 978-0955461408
- ^ Cullen, Drew (February 12, 2001). "Google saves Deja.com Usenet service". The Register. Archived from the original on September 21, 2012.
- ^ Wiseman, David. "Magi's NetNews Archive Involvement" Archived February 9, 2005, at archive.today, csd.uwo.ca.
- ^ Mieszkowski, Katharine. "The Geeks Who Saved Usenet". archive.salon.com (January 7, 2002). Archived from the original on July 10, 2012.
- ^ Feldman, Ian. "Usenet on a CD-ROM, no longer a fable". TidBITS (February 10, 1992). Archived from the original on July 7, 2012.
- ^ "Google Groups Archive Information". Archived from the original on July 9, 2012. (December 21, 2001)
- ^ Poulsen, Kevin (October 7, 2009). "Google's Abandoned Library of 700 Million Titles". Wired. Archived from the original on March 9, 2017. Retrieved March 12, 2017.
- ^ Braga, Matthew (February 13, 2015). "Google, a Search Company, Has Made Its Internet Archive Impossible to Search". Motherboard. Archived from the original on September 5, 2015. Retrieved August 30, 2015.
- ^ Edwards, Douglas (2011). I'm Feeling Lucky: The Confessions of Google Employee Number 59. Houghton Mifflin Harcourt. pp. 209–213. ISBN 978-0-547-41699-1.
- ^ "Google Groups ending support for Usenet - Google Groups Help". Google Support.
Further reading
- Bruce Jones (July 1, 1997). "USENET History mailing list archive covering 1990-1997". Archived from the original on May 7, 2019.
- Hauben, Michael; Hauben, Ronda; Truscott, Tom (April 27, 1997). Netizens: On the History and Impact of Usenet and the Internet (Perspectives). Wiley-IEEE Computer Society P. ISBN 978-0-8186-7706-9. Archived from the original on June 10, 2015. Retrieved June 6, 2015.
- Bryan Pfaffenberger (December 31, 1994). The USENET Book: Finding, Using, and Surviving Newsgroups on the Internet. Addison Wesley. ISBN 978-0-201-40978-9.
- Kate Gregory; Jim Mann; Tim Parker & Noel Estabrook (June 1995). Using Usenet Newsgroups. Que. ISBN 978-0-7897-0134-3.
- Mark Harrison (July 1995). The USENET Handbook (Nutshell Handbook). O'Reilly. ISBN 978-1-56592-101-6.
- Spencer, Henry; David Lawrence (January 1998). Managing Usenet. O'Reilly. ISBN 978-1-56592-198-6.
- Don Rittner (June 1997). Rittner's Field Guide to Usenet. MNS Publishing. ISBN 978-0-937666-50-0.
- Konstan, J.; Miller, B.; Maltz, D.; Herlocker, J.; Gordon, L.; Riedl, J. (March 1997). "GroupLens: applying collaborative filtering to Usenet news". Communications of the ACM. 40 (3): 77–87. CiteSeerX 10.1.1.377.1605. doi:10.1145/245108.245126. S2CID 15008577.
- Miller, B.; Riedl, J.; Konstan, J. (January 1997). Experiences with GroupLens: Making Usenet useful again (PDF). Proceedings of the 1997 Usenix Winter Technical Conference. Archived (PDF) from the original on March 6, 2006. Retrieved December 13, 2005.
- "20 Year Usenet Timeline". Archived from the original on January 5, 2007. Retrieved June 27, 2006.
- Schwartz, Randal (June 15, 2006). "Web 2.0, Meet Usenet 1.0". Linux Magazine. Archived from the original on February 16, 2007. Retrieved September 3, 2025.
- Kleiner, Dmytri; Wyrick, Brian (January 29, 2007). "InfoEnclosure 2.0". Archived from the original on October 25, 2011. Retrieved June 4, 2007.
External links
- IETF working group USEFOR (USEnet article FORmat), tools.ietf.org
- A-News Archive: Early Usenet news articles: 1981 to 1982, quux.org
- "Netscan". Archived from the original on June 21, 2007. Social Accounting Reporting Tool
- Living Internet A comprehensive history of the Internet, including Usenet. livinginternet.com
- Usenet Glossary A comprehensive list of Usenet terminology
- Usenet free servers A list of free providers of Usenet server access
Overview
Definition and Core Principles
Usenet, a portmanteau of "users' network," is a worldwide distributed discussion system comprising hierarchically organized collections of newsgroups for exchanging threaded messages and files among participants.[10] Initially implemented over the Unix-to-Unix Copy (UUCP) protocol on dial-up connections, it enabled asynchronous communication across interconnected Unix systems, offering an accessible means of posting and retrieving articles beyond the scope of ARPANET's mailing lists.[11] Articles, the fundamental units of content, carry headers specifying subject, author, date, and references to prior messages, allowing conversation threads to form that users can navigate chronologically or topically.[12]

At its core, Usenet embodies decentralization through a federated model of independent servers that exchange articles via peer-to-peer newsfeeds, eschewing any central authority or single point of control over content dissemination.[13] This propagation mechanism, in which servers relay incoming articles to their configured peers, ensures broad replication and resilience against individual server failures, as no proprietary database or host dictates availability or moderation universally.[14]

Newsgroups adhere to a hierarchical naming convention, such as comp.sys.mac for Macintosh computing topics, which partitions discussions from broad categories (e.g., comp for computers) into subtopics, promoting topical focus while allowing alternative hierarchies for specialized communities.[11] This structure contrasts with centralized client-server paradigms, such as web-based forums, where a single authority manages persistence and access; in Usenet, article visibility depends on feed policies and retention durations across servers, yielding potential inconsistencies such as delayed propagation or selective omission by operators, yet fostering robustness through redundancy.[13] Threading relies on explicit reference headers linking replies to their antecedents, enabling readers to reconstruct discussions without server-side indexing, a principle that underscores Usenet's emphasis on self-organizing, user-driven discourse over administered curation.[12]

Key Components and Decentralized Nature
Usenet's core components comprise news servers responsible for storing articles and forwarding them across the network, newsreaders that provide user interfaces for accessing and posting to newsgroups, and news feeds that transfer articles between interconnected servers.[15][16] News servers operate independently, maintaining local repositories typically in directories like /var/spool/news, while newsreaders connect via protocols such as NNTP to retrieve content without any direct server-to-server dependency for user access.[15]
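The newsreader-server dialogue over NNTP is line-oriented: the client issues commands such as GROUP and parses single-line status replies. A minimal sketch of parsing a 211 GROUP response (format per RFC 3977; the group's article numbers here are invented):

```python
def parse_group_response(line: str) -> dict:
    """Parse an NNTP GROUP status line: '211 count low high group'."""
    code, count, low, high, group = line.split()
    if code != "211":
        raise ValueError(f"unexpected NNTP response code {code}")
    return {"group": group, "count": int(count),
            "low": int(low), "high": int(high)}

# A reply a server might send after 'GROUP comp.sys.mac' (numbers invented):
info = parse_group_response("211 1234 3000234 3002322 comp.sys.mac")
print(info["group"], info["count"])
```

The low/high water marks tell the client which article numbers it may fetch; real clients loop over that range with ARTICLE or HEAD commands.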
The decentralized nature of Usenet arises from its peer-to-peer propagation model, in which servers selectively subscribe to specific newsgroups and exchange articles through configured feeds rather than relying on a central hub.[15][17] The lack of a global authority or unified index means that article availability varies, with servers forming partial mirrors of the full corpus and users pulling content from a local server that may not hold all posts.[15] Feed policies, defined in server configuration files (such as the sys file of early implementations), dictate which articles are pushed to downstream peers, allowing operators to control volume and scope autonomously.[15]
This architecture has supported over 100,000 newsgroups historically, fostering resilience and autonomy but introducing challenges like inconsistent propagation delays—typically resolving within hours as articles disseminate via flooding algorithms—and variable retention periods determined by individual server storage policies.[18][16] Propagation relies on queued or immediate feeds, with delays stemming from network topology and operator configurations rather than centralized scheduling.[15]
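The flooding exchange described above can be sketched as a toy simulation: each server offers an article onward to its peers, and a peer accepts it only if the Message-ID is new to it. Topology and identifiers are invented for illustration:

```python
from collections import deque

# Toy flood-fill propagation with duplicate suppression keyed on Message-ID.
# The peering topology below is invented for illustration.
peers = {
    "alpha": ["beta", "gamma"],
    "beta":  ["alpha", "delta"],
    "gamma": ["alpha", "delta"],
    "delta": ["beta", "gamma"],
}
seen = {server: set() for server in peers}  # per-server Message-ID history

def inject(origin: str, message_id: str) -> int:
    """Flood an article from `origin`; return how many servers hold it."""
    queue = deque([origin])
    seen[origin].add(message_id)
    while queue:
        server = queue.popleft()
        for peer in peers[server]:
            if message_id not in seen[peer]:   # duplicate check (cf. IHAVE)
                seen[peer].add(message_id)     # store, then forward onward
                queue.append(peer)
    return sum(message_id in history for history in seen.values())

print(inject("alpha", "<example-1@host.invalid>"))
```

Because acceptance is keyed on the globally unique Message-ID, the article reaches every reachable server exactly once regardless of how many redundant peering paths exist, which is what makes the network resilient to individual link or server failures.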
Technical Architecture
Protocols and News Propagation
The Network News Transfer Protocol (NNTP), defined in RFC 3977, serves as the primary application-layer protocol for distributing Usenet articles between news servers and for client-server interactions.[19] Originally specified in RFC 977 in March 1986, NNTP enables efficient transmission over reliable full-duplex channels, supporting commands for posting articles, retrieving lists of newsgroups, and transferring articles via modes like IHAVE and SEND. Server-to-server feeds typically use NNTP over IP connections in modern implementations, replacing earlier UUCP-based batch transfers with real-time propagation.[20]

Usenet employs a flood-fill propagation mechanism in which articles are injected into the network at a local server and then disseminated to peering servers. Upon receipt of an article, identified uniquely by its Message-ID header, a server checks for duplicates before forwarding it to its peers, ensuring eventual consistency across the decentralized network without central coordination.[21] This process relies on server peering agreements, with articles pushed via NNTP feeds; propagation delays depend on network topology and peering density, often completing globally within hours.[22] Retention periods vary by server policy and content type: text articles are typically held for days to weeks, while binary content on commercial providers can persist for years, with some offering more than 16 years (approximately 5,800+ days) as of 2025 to support archival access.[23]

Usenet articles conform to a structured format outlined in RFC 1036, comprising headers and a body separated by a blank line. Essential headers include From (author), Newsgroups (target hierarchy), Subject, Date, Message-ID (a unique identifier of the form unique@domain), and Path (a propagation trace with site names separated by '!').
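A minimal sketch of this header layout, parsing an invented RFC 1036-style article and recovering its direct parent (the last Message-ID in References) and its propagation path (Path split on '!'):

```python
def parse_article(raw: str):
    """Split an RFC 1036-style article into a header dict and a body."""
    head, _, body = raw.partition("\n\n")    # blank line separates the two
    headers = {}
    for line in head.splitlines():
        name, _, value = line.partition(": ")
        headers[name] = value
    return headers, body

# An invented reply article; all names and IDs are for illustration only.
article = parse_article(
    "From: alice@example.invalid\n"
    "Newsgroups: comp.sys.mac\n"
    "Subject: Re: booting question\n"
    "Message-ID: <2@example.invalid>\n"
    "References: <1@example.invalid>\n"
    "Path: siteb!sitea!alice\n\n"
    "It works after a reboot.\n"
)[0]

parent = article["References"].split()[-1]   # last ID is the direct parent
route = article["Path"].split("!")           # propagation trace, newest first
print(parent, route[0])
```

Newsreaders apply exactly this References walk across a whole newsgroup to rebuild discussion trees without any server-side index.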
Threading is maintained through References and In-Reply-To headers, which list the Message-IDs of parent messages, allowing newsreaders to reconstruct discussions hierarchically.[24][25]

To mitigate spam, Usenet supports cancellation control messages, which instruct servers to remove specified articles by referencing their Message-ID. These are processed locally if authenticated, originally via approved sender lists and later via cancel locks (e.g., cryptographic hashes), and propagate like regular articles, though adoption varies: some servers ignore unauthenticated cancels to prevent abuse. Early spam incidents, such as the 1994 Canter & Siegel "green card" posting, demonstrated cancellation's role in rapid content removal, though incomplete propagation can leave remnants on distant servers.[26][27]

Newsgroups: Structure and Moderation
Newsgroups in Usenet are organized into hierarchical categories prefixed by topical domains, facilitating structured navigation across diverse discussions. The primary hierarchies, known as the Big Eight—comprising comp. (computing), humanities. (arts and literature), misc. (miscellaneous), news. (Usenet administration), rec. (recreation), sci. (science), soc. (social issues), and talk. (debate)—are managed by a volunteer Big-8 Management Board that oversees creation through formal proposals and community voting processes to ensure relevance and sustainability.[28][29] In contrast, alternative hierarchies such as alt. permit vote-free creation via control messages issued by any user, enabling rapid proliferation without centralized approval and reflecting Usenet's decentralized ethos, though this often resulted in fragmented or short-lived groups.[30] By the late 1990s, Usenet encompassed over 100,000 newsgroups across these and other hierarchies, driven by exponential growth in user participation, though active groups numbered in the tens of thousands.[31] Moderation operates on a spectrum, with the majority of newsgroups unmoderated, allowing direct propagation of posts from users to servers without intermediary review, which promotes immediacy and unrestricted exchange but exposes groups to spam, off-topic content, and abuse.[32] Moderated newsgroups, such as comp.risks (focused on computing safety incidents), route submissions via email to designated moderators who evaluate and approve posts for relevance and quality before propagation, aiming to maintain focused discourse and filter low-value contributions.[33] This approach yields benefits like reduced noise and higher signal-to-noise ratios, as evidenced by sustained participation in long-standing moderated groups, but introduces drawbacks including processing delays—sometimes days or weeks—and risks of moderator bias or overreach, potentially suppressing dissenting views under the guise of quality 
control.[34] Empirical observations from Usenet operators indicate that moderation demands ongoing volunteer effort, with bottlenecks emerging in high-volume groups, while unmoderated forums rely on community self-policing through norms like follow-ups and critiques.[35] Newsgroup lifecycle governance occurs via the control pseudo-newsgroup, where control messages propose creations, renamings, or deletions, processed by server software to issue commands like "newgroup" or "rmgroup." For Big Eight hierarchies, these proposals undergo board-vetted voting requiring majority support from discussants, enforcing communal consensus without mandatory server compliance, as propagation depends on individual site policies.[36] Alternative hierarchies bypass such votes, allowing forking through unilateral control messages, which proliferated groups but also sparked "newsgroup wars" over legitimacy and carriage disputes among backbone providers. This model underscores Usenet's lack of central authority, with site administrators retaining autonomy to carry or reject groups based on local resources and community input, preventing any single entity from dictating global structure.[37]

Access Tools: Newsreaders and Servers
Usenet access requires specialized client software known as newsreaders, which connect to servers via the Network News Transfer Protocol (NNTP) to retrieve and post articles.[38] Command-line newsreaders such as tin and nn provide efficient, text-based interfaces suitable for Unix-like systems, enabling local or remote reading of newsgroups with features like threaded article navigation and header caching for speed.[39] [40] Graphical user interface (GUI) newsreaders, including integrations in email clients like Mozilla Thunderbird, offer point-and-click usability for subscribing to newsgroups, viewing threads, and composing posts, making them accessible to users less familiar with terminal commands.[41] A key feature distinguishing traditional newsreaders from web-based alternatives is the implementation of scoring filters, which allow users to assign numerical scores to articles based on criteria such as author, subject keywords, or posting patterns, thereby personalizing feeds by promoting or hiding content algorithmically.[42] This enables power users to manage high-volume discussions effectively, reducing noise in unmoderated groups, whereas web interfaces often prioritize simplicity over such granular control, potentially limiting customization for advanced filtering needs.[43] Usenet servers store and propagate articles, with access historically provided through internet service providers (ISPs) offering free NNTP feeds, though retention periods were typically limited to days or weeks.[44] Following the 2000s, many ISPs discontinued complimentary Usenet services due to escalating bandwidth and storage costs driven by binary content proliferation, prompting a shift toward commercial providers.[44] Paid servers, such as those from Newshosting, maintain extensive binary retention exceeding 6,200 days as of October 2025, ensuring availability of historical archives via subscription-based access with enhanced completion rates and speeds.[45] Users connect 
newsreaders to these servers using credentials, bypassing ISP limitations for reliable, high-retention Usenet interaction.[44]

Handling Binary and Multimedia Content
Usenet, designed primarily for text-based articles, requires binary and multimedia files to be encoded into text format for transmission via the NNTP protocol. Early methods included uuencode, which converts binary data to printable ASCII characters but introduces approximately 35% overhead, since every 3 bytes of data expand to 4 printable characters plus per-line framing.[46] This encoding ensures compatibility with text-only servers, though it increases transmission size and processing demands.[47] The yEnc scheme, introduced in 2001, became the dominant encoding for binaries by offering superior efficiency, with encoded data expanding to only 1–2% above the original binary size.[46] yEnc achieves this through minimal escaping and CRC-32 checksums for error detection, reducing bandwidth usage and decoding time compared to uuencode or MIME base64, which adds roughly 33% overhead.[48] Large files are typically split into multiple articles, each encoded separately and posted sequentially in binary newsgroups such as those under the alt.binaries hierarchy.[48] To facilitate retrieval of multipart binaries scattered across articles, NZB index files—XML documents containing Message-ID pointers and metadata—enable newsreaders to automate downloading and reassembly.[49] Users generate NZBs from indexers that scan newsgroup headers, allowing efficient fetching without manual header downloads.[50] Retention policies differ markedly between text and binary content due to storage and bandwidth constraints.
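The encoding overheads cited here can be checked directly. Below is a minimal sketch of yEnc's core transform only—real yEnc also wraps lines at 128 characters and adds =ybegin/=yend framing with a CRC-32 trailer—compared against base64; the payload is purely illustrative:

```python
import base64

def yenc_encode(data: bytes) -> bytes:
    """Core yEnc transform: add 42 mod 256 to each byte, then escape the
    four critical output values (NUL, LF, CR, '=') with '=' plus offset 64.
    Line wrapping, headers, and checksums are omitted in this sketch."""
    critical = {0x00, 0x0A, 0x0D, 0x3D}
    out = bytearray()
    for b in data:
        c = (b + 42) & 0xFF
        if c in critical:
            out.append(0x3D)             # '=' escape marker
            out.append((c + 64) & 0xFF)  # escaped byte, shifted by 64
        else:
            out.append(c)
    return bytes(out)

# 16 KiB payload covering every byte value uniformly.
payload = bytes(range(256)) * 64
yenc_len = len(yenc_encode(payload))
b64_len = len(base64.b64encode(payload))
print(f"yEnc overhead:   {100 * (yenc_len / len(payload) - 1):.2f}%")   # ~1.56%
print(f"base64 overhead: {100 * (b64_len / len(payload) - 1):.2f}%")   # ~33.35%
```

Because only 4 of 256 possible output values need escaping, byte-uniform data expands by just 4/256 ≈ 1.6%, which is where the 1–2% figure comes from; base64's fixed 3-bytes-to-4-characters mapping yields its constant ~33%.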
Text articles, being smaller, are often retained for thousands of days on many servers, while binaries demand more resources, leading providers to prioritize shorter or tiered retention.[51] In 2025, premium providers maintain binary retention exceeding 5,000 days (over 13 years), supported by extensive mirroring across backbones.[52] Free or ISP servers typically offer days to weeks for binaries versus longer for text, reflecting cost-based trade-offs.[53] The high volume of binary traffic prompted server policies strictly segregating content: binaries are confined to designated groups to prevent them from flooding text discussions, as disguised binary posts inflate article sizes and enable spam proliferation by evading filters.[54] This separation mitigates bandwidth overload, with many operators enforcing rules or automated removal of off-topic binaries in text hierarchies to preserve usability.[51]

Historical Development
Origins in ARPANET-Era Experimentation (1979–1985)
Usenet originated as an experimental distributed discussion system designed to circumvent the bandwidth and policy restrictions of the ARPANET, which prohibited non-research communications and favored dedicated leased lines unsuitable for many academic sites. In late 1979, graduate students Tom Truscott and Jim Ellis at Duke University conceived the idea of leveraging the Unix-to-Unix Copy (UUCP) protocol—a store-and-forward mechanism using dial-up phone lines—to exchange files and messages between Unix systems, enabling asynchronous, low-cost information sharing among universities lacking ARPANET access.[55][56][57] The initial implementation involved shell scripts written by Steve Bellovin at the University of North Carolina (UNC), which connected Duke and UNC for the first exchanges; the earliest documented article, posted in December 1979, discussed Unix shell programming techniques. By early 1980, these scripts evolved into compiled software dubbed "A News," developed by Steve Daniel and distributed publicly to handle growing traffic on UUCP links. This volunteer-driven effort, without central funding or administration, relied on site operators manually configuring batch transfers via modems, typically nightly, to propagate articles across connected hosts.[58][55] Early adoption was fueled by the proliferation of Unix systems in academia and research labs, expanding from two initial sites (Duke and UNC) to about 15 by the end of 1980 and 150 by 1981, as universities like the University of California, Berkeley, and Bell Labs joined via UUCP feeds. By 1982, participation reached around 400 sites, and empirical logs indicate hundreds more by 1985, sustained by organic propagation without formal governance—operators shared software updates and moderated content locally to manage volume. 
This decentralized model emphasized resilience over speed, with articles batched into files for transfer, reflecting pragmatic adaptations to constrained telephony infrastructure rather than real-time networking.[57][59][17]

Expansion and Institutional Adoption (1986–1993)
In 1983, B News software, developed by Mark Horton and Matt Glickman at Bell Labs, superseded the original A News implementation, introducing improved article threading, storage efficiency, and batching capabilities that facilitated larger-scale propagation over UUCP networks.[9][60] This upgrade addressed limitations in handling growing volumes of posts, enabling Usenet to scale beyond initial university sites.[58] The introduction of the Network News Transfer Protocol (NNTP) in March 1986, as specified in RFC 977, marked a pivotal shift to TCP/IP-based transmission, allowing direct integration with ARPANET and emerging Internet infrastructure.[58][61] NNTP supported client-server access to remote news servers, reducing reliance on batch file transfers and enabling real-time querying, which accelerated adoption among academic institutions connected via NSFNET.[62] By leveraging NSFNET's backbone, Usenet expanded from hundreds of sites in the early 1980s to widespread institutional use, with improved propagation efficiency strengthening connectivity across research networks.[57] The alt.* hierarchy emerged in the late 1980s, initiated through alternative creation processes like those for alt.sex, providing a decentralized counterpoint to the formally managed hierarchies and fostering unmoderated discussions on diverse topics.[63] Commercialization began with providers like PSINet offering paid Usenet feeds by 1990, including dial-up access that extended availability beyond academia.[64] Concurrently, the practice of posting Frequently Asked Questions (FAQ) files gained traction in the late 1980s, standardizing information dissemination and reducing repetitive queries in high-traffic groups.[65] These developments underscored Usenet's transition to a robust, multi-stakeholder system by 1993.[66]

Peak Usage and "Eternal September" (1994–1999)
During the mid-1990s, Usenet experienced its zenith of participation, fueled by the expansion of commercial internet service providers that integrated gateways to the network. America Online (AOL), which began offering Usenet access in September 1993, saw its subscriber base surge from approximately 2 million in 1993 to over 5 million by 1995, channeling a massive wave of non-technical users into Usenet groups and amplifying traffic volumes.[67] Similarly, Microsoft Network (MSN) and other dial-up services introduced gateways, broadening access beyond academic and technical enclaves to mainstream audiences seeking discussion forums on topics from computing to hobbies.[68] This era solidified the "Eternal September," a term for the perpetual influx of novices that eroded Usenet's self-policing culture: the one-time annual onboarding of university freshmen, who had traditionally been socialized into netiquette each autumn via FAQs and veteran guidance, gave way to unending arrivals lacking such preparation.
By 1994–1995, the phenomenon persisted, with veterans reporting heightened disruption from off-topic posts, flame wars, and failure to adhere to group norms, transforming transient September overloads into a chronic state.[67][69] The decentralization of Usenet, reliant on voluntary server peering, buckled under this scale, as exponential message propagation strained bandwidth and encouraged excessive crossposting—early harbingers of spam—without centralized moderation to curb abuse.[70] Participation peaked with an estimated several million regular readers worldwide by the late 1990s, coinciding with the proliferation of over 40,000 newsgroups by mid-decade, many in the alt.* hierarchy spawned by unmoderated creation scripts.[5] Tools like kill files, which allowed users to programmatically filter authors, subjects, or keywords, gained widespread adoption as a pragmatic response to noise from unskilled posters, enabling experienced users to curate feeds amid the deluge.[69] Web-based interfaces, such as Deja News launched in 1995, further democratized access by enabling browser-based searching of archives without native newsreader software, inadvertently commodifying discussions while exposing them to broader scrutiny and off-topic incursions.[71] This accessibility, however, exacerbated cultural fractures, as commercial incentives prioritized volume over the meritocratic ethos that had sustained Usenet's earlier coherence.[72]

Decline and Fragmentation (2000–2010)
During the 2000s, Usenet experienced a sharp reduction in mainstream usage, driven primarily by escalating spam volumes, the resource-intensive distribution of binary files, and the emergence of more user-friendly alternatives like web-based forums. Spam proliferation, which intensified after early incidents such as the 1994 "green card" advertisement cross-posted to thousands of newsgroups by lawyers Laurence Canter and Martha Siegel, overwhelmed discussion hierarchies with off-topic commercial and abusive messages, eroding signal-to-noise ratios and deterring participants.[73] This unmoderated chaos contrasted with the structured moderation of emerging platforms, driving steady user migration away from Usenet. In response to deteriorating quality, the Usenet II initiative launched in 1998 as a peered network among select "sound" sites adhering to strict anti-spam policies, effectively fragmenting the ecosystem by excluding high-volume or unreliable peers to preserve discussion integrity.[74] However, adoption remained limited, as the original Usenet backbone continued to propagate vast binary content via alt.binaries.* groups, which ballooned storage and bandwidth demands—often exceeding terabytes daily for full feeds—prompting many ISPs to curtail or eliminate free access.
For instance, Comcast terminated its Usenet service for customers in September 2008, citing voluntary compliance with efforts to curb illegal content distribution amid New York Attorney General Andrew Cuomo's campaign against child exploitation material in binaries.[75][76] The rise of peer-to-peer (P2P) networks like BitTorrent, gaining traction from the early 2000s, further eroded Usenet's role in binary sharing by offering decentralized, metadata-efficient file distribution without reliance on news servers.[77] Concurrently, web forums such as Slashdot (launched 1997) provided browser-accessible threading, built-in search, and reduced setup barriers compared to dedicated newsreaders, attracting users seeking convenience over Usenet's decentralized but cumbersome propagation model.[78] These factors compounded, leading to a steep decline in active readership; backbone operators reported sustained drops in text-based traffic as communities splintered or dissolved, though binary retention persisted in niche paid services.

Cultural and Social Impact
Community Norms and Usenet Jargon
Usenet participants established informal community norms, collectively termed netiquette, to promote civil discourse and efficient communication amid the system's decentralized nature. These guidelines emphasized plain-text posting, trimming excessive quoted material from prior messages to reduce redundancy, and restricting cross-posting to relevant newsgroups to avoid cluttering unrelated discussions.[79] Signatures, or sigs, were limited to brief blocks of 4-6 lines containing personal identifiers or disclaimers, appended automatically to posts to maintain readability.[79] Such practices arose organically in the 1980s as user volumes grew, predating formal codification, and were reinforced by the term "netiquette" first appearing in a 1982 Usenet posting.[80] Enforcement relied on social mechanisms rather than technical controls, with violators often facing public rebuke through flames—heated, insulting rebuttals—or exclusion via user-configured killfiles that filtered unwanted content.[81] Hierarchies like alt.* operated under implicit charters, where persistent off-topic posting or failure to heed frequently asked questions (FAQs) invited collective ostracism, preserving group cohesion without centralized moderation.[82] Research on Usenet norms highlights both explicit rules (e.g., posted group guidelines) and implicit expectations (e.g., deference to expertise), socialized through observation and peer feedback, which sustained participation until external pressures like spam eroded adherence.[83] Usenet jargon encapsulated these dynamics, with terms like flame denoting aggressive, ad hominem exchanges that emerged in the early 1980s as a response to perceived breaches of decorum.[84] The word troll, originating around 1990 in the alt.folklore.urban newsgroup, described deliberate provocative posts designed to "troll for newbies" by eliciting outraged replies, exploiting anonymity to test or disrupt community patience.[85][86] Pseudonymous posting fostered 
candid, unfiltered debate but amplified vitriol, as disinhibition from unverified identities enabled behaviors rarer on modern platforms requiring real-name verification. By the mid-1990s, RFC 1855 formalized select norms, advising against "heated messages" and urging conservatism in transmission to mitigate such conflicts.[79]

Contributions to Internet Culture and Innovation
Usenet facilitated early collaborative open-source software development by providing a decentralized platform for technical discussions and code sharing. Linus Torvalds announced his Linux kernel project on August 25, 1991, via a posting to the comp.os.minix newsgroup, seeking feedback and contributors, which spurred global participation and evolved into dedicated forums like comp.os.linux for ongoing development and distribution.[87] [88] This model exemplified peer-driven innovation, where participants freely exchanged patches and ideas without central authority, laying groundwork for the open-source movement's emphasis on communal improvement over proprietary control.[89] The threaded conversation format pioneered in Usenet directly influenced the architecture of subsequent online discussion systems, including web forums and social media comment sections. By organizing replies in hierarchical trees attached to original posts, Usenet enabled scalable, context-preserving debates that avoided the linear limitations of earlier bulletin board systems.[4] [90] This structure promoted efficient information flow in technical and hobbyist communities, fostering norms of asynchronous, merit-based engagement that decentralized authority and prioritized substantive contributions—contrasting with later centralized platforms while serving as their conceptual precursor.[91] Usenet exported key jargon into broader internet lexicon, notably popularizing "spam" for abusive bulk postings. 
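The hierarchical reply trees described here are recoverable from headers alone: per the Netnews article format (RFC 5536), an article's References header lists its ancestors' Message-IDs oldest-first, so its immediate parent is the final entry. A minimal sketch, using hypothetical message IDs:

```python
from collections import defaultdict

# Hypothetical articles as (message_id, references) pairs; References
# lists ancestor Message-IDs oldest-first, so the last one is the parent.
articles = [
    ("<root@a.example>", []),
    ("<re1@b.example>", ["<root@a.example>"]),
    ("<re2@c.example>", ["<root@a.example>", "<re1@b.example>"]),
    ("<other@d.example>", []),
]

def build_threads(articles):
    """Group articles into reply trees: an empty References header starts
    a new thread; otherwise the article is attached as a child of the
    last Message-ID it references."""
    children = defaultdict(list)
    roots = []
    for msg_id, refs in articles:
        if refs:
            children[refs[-1]].append(msg_id)
        else:
            roots.append(msg_id)
    return roots, dict(children)

roots, children = build_threads(articles)
print(roots)                         # thread starters
print(children["<re1@b.example>"])   # replies to the first follow-up
```

Real newsreaders add fallbacks for the missing-parent case (an ancestor that never arrived at the local server), typically by walking the References list backward until a known Message-ID is found.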
The term, drawn from a Monty Python sketch depicting repetitive intrusion, first gained traction on March 31, 1993, when it was applied to a flood of unintentionally duplicated messages on news.admin.policy, and was cemented in 1994 when users attached it to the Canter and Siegel lawyers' cross-posted advertisements flooding thousands of newsgroups, marking an early consensus on network etiquette violations.[92][93] In parallel, groups like comp.lang.c hosted unfiltered exchanges on programming that reinforced a hacker ethic of open information sharing and rigorous critique, while alt.tasteless advanced irreverent, boundary-testing humor through antics such as the 1994 coordinated "invasion" of rec.pets.cats, which highlighted emergent norms of playful disruption in digital spaces.[94][95] These elements underscored Usenet's role as an incubator for resilient, self-regulating cultural practices rather than mere anarchy.

Social Dynamics: Collaboration vs. Conflict
Usenet's decentralized and largely unmoderated framework enabled collaborative achievements in specialized communities, particularly prior to the 1990s, when participant pools were small and expertise-driven. By 1988, approximately 140,000 active users engaged in niche discussions across roughly 11,000 connected systems, yielding high signal-to-noise ratios in groups like those in the sci.* hierarchy, where scientists and researchers exchanged technical insights through threaded, iterative critiques resembling informal peer review.[96][97] Moderated newsgroups further enhanced this quality by filtering submissions, as evidenced by empirical observations from 1987 showing consistently superior content relevance compared to unmoderated counterparts, which supported focused cross-disciplinary projects such as early software debugging and protocol refinements shared across hierarchies.[98] Conflicts arose inherently from the system's openness, manifesting in flame wars—intense, personal exchanges of insults—and off-topic floods that disrupted discourse. 
Unmoderated groups' immediacy allowed rapid idea propagation but invited escalations, with the Meow Wars (April 1996–circa 1998) exemplifying this: hundreds of users across over 80 newsgroups bombarded threads with repetitive "meow" posts and cultural references, sustaining the chaos for roughly two years and highlighting how anonymous provocation could hijack collective attention.[99] Post-1994 influxes amplified such issues, as broader access introduced casual disruptions, eroding pre-existing norms in unmoderated spaces while moderated groups preserved coherence longer through gatekeeping.[100] Pseudonymous posting facilitated raw, adversarial debate, empowering users to challenge orthodoxies and expose flaws via unvarnished critique—a degree of candor rarer on identity-enforcing platforms—while shielding participants from real-world backlash in contentious fields.[101] However, this anonymity equally empowered disruptors, enabling sustained abuse by insulating bad actors from accountability and intensifying conflicts beyond productive contention. Unmoderated dynamics thus traded moderated stability for velocity in knowledge exchange, with the trade-offs evident in moderated groups' enduring focus versus unmoderated ones' volatility.[90][98]

Controversies and Criticisms
Spam Proliferation and Network Overload
The proliferation of spam on Usenet originated from sporadic off-topic cross-postings in the early 1990s but escalated into systematic abuse with the advent of automated bulk messaging. On April 12, 1994, immigration lawyers Laurence Canter and Martha Siegel initiated the first large-scale commercial spam by posting advertisements for U.S. green card lottery services to over 5,000 newsgroups, exploiting Usenet's decentralized propagation to reach millions without incurring marginal distribution costs.[73] [102] [103] This "Green Card Spam" triggered immediate backlash, including server blacklisting of the perpetrators' sites, but demonstrated the vulnerability of Usenet's broadcast model to low-cost replication, paving the way for subsequent floods of advertisements, chain letters, and make-money-fast schemes.[104] Spam volume surged through the mid-1990s, with automated scripts enabling rapid multiplication of identical or variant messages across hierarchies, overwhelming storage and bandwidth. 
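Later countermeasures quantified such floods rather than judging their content: the Breidbart Index, widely adopted by cancelbot operators, scores a set of substantively identical articles as the sum of the square roots of the number of newsgroups each copy reached, with a score of 20 or more within a rolling window conventionally treated as cancelable spam. A sketch of the arithmetic:

```python
from math import sqrt

def breidbart_index(copies):
    """Breidbart Index (BI) for a set of substantively identical articles:
    the sum, over all copies, of the square root of the number of
    newsgroups each copy was posted to. The square root penalizes many
    separate postings more heavily than one widely crossposted article."""
    return sum(sqrt(n_groups) for n_groups in copies)

# One article crossposted to 9 groups: BI = sqrt(9) = 3 — below threshold.
print(breidbart_index([9]))
# Five separate copies, each crossposted to 25 groups: BI = 5 * 5 = 25 — spam.
print(breidbart_index([25] * 5))
```

The asymmetry is deliberate: a single crosspost to 400 groups scores only 20, while 20 separate single-group copies also score 20, so bulk reposting cannot evade the threshold by splitting the flood into small pieces.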
Providers reported exponential growth in unwanted traffic, as each article propagated identically to all connected servers, amplifying the load from even modest posting volumes; by the late 1990s, operational costs for disk space and transit escalated, prompting many institutions to curtail access.[105] The decentralized architecture, lacking a central authority to enforce propagation rules, allowed spammers to target high-visibility groups while evading uniform filtering, resulting in network overload where legitimate discourse was drowned out and server maintenance became unsustainable for smaller operators.[77] In response, Usenet administrators deployed cancel messages—control articles requesting deletion of spam—and automated cancelbots to detect and purge bulk postings based on criteria like crossposting thresholds or keyword patterns.[26] Additional measures included the Usenet Death Penalty, whereby backbone providers severed feeds from egregious abusers, and informal blacklists coordinated via meta-groups.[106] However, enforcement faltered due to Usenet's peer-to-peer federation; site operators retained autonomy to honor or ignore cancels, often prioritizing local user demands over collective norms, which fragmented countermeasures and enabled spam resurgence from rogue servers.[107] This dynamic exemplified a tragedy of the commons, wherein individual incentives to post freely eroded the shared resource's viability, as spammers externalized propagation costs onto the network while unmoderated groups lacked scalable incentives for restraint.[108] The absence of proprietary controls or mandatory authentication—unlike emerging web forums—accelerated degradation, with overload not solely attributable to external commercialization but to inherent flaws in voluntary cooperation among autonomous nodes, ultimately driving provider attrition and reduced participation by the early 2000s.[109]

Binary Distribution: Piracy and Legal Challenges
Binary files, such as software, images, and media, were distributed on Usenet by encoding them into text format suitable for transmission over text-only protocols, primarily within the alt.binaries.* newsgroup hierarchy that developed in the early 1990s.[6] This hierarchy included subgroups like alt.binaries.warez.* dedicated to sharing cracked software and games, enabling users to upload and download large files split across multiple posts.[110] Early encodings like uuencode incurred high overhead, but the yEnc scheme, introduced in 2001, optimized binary-to-text conversion by minimizing padding and escaping, shrinking encoded output by roughly a quarter to a third relative to its predecessors and facilitating faster, more efficient transfers.[111] The surge in binary postings, particularly copyrighted material, positioned Usenet as a key platform for digital piracy, with alt.binaries.* groups accounting for significantly higher data volumes than text-based discussions—often estimated at 10 times the rest of Usenet traffic by the late 1990s.[112] Legal scrutiny intensified as binary distribution enabled unauthorized sharing of commercial software, music, films, and other protected works, prompting copyright holders to target both individual distributors and service providers.
The Digital Millennium Copyright Act (DMCA) of 1998 allowed rights holders to issue takedown notices, requiring Usenet providers to remove specific infringing posts from their archives, though the decentralized propagation across servers complicated complete eradication.[113] In the mid-2000s, the Recording Industry Association of America (RIAA) and Motion Picture Association pressured ISPs to block access to alt.binaries.* groups, citing liability risks; by 2007, this culminated in the RIAA's lawsuit against Usenet.com, alleging the provider induced infringement by offering unlimited access to copyrighted recordings, resulting in a 2009 court ruling against the service for failing DMCA safe harbor protections due to inadequate repeat-infringer policies.[114][115] Arrests of Usenet users involved in warez distribution occurred amid broader crackdowns on organized piracy groups that relied on the network for rapid releases; for instance, in 2003, a technology manager pleaded guilty to distributing pirated software, games, and media via online methods including Usenet postings, facing up to 10 years in prison under copyright laws.[116] While binaries occasionally preserved rare or abandonware files for archival purposes, empirical patterns showed predominant use for illegal copying, fueling a piracy stigma that contributed to ISPs dropping free Usenet support.[117] In paid provider ecosystems, binary retention sustains the system—comprising the vast majority of stored data due to high-volume media uploads—but exposes operators to ongoing DMCA compliance burdens and potential secondary liability, contrasting with torrents' peer traceability yet mirroring challenges in enforcing decentralized distribution.[118] This dynamic preserved Usenet's viability post-text decline by catering to file-sharing demands, albeit under heightened legal constraints that favored compliant, indexed services over anonymous free access.

Free Speech, Anonymity, and Abuse
Usenet's decentralized architecture permitted users to post messages under pseudonyms without requiring personal accounts or verification, enabling a high degree of anonymity that facilitated open discourse on sensitive topics.[119] This feature positioned Usenet as a refuge for dissident expression, notably during the mid-1990s conflict in alt.religion.scientology, where participants leaked internal Church of Scientology documents, including the "Fishman Affidavit," despite legal efforts by the organization to suppress them via copyright claims and site operator pressure.[120] The resulting "alt.scientology.war" highlighted Usenet's resistance to centralized censorship, as posts proliferated across servers even after targeted removals, disseminating critiques that challenged institutional control over information.[121] However, anonymity also amplified abusive behaviors, including coordinated harassment campaigns known as "flame wars," where pseudonymous users engaged in prolonged, vitriolic attacks without real-world accountability, a dynamic later termed the online disinhibition effect.[122] Such incidents were exacerbated in unmoderated hierarchies like alt., which emerged in 1987 as a fork bypassing the formal creation guidelines of the mainstream hierarchies, allowing rapid proliferation of off-topic, inflammatory, or rule-violating content that the structured hierarchies sought to contain through volunteer oversight.[30] Usenet's lax controls further enabled the distribution of illegal materials, including child sexual abuse imagery in certain alt.binaries.* subgroups during the 1990s and 2000s, prompting international efforts by organizations like the Internet Watch Foundation to monitor and report such content to providers.[123] By 2008, major ISPs such as Verizon and Sprint began filtering approximately 0.5% of active alt.
discussion groups to excise these materials, reflecting how growth in user-generated binaries—correlating with overall network expansion from thousands to millions of daily posts—outpaced voluntary self-regulation and led to systemic overload from unchecked harmful uploads.[124] Empirical patterns indicate that while Usenet's model sustained vigorous, unfiltered debate without top-down moderation, it underscored the limits of decentralized self-policing at scale: abuse metrics, including spam volume and illegal postings, escalated alongside participation surges, as documented in provider logs and forensic analyses, rather than stemming from any baseline "toxicity" in the medium itself.[125] This tension revealed that anonymity's virtues in shielding dissent coexisted with vulnerabilities to exploitation, where low barriers to entry amplified both innovative discourse and predatory actions without inherent mechanisms for resolution.

Current Status and Legacy
Ongoing Usage and Provider Ecosystem (2010s–2025)
In the 2010s and continuing into 2025, Usenet has maintained a niche but persistent user base, with millions of messages posted daily across over 100,000 newsgroups, though activity is heavily skewed toward binary content rather than text discussions.[126] Text-based hierarchies like the Big-8 sustain limited engagement, with the management board tracking fewer than 300 actively moderated groups as of mid-2024, focusing on topics such as computing, sciences, and recreation.[127] Binary newsgroups, particularly in the alt.binaries.* namespace, dominate traffic due to their role in long-term file archiving and distribution, supported by commercial providers offering retention periods exceeding 6,000 days—equivalent to over 16 years of stored articles.[128][45] The provider ecosystem has evolved into a commercial model reliant on dedicated backbones and resellers, compensating for the withdrawal of free access from most ISPs in the early 2010s. Major backbones, including those from Eweka and Newshosting's tier-1 infrastructure, peer directly to ensure high completion rates above 99% and unlimited bandwidth for subscribers.[129][130] Providers like UsenetServer and Pure Usenet maintain petabyte-scale storage—on the order of 5 PB or more for binaries—enabling reliable access for privacy-focused users who pair services with VPNs to mitigate logging risks.[131] This paid structure has sustained binary viability by incentivizing infrastructure investment, contrasting with the fragmentation of free text feeds.
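Programmatic access to these commercial servers still speaks NNTP, today typically over implicit TLS on port 563. The sketch below is a hypothetical helper (hostnames and parameters are assumptions; providers vary): it opens an encrypted connection and parses the RFC 3977 status line that prefixes every server response.

```python
import socket
import ssl

def parse_status(line: str):
    """Split an NNTP status line (RFC 3977) into (code, text).
    Codes in the 2xx range indicate success, e.g. 200 'posting allowed'."""
    code, _, text = line.partition(" ")
    return int(code), text

def nntp_over_tls(host: str, port: int = 563):
    """Open an NNTPS connection (NNTP over implicit TLS) and return the
    TLS socket plus the parsed server greeting. A real newsreader would
    layer AUTHINFO credentials and GROUP/ARTICLE commands on top."""
    ctx = ssl.create_default_context()      # verifies the server certificate
    raw = socket.create_connection((host, port))
    tls = ctx.wrap_socket(raw, server_hostname=host)
    greeting = tls.recv(512).decode("utf-8", "replace").rstrip("\r\n")
    return tls, parse_status(greeting)

# The status parsing itself works offline:
print(parse_status("200 news.example.com InterNetNews ready (posting ok)"))
print(parse_status("211 1234 3000234 3002322 misc.test"))
```

The second example shows a GROUP response, whose text carries the article count and the low/high article numbers for the selected newsgroup.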
Recent enhancements include widespread adoption of NNTP over TLS for encrypted connections, standard across major providers since the mid-2010s, alongside community resources like Reddit's r/usenet for indexing tools and setup guides.[132] Usenet thus endures for specialized applications in technology niches, academic data retention, and anonymous file sharing, where its decentralized retention outperforms ephemeral web alternatives.[133]

Archival Efforts and Accessibility
Google Groups serves as the most extensive centralized archive of Usenet content, incorporating the Deja News collection acquired by Google in February 2001, which originated in March 1995 and extends back to postings from 1981.[134][135] This archive enables full-text search across pre-1990s material, though Google discontinued support for new Usenet posting, subscription, and real-time viewing in February 2024, retaining only historical searchability.[136] Alternatives include partial dumps on the Internet Archive, such as collections of alt.* hierarchies in mbox format, and community-hosted repositories like UsenetArchives.com, which index hundreds of millions of posts dating to the 1980s.[137][138] Preservation efforts emphasize text-based newsgroups, particularly the Big-8 hierarchies (comp., humanities., misc., news., rec., sci., soc., talk.), with initiatives like BlueWorld Hosting's public archive providing over 20 years of retention and the Free Usenet Text Archive offering ad-free access to approximately 300 million posts in about 300 GB.[139][140] Community-driven projects, including those by Archive Team, scrape and mirror content to counter the risks of decentralization, though these efforts remain fragmented and focused on non-binary content to avoid legal issues.[141] Archiving faces inherent challenges from Usenet's distributed propagation model over NNTP: posts do not universally reach all servers, so captures are incomplete and depend on individual servers' logs and retention policies.[142] Spam, which escalated in the 1990s and constitutes a large share of later volume (often over half in some groups), further pollutes datasets, complicating curation in the absence of native filtering tools.[105] In 2025, historical access relies on Google Groups for search or on dedicated web interfaces, supplemented by paid NNTP readers from providers with extended retention (up to decades in some cases) or web proxies for browsing, yet these cannot fully replicate the original threading and context preserved in live newsreaders.[143] This decentralization, while resilient for ongoing use, precludes a single comprehensive archive akin to web crawls, rendering surviving originals irreplaceable for scholarly or contextual analysis.

Comparisons to Modern Alternatives and Lessons Learned
Usenet's decentralized, server-federated model differs markedly from the centralized architectures of contemporary platforms such as Reddit and X (formerly Twitter). Reddit's subreddit system imposes hierarchical moderation by volunteer or appointed administrators, enabling targeted community governance but concentrating power in a few hands and facilitating algorithmic promotion of popular content over substantive depth.[144] In contrast, Usenet's threading mechanism supported persistent, hierarchical discussions across independent servers, preserving context for complex topics like software development, though the lack of built-in curation tools exposed networks to unchecked spam and off-topic flooding that Reddit's upvote/downvote system partly suppresses.[126] X emphasizes ephemeral, real-time posting with character limits and verified accounts for visibility, prioritizing virality over Usenet's archival permanence, which allowed long-term reference but scaled poorly as traffic grew without proprietary algorithms to filter noise.[145] These differences underscore decentralization's empirical trade-offs: Usenet demonstrated how protocol-based systems can drive innovation by enabling pseudonymous, borderless collaboration without corporate gatekeeping, as evidenced by its role in early open-source dissemination before centralized repositories existed.[146] Yet the causes of its marginalization include the absence of user-retention incentives, such as personalized feeds or monetization, coupled with spam's exponential growth, which overwhelmed voluntary self-policing and deterred mainstream adoption once the web offered frictionless alternatives by the mid-1990s.
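The threading mechanism mentioned above is carried entirely in article headers: each article has a unique Message-ID, and its References header lists the Message-IDs of its ancestors (RFC 5536), so any newsreader can rebuild the same tree independently of any server. A minimal sketch of that reconstruction, with invented sample articles, might look like this (real threaders such as the one described by Jamie Zawinski also handle missing ancestors and Subject matching, which this sketch omits):

```python
def build_threads(articles):
    """Attach each article to the last Message-ID in its References
    header that is present in the collection; articles with no known
    ancestor become thread roots. Returns (roots, children)."""
    known = {a["message_id"] for a in articles}
    children = {a["message_id"]: [] for a in articles}
    roots = []
    for a in articles:
        # References lists ancestors oldest-first, so scan from the end
        # to find the nearest ancestor we actually hold.
        parent = next((ref for ref in reversed(a.get("references", []))
                       if ref in known), None)
        if parent is None:
            roots.append(a["message_id"])   # no known ancestor: new thread
        else:
            children[parent].append(a["message_id"])
    return roots, children

# Invented three-article thread: a post, a reply, and a reply-to-the-reply.
articles = [
    {"message_id": "<1@a>", "references": []},
    {"message_id": "<2@b>", "references": ["<1@a>"]},
    {"message_id": "<3@c>", "references": ["<1@a>", "<2@b>"]},
]
roots, children = build_threads(articles)
```

Because the tree derives only from headers, two servers holding the same articles display identical threads; an archive missing intermediate articles, as discussed in the archival section, degrades gracefully by promoting orphaned replies to roots.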
Modern platforms' algorithmic moderation, while mitigating such overload, often veers into over-correction via opaque content suppression, raising concerns about viewpoint bias in centralized decision-making.[145] The key lesson for truth-seeking systems is one of balance: unmoderated openness yielded Usenet's breakthroughs in technical discourse, but pure decentralization falters at volume without scalable defenses against abuse, since unfiltered anonymity invites manipulation no less damaging than the flaws of platform algorithms. In 2025, Usenet's federation inspires the Fediverse, networks like Mastodon in which instance-level policies approximate Usenet's server autonomy while federation protocols avoid single points of control, though adoption lags due to usability hurdles reminiscent of Usenet's interface rigidity.[147] This evolution counters uncritical nostalgia: raw access propelled early internet progress, but sustained viability demands mechanisms beyond either extreme centralization or unchecked distribution.[146]