Internet filter
An Internet filter is a type of internet censorship that restricts or controls the content an Internet user is able to access, especially when used to restrict material delivered over the Internet via the Web, e-mail, or other means. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (see Internet censorship), or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by an individual user to their own computer. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some filter software includes time-control functions that let parents set the amount of time a child may spend accessing the Internet, playing games, or engaging in other computer activities.
Terminology
The term "content control" is used on occasion by CNN,[1] Playboy magazine,[2] the San Francisco Chronicle,[3] and The New York Times.[4] However, several other terms, including "content filtering software", "web content filter", "filtering proxy servers", "secure web gateways", "censorware", "content security and control", "web filtering software", "content-censoring software", and "content-blocking software", are often used. "Nannyware" has also been used in both product marketing and by the media. Industry research company Gartner uses "secure web gateway" (SWG) to describe the market segment.[5]
Companies that make products that selectively block Web sites do not refer to these products as censorware, and prefer terms such as "Internet filter" or "URL Filter"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, "parental control software" is also used. Some products log all sites that a user accesses and rates them based on content type for reporting to an "accountability partner" of the person's choosing, and the term accountability software is used. Internet filters, parental control software, and/or accountability software may also be combined into one product.
Those critical of such software, however, use the term "censorware" freely: consider the Censorware Project, for example.[6] The use of the term "censorware" in editorials criticizing makers of such software is widespread and covers many different varieties and applications: Xeni Jardin used the term in a 9 March 2006 editorial in The New York Times, when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district.[7][8]
In general, outside of editorial pages as described above, traditional newspapers do not use the term "censorware" in their reporting, preferring instead to use less overtly controversial terms such as "content filter", "content control", or "web filtering"; The New York Times and The Wall Street Journal both appear to follow this practice. On the other hand, Web-based newspapers such as CNET use the term in both editorial and journalistic contexts, for example "Windows Live to Get Censorware."[9]
Types of filtering
Filters can be implemented in many different ways: by software on a personal computer, or via network infrastructure such as proxy servers, DNS servers, or firewalls that provide Internet access. No solution provides complete coverage, so most companies deploy a mix of technologies to achieve content control in line with their policies.
Browser based filters
- Browser-based content filtering is the most lightweight filtering solution, and is implemented via a third-party browser extension.
E-mail filters
- E-mail filters act on information contained in the mail body, in mail headers such as sender and subject, and in e-mail attachments to classify, accept, or reject messages. Bayesian filters, a type of statistical filter, are commonly used. Both client- and server-based filters are available.
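A Bayesian e-mail filter of the kind mentioned above can be sketched in a few lines. The toy classifier below (class name, training messages, and smoothing choice are all illustrative, not any product's actual implementation) scores a message by comparing word frequencies in previously labelled spam and non-spam mail:

```python
import math
from collections import Counter

class BayesSpamFilter:
    """Toy Bayesian filter: positive score leans spam, negative leans ham."""

    def __init__(self):
        self.spam, self.ham = Counter(), Counter()
        self.spam_total = self.ham_total = 0

    def train(self, text, is_spam):
        words = text.lower().split()
        if is_spam:
            self.spam.update(words)
            self.spam_total += len(words)
        else:
            self.ham.update(words)
            self.ham_total += len(words)

    def score(self, text):
        # Sum of log-likelihood ratios with add-one smoothing.
        s = 0.0
        for w in text.lower().split():
            p_spam = (self.spam[w] + 1) / (self.spam_total + 1)
            p_ham = (self.ham[w] + 1) / (self.ham_total + 1)
            s += math.log(p_spam / p_ham)
        return s

f = BayesSpamFilter()
f.train("cheap pills buy now", is_spam=True)
f.train("meeting agenda attached", is_spam=False)
print(f.score("buy cheap pills") > 0)   # True: message leans spam
```

Real filters are trained on large corpora and combine many more signals (headers, attachments), but the scoring principle is the same.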
Client-side filters
- This type of filter is installed as software on each computer where filtering is required.[10][11] It can typically be managed, disabled, or uninstalled by anyone with administrator-level privileges on the system. A DNS-based client-side filter can be implemented by setting up a DNS sinkhole, such as Pi-hole.
Content-limited (or filtered) ISPs
- Content-limited (or filtered) ISPs are Internet service providers that offer access to only a set portion of Internet content, on an opt-in or a mandatory basis. Anyone who subscribes to this type of service is subject to restrictions. These filters can be used to implement government,[12] regulatory,[13] or parental control over subscribers.
Network-based filtering
- This type of filter is implemented at the transport layer as a transparent proxy, or at the application layer as a web proxy.[14] Filtering software may include data loss prevention functionality to filter outbound as well as inbound information. All users are subject to the access policy defined by the institution. The filtering can be customized, so a school district's high school library can have a different filtering profile than the district's junior high school library.
DNS-based filtering
- This type of filtering is implemented at the DNS layer and attempts to prevent lookups for domains that do not fit within a set of policies (either parental control or company rules). Multiple free public DNS services offer filtering options as part of their services. DNS sinkholes such as Pi-hole can also be used for this purpose, though client-side only.[15]
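The DNS-layer approach can be illustrated with a minimal sinkhole in the style of Pi-hole: queries for blocked names are answered with a non-routable address, and everything else is passed through to a real resolver. The blocklist entries below are hypothetical:

```python
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical domains
SINKHOLE_ADDR = "0.0.0.0"  # non-routable answer for blocked names

def resolve(hostname):
    """Return the sinkhole address for blocked names, else a real lookup."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_ADDR
    return socket.gethostbyname(hostname)

print(resolve("ads.example.com"))  # 0.0.0.0
```

A real sinkhole answers the DNS protocol itself rather than wrapping a resolver call, which is why, as noted above, it only protects clients configured to use it.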
Search-engine filters
- Many search engines, such as Google and Bing, offer users the option of turning on a safety filter. When this safety filter is activated, it filters out inappropriate links from all of the search results. If users know the actual URL of a website that features explicit or adult content, they can access that content without using a search engine. Some providers offer child-oriented versions of their engines that permit only child-friendly websites.[16]
Parental controls
- Some ISPs offer parental control options. Some offer security software which includes parental controls. Mac OS X v10.4 offers parental controls for several applications (Mail, Finder, iChat, Safari & Dictionary). Microsoft's Windows Vista operating system also includes content-control software.
Reasons for filtering
The Internet does not intrinsically provide content blocking, so it carries much content that is considered unsuitable for children, including material certified as suitable for adults only, e.g. 18-rated games and movies.
Parents who do not permit their children to access content that does not conform to their personal beliefs often use Internet service providers (ISPs) that block pornography, or controversial religious, political, or news-related content, en route. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material, including adware, spam, computer viruses, worms, trojan horses, and spyware.
Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings.[17]
Technology
Content filtering technology exists in two major forms: application gateway or packet inspection. For HTTP access, the application gateway is called a web proxy, or just a proxy. Such web proxies can inspect both the initial request and the returned web page using arbitrarily complex rules, and will not return any part of the page to the requester until a decision is made. In addition, they can make substitutions in whole or for any part of the returned result. Packet inspection filters do not initially interfere with the connection to the server but inspect the data in the connection as it goes past; at some point the filter may decide that the connection is to be filtered, and it will then disconnect it by injecting a TCP reset or similar faked packet. The two techniques can be used together: the packet filter monitors a link until it sees an HTTP connection starting to an IP address that has content needing filtering, then redirects the connection to the web proxy, which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system.
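The hybrid design described above can be sketched as two decision stages: a cheap per-connection check at the packet level, and a detailed per-URL check at the proxy, applied only to redirected traffic. The IP watch list and URL list below are placeholders, not real filtering data:

```python
WATCHED_IPS = {"203.0.113.10"}  # hosts known to serve some filtered content
BLOCKED_URLS = {"http://203.0.113.10/banned-page"}

def route_connection(dst_ip):
    """Packet-filter stage: cheap decision made once per connection."""
    return "proxy" if dst_ip in WATCHED_IPS else "direct"

def proxy_decision(url):
    """Proxy stage: detailed inspection, run only for redirected traffic."""
    return "block" if url in BLOCKED_URLS else "allow"

print(route_connection("198.51.100.7"))                   # direct (most traffic)
print(route_connection("203.0.113.10"))                   # proxy
print(proxy_decision("http://203.0.113.10/banned-page"))  # block
```

Because most connections take the "direct" path, the expensive proxy inspection only runs for a small fraction of traffic, which is the cost saving the paragraph above describes.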
IP-level packet filtering has constraints, as it may render all web content associated with a particular IP address inaccessible. This may result in the unintentional blocking of legitimate sites that share the same IP address or domain. For instance, university websites commonly host multiple domains under one IP address. Moreover, IP-level packet filtering can be circumvented by serving certain content from a distinct IP address while keeping it linked to the same domain or server.[18]
Gateway-based content control software may be more difficult to bypass than desktop software as the user does not have physical access to the filtering device. However, many of the techniques in the Bypassing filters section still work.
Content labeling
Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA) — now part of the Family Online Safety Institute — developed a content rating system for online content providers. Using an online questionnaire, a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer-readable digest of this description, which content filtering software can then use to block or allow that site.
ICRA labels come in a variety of formats.[19] These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor.[20]
ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content.[21] The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use. Like ICRA the RTA label is free. Both labels are recognized by a wide variety of content-control software.
The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, mature and adult, making the specification extremely simple.
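A VCR-style label can be read with a simple scan for the rating meta tag. The tag name used below is a placeholder for illustration, not necessarily the exact name CYBERsitter recognised:

```python
import re

# Placeholder tag name; the real VCR tag name may differ.
META_RE = re.compile(
    r'<meta\s+name="content-rating"\s+content="(mature|adult)"',
    re.IGNORECASE)

def page_rating(html):
    """Return 'mature', 'adult', or None if the page carries no label."""
    m = META_RE.search(html)
    return m.group(1).lower() if m else None

page = '<head><meta name="content-rating" content="adult"></head>'
print(page_rating(page))             # adult
print(page_rating("<head></head>"))  # None
```

The two-level scheme keeps both labeling (one tag per page) and filtering (one pattern match) trivially simple, which was the stated goal of the system.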
By country
Australia
The Australian Internet Safety Advisory Body has information about "practical advice on Internet safety, parental control and filters for the protection of children, students and families" that also includes public libraries.[22]
NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering.[23]
The Australian Government has introduced legislation that requires ISPs to "restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia" that was due to commence from 20 January 2008, known as Cleanfeed.[24]
Cleanfeed is a proposed mandatory ISP-level content filtration system. It was proposed by the Beazley-led Australian Labor Party opposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by the Rudd ALP government, and initial tests in Tasmania produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by the EFA and gaining irregular mainstream media attention, with a majority of Australians reportedly "strongly against" its implementation.[25] Criticisms include its expense, inaccuracy (it will be impossible to ensure only illegal sites are blocked) and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights.[25] Another major criticism has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while using the Internet, achieving the exact opposite effect. Cleanfeed is a responsibility of Senator Conroy's portfolio.
Denmark
In Denmark, it is stated policy to "prevent inappropriate Internet sites from being accessed from children's libraries across Denmark".[26] "'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press-release of the Danish Ministry of Culture."[27]
United Kingdom
Many libraries in the UK, such as the British Library[28] and local authority public libraries,[29] apply filters to Internet access. According to research conducted by the Radical Librarians Collective, at least 98% of public libraries apply filters, including categories such as "LGBT interest", "abortion", and "questionable".[30] Some public libraries block payday loan websites.[31]
United States
The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request.
Many legal scholars believe that a number of legal cases, in particular Reno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment.[32] However, in the June 2003 case United States v. American Library Association, the Supreme Court found the Children's Internet Protection Act (CIPA) constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision allowing adult library users to have the filtering software disabled without having to explain the reasons for their request. The plurality decision left open a future "as-applied" constitutional challenge, however.
In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter.[33] In May 2010, the Washington State Supreme Court issued an opinion on a question certified to it by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought." The case returned to federal court.
In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software. Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user.[34]
Criticism
Filtering errors
[edit]Overblocking
A filter that is overly zealous, or that mislabels content not intended to be censored, can result in over-blocking, or over-censoring. Over-blocking can filter out material that should be acceptable under the filtering policy in effect; for example, health-related information may unintentionally be filtered along with porn-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over-blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University.[35] Another example was the filtering of the Horniman Museum.[36] Over-blocking may also encourage users to bypass the filter entirely.
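The Scunthorpe problem mentioned above arises from substring matching. The toy filter below shows the failure and one common mitigation, whole-word matching (the blocklist entries are illustrative):

```python
import re

BLOCKED = ["porn", "cunt"]  # illustrative blocklist

def substring_filter(text):
    """Naive approach: block any text containing a listed string."""
    return any(w in text.lower() for w in BLOCKED)

def whole_word_filter(text):
    """Mitigation: match listed strings only as whole words."""
    return any(re.search(r"\b%s\b" % re.escape(w), text.lower())
               for w in BLOCKED)

print(substring_filter("Scunthorpe Borough Council"))   # True  (over-blocked)
print(whole_word_filter("Scunthorpe Borough Council"))  # False
```

Whole-word matching reduces over-blocking at the cost of more under-blocking (e.g. deliberate misspellings slip through), which is the trade-off this section describes.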
Underblocking
Whenever new information is uploaded to the Internet, filters can under-block, or under-censor, content if the parties responsible for maintaining them do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place.[37]
Morality and opinion
Many[38] would not be satisfied with government filtering of viewpoints on moral or political issues, arguing that such filtering could become support for propaganda. Many[39] would also find it unacceptable that an ISP, whether by law or by its own choice, should deploy such software without allowing users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced internet censorship. (See section below)
Religious, anti-religious, and political censorship
Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites[40][41] (including the Web site of the Vatican), many political sites, and homosexuality-related sites.[42] X-Stop was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle.[43] CYBERsitter blocks out sites like National Organization for Women.[44] Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use.[45] Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company,[46] has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel and gay-rights web sites, such as glaad.org.[47]
Legal actions
In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment.[48]
In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 the Supreme Court ruled in their favor.[49] Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation,[50] was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary.[51]
In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol" license agreement.[52] They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets.
Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid.[53]
The Motion Picture Association successfully obtained a UK ruling requiring ISPs to use content-control software to prevent copyright infringement by their subscribers.[54]
Bypassing filters
Content filtering in general can "be bypassed entirely by tech-savvy individuals." Blocking content on a device "[will not]…guarantee that users won't eventually be able to find a way around the filter."[55] Content providers may change URLs or IP addresses to circumvent filtering. Individuals with technical expertise may instead employ multiple domains or URLs that direct to a shared IP address where restricted content is present. This strategy does not circumvent IP packet filtering, but can evade DNS poisoning and web proxies. Additionally, perpetrators may use mirrored websites that avoid filters.[56]
Some software may be bypassed successfully by using alternative protocols such as FTP, telnet, or HTTPS, conducting searches in a different language, or using a proxy server or a circumventor such as Psiphon. Cached web pages returned by Google or other searches can also bypass some controls. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager, or in Mac OS X using Force Quit or Activity Monitor. Numerous workarounds, and counters to workarounds from content-control software creators, exist. Google services are often blocked by filters, but these can often be bypassed by using https:// in place of http://, since content filtering software is not able to interpret content under secure connections (in this case SSL).
An encrypted VPN can be used as means of bypassing content control software, especially if the content control software is installed on an Internet gateway or firewall. Other ways to bypass a content control filter include translation sites and establishing a remote connection with an uncensored device.[57]
See also
- Adultism
- Ad filtering
- Comparison of content-control software and providers (incl. parental control software)
- Computer and network surveillance
- Content moderation
- David Burt, a former librarian and advocate for content-control software
- Deep content inspection
- Egress filtering, control of outbound network traffic
- Financial Coalition Against Child Pornography
- Internet safety
- Nymwar
- Opposition to pornography
- Peacefire, a U.S.-based website dedicated to "preserving First Amendment rights for Internet users, particularly those younger than 18"
- Russian State Duma Bill 89417-6 - a proposed bill that would mandate content control software
- Scunthorpe problem
- Wordfilter, generic name for scripts typically used on Internet forums or chat rooms that automatically scans users' posts or comments as they are submitted and automatically changes or censors particular words or phrases
References
[edit]- ^ "Young, angry … and wired - May 3, 2005". Edition.CNN.com. 3 May 2005. Archived from the original on 8 December 2009. Retrieved 25 October 2009.
- ^ Umstead, R. Thomas (20 May 2006). "Playboy Preaches Control". Multichannel News. Archived from the original on 22 September 2013. Retrieved 25 June 2013.
- ^ Woolls, Daniel (October 25, 2002). "Web sites go blank to protest strict new Internet law". sfgate.com. Associated Press. Archived from the original on 8 July 2003.
- ^ Bickerton, Derek (30 November 1997). "Digital Dreams". The New York Times. Retrieved 25 October 2009.
- ^ "IT Glossary: Secure Web Gateway". gartner.com. Retrieved 27 March 2012.
- ^ "Censorware Project". censorware.net. Archived from the original on 20 June 2015. Retrieved 17 November 2001.
- ^ "159.54.226.83/apps/pbcs.dll/article?AID=/20060319/COLUMN0203/603190309/1064". Archived from the original on 19 October 2007.
- ^ "DMCA 1201 Exemption Transcript, April 11 - Censorware". Sethf.com. 11 April 2003. Retrieved 25 October 2009.
- ^ "Windows Live to get censorware - ZDNet.co.uk". news.ZDNet.co.uk. 14 March 2006. Archived from the original on 5 December 2008. Retrieved 25 October 2009.
- ^ Client-side filters. NetSafeKids. National Academy of Sciences. 2003. ISBN 9780309082747. Retrieved 24 June 2013.
- ^ "Protecting Your Kids with Family Safety". microsoft.com. Retrieved 10 July 2012.
- ^ Xu, Xueyang; Mao, Z. Morley; Halderman, J. Alex (5 Jan 2011). "Internet Censorship in China: Where Does the Filtering Occur?" (PDF). Georgia Tech. University of Michigan. Archived from the original (PDF) on 24 March 2012. Retrieved 10 July 2012.
- ^ Christopher Williams (3 May 2012). "The Pirate Bay cut off from millions of Virgin Media customers". The Daily Telegraph. Retrieved 8 May 2012.
- ^ "Explicit and Transparent Proxy Deployments". websense.com. 2010. Archived from the original on 18 April 2012. Retrieved 30 March 2012.
- ^ Shaw, Keith; Fruhlinger, Josh (2022-07-13). "What is DNS and how does it work?". Network World. Retrieved 2023-08-22.
- ^ Filtering. NetSafeKids. National Academy of Sciences. 2003. ISBN 9780309082747. Retrieved 22 November 2010.
- ^ "Accountability Software: Accountability and Monitoring Software Reviews". UrbanMinistry.org. TechMission, Safe Families. Retrieved 25 October 2009.
- ^ Varadharajan, Vijay (2010). "Internet filtering - Issues and challenges". IEEE Security & Privacy. 8 (4): 62–65. Bibcode:2010ISPri...8d..62V. doi:10.1109/MSP.2010.131.
- ^ "ICRA: Technical standards used". Family Online Safety Institute. Archived from the original on 2007-07-24. Retrieved 2008-07-04.
- ^ "Browse the Web with Internet Explorer 6 and Content Advisor". microsoft.com. March 26, 2003.
- ^ "ASACP Participates in Financial Coalition Against Child Pornography". November 20, 2007. Retrieved 2008-07-04.
- ^ "NetAlert: Parents Guide to Internet Safety" (PDF). Australian Communications and Media Authority. 2 August 2007. Archived from the original (PDF) on 19 April 2013. Retrieved 24 June 2013.
- ^ "Teenager cracks govt's $84m porn filter". the Sydney Morning Herald. Fairfax Digital. Australian Associated Press (AAP). 25 August 2007. Retrieved 24 June 2013.
- ^ "Restricted Access Systems Declaration 2007" (PDF). Australian Communications and Media Authority. 2007. Archived from the original (PDF) on 24 March 2012. Retrieved 24 June 2013.
- ^ a b "Learn - No Clean Feed - Stop Internet Censorship in Australia". Electronic Frontiers Australia. Archived from the original on 7 January 2010. Retrieved 25 October 2009.
- ^ "Danish Ministry of Culture Chooses SonicWALL CMS 2100 Content Filter to Keep Children's Libraries Free of Unacceptable Material". PR Newswire.com (Press release). Retrieved 2009-10-25.
- ^ "Danish Minister of Culture offers Internet filters to libraries". saferinternet.org. Archived from the original on 2009-02-12. Retrieved 2009-10-25.
- ^ "British Library's wi-fi service blocks 'violent' Hamlet". BBC News. 13 August 2013.
- ^ "Do we want a perfectly filtered world?", Louise Cooke, Lecturer, Department of Information Science, Loughborough University, November 2006. Archived 4 December 2013 at the Wayback Machine
Definition and Terminology
Core Concepts and Scope
An internet filter, also known as content filtering or web filtering, refers to software, hardware, or protocol-based systems designed to monitor, restrict, or block access to specific online content based on predefined criteria such as URLs, keywords, file types, or content categories.[1][12] These systems inspect network traffic or user requests in real time, comparing them against rule sets to permit or deny transmission, thereby preventing exposure to malware, phishing sites, explicit material, or unauthorized resources.[3] Central to this concept is the distinction between whitelisting (allowing only approved content) and blacklisting (blocking prohibited items), with hybrid approaches adapting dynamically to threats.[13]

The primary purposes of internet filters encompass cybersecurity defense, operational efficiency, legal compliance, and behavioral control. In enterprise environments, filters mitigate risk by blocking malicious downloads or productivity drains such as social media during work hours, countering data breach incidents reported at roughly 2,200 per day globally in 2023.[3] For educational institutions, mandates such as the U.S. Children's Internet Protection Act (CIPA) of 2000 require filters on federally funded networks to obstruct obscene images, child pornography, or content harmful to minors; 96% of public schools employed such technologies by 2001.[14][15] Parental and personal uses focus on shielding children from violence or hate speech, while governmental applications extend to national security by curbing disinformation or extremist propaganda, though implementations vary by jurisdiction and can inadvertently suppress legitimate discourse.[16]

The scope of internet filtering extends beyond web browsing to encompass email scanning, application-level controls, and protocol inspection across devices, networks, and ISPs, affecting an estimated 4.5 billion internet users worldwide as of 2023.[17] Filtering operates on principles of pattern matching and categorization (assigning sites to buckets like "gambling" or "weapons") but faces limitations including evasion via VPNs, proxy servers, and encrypted traffic, which accounted for over 90% of web data by 2024.[7] Overblocking, in which benign educational or research materials are restricted, occurs in up to 30% of school filters according to studies, highlighting the trade-off between safety and access.[18] Emerging integrations with AI improve accuracy by analyzing context rather than static rules, yet raise concerns over false positives and scalability in high-volume traffic scenarios exceeding 100 Gbps.[13]

Historical Development
The development of internet filters originated in the early 1990s amid the rapid commercialization of the World Wide Web, which amplified public concerns over unrestricted access to pornography, hate speech, and other objectionable content, particularly for minors in households and educational settings.[19] The first commercial internet filtering software, Net Nanny, was launched in January 1994 by Gordon Ross, employing rudimentary keyword-based detection to scan and block text deemed inappropriate on web pages and in communications.[20] This approach relied on predefined lists of prohibited terms, often resulting in aggressive over-blocking, such as flagging innocuous sites containing words like "breast" in medical contexts.[19] Concurrently, other pioneering tools emerged, including SurfWatch, which introduced category-based URL blacklisting for parental controls, and Cyber Patrol, which expanded filtering to network-level enforcement in schools and libraries by the mid-1990s.[21]

Legislative efforts in the United States accelerated the adoption and refinement of these technologies. The Communications Decency Act (CDA) of 1996 sought to criminalize the online transmission of "indecent" materials accessible to children, but its key provisions were invalidated by the Supreme Court in Reno v. ACLU (1997) as overly broad violations of First Amendment rights, shifting reliance toward voluntary private-sector filtering solutions.[19] This ruling prompted software vendors to enhance user-configurable options, such as customizable block lists in Net Nanny and Cyber Patrol. The Children's Internet Protection Act (CIPA), enacted in 2000 and upheld by the Supreme Court in United States v. American Library Association (2003), mandated the deployment of filters on computers in schools and libraries receiving federal E-rate funding to prevent access to obscene or harmful content, spurring widespread institutional implementation and market growth for tools like Websense, originally developed around 1994 to boost workplace productivity by blocking non-work-related sites.[19]

By the late 1990s and early 2000s, internet filters evolved from standalone client-side applications to include server-based and protocol-level mechanisms, influenced by international precedents such as China's nascent Great Firewall, which began deploying IP blocking and keyword inspection on state-controlled networks around 1998 to enforce political and moral censorship.[22] Early circumvention tools, like the 2000 cphack utility designed to bypass Cyber Patrol, highlighted technical limitations and prompted vendors to incorporate dynamic database updates and hybrid rule sets, laying the groundwork for more sophisticated blacklist maintenance by organizations rating content categories.[19] These advancements reflected a progression from reactive, text-scanning methods to proactive, database-driven architectures, driven by demands for scalability amid exponential internet growth, though persistent false positives underscored the inherent challenges of algorithmic content judgment.[19]

Types of Filters
Client-Side and Browser-Based Filters
Client-side and browser-based filters consist of software installed on end-user devices, or integrated as browser extensions, that locally inspect and regulate web traffic to prevent access to specified content. These mechanisms operate by intercepting HTTP/HTTPS requests and responses at the application layer, evaluating them against local rule sets, blacklists, or categorization databases before rendering in the browser.[23] Unlike server-side approaches, they do not require network intermediaries for core decision-making, enabling deployment without administrative control over upstream infrastructure.[24]

Common implementations include standalone applications, such as parental control suites and antivirus packages with web protection features that automatically block harmful or malicious websites by monitoring inbound and outbound traffic for malware or objectionable material, as well as browser add-ons that enforce URL-based or keyword restrictions.[23][13][25] Browser-based variants, often available as extensions for platforms like Google Chrome or Mozilla Firefox, leverage APIs to modify page-loading behavior, such as redirecting or suppressing domains matching predefined patterns.[26] These filters typically rely on periodically updated local databases for site categorization, classifying URLs into groups like "adult content" or "gambling", or perform real-time scans for keywords and scripts indicative of threats.
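The local evaluation described above, a cached category database consulted first and a real-time keyword scan second, can be sketched as follows. All domains, categories, and keywords here are hypothetical, and a real product would consult a vendor-maintained database of millions of URLs:

```python
from urllib.parse import urlparse

# Hypothetical local rule set mirroring a client-side filter's cached
# categorization database and keyword blocklist (illustrative values only).
CATEGORY_DB = {
    "casino-example.com": "gambling",
    "adult-example.com": "adult content",
}
BLOCKED_CATEGORIES = {"gambling", "adult content"}
BLOCKED_KEYWORDS = {"jackpot", "roulette"}

def evaluate_request(url, page_text=""):
    """Return (allowed, reason) for a URL and, optionally, fetched page text."""
    host = urlparse(url).hostname or ""
    # Step 1: category lookup against the locally cached database.
    category = CATEGORY_DB.get(host)
    if category in BLOCKED_CATEGORIES:
        return False, "category '%s'" % category
    # Step 2: real-time keyword scan of the page body.
    lowered = page_text.lower()
    for kw in BLOCKED_KEYWORDS:
        if kw in lowered:
            return False, "keyword '%s'" % kw
    return True, "allowed"

print(evaluate_request("https://casino-example.com/play"))
print(evaluate_request("https://news-example.org", "Local election results"))
```

The decision flow, not the toy data, is the point: a categorized domain is refused before any content is fetched, while uncategorized pages still undergo content inspection, which is how keyword scanning produces the over-blocking of benign medical or educational pages noted earlier.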
Advantages of client-side filters include rapid response times, as evaluations occur without round-trip delays to remote servers, minimizing latency in blocking attempts and improving perceived performance.[24] They also enhance privacy by processing data on-device, avoiding the transmission of user activity logs to third-party providers, which reduces exposure to centralized data breaches.[24] However, deployment requires manual installation and configuration on each device, limiting scalability in multi-user environments like schools or enterprises.

Limitations arise from their vulnerability to user tampering; technically adept individuals can disable extensions, switch browsers, or employ virtual machines to evade restrictions, undermining enforcement in unsupervised settings. Resource consumption on the host device, due to constant traffic monitoring, can degrade performance, particularly on lower-end hardware, and incomplete HTTPS decryption may allow content scans to be evaded.[23] Effectiveness further depends on database freshness, as outdated categorizations fail to address newly emerging sites, necessitating regular updates that users may neglect.[13] Despite these drawbacks, client-side filters remain a foundational tool for individualized control, often complemented by hybrid systems incorporating cloud-sourced intelligence for enhanced accuracy.[24]

Network and ISP-Level Filters
Network and ISP-level filters enforce content restrictions at the infrastructure layer, typically managed by Internet service providers (ISPs) or enterprise network operators, affecting all subscribers or users within the network without endpoint-specific setup. These systems monitor and intervene in traffic flows at routers, gateways, or DNS resolvers to prevent access to blacklisted domains, IP addresses, or traffic patterns associated with prohibited material, such as illegal content or productivity-detracting sites.[27][28]

Core mechanisms include IP address blocking, where network devices configured with access control lists (ACLs) or firewalls silently discard packets routed to targeted IPs, effectively isolating entire servers or address ranges; this method is blunt and can inadvertently block collateral content hosted on shared IPs, such as sites behind content delivery networks (CDNs).[29] DNS filtering operates by tampering with Domain Name System queries: ISP resolvers return non-routable "sinkhole" IPs (e.g., 127.0.0.1), forged NXDOMAIN errors, or redirects to warning pages for blocked domains, halting resolution before connections form.[30][31] More advanced deployments incorporate deep packet inspection (DPI) appliances to scrutinize payload contents against rule sets or signatures, enabling protocol-specific blocks (e.g., HTTP/HTTPS or BitTorrent), though DPI demands significant computational resources and raises privacy concerns because traffic is analyzed unencrypted.[32][28] ISPs maintain centralized blocklists, often sourced from government mandates, commercial vendors like NetClean or BrightCloud, or automated feeds, integrated into core routing infrastructure for scalability across millions of users.[33]

In the United Kingdom, a 2013 policy under Prime Minister David Cameron prompted major ISPs (BT, Sky, TalkTalk, and Virgin Media) to roll out default-activated filters by December 2013 for new customers, with existing users prompted to opt in or out; Ofcom oversaw completion by the end of 2014, targeting categories like pornography via category-based URL blocking, with opt-out available through customer portals.[34][35] In Pakistan, ISPs implement dual-layer filtering at international gateways and local exchanges, using IP null-routing and DNS poisoning to enforce blocks on approximately 800,000 URLs per 2006 data, covering political dissent, blasphemy, and obscenity, with lists updated via the Pakistan Telecommunication Authority (PTA).[36]

Empirical assessments reveal limitations: filters frequently overblock benign sites (up to 20-30% false positives in tests of commercial systems) owing to imprecise heuristics and shared hosting, while underblocking evasive tactics like domain generation algorithms or encrypted tunnels.[37] Circumvention via VPNs, Tor, or third-party DNS (e.g., 8.8.8.8) undermines enforcement, as these reroute traffic outside ISP purview, rendering network-level controls ineffective against technically adept users; studies on adolescent protection, for instance, found no significant reduction in exposure to harmful content despite household or ISP filters.[38][33][39] Such systems also fragment internet architecture, complicating legitimate services like anycast DNS and fostering reliance on opaque blocklist curation prone to errors or abuse.[27][28]

DNS and Protocol-Based Filters
DNS-based filters intercept Domain Name System (DNS) queries from client devices, evaluating requested domains against predefined policies or blocklists before resolving them to IP addresses. If a domain matches criteria for malicious activity, inappropriate content, or restricted categories, such as phishing sites or adult material, the filtering DNS server responds with an invalid IP address, a null response, or an NXDOMAIN error, preventing the initial connection attempt.[40] This approach operates at the DNS protocol level (UDP/TCP port 53), enabling rapid blocking with minimal computational overhead, as it avoids downloading full web content.[30] Services like Cloudflare Gateway and DNSFilter implement this by maintaining real-time threat intelligence feeds, categorizing over 1 billion domains into risk levels, and applying machine-learning-enhanced policies to block threats proactively.[40][30]

In enterprise and ISP deployments, DNS filtering supports granular controls, such as whitelisting essential domains while blocking categories like social media or gambling sites, often integrated with recursive DNS resolvers to enforce network-wide policies without client-side software.[41] For example, CleanBrowsing's DNS service, launched in 2017, filters traffic for over 10 million users by blocking malware domains and enforcing content policies, reducing exposure to phishing attacks, which accounted for 36% of data breaches in 2023 per Verizon's DBIR.[42] However, DNS filtering's effectiveness diminishes against circumvention techniques, including custom DNS-over-HTTPS (DoH) resolvers, such as those Firefox has offered since 2019, or VPNs that bypass local DNS entirely.[40]

Protocol-based filters extend beyond DNS by inspecting traffic at the transport and application layers, analyzing protocol headers, payloads, and behaviors to enforce blocking rules on specific communication standards.
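The resolver-side decision behind DNS filtering, answering blocked domains with a sinkhole address or an NXDOMAIN-style refusal so that no connection is ever attempted, can be sketched as follows. The blocklist, domains, and addresses are all illustrative:

```python
# Sketch of a filtering resolver's decision logic: a queried domain is
# checked against a blocklist before any upstream lookup. Real services
# consult continuously updated threat feeds; this table is hypothetical.
BLOCKLIST = {
    "phish-example.net": "phishing",
    "bet-example.com": "gambling",
}
SINKHOLE_IP = "0.0.0.0"

def resolve(domain, upstream):
    """Return an IP for the domain, the sinkhole IP if the domain is
    blocked, or None to stand in for an NXDOMAIN response."""
    if domain in BLOCKLIST:
        return SINKHOLE_IP          # blocked: client never reaches the host
    return upstream.get(domain)     # None emulates NXDOMAIN

upstream = {"news-example.org": "203.0.113.7"}
assert resolve("phish-example.net", upstream) == "0.0.0.0"
assert resolve("news-example.org", upstream) == "203.0.113.7"
```

Because the check happens at name resolution, it is cheap and bandwidth-efficient, but it also explains the weaknesses noted above: a client that already knows the IP, or that resolves names through DoH or a VPN, never presents a query for this logic to inspect.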
These filters, often implemented via firewalls or deep packet inspection (DPI) systems, target protocols such as HTTP/HTTPS (ports 80/443), FTP, or SMTP, allowing administrators to permit or deny traffic based on protocol-specific attributes like request methods, headers, or encrypted patterns.[43] For instance, in URL filtering, a common protocol-based technique, systems parse HTTP requests to block granular paths (e.g., /adult-content on an otherwise permitted domain), surpassing DNS's domain-only granularity, as deployed in tools like Zscaler or Cisco Umbrella since the early 2010s.[44] Advanced protocol-based methods detect non-standard protocol usage, such as blocking peer-to-peer (P2P) protocols like BitTorrent via signature matching or anomaly detection, which ISPs have used to curb bandwidth-intensive illegal file sharing; a 2022 OECD study noted such filters reduced P2P traffic by up to 70% in filtered networks. In censorship contexts, protocol blocking may restrict encrypted tunnels like VPN protocols (e.g., OpenVPN on UDP 1194) or degrade HTTPS performance through selective DPI, as observed in national firewalls, where it undermines privacy without fully eliminating access.[45]

Limitations include high resource demands for DPI, which requires terabit-per-second processing in large-scale deployments, and vulnerability to protocol obfuscation, where tools like Shadowsocks encapsulate traffic in innocuous protocols to evade detection.[43] Hybrid systems combining DNS and protocol inspection, such as those in next-generation firewalls, achieve layered defense but introduce latency, with average inspection delays of 5-10 milliseconds per packet in enterprise tests.[13]

| Filter Type | Mechanism | Strengths | Weaknesses | Example Implementations |
|---|---|---|---|---|
| DNS-Based | Domain resolution blocking via invalid responses | Low latency; bandwidth-efficient; easy deployment | Bypassed by IP access or alternative resolvers; no URL/path granularity | Cloudflare DNS, CleanBrowsing[40][42] |
| Protocol-Based | Header/payload inspection (e.g., HTTP URL parsing, protocol signatures) | Fine-grained control; detects encrypted anomalies | High computational cost; prone to evasion via obfuscation | Cisco Umbrella DPI, Zscaler URL filtering |
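The granularity difference summarized in the table can be illustrated with a short sketch: a DNS-based filter sees only the domain, while a URL filter parses the full request path, so one section of an otherwise permitted site can be denied. The rules and domains below are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical path-level rules: the domain itself is permitted, but
# specific path prefixes within it are denied, which DNS-only filtering
# cannot express.
PATH_RULES = {
    "forum-example.com": ["/adult-content", "/gambling"],
}

def allow_request(url):
    """Return True if the parsed URL is permitted under the path rules."""
    parts = urlparse(url)
    for prefix in PATH_RULES.get(parts.hostname or "", []):
        if parts.path.startswith(prefix):
            return False
    return True

assert allow_request("https://forum-example.com/news") is True
assert allow_request("https://forum-example.com/adult-content/thread1") is False
```

A DNS-based filter would have to allow or block forum-example.com wholesale; inspecting the request itself lets the same domain serve both permitted and denied sections, which is why URL filtering requires payload access (and, for HTTPS, decryption or proxying) rather than just name resolution.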
