Criticism of Facebook
Facebook (and parent company Meta Platforms) has been the subject of criticism and legal action since it was founded in 2004.[1] Criticisms include the outsize influence Facebook has on the lives and health of its users and employees, as well as Facebook's influence on the way media, specifically news, is reported and distributed. Notable issues include Internet privacy, such as use of a widespread "like" button on third-party websites tracking users,[2][3] possible indefinite records of user information,[4] automatic facial recognition software,[5][6] and its role in the workplace, including employer-employee account disclosure.[7] The use of Facebook can have negative psychological and physiological effects[8] that include feelings of sexual jealousy,[9][10] stress,[11][12] lack of attention,[13] and social media addiction that in some cases is comparable to drug addiction.[14][15]
Facebook's business operations have also drawn criticism. The company's electricity usage,[16] tax avoidance,[17] real-name user requirement policies,[18] censorship policies,[19][20] handling of user data,[21] and its involvement in the United States PRISM surveillance program and the Facebook–Cambridge Analytica data scandal have been highlighted by the media and by critics.[22][23] Facebook has also come under scrutiny for ignoring or shirking responsibility for the content posted on its platform, including copyright and intellectual property infringement,[24] hate speech,[25][26] incitement of rape,[27] violence against minorities,[28][29][30] terrorism,[31][32] fake news,[33][34][35] and murders, crimes, and other violent incidents live-streamed through its Facebook Live functionality.[36][37][38]
The company and its employees have also faced litigation over the years,[39][40][41][42] most prominently over allegations that CEO Mark Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named "HarvardConnection" social network in 2004, instead allegedly opting to take the idea and code to launch Facebook months before HarvardConnection began.[43][44][45] The original lawsuit was settled in 2009, with Facebook paying approximately $20 million in cash and 1.25 million shares.[46][47] A new lawsuit in 2011 was dismissed.[48] This dispute, alongside another controversy involving Zuckerberg and fellow co-founder and former CFO Eduardo Saverin, was further explored in the 2010 American biographical drama film The Social Network. Some critics point to problems which they say will result in the demise of Facebook. The platform has been banned by several governments for various reasons, including those of Syria,[49] China,[50] Iran,[51] and Russia.
Censorship
While Facebook operates transparent policies around certain types of content moderation, such as the removal of hate speech and of images containing sex or violence, the company has been criticized for selectively censoring information in nontransparent ways. Examples include:
Censorship of criticism of Facebook
Newspapers regularly report stories of users who claim they have been censored on Facebook for criticizing Facebook itself, with their posts removed or made less visible. Examples include Elizabeth Warren in 2019[52] and Rotem Shtarkman in 2016.[53]
In the context of media reports[54] and lawsuits[55] brought by people who formerly worked on Facebook content moderation, former Facebook moderator Chris Gray has claimed that specific rules existed to monitor, and sometimes target, posts critical of Facebook, for instance by matching the keywords "Facebook" or "DeleteFacebook".[56]
Facebook's search function has been accused of preventing users from searching for certain terms. Michael Arrington of TechCrunch has written about Facebook's possible censorship of "Ron Paul" as a search term. MoveOn.org's Facebook group for organizing protests against privacy violations could for a time not be found through search, and the word "privacy" was also restricted.[57]
Censorship around global politics
In 2015, it was reported that Facebook had a policy of censoring anything related to Kurdish opposition to Turkey, such as maps of Kurdistan, flags of Kurdish armed groups (such as the PKK and YPG), and criticism of Mustafa Kemal Atatürk, the founder of Turkey.[58][59]
In 2016, Facebook banned and removed content related to the Kashmir conflict.[60]
During a podcast, Mark Zuckerberg acknowledged that Facebook suppressed coverage of the leak of emails belonging to Joe Biden's son during the 2020 United States elections, following a general warning from the FBI.[61] The censored news claimed that the son of Joe Biden, who had been vice-president in the Obama administration, used his father's influence to arrange a deal with a Ukrainian businessman.
Following outreach from the United States Department of Justice, Facebook removed a Facebook group that was used to share information about Immigration and Customs Enforcement.[62][63][64]
Censorship in line with US foreign policy
In 2021, Facebook was accused of censoring messages critical of Israel and supportive of Palestine.[65] During the conflict over the Sheikh Jarrah property dispute in 2021, Facebook was accused of deleting hundreds of posts critical of Israel. Senior Facebook officials apologized to the Palestinian Prime Minister for censoring pro-Palestinian voices.[66]
In October 2021, The Intercept obtained a secret blacklist of "dangerous individuals and organizations" maintained by Facebook, which revealed that censorship in the MENA region was stricter than in the United States. Critics and scholars have argued that the blacklist and its accompanying guidelines stifle free discussion and produce uneven enforcement of the rules.[67][68]
Privacy issues
Facebook has faced a number of privacy concerns; for instance, in August 2019 it was revealed that the company had enlisted contractors to generate transcripts of users' audio chats. The contractors were tasked with re-transcribing the conversations in order to gauge the accuracy of the automatic transcription tool.[69][70][71] These concerns stem in part from the company's revenue model, which involves selling information about its users, and the loss of privacy this can entail. In addition, employers and other organizations and individuals have been known to use Facebook data for their own purposes. As a result, people's identities have sometimes been revealed without their permission. In response, pressure groups and governments have increasingly asserted users' right to privacy and to control their personal data.
Psychological/sociological effects
Psychiatrist Randolph M. Nesse, who together with evolutionary biologist George C. Williams helped develop evolutionary medicine and noted that most chronic medical conditions are the consequence of evolutionary mismatches between the stateless environment of nomadic hunter-gatherer life in bands and contemporary human life in sedentary, technologically modern state societies (e.g. WEIRD societies),[72] has argued that evolutionary mismatch is an important factor in the development of certain mental disorders.[73][74][75] In 1948, 50 percent of U.S. households owned at least one automobile.[76] In 2000, a majority of U.S. households had at least one personal computer, and internet access the following year.[77] In 2002, a majority of U.S. survey respondents reported having a mobile phone.[78] In September 2007, a majority of U.S. survey respondents reported having broadband internet at home.[79] In January 2013, a majority of U.S. survey respondents reported owning a smartphone.[80]
Facebook addiction
The "World Unplugged" study, conducted in 2011, claims that for some users quitting social networking sites is comparable to quitting smoking or giving up alcohol.[81] Another study, conducted in 2012 by researchers from the University of Chicago Booth School of Business in the United States, found that social networking sites could be even harder to resist than alcohol and tobacco.[82] A 2013 study in the journal CyberPsychology, Behavior, and Social Networking found that some users decided to quit social networking sites because they felt addicted. In 2014, the site went down for about 30 minutes, prompting several users to call emergency services.[83]
In April 2015, the Pew Research Center published a survey of 1,060 U.S. teenagers ages 13 to 17, in which nearly three-quarters reported that they owned or had access to a smartphone, 92 percent went online daily, and 24 percent said they went online "almost constantly".[84] In March 2016, Frontiers in Psychology published a survey of 457 post-secondary student Facebook users (following a face-validity pilot of another 47 post-secondary student Facebook users) at a large university in North America, showing that the severity of ADHD symptoms had a statistically significant positive correlation with Facebook use while driving a motor vehicle, and that impulses to use Facebook while driving were stronger among male users than female users.[85]
In June 2018, Children and Youth Services Review published a regression analysis of 283 adolescent Facebook users in the Piedmont and Lombardy regions of Northern Italy (replicating previous findings among adult users). It showed that adolescents reporting higher ADHD symptoms were more likely to exhibit Facebook addiction, persistent negative attitudes about the past, a belief that the future is predetermined and not influenced by present actions, and an orientation against achieving future goals, with ADHD symptoms additionally increasing the manifestation of the proposed category of psychological dependence known as "problematic social media use".[86]
In October 2023, court documents filed in the US accused Meta of deliberately designing its platforms to foster addiction in the children who use them. The company allegedly knowingly allowed underage users to hold accounts, violating the Children's Online Privacy Protection Act. According to whistleblower Frances Haugen, the company intentionally targets children below the age of 18.[87][88]
Self-harm and suicide
In January 2019, both the Health Secretary of the United Kingdom and the Children's Commissioner for England urged Facebook and other social media companies to take responsibility for the risk to children posed by content on their platforms related to self-harm and suicide.[90] One wrote:

Research shows that people who are feeling suicidal use the internet to search for suicide methods. Websites provide graphic details and information on how to take your own life. This cannot be right. Where this content breaches the policies of internet and social media providers it must be removed.

The other warned:

I do not think it is going too far to question whether even you, the owners, any longer have any control over [the sites'] content. If that is the case, then children should not be accessing your services at all, and parents should be aware that the idea of any authority overseeing algorithms and content is a mirage.
Envy
Facebook has been criticized for making people envious and unhappy through constant exposure to positive but unrepresentative highlights of their peers' lives, such as posts, videos, and photos that depict or reference positive or otherwise exceptional activities, experiences, and facts. The effect arises mainly because most Facebook users display only the positive aspects of their lives while excluding the negative, though it is also strongly connected to inequality and the disparities between social groups, as Facebook is open to users from all classes of society. Sites such as AddictionInfo.org[91] state that this kind of envy has profound effects on other aspects of life and can lead to severe depression, self-loathing, rage and hatred, resentment, feelings of inferiority and insecurity, pessimism, suicidal tendencies and desires, social isolation, and other serious problems. The media have often called this condition "Facebook envy" or "Facebook depression".[92][93][94][95][96][97]
In 2010, Social Science Computer Review published research by economists Ralf Caers and Vanessa Castelyns, who sent an online questionnaire to 398 LinkedIn users and 353 Facebook users in Belgium and found that both sites had become tools for recruiting applicants for professional occupations and for gathering additional information about applicants, which recruiters used to decide who would receive interviews.[98] In 2017, sociologist Ofer Sharone conducted interviews with unemployed workers to research the effects of LinkedIn and Facebook as labor market intermediaries and found that social networking services (SNS) have a filtration effect that has little to do with evaluations of merit, and that this effect has exerted new pressures on workers to manage their careers to conform to its logic.[99]
A joint study conducted by two German universities documented Facebook envy, finding that as many as one in three people felt worse and less satisfied with their lives after visiting the site. Vacation photos were the most common source of resentment and jealousy. Social interaction was the second biggest cause of envy, as Facebook users compared the number of birthday greetings, likes, and comments they received with those of their friends. Visitors who contributed the least tended to feel the worst. "According to our findings, passive following triggers invidious emotions, with users mainly envying happiness of others, the way others spend their vacations; and socialize", the study states.[100]
A 2013 study by researchers at the University of Michigan found that the more people used Facebook, the worse they felt afterwards.[101][96][97]
One study states that narcissistic users who display excessive grandiosity evoke negative emotions in viewers and cause envy, which in turn may lead to viewers' loneliness. Viewers sometimes terminate relationships with such users to avoid these negative emotions, but this avoidance coping can itself reinforce loneliness, producing a vicious circle of loneliness and avoidance.[102]
Divorce
Social networks such as Facebook can have a detrimental effect on marriages, with users becoming worried about their spouse's contacts and relations with other people online, leading to marital breakdown and divorce.[103] According to a 2009 survey in the UK, around 20 percent of divorce petitions included references to Facebook.[104][105][106][107] Researchers have proposed that high levels of Facebook use can result in Facebook-related conflict and breakup or divorce.[108] Previous studies have shown that romantic relationships can be damaged by excessive Internet use, Facebook jealousy, partner surveillance, ambiguous information, and the online portrayal of intimate relationships.[109][110][111][112][113] Excessive Internet users report greater conflict in their relationships; their partners feel neglected, and commitment, passion, and intimacy in the relationship are lower. Researchers suspect that Facebook may contribute to an increase in divorce and infidelity rates in the near future because of how easily it allows users to reconnect with past partners.[108] The use of Facebook can also cause feelings of sexual jealousy.[9][10]
Stress
Research performed by psychologists from Edinburgh Napier University indicated that Facebook adds stress to users' lives. Causes of stress included fear of missing important social information, fear of offending contacts, discomfort or guilt from rejecting user requests or deleting unwanted contacts, being unfriended or blocked by Facebook friends or other users, the displeasure of having friend requests rejected or ignored, the pressure to be entertaining, criticism or intimidation from other Facebook users, and having to use appropriate etiquette for different types of friends.[114] Many people who started using Facebook for positive purposes, or with positive expectations, have found that the website has negatively impacted their lives.[115]
In addition, the increasing number of messages and social relationships embedded in SNS increases the amount of social information demanding a reaction from SNS users. Consequently, users perceive that they are giving too much social support to their SNS friends. This dark side of SNS usage is called "social overload". It is caused by the extent of usage, the number of friends, subjective social support norms, and the type of relationship (online-only vs. offline friends), while age has only an indirect effect. The psychological and behavioral consequences of social overload include perceptions of SNS exhaustion, low user satisfaction, and high intentions to reduce or stop using SNS.[116]
Narcissism
In July 2018, a meta-analysis published in Psychology of Popular Media found that grandiose narcissism positively correlated with time spent on social media, frequency of status updates, number of friends or followers, and frequency of posting self-portrait digital photographs,[117] while a meta-analysis published in the Journal of Personality in April 2018 found that the positive correlation between grandiose narcissism and social networking service usage was replicated across platforms (including Facebook).[118] In March 2020, the Journal of Adult Development published a regression discontinuity analysis of 254 Millennial Facebook users, investigating differences in narcissism and Facebook usage between the age cohorts born from 1977 to 1990 and from 1991 to 2000, and found that the later-born Millennials scored significantly higher on both.[119] In June 2020, Addictive Behaviors published a systematic review finding a consistent, positive, and significant correlation between grandiose narcissism and the proposed category of psychological dependence called "problematic social media use".[120] Also in 2018, social psychologist Jonathan Haidt and FIRE President Greg Lukianoff noted in The Coddling of the American Mind that former Facebook president Sean Parker had stated in a 2017 interview that the Like button was consciously designed to prime users receiving likes to feel a dopamine rush as part of a "social-validation feedback loop".[121]
Non-informing, knowledge-eroding medium
Facebook is a Big Tech company with over 2.7 billion monthly active users as of the second quarter of 2020, and therefore has a meaningful impact on the masses that use it.[122] Big-data algorithms are used to create and automate personalized content; however, this method can also be used to manipulate users in various ways.[123] The problem of misinformation is exacerbated by the educational bubble, users' critical-thinking ability, and news culture.[124] According to a 2015 study, 62.5% of Facebook users were unaware that their News Feed was curated at all. Furthermore, scientists have begun to investigate algorithms with unexpected outcomes that may lead to antisocial political, economic, geographic, racial, or other discrimination. Facebook has offered little transparency into the inner workings of the algorithms used for News Feed ranking.[125] These algorithms use past activity as a reference point for predicting users' tastes in order to keep them engaged. However, this leads to the formation of a filter bubble that increasingly shields users from diverse information, leaving them with a skewed worldview derived from their own preferences and biases.[126]
In 2015, researchers from Facebook published a study indicating that the Facebook algorithm perpetuates an echo chamber amongst users by occasionally hiding content from individual feeds that users would potentially disagree with: for example, the algorithm removed one in every 13 pieces of diverse content from news sources for self-identified liberals. In general, the results indicated that the Facebook ranking system caused approximately 15% less diverse material in users' content feeds, and a 70% reduction in the click-through rate of the diverse material.[127][128] In 2018, social psychologist Jonathan Haidt and FIRE President Greg Lukianoff argued in The Coddling of the American Mind that the filter bubbles created by the News Feed algorithm of Facebook and other platforms are among the principal factors amplifying political polarization in the United States since 2000 (when a majority of U.S. households first had at least one personal computer, and internet access the following year).[129][77]
Facebook has, at least in the political field, a counter-effect on being informed: in two US studies with a total of more than 2,000 participants, the influence of social media on general knowledge of political issues was examined in the context of two US presidential elections. The results showed that the frequency of Facebook use was moderately negatively related to general political knowledge, even after accounting for demographic and political-ideological variables and previous political knowledge. The latter finding indicates a causal relationship: the higher the Facebook use, the more general political knowledge declines.[130] In 2019, Jonathan Haidt argued that there is a "very good chance American democracy will fail, that in the next 30 years we will have a catastrophic failure of our democracy."[131] Following the 2021 United States Capitol attack, in February 2021, Facebook announced that it would reduce the amount of political content in users' News Feeds.[132]
Other psychological effects
Many students have reported experiencing bullying on the site, which leads to psychological harm. High school students face the possibility of bullying and other adverse behavior over Facebook every day. Many studies have attempted to determine whether Facebook has a positive or negative effect on children's and teenagers' social lives, and many have concluded that distinct social problems arise with Facebook usage. British neuroscientist Susan Greenfield drew attention to the issues that children encounter on social media sites, stating that these sites can rewire the brain, which caused some hysteria regarding the safety of social media usage. She did not back up her claims with research, but she did prompt quite a few studies on the subject. When an individual's self-image is broken down by others through badmouthing, criticism, harassment, criminalization or vilification, intimidation, demonization, demoralization, belittlement, or attacks over the site, it can cause much of the envy, anger, or depression users report feeling after prolonged Facebook usage.[133][134][135]
Sherry Turkle, in her book Alone Together: Why We Expect More from Technology and Less from Each Other, argues that social media brings people closer together and further apart at the same time. One of her main points is that there is a high risk in treating people online as objects to be dealt with quickly. Although people are networked on Facebook, their expectations of each other tend to be lessened. According to Turkle, this can cause a feeling of loneliness in spite of being together.[136]
Between 2016 and 2018, the number of 12- to 15-year-olds who reported being bullied over social media rose from 6% to 11%, in the region covered by Ofcom.[90][better source needed]
User influence experiments
Academic and Facebook researchers have collaborated to test whether the messages people see on Facebook can influence their behavior. For instance, in "A 61-Million-Person Experiment in Social Influence and Political Mobilization", conducted during the 2010 elections, Facebook users were given the opportunity to "tell your friends you voted" by clicking an "I voted" button. Users were 2% more likely to click the button if it was associated with friends who had already voted.[137]
Much more controversially, a 2014 study of "Emotional Contagion Through Social Networks" manipulated the balance of positive and negative messages seen by 689,000 Facebook users.[138] The researchers concluded that they had found "some of the first experimental evidence to support the controversial claims that emotions can spread throughout a network, [though] the effect sizes from the manipulations are small."[139]
Unlike the "I voted" study, which had presumptively beneficial ends and raised few concerns, this study was criticized for both its ethics and its methods and claims. As controversy about the study grew, Adam Kramer, a lead author of both studies and a member of the Facebook data team, defended the work in a Facebook update.[140] A few days later, Sheryl Sandberg, Facebook's COO, made a statement while traveling abroad. At an Indian Chambers of Commerce event in New Delhi she stated: "This was part of ongoing research companies do to test different products, and that was what it was. It was poorly communicated and for that communication we apologize. We never meant to upset you."[141]
Shortly thereafter, on July 3, 2014, USA Today reported that the privacy watchdog group Electronic Privacy Information Center (EPIC) had filed a formal complaint with the Federal Trade Commission claiming that Facebook had broken the law when it conducted the study on the emotions of its users without their knowledge or consent. In its complaint, EPIC alleged that Facebook had deceived users by secretly conducting a psychological experiment on their emotions: "At the time of the experiment, Facebook did not state in the Data Use Policy that user data would be used for research purposes. Facebook also failed to inform users that their personal information would be shared with researchers."[142]
Beyond the ethical concerns, other scholars criticized the methods and reporting of the study's findings. John Grohol, writing for Psych Central, argued that despite its title and claims of "emotional contagion", the study did not look at emotions at all. Instead, its authors used an application (called "Linguistic Inquiry and Word Count", or LIWC 2007) that simply counted positive and negative words in order to infer users' sentiments. He wrote that a shortcoming of the LIWC tool is that it does not understand negations. Hence, a post such as "I am not happy" would be scored as positive: "Since the LIWC 2007 ignores these subtle realities of informal human communication, so do the researchers." Grohol concluded that, given these subtleties, the effect size of the findings is little more than a "statistical blip".
Kramer et al. (2014) found a 0.07%—that's not 7 percent, that's 1/15th of one percent!!—decrease in negative words in people's status updates when the number of negative posts on their Facebook news feed decreased. Do you know how many words you'd have to read or write before you've written one less negative word due to this effect? Probably thousands.[143]
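Grohol's objection about negation can be illustrated with a toy sketch of dictionary-based sentiment scoring, the general technique behind tools like LIWC. This is only an illustration of the approach, not the actual LIWC 2007 software, and the word lists here are invented for the example:

```python
# Toy dictionary-based sentiment scorer: count matches against fixed
# word lists, with no handling of negation. Word lists are invented.
POSITIVE = {"happy", "great", "love"}
NEGATIVE = {"sad", "angry", "hate"}

def naive_sentiment(text: str) -> int:
    """Return (# positive word matches) - (# negative word matches)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# "I am not happy" scores +1: "happy" matches the positive list and the
# negation "not" is simply ignored, so the post is counted as positive.
print(naive_sentiment("I am not happy"))
```

Any negated phrase is scored by its content words alone, which is exactly the blind spot Grohol argued undermines the study's claim to have measured emotions.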
It remains unclear whether the controversy will result in FTC or court proceedings, but it did prompt an "Editorial Expression of Concern"[144] from the study's publisher, the Proceedings of the National Academy of Sciences, as well as a blog post from OkCupid titled "We experiment on human beings!"[145] In September 2014, law professor James Grimmelmann argued that the actions of both companies were "illegal, immoral, and mood-altering" and filed notices with the Maryland Attorney General and the Cornell Institutional Review Board.[146]
In the UK, the study was also criticized by the British Psychological Society which said, in a letter to The Guardian, "There has undoubtedly been some degree of harm caused, with many individuals affected by increased levels of negative emotion, with consequent potential economic costs, increase in possible mental health problems and burden on health services. The so-called 'positive' manipulation is also potentially harmful."[147]
Tax avoidance
Facebook uses a complicated series of shell companies in tax havens to avoid paying billions of dollars in corporate tax.[148] According to The Express Tribune, Facebook is among the corporations that "avoided billions of dollars in tax using offshore companies."[149] For example, Facebook routes billions of dollars in profits through the Double Irish and Dutch Sandwich tax avoidance schemes to bank accounts in the Cayman Islands. The Dutch newspaper NRC Handelsblad concluded from the Paradise Papers, published in late 2017, that Facebook pays "practically no taxes" worldwide.[150]
For example, Facebook paid:
- In 2011, £2.9m tax on £840m profits in the UK;
- In 2012 and 2013 no tax in the UK;
- In 2014 £4,327 tax on hundreds of millions of pounds in UK revenues which were transferred to tax havens.[151]
According to Paul Tang, an economist and member of the PvdA delegation within the Progressive Alliance of Socialists & Democrats (S&D) in the European Parliament, between 2013 and 2015 the EU lost an estimated €1,453m to €2,415m in tax revenue to Facebook.[152] Compared with countries outside the EU, the EU taxes Facebook at a rate of only 0.03% to 0.1% of its revenue (around 6% of its EBT), whereas this rate is near 28% in countries outside the EU. Had a rate of even 2% to 5% been applied during this period, as suggested by the ECOFIN Council, the EU's loss would still have been between €327m and €817m.[152]
Facebook Inc. revenue, earnings before tax (EBT), and tax paid, 2013–2015 (m EUR; figures in parentheses are losses):

| | 2013 | 2014 | 2015 |
|---|---|---|---|
| Revenue (Total) | 5,720 | 10,299 | 16,410 |
| Revenue (EU) | 3,069 | 5,017 | 8,253 |
| Revenue (Rest of the world) | 2,651 | 5,282 | 8,157 |
| EBT (Total) | 2,001 | 4,057 | 5,670 |
| EBT (EU) | (4) | (20) | (43) |
| EBT (Rest of the world) | 2,005 | 4,077 | 5,627 |
| Tax (Total) | 911 | 1,628 | 2,294 |
| Tax (EU) | 3 | 5 | 3 |
| Tax (Rest of the world) | 908 | 1,623 | 2,291 |
| Tax / EBT (Total) | 46% | 40% | 40% |
| Tax / EBT (EU) | n.a. | n.a. | 6% |
| Tax / EBT (Rest of the world) | 45% | 40% | 41% |
| Tax / Revenue (Total) | 15.93% | 15.81% | 13.98% |
| Tax / Revenue (EU) | 0.10% | 0.09% | 0.03% |
| Tax / Revenue (Rest of the world) | 34.25% | 30.73% | 28.09% |
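The €327m–€817m figures quoted above can be reproduced from the EU revenue rows of the table, applying the hypothetical 2% and 5% rates suggested by the ECOFIN Council (a quick arithmetic check, not part of the cited analysis):

```python
# EU revenues from the table, in m EUR, for 2013-2015.
eu_revenue = {2013: 3069, 2014: 5017, 2015: 8253}
total = sum(eu_revenue.values())  # 16,339 m EUR over the three years

# Hypothetical tax at the 2% and 5% rates suggested by the ECOFIN Council.
low, high = round(total * 0.02), round(total * 0.05)
print(low, high)  # 327 817 (m EUR), matching the range in the text
```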
On July 6, 2016, the U.S. Department of Justice filed a petition in the U.S. District Court in San Francisco, asking for a court order to enforce an administrative summons issued to Facebook, Inc., under Internal Revenue Code section 7602,[153] in connection with an Internal Revenue Service examination of Facebook's year 2010 U.S. Federal income tax return.[154][155]
In November 2017, the Irish Independent reported that for the 2016 financial year Facebook had paid €30 million of Irish corporation tax on €12.6 billion of revenues routed through Ireland, giving an Irish effective tax rate of under 1%.[156] The €12.6 billion of 2016 Facebook revenues routed through Ireland was almost half of Facebook's global revenues.[157] In April 2018, Reuters wrote that all of Facebook's non-U.S. accounts were legally housed in Ireland for tax purposes, but were being moved due to the May 2018 EU GDPR regulations.[158]
In November 2018, The Irish Times reported that Facebook routed over €18.7 billion of revenues through Ireland (almost half of all global revenues), on which it paid €38 million of Irish corporation tax.[159]
Treatment of employees and labor issues
Labor issues
Moderators
Facebook hires some employees through contractors, including Accenture, Arvato, Cognizant, CPL Resources, Genpact, and Teleperformance, to serve as content moderators, reviewing potentially problematic content posted to both Facebook and Instagram.[164] Many of these contractors face unrealistic expectations, harsh working conditions, and constant exposure to disturbing content, including graphic violence, animal abuse, and child pornography.[160][161] Contractor employment is contingent on achieving and maintaining a score of 98 on a 100-point "accuracy" metric; falling below 98 can result in dismissal. Some have reported post-traumatic stress disorder (PTSD) stemming from lack of access to counseling, coupled with unforgiving expectations and the violent content they are assigned to review.[160]
Content moderator Keith Utley, who was employed by Cognizant, suffered a heart attack at work in March 2018; the office lacked a defibrillator, and Utley was transported to a hospital, where he died.[162][165] Selena Scola, an employee of contractor Pro Unlimited, Inc., sued her employer after she developed PTSD as a result of "constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace".[166] In December 2019, former CPL employee Chris Gray began legal action in the High Court of Ireland, claiming damages for PTSD experienced as a moderator,[167] the first of an estimated 20+ pending cases. In February 2020, employees in Tampa, Florida, filed a lawsuit against Facebook and Cognizant alleging they developed PTSD and related mental health impairments as a result of constant and unmitigated exposure to disturbing content.[168] In 2025, Meta's new moderation contractors in Ghana took legal action against the company over similar issues, with workers reporting mental illness, attempts at self-harm and suicide, and poor working conditions.[169]
In February 2020, European Union Commissioners criticized Facebook's plans for dealing with the working conditions of those contracted to moderate content on the social media platform.[170]
Facebook agreed to settle a class action lawsuit for $52 million on May 12, 2020, which included a $1,000 payment to each of the 11,250 moderators in the class, with additional compensation available for the treatment of PTSD and other conditions resulting from the jobs.[171][172][173]
Employees
Plans for a Facebook-owned real estate development known as "Willow Village" have been criticized for resembling a "company town", which often curtails the rights of residents, and encourages or forces employees to remain within an environment created and monitored by their employer outside of work hours.[174] Critics have referred to the development as "Zucktown" and "Facebookville" and the company has faced additional criticism for the effect it will have on existing communities in California.
As of March 2021, an operations manager at Facebook, along with three unsuccessful candidates from the Facebook hiring process, had complained to the EEOC of racial bias practiced at the company against Black people. The current employee, Oscar Veneszee Jr., accused the firm of conducting subjective evaluations and pushing racial stereotypes. The EEOC has labeled the alleged practice "systemic" racial bias and has initiated an investigation.[175]
Misleading campaigns against competitors
In May 2011, emails were sent to journalists and bloggers making critical allegations about Google's privacy policies; however, it was later discovered that the anti-Google campaign, conducted by PR giant Burson-Marsteller, was paid for by Facebook in what CNN referred to as "a new level of skullduggery" and which The Daily Beast called a "clumsy smear". While taking responsibility for the campaign, Burson-Marsteller said it should not have agreed to keep its client's (Facebook's) identity a secret. "Whatever the rationale, this was not at all standard operating procedure and is against our policies, and the assignment on those terms should have been declined", it said.[176]
In December 2020, Apple Inc. announced an initiative of anti-tracking measures (an opt-in tracking policy) to be introduced to its App Store services. Facebook quickly reacted and began to criticize the initiative, claiming Apple's privacy-focused anti-tracking change would have a "harmful impact on many small businesses that are struggling to stay afloat and on the free internet that we all rely on more than ever". Facebook also launched a "Speak Up For Small Businesses" page. Apple responded that "users should know when their data is being collected and shared across other apps and websites – and they should have the choice to allow that or not". Apple was also backed by the Electronic Frontier Foundation (EFF), which stated that "Facebook touts itself in this case as protecting small businesses, and that couldn't be further from the truth".[177]
In March 2022, The Washington Post revealed that Facebook had partnered with Republican consulting firm Targeted Victory to orchestrate a campaign to damage the public reputation of competitor TikTok.[178]
Copying competitors' products and features
Beyond acquiring competitors in the social and messaging space with strong potential, Facebook often simply copies products or features to get to market faster. Internal emails have shown that Facebook's leadership, including Mark Zuckerberg, was frustrated by the time the company spent on prototyping and suggested exploring copying entire products like Pinterest. "Copying is faster than innovating", admitted an employee on the internal email thread, which continued: "If you gave the top-down order to go ahead, copy e.g. Pinterest or the gaming dynamics on Foursquare ... I am sure [a] very small team of engineers, a [product manager], and a designer would get it done super quickly."[179][180]
Many Facebook employees appear to question Facebook's approach of cloning competitors. According to leaks, one of the top questions at Facebook's internal all-hands was: "What is our next big product, which does not imitate already existing products on the market?"[181]
Snapchat
In June 2014, Facebook launched Slingshot, an app for sending ephemeral photos as Snapchat does. In August 2016, the company released Instagram Stories, a copy of Snapchat's most popular feature.[182]
TikTok
In August 2020, Facebook launched Instagram Reels, a feature that functioned and looked similar to TikTok.[183]
For several months, Facebook was experimenting with an app called Hobbi, which took many cues from Pinterest.[184]
Clubhouse
In the summer of 2021, Facebook started to roll out Live Audio Rooms, which resembles Clubhouse.[185]
Content
Facebook or Meta Platforms has been criticized for its management of various content on posts, photos and entire groups and profiles. This includes but is not limited to allowing violent content, including content related to war crimes, and not limiting the spread of fake news and COVID-19 misinformation on their platform, as well as allowing incitement of violence against multiple groups.
Misleading news publishers and advertisers on video engagement
Facebook heavily pushed news publishers toward producing more video and away from text content. However, the metric Facebook used for time spent on videos was later revealed to be faulty, overestimating viewing time by 60–80%; subsequently unsealed court documents indicated the metric was inflated by between 150% and 900%. A group of advertisers in California sued Facebook over the allegation.[186][187]
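The mechanism behind the inflated metric, as described in contemporary press coverage, was that average watch time was computed only over views lasting longer than three seconds rather than over all views. A toy illustration with invented numbers:

```python
# Hypothetical watch times per view, in seconds; zeros are users who
# scrolled past the video without watching.
views = [0, 0, 0, 1, 2, 40, 60]

# Correct metric: average watch time over every view.
correct = sum(views) / len(views)

# Flawed metric: average only over views longer than 3 seconds.
long_views = [v for v in views if v > 3]
flawed = sum(long_views) / len(long_views)

print(round(correct, 1), flawed)  # -> 14.7 50.0
```

With these invented figures, excluding the short views more than triples the reported average, matching the direction (if not the exact magnitude) of the error described above.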
Technical
Real-name policy controversy and compromise
Facebook has a real-name system policy for user profiles. The real-name policy stems from the position "that way, you always know who you're connecting with. This helps keep our community safe."[18] The real-name system does not allow adopted names or pseudonyms,[188] and in its enforcement has suspended accounts of legitimate users, until the user provides identification indicating the name.[189] Facebook representatives have described these incidents as very rare.[189] A user claimed responsibility via the anonymous Android and iOS app Secret for reporting "fake names" which caused user profiles to be suspended, specifically targeting the stage names of drag queens.[190] On October 1, 2014, Chris Cox, Chief Product Officer at Facebook, offered an apology: "In the two weeks since the real-name policy issues surfaced, we've had the chance to hear from many of you in these communities and understand the policy more clearly as you experience it. We've also come to understand how painful this has been. We owe you a better service and a better experience using Facebook, and we're going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were."[191]
On December 15, 2015, Facebook announced in a press release[192] that it would be providing a compromise to its real-name policy after protests from groups such as the gay/lesbian community and abuse victims.[193] The site said it was developing a protocol allowing members to provide specifics as to their "special circumstance" or "unique situation", with a request to use pseudonyms subject to verification of their true identities. At that time, this was already being tested in the U.S. Product manager Todd Gage and vice president of global operations Justin Osofsky also promised a new method for reducing the number of members who must go through ID verification while ensuring the safety of others on Facebook. The fake-name reporting procedure would also be modified, forcing anyone who makes such an allegation to provide specifics that would be investigated, and giving the accused individual time to dispute the allegation.[194]
Deleting users' statuses
There have been complaints of user statuses being mistakenly or intentionally deleted for alleged violations of Facebook's posting guidelines. Especially for non-English-speaking writers, Facebook lacks a proper support system to genuinely read the content and make decisions. Sometimes the content of a status contained no "abusive" or defamatory language, but it was nevertheless deleted because a group of people had covertly reported it as "offensive". For languages other than English, Facebook has so far been unable to identify this coordinated-reporting approach, which has been used to vilify humanitarian activism. In another incident, Facebook had to apologize after it deleted a free speech group's post about the abuse of human rights in Syria. In that case, a spokesman for Facebook said the post was "mistakenly" removed by a member of its moderation team, which receives a high volume of take-down requests.[195]
Enabling of harassment
Facebook instituted a policy by which the site is self-policed by the community of Facebook users.[when?] Some users have complained that this policy allows Facebook to empower abusive users to harass them: reports can be submitted against even benign comments and photos as "offensive" or "in violation of Facebook Rights and Responsibilities", and enough such reports result in the harassed user's account being blocked for a predetermined number of days or weeks, or even deactivated entirely.[196]
Facebook UK policy director Simon Milner told Wired magazine that "Once the piece of content has been seen, assessed and deemed OK, (Facebook) will ignore further reports about it."[197]
Lack of customer support
Facebook lacks any form of live customer support beyond "community" support pages and FAQs, which offer only general troubleshooting advice, often making it impossible to resolve issues that require the services of an administrator or are not covered in the FAQs. The automated emailing system used when filling out a support form often directs users back to the help center or to pages that are outdated and cannot be accessed, leaving users at a dead end with no further support available. A person who has lost access to Facebook or does not have an account has no easy way to contact the company directly.
Downtime and outages
Facebook has had a number of outages and periods of downtime large enough to draw some media attention. A 2007 outage resulted in a security hole that enabled some users to read other users' personal mail.[198] In 2008, the site was inaccessible for about a day, from many locations in many countries.[199] In spite of these occurrences, a report issued by Pingdom found that Facebook had less downtime in 2008 than most social-networking websites.[200] On September 16, 2009, Facebook started having major problems loading as people signed in, attributed to a group of hackers deliberately trying to silence a political activist who had been using social networks to speak out against the Iranian election results. Just two days later, on September 18, Facebook went down again.[201]
In October 2009, an unspecified number of Facebook users were unable to access their accounts for over three weeks.[202][203][204][205]
On Monday, October 4, 2021, Facebook and its other apps – Instagram, WhatsApp, Messenger, Oculus, as well as the lesser-known Mapillary – had an hours-long DNS-related global outage.[206][207][208] The outage also affected anyone using "Log in with Facebook" to access third-party sites.[209] The downtime lasted approximately five hours and fifteen minutes, from roughly 15:50 UTC to 21:05 UTC, and affected roughly three billion users.[210] The outage was caused by a BGP withdrawal of all of the IP routes to Facebook's Domain Name System (DNS) servers, which were all self-hosted at the time.[211][206]
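Because the failure was at the DNS layer, client software could not even resolve Facebook's hostnames to IP addresses, so requests failed before any connection to Facebook's servers was attempted. A minimal sketch of that failure mode (illustrative only; the hostnames are examples, not Facebook's tooling):

```python
import socket

def resolves(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves to an address, False on resolution failure."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        # With the routes to the authoritative DNS servers withdrawn,
        # lookups fail here, before any TCP connection is attempted.
        return False

print(resolves("localhost"))     # True: resolved locally without DNS
print(resolves("name.invalid"))  # False: the reserved .invalid TLD never resolves
```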
A further global outage occurred on Tuesday, March 5, 2024.[212] Facebook, Instagram, Threads and Messenger suddenly stopped working worldwide at 15:00 UTC, with service restored two hours later. The outage occurred on Super Tuesday, a day of many presidential primary elections in the United States.[213] The cause was reportedly a problem with an automated tool for fixing configuration values.[214] X owner Elon Musk mocked the outage on his platform.[215][216][217][218]
Tracking cookies
Facebook has been criticized heavily for 'tracking' users, even when they are logged out of the site. Australian technologist Nik Cubrilovic discovered that when a user logs out of Facebook, the cookies from that login are still kept in the browser, allowing Facebook to track users on websites that include "social widgets" distributed by the social network. Facebook denied the claims, saying it has 'no interest' in tracking users or their activity. After the discovery of the cookies, Facebook promised to remove them from the site. A group of users in the United States have sued Facebook for breaching privacy laws.[219]
As of December 2015[update], to comply with a court order citing violations of the European Union Directive on Privacy and Electronic Communications – which requires users to consent to tracking and storage of data by websites – Facebook no longer allows users in Belgium to view any content on the service, even public pages, without being registered and logged in.[220]
Email address change
In June 2012, Facebook removed all existing email addresses from user profiles and added a new @facebook.com email address. Facebook claimed this was part of adding a "new setting that gives people the choice to decide which addresses they want to show on their timelines". However, this setting was redundant to the existing "Only Me" privacy setting, which was already available to hide addresses from timelines. Users complained that the change was unnecessary, that they did not want an @facebook.com email address, and that they did not receive adequate notification that their profiles had been changed.[221] Due to a software bug, the change in email address was synchronized to phones, causing existing email address details to be deleted.[222] The facebook.com email service was retired in February 2014.[223]
Safety Check bug
On March 27, 2016, following a bombing in Lahore, Pakistan, Facebook activated its "Safety Check" feature – which allows people to let friends and loved ones know they are okay following a crisis or natural disaster – for people who were never in danger, or even anywhere near the Pakistan explosion. Some users as far away as the US, UK and Egypt received notifications asking if they were okay.[224][225]
End-to-end encryption
In February 2021, the National Crime Agency of the UK expressed concern that the installation of end-to-end encryption methods would result in the spread of child pornography going undetected.[226][227][228] Facebook representatives had previously told a UK Parliament committee that the use of these stronger encryption methods would make it easier for pedophiles to share child pornography on Facebook's networks.[226][229] The US-based National Center for Missing and Exploited Children estimates that around 70% of reports to law enforcement regarding the spread of child pornography on Facebook would be lost as a result of the implementation of end-to-end encryption.[229]
In May 2021, Facebook came under fire from Ken McCallum, the Director-General of MI5, for its plans to introduce end-to-end encryption into its Messenger and Instagram services.[226][230] McCallum stated that the introduction of such encryption methods would prevent security organizations from viewing communications related to ongoing terrorist plots and that the implementation of end-to-end encryption would block active counter-terrorism investigations.[226][230][231]
Banning accounts
On July 3, 2025, the BBC reported that Facebook and Instagram users had had their accounts arbitrarily banned.[232] A petition alleged that AI was used both to issue the bans and to handle the appeals. Accounts were occasionally reinstated after the BBC asked Meta about those cases.[233]
Third-party responses to Facebook
[edit]Government censorship
Several countries have banned access to Facebook, including Syria,[234] China,[235] and Iran.[236] In 2010, the Office of the Data Protection Supervisor, a branch of the government of the Isle of Man, received so many complaints about Facebook that it deemed it necessary to provide a "Facebook Guidance" booklet (available online as a PDF file), which cited (amongst other things) Facebook policies and guidelines and included an elusive Facebook telephone number. This number, when called, proved to provide no telephone support for Facebook users, and only played back a recorded message advising callers to review Facebook's online help information.[237]
In 2010, Facebook reportedly allowed a page deemed by the Islamic Lawyers Forum (ILF) to be anti-Muslim. The ILF filed a petition with Pakistan's Lahore High Court. On May 18, 2010, Justice Ijaz Ahmad Chaudhry ordered Pakistan's Telecommunication Authority to block access to Facebook until May 31. The offensive page had provoked street demonstrations in Muslim countries due to visual depictions of Muhammad, which are regarded as blasphemous by Muslims.[238][239] A spokesman said the Pakistan Telecommunication Authority would move to implement the ban once the order had been issued by the Ministry of Information and Technology. "We will implement the order as soon as we get the instructions", Khurram Mehran told AFP. "We have already blocked the URL link and issued instructions to Internet service providers yesterday", he added. Rai Bashir told AFP that "We moved the petition in the wake of widespread resentment in the Muslim community against the Facebook contents". The petition called on the government of Pakistan to lodge a strong protest with the owners of Facebook, he added. Bashir said a PTA official told the judge his organization had blocked the page, but the court ordered a total ban on the site. People demonstrated outside the court in the eastern city of Lahore, carrying banners condemning Facebook. Larger-scale protests took place in Pakistan after the ban and the widespread news of the objectionable page. The ban was lifted on May 31 after Facebook reportedly assured the Lahore High Court that it would remedy the issues in dispute.[240][241][242]
In 2011, a court in Pakistan was petitioned to place a permanent ban on Facebook for hosting a page called "2nd Annual Draw Muhammad Day May 20th 2011".[243][244]
Government fines
In July 2024, Nigeria's government fined Meta $220 million for violating the country's data protection and consumer rights laws on Facebook and WhatsApp. According to Nigeria's Federal Competition and Consumer Protection Commission (FCCPC), Meta's violations included sharing Nigerian users' data without permission, denying consumers control over their data, practicing discrimination, and abusing market dominance.[245]
Organizations blocking access
Ontario government employees, federal public servants, MPPs, and cabinet ministers were blocked from access to Facebook on government computers in May 2007.[246] When employees tried to access Facebook, a warning message appeared: "The Internet website that you have requested has been deemed unacceptable for use for government business purposes". This warning also appears when employees try to access YouTube, MySpace, gambling or pornographic websites.[247] However, innovative employees have found ways around such protocols, and many claim to use the site for political or work-related purposes.[248]
A number of local governments including those in the UK[249] and Finland[250] imposed restrictions on the use of Facebook in the workplace due to the technical strain incurred. Other government-related agencies, such as the US Marine Corps have imposed similar restrictions.[251] A number of hospitals in Finland have also restricted Facebook use citing privacy concerns.[252][253]
Schools blocking access
The University of New Mexico (UNM) in October 2005 blocked access to Facebook from UNM campus computers and networks, citing unsolicited emails and a similar site called UNM Facebook.[254] After a UNM user signed into Facebook from off campus, a message from Facebook said, "We are working with the UNM administration to lift the block and have explained that it was instituted based on erroneous information, but they have not yet committed to restore your access." UNM, in a message to students who tried to access the site from the UNM network, wrote, "This site is temporarily unavailable while UNM and the site owners work out procedural issues. The site is in violation of UNM's Acceptable Computer Use Policy for abusing computing resources (e.g., spamming, trademark infringement, etc.). The site forces use of UNM credentials (e.g., NetID or email address) for non-UNM business." However, after Facebook created an encrypted login and displayed a precautionary message not to use university passwords for access, UNM unblocked access the following spring semester.[255]
The Columbus Dispatch reported on June 22, 2006, that Kent State University's athletic director had planned to ban the use of Facebook by athletes and gave them until August 1 to delete their accounts.[256] On July 5, 2006, the Daily Kent Stater reported that the director reversed the decision after reviewing the privacy settings of Facebook. As long as they followed the university's policies of online conduct, they could keep their profiles.[257]
Closed social networks
Several web sites concerned with social networking, such as Salesforce, have criticized the lack of information that users get when they share data. Advanced users cannot limit the amount of information anyone can access in their profiles, while Facebook promotes the sharing of personal information for marketing purposes, leading to the promotion of the service using personal data from users who are not fully aware of this. Facebook exposes personal data without supporting open standards for data interchange.[258] According to several communities[259] and authors,[260] closed social networking, on the other hand, promotes data retrieval from other people while not exposing one's personal information.
Openbook was established in early 2010 both as a parody of Facebook and a critique of its changing privacy management protocols.[261]
FB Purity
Fluff Busting Purity, or FB Purity for short (previously known as Facebook Purity) is a browser extension first launched in 2009 to allow users to remove annoyances such as spam from their feed and allow more individual control over what content is displayed.[262] In response, Facebook banned its developer from using the platform and blocked links to the extension.[263]
Unfollow Everything
Unfollow Everything is a browser extension designed to help Facebook users reduce their time spent on the platform by mass-unfollowing friends, groups, and pages to reduce the clutter in their news feed. The extension, together with its creator, has been banned by Facebook and subjected to legal warnings.[264][265][266]
In 2024, Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst, filed a suit against Meta in federal court to establish the legality of a hypothetical Unfollow Everything 2.0 browser extension.[267] The case was dismissed on procedural grounds in November 2024.[268][269]
Litigation
Meta Platforms, formerly Facebook, Inc., has been involved in many lawsuits since its founding in 2004.
Lobbying
Facebook is one of the biggest spenders on lobbying among tech companies; in 2020, it was the highest spender.[270] It spent more than $80 million on lobbying in the 2010s.[271][272] This funding may serve to weaken privacy protections.[273]
In March 2019, HuffPost reported that Facebook paid lawyer Ed Sussman to lobby for changes to their Wikipedia articles.[274][275]
In December 2021, The Wall Street Journal reported on Meta's lobbying efforts to divide US lawmakers and "muddy the waters" in Congress in order to hinder regulation following the 2021 whistleblower leaks.[276] Facebook's lobbyist team in Washington suggested to Republican lawmakers that the whistleblower "was trying to help Democrats", while the narrative told to Democratic staffers was that Republicans "were focused on the company's decision to ban expressions of support for Kyle Rittenhouse", the Journal reported. According to the article, the company's goal was to "muddy the waters, divide lawmakers along partisan lines and forestall a cross-party alliance" against Facebook (now Meta) in Congress.[277]
In March 2022, The Washington Post reported that Meta had hired the Republican-backed consulting firm Targeted Victory to coordinate lobbying and negative PR against the video app TikTok via local media outlets, including concurrent promotion of corporate initiatives conducted by Facebook.[178]
Terms of use controversy
Facebook originally changed its terms of use,[278] or terms of service, on February 4, 2009, but the changes went unnoticed until Chris Walters, a blogger for the consumer-oriented blog The Consumerist, noticed them on February 15, 2009.[279] Walters complained that the change gave Facebook the right to "Do anything they want with your content. Forever."[280] The clause under the most controversy is the "User Content Posted on the Site" clause. Before the changes, the clause read:[278][non-primary source needed]
You may remove your User Content from the Site at any time. If you choose to remove your User Content, the license granted above will automatically expire, however you acknowledge that the Company may retain archived copies of your User Content.
The "license granted" refers to the license that Facebook has to one's "name, likeness, and image" to use in promotions and external advertising.[278] The new terms of use deleted the phrase that states the license would "automatically expire" if a user chose to remove content. By omitting this line, Facebook license extends to adopt users' content perpetually and irrevocably years after the content has been deleted.[279]
Many users of Facebook voiced opinions against the changes to the Facebook Terms of Use, leading to an Internet-wide debate over the ownership of content. The Electronic Privacy Information Center (EPIC) prepared a formal complaint with the Federal Trade Commission. Many individuals were frustrated with the removal of the controversial clause. Facebook users, numbering more than 38,000, joined a user group against the changes, and a number of blogs and news sites have written about this issue.[279]
After Walters's blog entry brought the change to light, Zuckerberg addressed the concerns in his own blog on February 16, 2009. Zuckerberg wrote, "Our philosophy is that people own their information and control who they share it with."[281] In addition to this statement, Zuckerberg explained the paradox created when people want to share their information (phone number, pictures, email address, etc.) with the public, but at the same time desire to remain in complete control of who has access to this info.[282]
To calm criticism, Facebook returned to its original terms of use. On February 17, 2009, Zuckerberg wrote in his blog that although Facebook had reverted to its original terms of use, it was in the process of developing new terms to address the paradox. Zuckerberg stated that these new terms would allow Facebook users to "share and control their information, and it will be written clearly in language everyone can understand." Zuckerberg invited users to join a group entitled "Facebook Bill of Rights and Responsibilities" to give their input and help shape the new terms.
On February 26, 2009, Zuckerberg posted a blog entry updating users on the progress of the new terms of use. He wrote, "We decided we needed to do things differently and so we're going to develop new policies that will govern our system from the ground up in an open and transparent way." Zuckerberg introduced two new additions to Facebook: the Facebook Principles[283][non-primary source needed] and the Statement of Rights and Responsibilities.[284][non-primary source needed] Both additions allow users to vote on changes to the terms of use before they are officially released. Because "Facebook is still in the business of introducing new and therefore potentially disruptive technologies", Zuckerberg explained, users need to adjust and familiarize themselves with the products before they can adequately show their support.[285]
This new voting system was initially applauded as Facebook's step to a more democratized social network system.[286] However, the new terms were harshly criticized in a report by computer scientists from the University of Cambridge, who stated that the democratic process surrounding the new terms is disingenuous and significant problems remain in the new terms.[287] The report was endorsed by the Open Rights Group.[288]
In December 2009, EPIC and a number of other U.S. privacy organizations filed another complaint[289] with the Federal Trade Commission (FTC) regarding Facebook's Terms of Service. In January 2011 EPIC filed a subsequent complaint[290] claiming that Facebook's new policy of sharing users' home address and mobile phone information with third-party developers were "misleading and fail[ed] to provide users clear and privacy protections", particularly for children under age 18.[291] Facebook temporarily suspended implementation of its policy in February 2011, but the following month announced it was "actively considering" reinstating the third-party policy.[292]
Interoperability and data portability
Facebook has been criticized for failing to offer users a feature to export their friends' information, such as contact information, for use with other services or software. Users' inability to export their social graph in an open standard format contributes to vendor lock-in and contravenes the principles of data portability.[293] Automated collection of user information without Facebook's consent violates its Statement of Rights and Responsibilities,[294][non-primary source needed] and third-party attempts to do so (e.g., web scraping) have resulted in litigation, as in the Power.com case.
Facebook Connect has been criticized for its lack of interoperability with OpenID.[295]
Lawsuits over privacy
Facebook's strategy of generating revenue through advertising has created controversy among its users; some argue that it is "a bit creepy ... but it is also brilliant."[296] Some Facebook users have raised privacy concerns because they do not like that Facebook sells users' information to third parties. In 2012, users sued Facebook for using their pictures and information in a Facebook advertisement.[297] Facebook gathers user information by keeping track of the pages users have "Liked" and of the interactions users have with their connections.[298] It then creates value from the gathered data by selling it.[298] In 2009, users also filed a lawsuit over Facebook's invasion of privacy through the Facebook Beacon system. Facebook's team believed that through the Beacon system people could inspire their friends to buy similar products; however, users did not like the idea of sharing certain online purchases with their Facebook friends.[299] Users objected to Facebook's invasion of their privacy and its sharing of that information with the world. Facebook users became more aware of Facebook's handling of user information in 2009, when Facebook launched its new Terms of Service. In its terms of service, Facebook admits that user information may be used for some of Facebook's own purposes, such as sharing a link to a user's posted images or for its own commercials and advertisements.[300]
As Dijck argues in his book, "the more users know about what happens to their personal data, the more inclined they are to raise objections."[298] This created a battle between Facebook and its users described as the "battle for information control".[298] Facebook users have become aware of Facebook's intentions, and people now see Facebook "as serving the interests of companies rather than its users."[301] In response to Facebook selling user information to third parties, concerned users have resorted to obfuscation. Through obfuscation, users can purposely hide their real identities and provide Facebook with false information that makes their collected data less accurate. By obfuscating information through tools such as FaceCloak, Facebook users have regained control of their personal information.[302]
Better Business Bureau review
As of December 2010[update], the Better Business Bureau gave Facebook an "A" rating.[303][304]
As of December 2010[update], the 36-month running count of complaints about Facebook logged with the Better Business Bureau is 1136, including 101 ("Making a full refund, as the consumer requested"), 868 ("Agreeing to perform according to their contract"), 1 ("Refuse [sic] to adjust, relying on terms of agreement"), 20 ("Unassigned"), 0 ("Unanswered") and 136 ("Refusing to make an adjustment").[303]
Security
Facebook's software has proven vulnerable to likejacking. On July 28, 2010, the BBC reported that security consultant Ron Bowes had used a piece of code to scan Facebook profiles and collect data from 100 million of them. The data collected was not hidden by users' privacy settings. Bowes then published the list online. The list, shared as a downloadable file, contains the URL of every searchable Facebook user's profile, along with their name and unique ID. Bowes said he published the data to highlight privacy issues, but Facebook claimed it was already public information.[305]
In early June 2013, The New York Times reported that an increase in malicious links related to the Trojan horse malware program Zeus had been identified by Eric Feinberg, founder of the advocacy group Fans Against Kounterfeit Enterprise (FAKE). Feinberg said that the links were present on popular NFL Facebook fan pages and, after contacting Facebook, was dissatisfied with the corporation's "after-the-fact approach". Feinberg called for oversight, stating, "If you really want to hack someone, the easiest place to start is a fake Facebook profile—it's so simple, it's stupid."[306]
Rewards for vulnerability reporting
On August 19, 2013, it was reported that Khalil Shreateh, a Facebook user from the Palestinian territories, had found a bug that allowed him to post material to other users' Facebook Walls. Users are not supposed to be able to post to the Walls of other users unless they are approved friends. To prove he was telling the truth, Shreateh posted material to the Wall of Sarah Goodin, a friend of Facebook CEO Mark Zuckerberg. Shreateh then contacted Facebook's security team with proof that the bug was real, explaining in detail what was going on. Facebook has a bounty program that pays people a fee of $500 or more for reporting bugs instead of exploiting them or selling them on the black market. However, instead of fixing the bug and paying Shreateh the fee, Facebook originally told him that "this was not a bug" and dismissed him. Shreateh tried a second time to inform Facebook and was dismissed again. On the third try, Shreateh used the bug to post a message to Mark Zuckerberg's own Wall, stating "Sorry for breaking your privacy ... but a couple of days ago, I found a serious Facebook exploit" and that Facebook's security team was not taking him seriously. Within minutes, a security engineer contacted Shreateh, questioned him on how he had done it, and ultimately acknowledged that it was a bug in the system. Facebook temporarily suspended Shreateh's account and fixed the bug after several days. However, in a move met with much public criticism and disapproval, Facebook refused to pay Shreateh the fee; it responded that by posting to Zuckerberg's account, Shreateh had violated its terms of service and therefore "could not be paid". The Facebook team also strongly censured Shreateh over his manner of resolving the matter. In closing, they asked that Shreateh continue to help them find bugs.[307][308][309]
On August 22, 2013, Yahoo News reported that Marc Maiffret, chief technology officer of the cybersecurity firm BeyondTrust, was urging hackers to help raise a $10,000 reward for Khalil Shreateh. On August 20, Maiffret stated that he had already raised $9,000, including $2,000 he contributed himself. He and other hackers denounced Facebook for refusing Shreateh compensation. Maiffret said: "He is sitting there in Palestine doing this research on a five-year-old laptop that looks like it is half broken. It's something that might help him out in a big way." Facebook representatives responded, "We will not change our practice of refusing to pay rewards to researchers who have tested vulnerabilities against real users." Facebook representatives also said the company had paid out over $1 million to individuals who had discovered bugs in the past.[310]
Environmental impacts
In 2010, Prineville, Oregon, was chosen as the site for Facebook's new data center.[311] However, the center has been met with criticism from environmental groups such as Greenpeace because the power utility company contracted for the center, PacifiCorp, generates 60% of its electricity from coal.[312][313][314] In September 2010, Facebook received a letter from Greenpeace containing half a million signatures asking the company to cut its ties to coal-based electricity.[315]
On April 21, 2011, Greenpeace released a report showing that of the top ten big brands in cloud computing, Facebook relied the most on coal for electricity for its data centers. At the time, data centers consumed up to 2% of all global electricity and this amount was projected to increase. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today".[316]
On December 15, 2011, Greenpeace and Facebook announced together that Facebook would shift to use clean and renewable energy to power its own operations. Marcy Scott Lynn, of Facebook's sustainability program, said it looked forward "to a day when our primary energy sources are clean and renewable" and that the company is "working with Greenpeace and others to help bring that day closer".[317][318]
In April 2022, Meta Platforms, Alphabet Inc., Shopify, McKinsey & Company, and Stripe, Inc. announced a $925 million advance market commitment to purchase carbon dioxide removal (CDR) from companies developing CDR technology over the following nine years.[319][320] In January 2023, the American Clean Power Association released an annual industry report that found that 326 corporations had contracted 77.4 gigawatts of wind or solar energy by the end of 2022 and that the three corporate purchasers of the largest volumes of wind and solar energy were Meta Platforms, Amazon, and Alphabet Inc.[321]
Advertising
Click fraud
In July 2012, startup Limited Run claimed that 80% of its Facebook clicks came from bots.[322][323][324] Limited Run co-founder Tom Mango told TechCrunch that they "spent roughly a month testing this" with six web analytics services, including Google Analytics and in-house software.[322] Limited Run said it concluded that the clicks were fraudulent after running its own analysis: it determined that most of the clicks for which Facebook was charging it came from computers that were not loading JavaScript, the programming language that allows web pages to be interactive. Almost all web browsers load JavaScript by default, so the assumption is that if a click comes from a browser that does not, it is probably a bot rather than a real person.[325]
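The heuristic Limited Run described can be sketched as a simple pass over click logs: count clicks whose sessions never fired a JavaScript beacon and treat that fraction as the suspected bot rate. The log structure and field names below are illustrative assumptions, not Limited Run's actual implementation.

```python
# Sketch of the no-JavaScript click-fraud heuristic (illustrative only).
# Each click record carries a 'js_loaded' flag, which a real analytics
# pipeline would set when a JavaScript beacon fires for that session.

def suspected_bot_rate(clicks):
    """Return the fraction of clicks whose sessions never loaded JavaScript."""
    clicks = list(clicks)
    if not clicks:
        return 0.0
    no_js = sum(1 for c in clicks if not c["js_loaded"])
    return no_js / len(clicks)

# Hypothetical click log: one click from a JS-capable browser, four without.
log = [
    {"click_id": 1, "js_loaded": True},   # beacon fired: likely a real browser
    {"click_id": 2, "js_loaded": False},  # no beacon: suspected bot
    {"click_id": 3, "js_loaded": False},
    {"click_id": 4, "js_loaded": False},
    {"click_id": 5, "js_loaded": False},
]

print(f"{suspected_bot_rate(log):.0%} of clicks suspected fraudulent")  # 80%
```

The heuristic is deliberately coarse: users who disable JavaScript are misclassified as bots, and bots running headless browsers that do execute JavaScript pass undetected, which is why Limited Run cross-checked against six analytics services.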
Like fraud
Facebook offers an advertising tool for pages to get more "likes".[326][non-primary source needed] According to Business Insider, this advertising tool is called "Suggested Posts" or "Suggested Pages", allowing companies to market their page to thousands of new users for as little as $50.[327]
Global Fortune 100 firms are increasingly using social media marketing tools, as the number of "likes" per Facebook page has risen by 115% globally.[clarification needed][328] Biotechnology company Comprendia investigated Facebook "likes" gained through advertising by analyzing the life science pages with the most likes. It concluded that as much as 40% of the "likes" on company pages are suspected to be fake.[329] According to Facebook's annual report, an estimated 0.4% to 1.2% of active users are undesirable accounts that create fake likes.[330]
Small companies such as PubChase have publicly testified against Facebook's advertising tool, claiming that legitimate advertising on Facebook creates fraudulent "likes". In May 2013, PubChase decided to build up its Facebook following through Facebook's advertising tool, which promises to "connect with more of the people who matter to you". After the first day, the company grew suspicious of the increased likes, as it ended up with 900 likes from India. According to PubChase, none of the users behind the "likes" appeared to be scientists, and statistics from Google Analytics indicated that India is not part of the company's main user base. PubChase also stated that Facebook offers no interface to delete the fake likes; instead, the company had to manually delete each follower itself.[331]
In February 2014, Derek Muller used his YouTube channel Veritasium to upload a video titled "Facebook Fraud". Within three days, the video had gone viral with more than a million views (it had reached 6,371,759 views as of December 15, 2021). In the video, Muller illustrates how, after he paid US$50 for Facebook advertising, the "likes" on his fan page tripled in a few days and soon reached 70,000, compared to his original 2,115 likes before the advertising. Despite the significant increase in likes, Muller noticed that engagement on his page had actually decreased: fewer people were commenting on, sharing, and liking his posts and updates. Muller also noticed that the users who "liked" his page had liked hundreds of other pages, including competing pages such as AT&T and T-Mobile. He theorizes that such users purposely click "like" on any and every page to divert attention from the pages they were paid to "like". Muller claims, "I never bought fake likes, I used Facebook legitimate advertising, but the results are as if I paid for fake likes from a click farm".[332][better source needed]
In response to the fake "likes" complaints, Facebook told Business Insider:
We're always focused on maintaining the integrity of our site, but we've placed an increased focus on abuse from fake accounts recently. We've made a lot of progress by building a combination of automated and manual systems to block accounts used for fraudulent purposes and Like button clicks. We also take action against sellers of fake clicks and help shut them down.[327]
Undesired targeting
On August 3, 2007, several British companies, including First Direct, Vodafone, Virgin Media, The Automobile Association, Halifax and Prudential, pulled advertising from Facebook after finding that their ads were displayed on the page of the British National Party, a far-right political party.[333]
Internal projections
In 2024, Facebook's internal projections showed that roughly 10% of its revenue that year would come from "ads for scams and banned goods", with 15 billion "higher risk" ads shown daily. A spokesperson responded that the estimates were "rough and overly-inclusive".[334]
Facilitation of housing discrimination
Facebook has faced allegations that its advertising platforms facilitate housing discrimination by means of internal functions for targeted advertising, which allowed advertisers to target or exclude specific audiences from campaigns.[335][336][337] Researchers have also found that Facebook's advertising platform may be inherently discriminatory, since ad delivery is also influenced by how often specific demographics interact with specific types of advertising – even if they are not explicitly determined by the advertiser.[338]
Under the United States' Fair Housing Act, it is illegal to show a preference for or against tenants based on specific protected classes (including race, ethnicity, and disabilities), when advertising or negotiating the rental or sale of housing. In 2016, ProPublica found that advertisers could target or exclude users from advertising based on an "Ethnic Affinity" – a demographic trait which is determined based on a user's interests and behaviors on Facebook, and not explicitly provided by the user. This could, in turn, be used to discriminate based on race.[339] In February 2017, Facebook stated that it would implement stronger measures to forbid discriminatory advertising across the entire platform. Advertisers who attempt to create ads for housing, employment, or credit (HEC) opportunities would be blocked from using ethnic affinities (renamed "multicultural affinities" and now classified as behaviors) to target the ad. If an advertiser uses any other audience segment to target ads for HEC, they would be informed of the policies, and be required to affirm their compliance with relevant laws and policies.[340]
However, in November 2017, ProPublica found that automated enforcement of these new policies was inconsistent. They were also able to successfully create housing ads that excluded users based on interests and other factors that effectively imply associations with protected classes, including interests in wheelchair ramps, the Spanish-language television network Telemundo, and New York City ZIP codes with majority minority populations. In response to the report, Facebook temporarily removed the ability to target any ad with exclusions based on multicultural affinities.[335][337]
In April 2018, Facebook permanently removed the ability to create exclusions based on multicultural affinities. In July 2018, Facebook signed a legally binding agreement with the State of Washington to take further steps within 90 days to prevent the use of its advertising platform for housing discrimination against protected classes.[341] The following month, Facebook announced that it would remove at least 5,000 categories from its exclusion system to prevent "misuse", including those relating to races and religions.[342] On March 19, 2019, Facebook settled a lawsuit over the matter with the National Fair Housing Alliance, agreeing to create a separate portal for HEC advertising with limited targeting options by September 2019, and to provide a public archive of all HEC advertising.[343][344]
On March 28, 2019, the U.S. Department of Housing and Urban Development (HUD) filed a lawsuit against Facebook, having filed a formal complaint against the company on August 13, 2018. The HUD also took issue with Facebook's tendency to deliver ads based on users having "particular characteristics [that are] most likely to engage with the ad".[345][336]
Fake accounts
In August 2012, Facebook revealed that more than 83 million Facebook accounts (8.7% of total users) were fake.[346] These fake profiles include duplicate profiles, accounts used for spamming, and personal profiles for businesses, organizations, or non-human entities such as pets.[347] Following this revelation, Facebook's share price dropped below $20.[348] There has also been considerable effort to detect fake profiles by automated means; one such study used machine learning techniques to identify fake users.[349]
Facebook initially refused to remove a "business" page devoted to a woman's anus, created without her knowledge while she was underage, due to other Facebook users having expressed interest in the topic. After BuzzFeed published a story about it, the page was finally removed. The page listed her family's former home address as that of the "business".[350]
User interface
[edit]Upgrades
[edit]September 2008
In September 2008, Facebook permanently moved its users to what it termed the "New Facebook" or Facebook 3.0.[351] This version contained several different features and a complete layout redesign. Between July and September, users had been given the option to use the new Facebook in place of the original design,[352] or to return to the old design.
Facebook's decision to migrate its users was met with some controversy in its community. Several groups formed to oppose the decision, some with over a million members.[353]
October 2009
In October 2009, Facebook redesigned the news feed so that users could view all types of content their friends were involved with. In a statement, Facebook said,[282]
[the stories] your applications generate can show up in both views. The best way for your stories to appear in the News Feed filter is to create stories that are highly engaging, as high quality, interesting stories are most likely to garner likes and comments by the user's friends.
This redesign was explained as:[282]
News Feed will focus on popular content, determined by an algorithm based on interest in that story, including the number of times an item is liked or commented on. Live Feed will display all recent stories from a large number of a user's friends.
The redesign was met with immediate criticism from users, many of whom did not like the amount of information coming at them. This was compounded by the fact that users could not select what they saw.
November/December 2009
In November 2009, Facebook issued a proposed new privacy policy and adopted it unaltered in December 2009. It combined this with a rollout of new privacy settings. The new policy declared certain information, including "lists of friends", to be "publicly available", with no privacy settings; it had previously been possible to restrict access to this information. As a result of this change, users who had set their "list of friends" as private were forced to make it public without even being informed, and the option to make it private again was removed. This was protested by many people and by privacy organizations such as the EFF.[354]
The change was described by Ryan Tate as Facebook's Great Betrayal,[355] forcing user profile photos and friends lists to be visible in users' public listing, even for users who had explicitly chosen to hide this information previously,[354] and making photos and personal information public unless users were proactive about limiting access.[356] For example, a user whose "Family and Relationships" information was set to be viewable by "Friends Only" would default to being viewable by "Everyone" (publicly viewable). That is, information such as the gender of the partner the user is interested in, relationship status, and family relations became viewable even to those without a Facebook account. Facebook was heavily criticized[357] for both reducing its users' privacy and pushing users to remove privacy protections. Groups criticizing the changes included the Electronic Frontier Foundation[354] and the American Civil Liberties Union.[358] CEO Mark Zuckerberg had hundreds of personal photos and his events calendar exposed in the transition.[359] Facebook has since re-included an option to hide friends lists from being viewable; however, this preference is no longer listed with other privacy settings, and the former ability to hide the friends list from selected people among one's own friends is no longer available.[360] Journalist Dan Gillmor deleted his Facebook account over the changes, stating he "can't entirely trust Facebook",[361] and Heidi Moore at Slate's Big Money temporarily deactivated her account as a "conscientious objection".[362] Other journalists were similarly disappointed and outraged by the changes.[355] Defending the changes, founder Mark Zuckerberg said "we decided that these would be the social norms now and we just went for it".[363] The Office of the Privacy Commissioner of Canada launched another investigation into Facebook's privacy policies after complaints following the change.[364]
January 2018
Following a difficult 2017, marked by accusations of relaying fake news and by revelations about Russia-linked groups that tried to influence the 2016 US presidential election (see Russian interference in the 2016 United States elections) via advertisements on his service, Mark Zuckerberg announced in his traditional January post:
"We're making a major change to how we build Facebook. I'm changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions".
— Mark Zuckerberg
Following surveys of Facebook users,[365] this desire for change would take the form of a reconfiguration of the News Feed algorithms to:
- Prioritize content of family members and friends (Mark Zuckerberg January 12, Facebook:[366] "The first changes you'll see will be in News Feed, where you can expect to see more from your friends, family and groups".)
- Give priority to news articles from local sources considered more credible
The changes to the News Feed algorithm[366] (see News Feed § History) were expected to improve "the amount of meaningful content viewed".[367] To this end, the new algorithm is supposed to determine the publications around which a user is most likely to interact with friends, and make them appear higher in the News Feed than items from, for example, media companies or brands. These are posts "that inspire back-and-forth discussion in the comments and posts that you might want to share and react to".[368] As even Mark Zuckerberg admitted,[366] he "expect[ed] the time people spend on Facebook and some measures of engagement will go down. But I also expect the time you do spend on Facebook will be more valuable". The less public content a Facebook user sees in their News Feed, the fewer consumers brands can reach, which is arguably a major loss for advertisers[369] and publishers.
This change, which might seem to be just another update to the social network, was widely criticized because of the heavy consequences it could have: "In countries such as the Philippines, Myanmar and South Sudan and emerging democracies such as Bolivia and Serbia, it is not ethical to plead platform neutrality or to set up the promise of a functioning news ecosystem and then simply withdraw at a whim".[370] In such countries, Facebook had held out the promise of a reliable and objective platform on which people could hope for raw information. Independent media companies tried to fight censorship through their articles and, in a way, promoted citizens' right to know what is going on in their countries.
The company's way of handling scandals and criticism over fake news by playing down its image as a media company has even been described as "potentially deadly"[370] given the poor and fraught political environments, such as Myanmar and South Sudan, drawn in by the social network's "Free Basics" programme. Serbian journalist Stevan Dojčinović goes further, describing Facebook as a "monster" and accusing the company of "showing a cynical lack of concern for how its decisions affect the most vulnerable".[371] Facebook had experimented with removing media companies' news from users' news feeds in a few countries, such as Serbia. Dojčinović then wrote an article explaining how Facebook had helped outlets like his "to bypass mainstream channels and bring [their] stories to hundreds of thousands of readers".[371] The rule about publishers does not apply to paid posts, raising the journalist's fear that the social network could become "just another playground for the powerful"[371] by letting them, for example, buy Facebook ads. Criticism has also come from other media companies, which depict the private company as a "destroyer of worlds". LittleThings CEO Joe Speiser stated that the algorithm shift "took out roughly 75% of LittleThings' organic traffic while hammering its profit margins",[372] compelling the company to close its doors because it had relied on Facebook to share content.
Net neutrality
[edit]"Free Basics" controversy in India
In February 2016, TRAI ruled against differential data pricing for limited services from mobile phone operators, effectively ending zero-rating platforms in India. Zero rating provides access to a limited number of websites at no charge to the end user. Net-neutrality supporters in India (SaveTheInternet.in) brought out the negative implications of the Facebook Free Basics program and spread awareness to the public.[373] Facebook's Free Basics program[374] was a collaboration with Reliance Communications to launch Free Basics in India. The TRAI ruling against differential pricing marked the end of Free Basics in India.[375]
Earlier, Facebook had spent US$44 million on advertising, imploring all of its Indian users to send an email to the Telecom Regulatory Authority in support of its program.[376] TRAI later asked Facebook to provide specific responses from the supporters of Free Basics.[377][378]
Treatment of potential competitors
In December 2018, details of Facebook's behavior toward competitors surfaced. UK Member of Parliament Damian Collins released files from a court case between Six4Three and Facebook. According to those files, when the social media company Twitter released its app Vine in 2013, Facebook blocked Vine's access to its data.[379]
In July 2020, Facebook, along with the other tech giants Apple, Amazon, and Google, was accused of maintaining harmful power and anti-competitive strategies to quash potential competitors in the market.[380] The CEOs of the respective firms appeared by teleconference before lawmakers of the United States Congress on July 29, 2020.[381]
Influence on elections
In what is known as the Facebook–Cambridge Analytica data scandal, Facebook users were targeted with political advertising without informed consent in an attempt to promote right-wing causes, including the presidential election of Donald Trump.[382] In addition to elections in the United States, Facebook has been implicated in electoral influence campaigns in places like Argentina, Kenya, Malaysia, the United Kingdom, and South Africa, as discussed in the 2019 documentary The Great Hack.[383][384]
Blocking news in Canada
In response to the Online News Act, Meta Platforms began blocking access to news sites for Canadian users at the beginning of August 2023.[385][386] The block also extended to local Canadian news stories about the 2023 wildfires,[387] a decision that was heavily criticized by Prime Minister Justin Trudeau, local government officials, academics, researchers, and evacuees.[388][389][390]
Ollie Williams of Yellowknife's Cabin Radio said that users had to resort to posting screenshots of news stories, as posting news directly would result in the link getting blocked.[390][387]
Meta responded to these criticisms by stating that Canadians "can continue to use our technologies to connect with their communities and access reputable information […] from official government agencies, emergency services and non-governmental organizations," and encouraged them to use Facebook's Safety Check feature.[388][391]
See also
- Criticism of Amazon
- Criticism of Apple
- Criticism of Google
- Criticism of Microsoft
- Criticism of Yahoo!
- Europe v Facebook
- Facebook Files
- Facebook history
- Facebook malware
- Facebook Pixel
- Instagram's impact on people
- Issues involving social networking services
- Online hate speech
- Social media and suicide
- Surveillance capitalism
- Techlash
References
- ^ "Meta and Mark Zuckerberg must not be allowed to shape the next era of humanity". The Guardian. February 4, 2024. ISSN 0261-3077. Retrieved February 5, 2024.
- ^ Duncan, Geoff (June 17, 2010). "Open letter urges Facebook to strengthen privacy". Digital Trends. Retrieved June 3, 2017.
- ^ Paul, Ian (June 17, 2010). "Advocacy Groups Ask Facebook for More Privacy Changes". PC World. International Data Group. Retrieved June 3, 2017.
- ^ Aspen, Maria (February 11, 2008). "How Sticky Is Membership on Facebook? Just Try Breaking Free". The New York Times. Retrieved June 3, 2017.
- ^ Anthony, Sebastian (March 19, 2014). "Facebook's facial recognition software is now as accurate as the human brain, but what now?". ExtremeTech. Ziff Davis. Retrieved June 3, 2017.
- ^ Gannes, Liz (June 8, 2011). "Facebook facial recognition prompts EU privacy probe". CNET. Retrieved June 3, 2017.
- ^ Friedman, Matt (March 21, 2013). "Bill to ban companies from asking about job candidates' Facebook accounts is headed to governor". The Star-Ledger. Advance Digital. Retrieved June 3, 2017.
- ^ Stangl, Fabian J.; Riedl, René; Kiemeswenger, Roman; Montag, Christian (2023). "Negative psychological and physiological effects of social networking site use: The example of Facebook". Frontiers in Psychology. 14 1141663. doi:10.3389/fpsyg.2023.1141663. ISSN 1664-1078. PMC 10435997. PMID 37599719.
- ^ a b "How Facebook Breeds Jealousy". Seeker. Group Nine Media. February 10, 2010. Retrieved June 3, 2017.
- ^ a b Matyszczyk, Chris (August 11, 2009). "Study: Facebook makes lovers jealous". CNET. Retrieved June 3, 2017.
- ^ Ngak, Chenda (November 27, 2012). "Facebook may cause stress, study says". CBS News. Retrieved June 3, 2017.
- ^ Smith, Dave (November 13, 2015). "Quitting Facebook will make you happier and less stressed, study says". Business Insider. Axel Springer SE. Retrieved June 3, 2017.
- ^ Bugeja, Michael J. (January 23, 2006). "Facing the Facebook". The Chronicle of Higher Education. Archived from the original on February 20, 2008. Retrieved June 3, 2017.
- ^ Hough, Andrew (April 8, 2011). "Student 'addiction' to technology 'similar to drug cravings', study finds". The Daily Telegraph. Retrieved June 3, 2017.
- ^ "Facebook and Twitter 'more addictive than tobacco and alcohol'". The Daily Telegraph. February 1, 2012. Archived from the original on February 16, 2015. Retrieved June 3, 2017.
- ^ Wauters, Robin (September 16, 2010). "Greenpeace Slams Zuckerberg For Making Facebook A 'So Coal Network' (Video)". TechCrunch. AOL. Retrieved June 3, 2017.
- ^ Neate, Rupert (December 23, 2012). "Facebook paid £2.9m tax on £840m profits made outside US, figures show". The Guardian. Retrieved June 3, 2017.
- ^ a b Grinberg, Emanuella (September 18, 2014). "Facebook 'real name' policy stirs questions around identity". CNN. Retrieved June 3, 2017.
- ^ Doshi, Vidhi (July 19, 2016). "Facebook under fire for 'censoring' Kashmir-related posts and accounts". The Guardian. Retrieved June 3, 2017.
- ^ Arrington, Michael (November 22, 2007). "Is Facebook Really Censoring Search When It Suits Them?". TechCrunch. AOL. Retrieved June 3, 2017.
- ^ Wong, Julia Carrie (March 18, 2019). "The Cambridge Analytica scandal changed the world – but it didn't change Facebook". The Guardian. Retrieved May 2, 2019.
- ^ Greenwald, Glenn; MacAskill, Ewen (June 7, 2013). "NSA Prism program taps in to user data of Apple, Google and others". The Guardian. Retrieved June 3, 2017.
- ^ Cadwalladr, Carole; Graham-Harrison, Emma (March 17, 2018). "How Cambridge Analytica turned Facebook 'likes' into a lucrative political tool". The Guardian. Retrieved August 26, 2022.
- ^ Setalvad, Ariha (August 7, 2015). "Why Facebook's video theft problem can't last". The Verge. Retrieved June 3, 2017.
- ^ "Facebook, Twitter and Google grilled by MPs over hate speech". BBC News. BBC. March 14, 2017. Retrieved June 3, 2017.
- ^ Toor, Amar (September 15, 2015). "Facebook will work with Germany to combat anti-refugee hate speech". The Verge. Retrieved June 3, 2017.
- ^ Sherwell, Philip (October 16, 2011). "Cyber anarchists blamed for unleashing a series of Facebook 'rape pages'". The Daily Telegraph. Retrieved June 3, 2017.
- ^ "Rohingya sue Facebook for $150bn over Myanmar hate speech". BBC News. December 7, 2021.
- ^ Glenn Greenwald (September 12, 2016). "Facebook Is Collaborating With the Israeli Government to Determine What Should Be Censored". The Intercept.
- ^ Sheera Frenkel (May 19, 2021). "Mob Violence Against Palestinians in Israel Is Fueled by Groups on WhatsApp". The New York Times.
- ^ "20,000 Israelis sue Facebook for ignoring Palestinian incitement". The Times of Israel. October 27, 2015. Retrieved June 3, 2017.
- ^ "Israel: Facebook's Zuckerberg has blood of slain Israeli teen on his hands". The Times of Israel. July 2, 2016. Retrieved June 3, 2017.
- ^ Burke, Samuel (November 19, 2016). "Zuckerberg: Facebook will develop tools to fight fake news". CNN. Retrieved June 3, 2017.
- ^ "Hillary Clinton says Facebook 'must prevent fake news from creating a new reality'". The Daily Telegraph. June 1, 2017. Archived from the original on January 12, 2022. Retrieved June 3, 2017.
- ^ Fiegerman, Seth (May 9, 2017). "Facebook's global fight against fake news". CNN. Retrieved June 3, 2017.
- ^ Grinberg, Emanuella; Said, Samira (March 22, 2017). "Police: At least 40 people watched teen's sexual assault on Facebook Live". CNN. Retrieved June 3, 2017.
- ^ Grinberg, Emanuella (January 5, 2017). "Chicago torture: Facebook Live video leads to 4 arrests". CNN. Retrieved June 3, 2017.
- ^ Sulleyman, Aatif (April 27, 2017). "Facebook Live killings: Why the criticism has been harsh". The Independent. Retrieved June 3, 2017.
- ^ Farivar, Cyrus (January 7, 2016). "Appeals court upholds deal allowing kids' images in Facebook ads". Ars Technica. Retrieved June 3, 2017.
- ^ Levine, Dan; Oreskovic, Alexei (March 12, 2012). "Yahoo sues Facebook for infringing 10 patents". Reuters. Retrieved June 3, 2017.
- ^ Wagner, Kurt (February 1, 2017). "Facebook lost its Oculus lawsuit and has to pay $500 million". Recode. Retrieved June 3, 2017.
- ^ Brandom, Russell (May 19, 2016). "Lawsuit claims Facebook illegally scanned private messages". The Verge. Retrieved June 3, 2017.
- ^ Tryhorn, Chris (July 25, 2007). "Facebook in court over ownership". The Guardian. Retrieved June 3, 2017.
- ^ Michels, Scott (July 20, 2007). "Facebook Founder Accused of Stealing Idea for Site". ABC News. ABC. Retrieved June 3, 2017.
- ^ Carlson, Nicholas (March 5, 2010). "How Mark Zuckerberg Hacked Into Rival ConnectU In 2004". Business Insider. Axel Springer SE. Retrieved June 3, 2017.
- ^ Arthur, Charles (February 12, 2009). "Facebook paid up to $65m to founder Mark Zuckerberg's ex-classmates". The Guardian. Retrieved June 3, 2017.
- ^ Singel, Ryan (April 11, 2011). "Court Tells Winklevoss Twins to Quit Their Facebook Whining". Wired. Retrieved June 3, 2017.
- ^ Stempel, Jonathan (July 22, 2011). "Facebook wins dismissal of second Winklevoss case". Reuters. Retrieved June 3, 2017.
- ^ Oweis, Khaled Yacoub (November 23, 2007). "Syria blocks Facebook in Internet crackdown". Reuters. Retrieved June 3, 2017.
- ^ Wauters, Robin (July 7, 2009). "China Blocks Access To Twitter, Facebook After Riots". TechCrunch. AOL. Retrieved June 3, 2017.
- ^ "Iranian government blocks Facebook access". The Guardian. May 24, 2009. Retrieved June 3, 2017.
- ^ Kelly, Makena (March 11, 2019). "Facebook proves Elizabeth Warren's point by deleting her ads about breaking up Facebook". The Verge. Retrieved March 21, 2022.
- ^ "Is Facebook Censoring Posts Critical of the Social Media Giant?". Haaretz. Retrieved March 21, 2022.
- ^ "Facebook moderators tell of strict scrutiny and PTSD symptoms". The Guardian. February 26, 2019. Retrieved March 21, 2022.
- ^ "Ex-Facebook worker claims disturbing content led to PTSD". The Guardian. December 4, 2019. Retrieved March 21, 2022.
- ^ Nycyk, Michael (January 1, 2020). Facebook: Exploring the Social Network and its Challenges.
- ^ "Is Facebook Really Censoring Search When It Suits Them?". TechCrunch. November 23, 2007. Retrieved March 21, 2022.
- ^ "After battling ISIS, Kurds find new foe in Facebook". The World from PRX. October 7, 2015. Retrieved March 21, 2022.
- ^ Webmaster (July 3, 2017). "Facebook's Kurdish problem?". The Stream - Al Jazeera English. Archived from the original on July 3, 2017. Retrieved March 21, 2022.
- ^ "Facebook under fire for 'censoring' Kashmir-related posts and accounts". The Guardian. July 19, 2016. Retrieved March 21, 2022.
- ^ "Mark Zuckerberg admits Facebook censored Hunter Biden laptop story during 2020 U.S. elections". The Hindu. August 26, 2022. Retrieved August 26, 2022.
- ^ "Meta removes ICE-tracking Facebook page in Chicago at the request of the Justice Department". AP News. October 15, 2025.
- ^ Feiner, Lauren (October 14, 2025). "Facebook removes ICE-tracking page after US government 'outreach'". The Verge.
- ^ @AGPamBondi (October 14, 2025). "Today following outreach from @thejusticedept , Facebook removed a large group page that was being used to dox and target @ICEgov agents in Chicago. The wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs. The Department of Justice will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement" (Tweet). Archived from the original on October 16, 2025 – via Twitter.
- ^ "Facebook under fire as human rights groups claim 'censorship' of pro-Palestine posts". The Guardian. May 26, 2021. Retrieved March 21, 2022.
- ^ "Inside Facebook's Meeting With Palestinian Prime Minister". Time. Retrieved March 21, 2022.
- ^ Facebook Praise, Support and Representation Moderation Guidelines (Reproduced Snapshot), The Intercept, October 12, 2021, archived from the original on April 8, 2022, retrieved March 21, 2022
- ^ Biddle, Sam (October 12, 2021). "Revealed: Facebook's Secret Blacklist of "Dangerous Individuals and Organizations"". The Intercept. Retrieved March 21, 2022.
- ^ Frier, Sarah (August 13, 2019). "Facebook Paid Contractors to Transcribe Users' Audio Chats". Bloomberg News.
- ^ "Facebook paid hundreds of contractors to transcribe users' audio". Los Angeles Times. August 13, 2019. Retrieved May 8, 2020.
- ^ Haselton, Todd (August 13, 2019). "Facebook hired people to transcribe voice calls made on Messenger". CNBC. Retrieved May 8, 2020.
- ^ Nesse, Randolph; Williams, George C. (1994). Why We Get Sick: The New Science of Darwinian Medicine. New York: Vintage Books. p. 9. ISBN 978-0-679-74674-4.
- ^ Nesse, Randolph M. (2005). "32. Evolutionary Psychology and Mental Health". In Buss, David M. (ed.). The Handbook of Evolutionary Psychology (1st ed.). Hoboken, NJ: Wiley. pp. 904–905. ISBN 978-0-471-26403-3.
- ^ Nesse, Randolph M. (2016) [2005]. "43. Evolutionary Psychology and Mental Health". In Buss, David M. (ed.). The Handbook of Evolutionary Psychology, Volume 2: Integrations (2nd ed.). Hoboken, NJ: Wiley. pp. 1008–1009. ISBN 978-1-118-75580-8.
- ^ Nesse, Randolph (2019). Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry. Dutton. pp. 31–36. ISBN 978-1-101-98566-3.
- ^ Statistical Abstract of the United States: 1955 (PDF) (Report). Statistical Abstract of the United States (76 ed.). U.S. Census Bureau. 1955. p. 554. Retrieved June 29, 2021.
- ^ a b File, Thom (May 2013). Computer and Internet Use in the United States (PDF) (Report). Current Population Survey Reports. Washington, D.C.: U.S. Census Bureau. Retrieved February 11, 2020.
- ^ Tuckel, Peter; O'Neill, Harry (2005). Ownership and Usage Patterns of Cell Phones: 2000–2005 (PDF) (Report). JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association. p. 4002. Retrieved September 25, 2020.
- ^ "Demographics of Internet and Home Broadband Usage in the United States". Pew Research Center. April 7, 2021. Retrieved May 19, 2021.
- ^ "Demographics of Mobile Device Ownership and Adoption in the United States". Pew Research Center. April 7, 2021. Retrieved May 19, 2021.
- ^ Hough, Andrew (April 8, 2011). "Student 'addiction' to technology 'similar to drug cravings', study finds". The Daily Telegraph. London.
- ^ "Facebook and Twitter 'more addictive than tobacco and alcohol'". The Daily Telegraph. London. February 1, 2012. Archived from the original on February 2, 2012.
- ^ Edwards, Ashton (August 1, 2014). "Facebook goes down for 30 minutes, 911 calls pour in". Fox13. Retrieved August 2, 2016.
- ^ Lenhart, Amanda (April 9, 2015). "Teens, Social Media & Technology Overview 2015". Pew Research Center. Retrieved July 8, 2020.
- ^ Turel, Ofir; Bechara, Antoine (2016). "Social Networking Site Use While Driving: ADHD and the Mediating Roles of Stress, Self-Esteem and Craving". Frontiers in Psychology. 7: 455. doi:10.3389/fpsyg.2016.00455. PMC 4812103. PMID 27065923.
- ^ Settanni, Michele; Marengo, Davide; Fabris, Matteo Angelo; Longobardi, Claudio (2018). "The interplay between ADHD symptoms and time perspective in addictive social media use: A study of adolescent Facebook users". Children and Youth Services Review. 89. Elsevier: 165–170. doi:10.1016/j.childyouth.2018.04.031. S2CID 149795392.
- ^ Paul, Kari (November 27, 2023). "Meta designed platforms to get children addicted, court documents allege". The Guardian. Archived from the original on November 27, 2023. Retrieved November 28, 2023.
- ^ Milmo, Dan; Paul, Kari (October 6, 2021). "Facebook harms children and is damaging democracy, claims whistleblower". The Guardian. Archived from the original on June 29, 2023. Retrieved November 28, 2023.
- ^ Savage, Michael (January 26, 2019). "Health secretary tells social media firms to protect children after girl's death". The Guardian. Retrieved January 30, 2019.
- ^ a b c Adams, Richard (January 30, 2019). "Social media urged to take 'moment to reflect' after girl's death". The Guardian. Retrieved January 30, 2019.
- ^ "Potential for Facebook addiction and consequences". July 15, 2012. Archived from the original on October 29, 2012. Retrieved July 15, 2012.
- ^ "The Anti-Social Network". Slate. January 26, 2011.
- ^ "How Facebook Breeds Jealousy". Discovery.com. February 10, 2010. Archived from the original on September 29, 2012. Retrieved February 12, 2011.
- ^ "Study: Facebook makes lovers jealous". CNET. August 11, 2009. Archived from the original on October 26, 2012. Retrieved February 12, 2011.
- ^ "Jealous much? MySpace, Facebook can spark it". NBC News. July 31, 2007.
- ^ a b "Facebook Causes Jealousy, Hampers Romance, Study Finds". University of Guelph. February 13, 2007.
- ^ a b "Facebook jealousy sparks asthma attacks in dumped boy". USA Today. November 19, 2010.
- ^ Caers, Ralf; Castelyns, Vanessa (2011). "LinkedIn and Facebook in Belgium: The Influences and Biases of Social Network Sites in Recruitment and Selection Procedures". Social Science Computer Review. 29 (4). SAGE Publications: 437–448. doi:10.1177/0894439310386567. S2CID 60557417.
- ^ Sharone, Ofer (2017). "LinkedIn or LinkedOut? How Social Networking Sites are Reshaping the Labor Market". In Vallas, Steven (ed.). Emerging Conceptions of Work, Management and the Labor Market. Research in the Sociology of Work. Vol. 30. Bingley, UK: Emerald Publishing Ltd. pp. 1–31. doi:10.1108/S0277-283320170000030001. ISBN 978-1-78714-460-6.
- ^ Hanna Krasnova; Helena Wenninger; Thomas Widjaja; Peter Buxmann (January 23, 2013). "Envy on Facebook: A Hidden Threat to Users' Life Satisfaction?" (PDF). 11th International Conference on Wirtschaftsinformatik, February 27 – March 1, 2013, Leipzig, Germany. Archived from the original (PDF) on June 1, 2014. Retrieved June 13, 2014.
- ^ "Facebook use 'makes people feel worse about themselves'". BBC News. August 15, 2013. Retrieved September 4, 2013.
- ^ Myung Suh Lim; Junghyun Kim (June 4, 2018). "Facebook users' loneliness based on different types of interpersonal relationships: Links to grandiosity and envy". Information Technology & People. 31 (3): 646–665. doi:10.1108/ITP-04-2016-0095. ISSN 0959-3845.
- ^ "Divorce cases get the Facebook factor" Archived March 31, 2012, at the Wayback Machine. MEN Media. January 19, 2011. Retrieved March 13, 2012.
- ^ "Facebook's Other Top Trend of 2009: Divorce" Archived January 12, 2012, at the Wayback Machine. Network World. December 22, 2009. Retrieved March 13, 2012.
- ^ "Facebook to Blame for Divorce Boom". Fox News Channel. April 12, 2010. Archived from the original on April 15, 2010. Retrieved January 3, 2012.
- ^ "Facebook is divorce lawyers' new best friend". MSNBC. June 28, 2010. Retrieved March 13, 2012.
- ^ "Facebook flirting triggers divorces". The Times of India. January 1, 2012. Archived from the original on May 18, 2013.
- ^ a b Clayton, Russell B.; Nagurney, Alexander; Smith, Jessica R. (June 7, 2013). "Cheating, Breakup, and Divorce: Is Facebook Use to Blame?". Cyberpsychology, Behavior, and Social Networking. 16 (10): 717–720. doi:10.1089/cyber.2012.0424. ISSN 2152-2715. PMID 23745615.
- ^ Utz, Sonja; Beukeboom, Camiel J. (July 1, 2011). "The Role of Social Network Sites in Romantic Relationships: Effects on Jealousy and Relationship Happiness". Journal of Computer-Mediated Communication. 16 (4): 511–527. doi:10.1111/j.1083-6101.2011.01552.x. ISSN 1083-6101.
- ^ Tokunaga, Robert S. (2011). "Social networking site or social surveillance site? Understanding the use of interpersonal electronic surveillance in romantic relationships". Computers in Human Behavior. 27 (2): 705–713. doi:10.1016/j.chb.2010.08.014.
- ^ Muise, Amy; Christofides, Emily; Desmarais, Serge (April 15, 2009). "More Information than You Ever Wanted: Does Facebook Bring Out the Green-Eyed Monster of Jealousy?". CyberPsychology & Behavior. 12 (4): 441–444. doi:10.1089/cpb.2008.0263. ISSN 1094-9313. PMID 19366318. S2CID 16219949.
- ^ Kerkhof, Peter; Finkenauer, Catrin; Muusses, Linda D. (April 1, 2011). "Relational Consequences of Compulsive Internet Use: A Longitudinal Study Among Newlyweds" (PDF). Human Communication Research. 37 (2): 147–173. doi:10.1111/j.1468-2958.2010.01397.x. hdl:1871/35795. ISSN 1468-2958.
- ^ Papp, Lauren M.; Danielewicz, Jennifer; Cayemberg, Crystal (October 11, 2011). ""Are We Facebook Official?" Implications of Dating Partners' Facebook Use and Profiles for Intimate Relationship Satisfaction". Cyberpsychology, Behavior, and Social Networking. 15 (2): 85–90. doi:10.1089/cyber.2011.0291. ISSN 2152-2715. PMID 21988733.
- ^ "Does Facebook Stress You Out?". Webpronews.com. February 17, 2010. Archived from the original on February 18, 2011.
- ^ Maier, C.; Laumer, S.; Eckhardt, A.; Weitzel, T. (2012). "Online Social Networks as a Source and Symbol of Stress: An Empirical Analysis". Proceedings of the 33rd International Conference on Information Systems (ICIS), Orlando, FL.
- ^ Maier, C.; Laumer, S.; Eckhardt, A.; Weitzel, T. (2014). "Giving too much Social Support: Social Overload on Social Networking Sites". European Journal of Information Systems. 24 (5): 447–464. doi:10.1057/ejis.2014.3. S2CID 205122288.
- ^ McCain, Jessica L.; Campbell, W. Keith (2018). "Narcissism and Social Media Use: A Meta-Analytic Review". Psychology of Popular Media Culture. 7 (3). American Psychological Association: 308–327. doi:10.1037/ppm0000137. S2CID 152057114. Retrieved June 9, 2020.
- ^ Gnambs, Timo; Appel, Markus (2018). "Narcissism and Social Networking Behavior: A Meta-Analysis". Journal of Personality. 86 (2). Wiley-Blackwell: 200–212. doi:10.1111/jopy.12305. PMID 28170106.
- ^ Brailovskaia, Julia; Bierhoff, Hans-Werner (2020). "The Narcissistic Millennial Generation: A Study of Personality Traits and Online Behavior on Facebook". Journal of Adult Development. 27 (1). Springer Science+Business Media: 23–35. doi:10.1007/s10804-018-9321-1. S2CID 149564334.
- ^ Casale, Silvia; Banchi, Vanessa (2020). "Narcissism and problematic social media use: A systematic literature review". Addictive Behaviors Reports. 11: 100252. Elsevier. doi:10.1016/j.abrep.2020.100252. PMC 7244927. PMID 32467841.
- ^ Lukianoff, Greg; Haidt, Jonathan (2018). The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure. New York: Penguin Press. p. 147. ISBN 978-0-7352-2489-6.
- ^ "Facebook MAU worldwide 2020". Statista. Retrieved January 6, 2021.
- ^ Harari, Yuval Noah (2017), "Danksagung", Homo Deus, Verlag C.H.BECK oHG, pp. 539–540, doi:10.17104/9783406704024-539, ISBN 978-3-406-70402-4
- ^ Reviglio, Urbano (2017), "Serendipity by Design? How to Turn from Diversity Exposure to Diversity Experience to Face Filter Bubbles in Social Media", Internet Science, Lecture Notes in Computer Science, vol. 10673, Cham: Springer International Publishing, pp. 281–300, doi:10.1007/978-3-319-70284-1_22, ISBN 978-3-319-70283-4
- ^ Eslami, Motahhare; Rickman, Aimee; Vaccaro, Kristen; Aleyasen, Amirhossein; Vuong, Andy; Karahalios, Karrie; Hamilton, Kevin; Sandvig, Christian (April 18, 2015). ""I always assumed that I wasn't really that close to [her]"". Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. Seoul Republic of Korea: ACM. pp. 153–162. doi:10.1145/2702123.2702556. ISBN 978-1-4503-3145-6. S2CID 15264571.
- ^ Adee, Sally (November 2016). "Burst the filter bubble". New Scientist. 232 (3101): 24–25. doi:10.1016/S0262-4079(16)32182-0.
- ^ Tufekci, Zeynep (2015). "Facebook said its algorithms do help form echo chambers, and the tech press missed it". New Perspectives Quarterly. 32 (3): 9–12. doi:10.1111/npqu.11519 – via Wiley Online Library.
- ^ Eytan, Bakshy; Messing, Solomon; Adamic, Lada A (2015). "Exposure to ideologically diverse news and opinion on Facebook". Science. 348 (6239): 1130–1132. Bibcode:2015Sci...348.1130B. doi:10.1126/science.aaa1160. PMID 25953820. S2CID 206632821.
- ^ Lukianoff, Greg; Haidt, Jonathan (2018). The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure. New York: Penguin Press. pp. 126–132. ISBN 978-0-7352-2489-6.
- ^ Lee, Sangwon; Xenos, Michael (2019). "Social distraction? Social media use and political knowledge in two U.S. Presidential elections". Computers in Human Behavior. 90: 18–25. doi:10.1016/j.chb.2018.08.006. S2CID 53734285.
- ^ Kelly, Paul (July 20, 2019). "America's Uncivil War on Democracy". The Australian. Retrieved July 20, 2019. Access by subscription only (February 2021).
- ^ Dwoskin, Elizabeth (February 16, 2021). "Facebook To Scale Back Politics In Users' News Feeds". Here and Now (Interview). Interviewed by Tonya Mosley. WBUR. Retrieved August 9, 2021.
- ^ Bromley, Alanna (2011). "Are social networking sites breeding antisocial young people?" (PDF). Journal of Digital Research and Publishing.
- ^ "Students Take On Cyberbullying". November 22, 2011. Archived from the original on December 21, 2021 – via YouTube.
- ^ Baron, Naomi S. (2007). "My Best Day: Presentation of Self and Social Manipulation in Facebook and IM" (PDF). Archived from the original (PDF) on May 23, 2013.
- ^ Turkle, Sherry (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.
- ^ Robert M. Bond; Christopher J. Fariss; Jason J. Jones; Adam D. I. Kramer; Cameron Marlow; Jaime E. Settle; James H. Fowler (2012). "A 61-million-person experiment in social influence and political mobilization". Nature. 489 (7415): 295–298. Bibcode:2012Natur.489..295B. doi:10.1038/nature11421. PMC 3834737. PMID 22972300.
- ^ Robert Booth (2014). "Facebook reveals news feed experiment to control emotions". The Guardian. Retrieved June 30, 2014.
- ^ Kramer, Adam D. I.; Guillory, Jamie E.; Hancock, Jeffrey T. (2014). "Experimental evidence of massive-scale emotional contagion through social networks". Proceedings of the National Academy of Sciences of the United States of America. 111 (24): 8788–8790. Bibcode:2014PNAS..111.8788K. doi:10.1073/pnas.1320040111. PMC 4066473. PMID 24889601.
- ^ Adam D. I. Kramer (June 29, 2014). "Facebook update". Facebook. Archived from the original on June 23, 2015. Retrieved July 14, 2019.(subscription required)
- ^ David Goldman (July 2, 2014). "Facebook still won't say 'sorry' for mind games experiment". CNNMoney. Retrieved July 3, 2014.
- ^ Guynn, Jessica (July 3, 2014). "Privacy watchdog files complaint over Facebook study". USA Today. Retrieved July 5, 2014.
- ^ Grohol, John. "Emotional Contagion on Facebook? More Like Bad Research Methods". Psych Central. PsychCentral. Archived from the original on July 12, 2014. Retrieved July 12, 2014.
- ^ National Academy of Sciences (July 22, 2014). "Editorial Expression of Concern: Experimental evidence of massive-scale emotional contagion through social networks". Proceedings of the National Academy of Sciences. 111 (29): 10779. Bibcode:2014PNAS..11110779.. doi:10.1073/pnas.1412469111. ISSN 0027-8424. PMC 4115552. PMID 24994898.
- ^ Rudder, Christian (July 28, 2014). "We experiment on human beings". okcupid.com. Archived from the original on January 23, 2015. Retrieved July 14, 2019.
- ^ Grimmelmann, James (September 23, 2014). "Illegal, immoral, and mood-altering: How Facebook and OkCupid broke the law when they experimented on users". Retrieved September 24, 2014.
- ^ "Facebook's 'experiment' was socially irresponsible". The Guardian. July 1, 2014. Retrieved August 4, 2014.
- ^ Neate, Rupert (December 23, 2012). "Facebook paid £2.9m tax on £840m profits made outside US, figures show". The Guardian. Retrieved October 25, 2016.
- ^ "Paradise Papers reveal hidden wealth of global elite". The Express Tribune. November 6, 2017.
- ^ van Noort, Wouter (November 11, 2017). "Belastingontwijking is simpel op te lossen" [Tax avoidance can easily be solved]. NRC Handelsblad (in Dutch). Retrieved July 14, 2019. The quote, as heading of the article, comes from the French economist Gabriel Zucman.
- ^ "Facebook paid £4,327 corporation tax in 2014". BBC. October 12, 2015. Retrieved October 25, 2016.
- ^ a b c Tang, Paul (September 2017). "EU Tax Revenue Loss from Google and Facebook" (PDF).
- ^ 26 U.S.C. § 7602.
- ^ Fiegerman, Seth (July 7, 2016). "Facebook is being investigated by the IRS". CNN.
- ^ United States of America v. Facebook, Inc. and Subsidiaries, case no. 16-cv-03777, U.S. District Court for the Northern District of California (San Francisco Div.).
- ^ "Facebook paid just €30m tax in Ireland despite earning €12bn". Irish Independent. November 29, 2017.
- ^ "Facebook Ireland pays tax of just €30m on €12.6bn". Irish Examiner. November 29, 2017.
- ^ David Ingram (April 18, 2018). "Exclusive: Facebook to put 1.5 billion users out of reach of new EU privacy law". Reuters.
- ^ Peter Hamilton (November 28, 2018). "Facebook Ireland pays €38m tax on €18.7 billion of revenue channeled through Ireland in 2017". The Irish Times.
The social media giant channelled €18.7 billion in revenue through its Irish subsidiary, an increase of 48 per cent from the €12.6 billion recorded in 2016. While gross profit amounted to €18.1 billion, administrative expenses of €17.8 billion meant profit before tax increased 44 per cent to €251 million.
- ^ a b c Newton, Casey (February 25, 2019). "THE TRAUMA FLOOR: The secret lives of Facebook moderators in America". The Verge. Retrieved February 25, 2019.
- ^ a b O'Connell, Jennifer (March 30, 2019). "Facebook's dirty work in Ireland: 'I had to watch footage of a person being beaten to death'". The Irish Times. Retrieved June 21, 2019.
- ^ a b Newton, Casey (June 19, 2019). "Three Facebook moderators break their NDAs to expose a company in crisis". The Verge. Retrieved June 21, 2019.
- ^ Wong, Queenie (June 19, 2019). "Murders and suicides: Here's who keeps them off your Facebook feed". CNET. Retrieved June 21, 2019.
- ^ [160][161][162][163]
- ^ Eadicicco, Lisa (June 19, 2019). "A Facebook content moderator died after suffering heart attack on the job". San Antonio Express-News. Retrieved June 20, 2019.
- ^ Maiberg, Emanuel; Koebler, Jason; Cox, Joseph (September 24, 2018). "A Former Content Moderator Is Suing Facebook Because the Job Reportedly Gave Her PTSD". Vice. Retrieved June 21, 2019.
- ^ Gray, Chris; Hern, Alex (December 4, 2019). "Ex-Facebook worker claims disturbing content led to PTSD". The Guardian. Retrieved February 25, 2020.
- ^ "Facebook sued by Tampa workers who say they suffered trauma from watching videos". Tampa Bay Times. Retrieved May 8, 2020.
- ^ "Meta's content moderators face worst conditions yet at secret Ghana…". TBIJ. Retrieved May 4, 2025.
- ^ Leprince-Ringuet, Daphne. "Facebook's approach to content moderation slammed by EU commissioners". ZDNet. Retrieved February 19, 2020.
- ^ Newton, Casey (May 12, 2020). "Facebook will pay $52 million in settlement with moderators who developed PTSD on the job". The Verge. Retrieved June 1, 2020.
- ^ Allyn, Bobby (May 12, 2020). "In Settlement, Facebook To Pay $52 Million To Content Moderators With PTSD". NPR. Retrieved June 1, 2020.
- ^ Paul, Kari (May 13, 2020). "Facebook to pay $52m for failing to protect moderators from 'horrors' of graphic content". The Guardian. Retrieved June 1, 2020.
- ^ Streitfeld, David (March 21, 2018). "Welcome to Zucktown. Where Everything Is Just Zucky". The New York Times. Retrieved February 25, 2019.
- ^ "Facebook faces US investigation for 'systemic' racial bias in hiring". The Guardian. March 6, 2021. Retrieved March 6, 2021.
- ^ Pepitone, Julianne. "Facebook vs. Google fight turns nasty". CNNMoney. Retrieved February 23, 2019.
- ^ "EFF Calls Facebook's Criticism of Apple's Pro-Privacy Tracking Change 'Laughable'". MacRumors. December 19, 2020. Retrieved February 9, 2021.
- ^ a b Lorenz, Taylor; Harwell, Drew (March 30, 2022). "Facebook paid GOP firm to malign TikTok". The Washington Post. Retrieved March 31, 2022.
- ^ "Emails show Mark Zuckerberg feared app startups were building faster than Facebook in 2012". July 30, 2020.
- ^ "Email Chain Between Facebook Executives" (PDF). House Judiciary Committee. Archived from the original (PDF) on August 23, 2022. Retrieved December 30, 2023.
- ^ "Facebook Employees Are Tired of Cloning Apps and Features". April 23, 2021.
- ^ "The 'Stories' product that Facebook copied from Snapchat is now Facebook's future". October 30, 2018.
- ^ "As Facebook Launches TikTok Clone, A Look Back at 6 Other Rival Products It Copied". Forbes.
- ^ "Facebook's latest experiment is Hobbi, an app to document your personal projects". February 13, 2020.
- ^ "Facebook's Clubhouse competitor starts rolling out in the US today". June 21, 2021.
- ^ "Did Facebook's faulty data push news publishers to make terrible decisions on video?". Nieman Lab. Retrieved March 27, 2024.
- ^ Welch, Chris (October 17, 2018). "Facebook may have knowingly inflated its video metrics for over a year". The Verge. Retrieved March 27, 2024.
- ^ Copley, Caroline (March 4, 2016). "German court rules Facebook may block pseudonyms". Reuters. Retrieved June 3, 2017.
- ^ a b Ortutay, Barbara (May 25, 2009). "Real users caught in Facebook fake-name purge". San Francisco Chronicle. Hearst Communications. Retrieved June 3, 2017.
- ^ Levy, Karyne (October 1, 2014). "Facebook Apologizes For 'Real Name' Policy That Forced Drag Queens To Change Their Profiles". Business Insider. Axel Springer SE. Retrieved March 23, 2017.
- ^ Crook, Jordan (October 1, 2014). "Facebook Apologizes To LGBT Community And Promises Changes To Real Name Policy". TechCrunch. AOL. Retrieved June 3, 2017.
- ^ Osofsky, Jason; Gage, Todd (December 15, 2015). "Community Support FYI: Improving the Names Process on Facebook". Facebook Newsroom. Retrieved December 16, 2015 – via Facebook.
- ^ AFP (December 16, 2015). "Facebook modifies 'real names' policy, testing use of assumed names". CTV News. Retrieved December 16, 2015.
- ^ Holpuch, Amanda (December 15, 2015). "Facebook adjusts controversial 'real name' policy in wake of criticism". The Guardian. Retrieved March 23, 2017.
- ^ Halliday, Josh (July 6, 2013). "Facebook apologises for deleting free speech group's post on Syrian torture". The Guardian. London. Retrieved June 4, 2013.
- ^ "Jealous Wives Are Getting Courtney Stodden Banned on Facebook". Softpedia. October 14, 2011. Retrieved July 31, 2012.
- ^ "When good lulz go bad: unpicking the ugly business of online harassment". Wired. January 27, 2014. Retrieved August 23, 2017.
- ^ McCarthy, Caroline (August 2, 2007). "Facebook outage draws more security questions". CNET News.com. ZDNet Asia. Archived from the original on May 31, 2008. Retrieved March 23, 2010.
- ^ Hamilton, David (June 26, 2008). "Facebook Outage Hits Some Countries". Web Host Industry Review. Archived from the original on April 2, 2010. Retrieved March 23, 2010.
- ^ Jones, K.C. (February 19, 2009). "Facebook, MySpace More Reliable Than Peers". InformationWeek. Archived from the original on March 14, 2009. Retrieved March 23, 2010.
- ^ "Facebook Outage and Facebook Down September 18 2009". Archived from the original on August 9, 2010. Retrieved August 30, 2010.
- ^ McCarthy, Caroline (October 8, 2009). "Facebook's mounting customer service crisis". CNET. Archived from the original on February 20, 2011. Retrieved December 13, 2009.
- ^ McCarthy, Caroline (October 10, 2009). "Downed Facebook accounts still haven't returned". CNET. Archived from the original on October 7, 2010. Retrieved December 13, 2009.
- ^ "Facebook Outage Silences 150,000 Users". PC World. October 13, 2009. Archived from the original on December 25, 2009. Retrieved December 13, 2009.
- ^ Gaudin, Sharon (October 13, 2009). "Facebook deals with missing accounts, 150,000 angry users". Computerworld. Archived from the original on July 18, 2014. Retrieved December 13, 2009.
- ^ a b Salter, Jim (October 4, 2021). "Facebook, Instagram, WhatsApp, and Oculus are down. Here's what we know". Ars Technica. Retrieved October 4, 2021.
- ^ "Mapillary is currently experiencing an outage". Twitter. Archived from the original on October 4, 2021. Retrieved October 4, 2021.
- ^ Patnaik, Subrat; Mathews, Eva (October 4, 2021). "Facebook, Instagram, WhatsApp hit by global outage". Reuters. Retrieved October 4, 2021.
- ^ Barrett, Brian. "Why Facebook, Instagram, and WhatsApp All Went Down Today". Wired. ISSN 1059-1028. Retrieved October 5, 2021.
- ^ Patnaik, Subrat; Mathews, Eva (October 4, 2021). "Facebook, Instagram, WhatsApp hit by global outage". Reuters. Retrieved October 4, 2021.
- ^ Vaughan-Nichols, Steven J. "What took Facebook down: Major global outage drags on". ZDNet. Retrieved October 4, 2021.
- ^ Snelling, Dave (March 5, 2024). "Facebook, Instagram, Messenger down: Meta platforms suddenly stop working in huge outage". The Independent. Retrieved March 5, 2024.
- ^ Capoot, Ashley (March 5, 2024). "Facebook, Threads and Instagram back online after outage". CNBC. Retrieved March 24, 2024.
- ^ "Hé lộ lý do khiến Facebook, Instagram bị "sập" trên toàn cầu" [Revealing the reason Facebook and Instagram "crashed" worldwide]. Báo điện tử Dân Trí (in Vietnamese). March 7, 2024. Retrieved March 24, 2024.
- ^ Musk, Elon (March 5, 2024). "If you're reading this post, it's because our servers are working". Twitter. Retrieved March 24, 2024.
- ^ Musk, Elon (March 5, 2024). "**image**". Twitter. Retrieved March 24, 2024.
- ^ "Facebook and Instagram down: Elon Musk mocks Meta with sarcastic post on X". MARCA. March 5, 2024. Retrieved March 24, 2024.
- ^ Musk, Elon (March 5, 2024). "we know why you're all here rn". Twitter. Retrieved March 24, 2024.
- ^ Reisinger, Don (May 18, 2012). "Facebook sued for $15 billion over alleged privacy infractions". CNET. Retrieved February 23, 2014.
- ^ "After privacy ruling, Facebook now requires Belgium users to log in to view pages". The Verge. Retrieved December 17, 2015.
- ^ Gordon, Whitson. "Facebook Changed Everyone's Email to @Facebook.com; Here's How to Fix Yours". Lifehacker.com. Retrieved October 25, 2016.
- ^ Johnston, Casey (July 2, 2012). "@facebook.com e-mail plague chokes phone address books". Ars Technica. Retrieved June 14, 2017.
- ^ Hamburger, Ellis (February 24, 2014). "Facebook retires its troubled @facebook.com email service". The Verge. Retrieved October 25, 2016.
- ^ "Facebook mistakenly asked people if they were in Pakistan following a deadly explosion". Tech Insider. Retrieved March 27, 2016.
- ^ "Facebook's Safety Check malfunctions after Pakistan bombing". CNET. Retrieved March 27, 2016.
- ^ a b c d Hamilton, Fiona (May 21, 2021). "MI5 chief Ken McCallum accuses Facebook of giving 'free pass' to terrorists". The Times.
- ^ Dearden, Lizzie (February 10, 2021). "Facebook encryption will create 'hidden space' for paedophiles to abuse children, National Crime Agency warns". The Independent.
- ^ Davis, Margaret (May 25, 2021). "Up to 850,000 people in UK pose sexual threat to children, says NCA". London Evening Standard.
- ^ a b Hern, Alex (January 21, 2021). "Facebook admits encryption will harm efforts to prevent child exploitation". The Guardian.
- ^ a b Abbot, Rachelle (May 21, 2021). "Fed's crypto crackdown: save some of those epic gains for tax". London Evening Standard.
- ^ Middleton, Joe (May 21, 2021). "MI5 chief accuses Facebook of giving 'free pass' to terrorists". The Independent.
- ^ Fraser, Graham; Rahman-Jones, Imran (July 3, 2025). "'There is a problem': Facebook and Instagram users complain of account bans". BBC. Retrieved July 4, 2025.
- ^ Fraser, Graham (August 15, 2025). "Angry, confused and worried about police – behind Instagram bans". BBC. Retrieved October 29, 2025.
- ^ Yacoub Oweis, Khaled (November 23, 2007). "Syria blocks Facebook in Internet crackdown". Reuters. Retrieved March 5, 2008.
- ^ "China's Facebook Status: Blocked". ABC News. July 8, 2009. Archived from the original on July 11, 2009. Retrieved July 13, 2009.
- ^ "Facebook Faces Censorship in Iran". American Islamic Congress. August 29, 2007. Archived from the original on April 24, 2008. Retrieved April 30, 2008.
- ^ ODPS (2010). "Isle of Man ODPS issues Facebook Guidance booklet" (PDF). Office of the Data Protection Supervisor. Archived from the original (PDF) on November 2, 2012. Retrieved May 1, 2013.
- ^ "Pakistan court orders Facebook ban". Belfast Telegraph.
- ^ Crilly, Rob (May 19, 2010). "Facebook blocked in Pakistan over Prophet Mohammed cartoon row". The Daily Telegraph. London. Archived from the original on January 12, 2022.
- ^ "Pakistan blocks YouTube, Facebook over 'sacrilegious content' – CNN". May 21, 2010.
- ^ "Pakistan blocks YouTube over blasphemous material". GEO.tv. May 20, 2010. Retrieved August 7, 2010.
- ^ "Home – Pakistan Telecommunication Authority". Pta.gov.pk. Retrieved August 7, 2010.
- ^ "LHC moved for ban on Facebook". The News International. Archived from the original on July 18, 2018. Retrieved December 16, 2018.
- ^ "Permanently banning Facebook: Court seeks record of previous petitions". The Express Tribune. May 6, 2011. Retrieved December 16, 2018.
- ^ "Nigeria fines Meta $220 million for violating data protection and consumer rights laws". AP News. July 19, 2024. Retrieved September 20, 2024.
- ^ "Organizations blocking Facebook". CTV News. Archived from the original on January 15, 2013.
- ^ Benzie, Robert (May 3, 2007). "Facebook banned for Ontario staffers". Toronto Star. Retrieved March 5, 2008.
- ^ "Ontario politicians close the book on Facebook". Blog Campaigning. May 23, 2007. Archived from the original on March 14, 2008. Retrieved March 5, 2008.
- ^ "Facebook banned for council staff". BBC News. September 1, 2009. Retrieved February 2, 2010.
- ^ "Tietoturvauhan poistuminen voi avata naamakirjan Kokkolassa" [Removal of the security threat could open up Facebook in Kokkola] (in Finnish). Archived from the original on February 22, 2012. Retrieved February 2, 2010.
- ^ "Immediate Ban of Internet Social Networking Sites (SNS) On Marine Corps Enterprise Network (MCEN) NIPRNET". Archived from the original on December 25, 2009. Retrieved February 2, 2010.
- ^ "Facebook kiellettiin Keski-Suomen sairaanhoitopiirissä" [Facebook banned in the Central Finland health care district] (in Finnish). Archived from the original on October 25, 2009. Retrieved February 2, 2010.
- ^ "Sairaanhoitopiirin työntekijöille kielto nettiyhteisöihin" [Hospital district employees banned from online communities] (in Finnish). Archived from the original on July 20, 2011. Retrieved February 2, 2010.
- ^ Fort, Caleb (October 12, 2005). "CIRT blocks access to Facebook.com". Daily Lobo (University of New Mexico). Archived from the original on September 6, 2012. Retrieved April 3, 2006.
- ^ "Popular web site, Facebook.com, back online at UNM". University of New Mexico. January 19, 2006. Archived from the original on February 12, 2007. Retrieved April 15, 2007.
- ^ Loew, Ryan (June 22, 2006). "Kent banning athlete Web profiles". The Columbus Dispatch. Retrieved October 6, 2006. [dead link]
- ^ "The Summer Kent Stater 5 July 2006 — Kent State University". dks.library.kent.edu. Retrieved October 8, 2020.
- ^ "Closed Social Networks as a Gilded Cage". August 6, 2007. Archived from the original on October 29, 2013. Retrieved February 23, 2009.
- ^ NSTeens video about private social networking. Archived March 10, 2010, at the Wayback Machine.
- ^ Lapeira (October 16, 2008). "Three types of social networking". [dead link]
- ^ "Openbook – Connect and share whether you want to or not". Youropenbook.org. May 12, 2010. Archived from the original on August 3, 2010. Retrieved August 7, 2010.
- ^ Gordon, Whitson (August 5, 2010). "F. B. Purity Hides Annoying Facebook Applications and News Feed Updates". Lifehacker. Retrieved August 27, 2023.
- ^ Gold, Jon (December 19, 2012). "Facebook bans developer of timeline-cleaning browser extension". Network World. Archived from the original on May 14, 2013. Retrieved August 27, 2023.
- ^ Barclay, Louis (October 7, 2021). "Facebook Banned Me for Life Because I Help People Use It Less". Slate Magazine. Retrieved October 11, 2021.
- ^ "Facebook Banned the Creator of 'Unfollow Everything' and Sent Him a Cease and Desist Letter". Gizmodo. October 8, 2021. Retrieved October 11, 2021.
- ^ Vincent, James (October 8, 2021). "Facebook bans developer behind Unfollow Everything tool". The Verge. Retrieved October 11, 2021.
- ^ Claburn, Thomas (May 2, 2024). "Prof sues Meta to protect his Unfollow Everything 2.0 extension from Facebook". The Register. Archived from the original on May 2, 2024. Retrieved March 5, 2025.
- ^ Cope, Sophia; Greene, David; Mackey, Aaron (September 24, 2024). "EFF to Federal Trial Court: Section 230's Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0". Electronic Frontier Foundation. Retrieved March 5, 2025.
- ^ Zuckerman v. Meta Platforms, Inc., 3:24-cv-02596, ECF No. 43, page 1 (N.D. Cal. November 22, 2024) ("Professor Zuckerman's request for declaratory relief is not ripe for adjudication and seeks an unconstitutional advisory opinion."), archived from the original on March 5, 2025.
- ^ Feiner, Lauren (January 22, 2021). "Facebook spent more on lobbying than any other Big Tech company in 2020". CNBC. Retrieved April 30, 2022.
- ^ Jardin, Xeni (January 23, 2020). "Google spent ~$150 million on US lobbying over last decade, followed by Facebook at ~$81M, Amazon almost $80M: Federal filings". Boing Boing. Retrieved April 30, 2022.
- ^ Romm, Tony (January 22, 2020). "Tech giants led by Amazon, Facebook and Google spent nearly half a billion on lobbying over the past decade, new data shows". The Washington Post. Retrieved April 30, 2022.
- ^ "Facebook, Google Fund Groups Shaping Federal Privacy Debate (3)". news.bloomberglaw.com. Retrieved April 30, 2022.
- ^ Feinberg, Ashley (March 14, 2019). "Facebook, Axios And NBC Paid This Guy To Whitewash Wikipedia Pages". HuffPost. Archived from the original on April 8, 2019. Retrieved April 8, 2019.
- ^ Cohen, Noam (April 7, 2019). "Want to Know How to Build a Better Democracy? Ask Wikipedia". Wired. Archived from the original on April 8, 2019. Retrieved April 8, 2019.
- ^ Hagey, Keach; Wells, Georgia; Glazer, Emily; Seetharaman, Deepa; Horwitz, Jeff (December 29, 2021). "Facebook's Pushback: Stem the Leaks, Spin the Politics, Don't Say Sorry". The Wall Street Journal.
- ^ "Facebook reportedly told Republicans whistleblower was 'trying to help Democrats'". news.yahoo.com. December 29, 2021.
- ^ a b c "Niet compatibele browser" [Incompatible browser]. Retrieved August 7, 2010 – via Facebook.
- ^ a b c "Facebook Privacy Change Sparks Federal Complaint". PC World. Archived from the original on April 9, 2009. Retrieved March 5, 2009.
- ^ "Facebook's New Terms Of Service: "We Can Do Anything We Want With Your Content. Forever."". Consumerist. Consumer Media LLC. Archived from the original on October 8, 2009. Retrieved February 20, 2009.
- ^ "Improving Your Ability to Share and Connect". Retrieved March 5, 2009 – via Facebook.
- ^ a b c Haugen, Austin (October 23, 2009). "facebook DEVELOPERS". Archived from the original on December 23, 2009. Retrieved October 25, 2009 – via Facebook.
- ^ "Facebook Town Hall: Proposed Facebook Principles". Archived from the original on February 27, 2009. Retrieved March 5, 2009 – via Facebook.
- ^ "Facebook Town Hall: Proposed Statement of Rights and Responsibilities". Archived from the original on February 27, 2009. Retrieved March 5, 2009 – via Facebook.
- ^ "Governing the Facebook Service in an Open and Transparent Way". Retrieved March 5, 2009 – via Facebook.
- ^ "Rewriting Facebook's Terms of Service". PC World. Archived from the original on March 2, 2009. Retrieved March 5, 2009.
- ^ "Democracy Theatre on Facebook". University of Cambridge. March 29, 2009. Retrieved April 4, 2009.
- ^ "Facebook's theatrical rights and wrongs". Open Rights Group. Archived from the original on April 6, 2009. Retrieved April 4, 2009.
- ^ "Complaint, Request for Investigation, Injunction, and Other Relief" (PDF). Epic.org. Retrieved December 16, 2018.
- ^ "Supplemental Materials in Support of Pending Complaint and Request for Injunction, Request for Investigation and for Other Relief" (PDF). Epic.org. Retrieved December 16, 2018.
- ^ Puzzanghera, Jim (March 1, 2011). "Facebook reconsiders allowing third-party applications to ask minors for private information". Los Angeles Times.
- ^ Electronic Privacy Information Center. "EPIC – Facebook Resumes Plan to Disclose User Home Addresses and Mobile Phone Numbers". epic.org.
- ^ Baker, Gavin (May 27, 2008). "Free software vs. software-as-a-service: Is the GPL too weak for the Web?". Free Software Magazine. Archived from the original on May 17, 2013. Retrieved June 29, 2009.
- ^ "Statement of Rights and Responsibilities". May 1, 2009. Retrieved June 29, 2009 – via Facebook.
- ^ Calore, Michael (December 1, 2008). "As Facebook Connect Expands, OpenID's Challenges Grow". Wired. Retrieved June 29, 2009.
Facebook Connect was developed independently using proprietary code, so Facebook's system and OpenID are not interoperable. ... This is a clear threat to the vision of the Open Web, a future when data is freely shared between social websites using open source technologies.
- ^ Thompson, Nicholas. "What Facebook Can Sell". The New Yorker. Retrieved May 18, 2014.
- ^ Barnett, Emma (May 23, 2012). "Facebook Settles Lawsuit With Angry Users". The Telegraph. London. Archived from the original on January 12, 2022. Retrieved May 18, 2014.
- ^ a b c d Dijck 2013, p. 47.
- ^ Farber, Dan. "Facebook Beacon Update: No Activities Published Without Users Proactively Consenting". ZDNet. Retrieved May 18, 2014.
- ^ Sinker, Daniel (February 17, 2009). "Face/Off: How a Little Change in Facebook's User Policy is Making People Rethink the Rights They Give Away Online". HuffPost. Retrieved May 28, 2014.
- ^ Dijck 2013, p. 48.
- ^ Brunton, Finn (2011). "Vernacular Resistance to Data Collection and Analysis: A Political Theory of Obfuscation". First Monday. doi:10.5210/fm.v16i5.3493. S2CID 46500367. Archived from the original on August 30, 2022. Retrieved May 18, 2014.
- ^ a b "BBB Review of Facebook". Retrieved December 12, 2010.[dead link]
- ^ "TrustLink Review of Facebook". Archived from the original on June 13, 2010. Retrieved May 5, 2010.
- ^ Emery, Daniel (July 29, 2010). "Details of 100m Facebook users collected and published". BBC. Retrieved August 7, 2010.
- ^ Nicole Perlroth (June 3, 2013). "Bits: Malware That Drains Your Bank Account Thriving on Facebook". The New York Times. Retrieved June 9, 2013.
- ^ Bort, Julie (April 20, 2011). "Researcher: Facebook Ignored the Bug I Found Until I Used It to Hack Zuckerberg". Yahoo! Finance. Retrieved August 19, 2013.
- ^ "Zuckerberg's Facebook page hacked to prove security exploit". CNN. May 14, 2013. Retrieved August 19, 2013.
- ^ Tom Warren (August 1, 2013). "Facebook ignored security bug, researcher used it to post details on Zuckerberg's wall". The Verge. Retrieved August 19, 2013.
- ^ "Hacker who exposed Facebook bug to get reward from unexpected source". Yahoo! Finance. Reuters. August 20, 2013. Archived from the original on August 21, 2013. Retrieved August 22, 2013.
- ^ Rogoway, Mike (January 21, 2010). "Facebook picks Prineville for its first data center". The Oregonian. Retrieved January 21, 2010.
- ^ Kaufman, Leslie (September 17, 2010). "You're 'So Coal': Angling to Shame Facebook". The New York Times.
- ^ Albanesius, Chloe (September 17, 2010). "Greenpeace Attacks Facebook on Coal-Powered Data Center". PC Magazine.
- ^ "Facebook update: Switch to renewable energy now Greening Facebook from within". Greenpeace. February 17, 2010.
- ^ Tonelli, Carla (September 1, 2010). "'Friendly' push for Facebook to dump coal". Reuters. Archived from the original on October 13, 2010. Retrieved February 23, 2014.
- ^ "Dirty Data Report Card" (PDF). Greenpeace. Retrieved August 22, 2013.
- ^ "Facebook and Greenpeace settle Clean Energy Feud". Techcrunch. December 15, 2011. Retrieved August 22, 2013.
- ^ "Facebook Commits to Clean Energy Future". Greenpeace. Retrieved August 22, 2013.
- ^ Clifford, Catherine (April 12, 2022). "Stripe teams up with major tech companies to commit $925 million toward carbon capture". CNBC. Retrieved July 6, 2022.
- ^ Brigham, Katie (June 28, 2022). "Why Big Tech is pouring money into carbon removal". CNBC. Retrieved July 6, 2022.
- ^ Clifford, Catherine (January 18, 2023). "Amazon, Meta and Google buy more clean energy than any other companies". CNBC. Retrieved January 18, 2023.
- ^ a b "Startup Claims 80% Of Its Facebook Ad Clicks Are Coming From Bots". TechCrunch.com. January 4, 2011. Retrieved July 31, 2012.
- ^ Rodriguez, Salvador (July 30, 2012). "Start-up says 80% of its Facebook ad clicks came from bots". Los Angeles Times. Retrieved July 31, 2012.
- ^ Sengupta, Somini (April 23, 2012). "Bots Raise Their Heads Again on Facebook". Bits.blogs.nytimes.com. Retrieved July 31, 2012.
- ^ Hof, Robert. "Stung By Click Fraud Allegations, Facebook Reveals How It's Fighting Back". Forbes. Retrieved December 16, 2018.
- ^ "Guide to the Ads Create Tool". Retrieved June 11, 2014 – via Facebook.
- ^ a b "Facebook Advertisers Complain Of A Wave Of Fake Likes Rendering Their Pages Useless". Business Insider. February 11, 2014. Retrieved June 11, 2014.
- ^ Kirtiş, A. Kazım; Karahan, Filiz (October 5, 2011). "Efficient Marketing Strategy". Procedia - Social and Behavioral Sciences. 24: 260–268. doi:10.1016/j.sbspro.2011.09.083.
- ^ "Are 40% Of Life Science Company Facebook Page 'Likes' From Fake Users?". Comprendia. August 2012. Retrieved June 7, 2014.
- ^ "Facebook, Inc. Form 10K". United States Securities and Exchange Commission. January 28, 2014. Retrieved June 7, 2014.
- ^ "What Do Facebook "likes" of Companies Mean?". PubChase. January 23, 2014. Archived from the original on July 3, 2014. Retrieved June 7, 2014.
- ^ "Facebook Fraud". February 10, 2014. Archived from the original on December 21, 2021. Retrieved June 11, 2014 – via YouTube.
- ^ "Firms withdraw BNP Facebook ads". BBC News. August 3, 2007. Retrieved April 30, 2010.
- ^ Horwitz, Jeff (November 6, 2025). "Meta is earning a fortune on a deluge of fraudulent ads, documents show". Reuters.
- ^ a b "Facebook halts ads that exclude racial and ethnic groups". USA Today. Retrieved March 29, 2019.
- ^ a b Brandom, Russell (March 28, 2019). "Facebook has been charged with housing discrimination by the US government". The Verge. Retrieved March 29, 2019.
- ^ a b Angwin, Julia; Tobin, Ariana (November 21, 2017). "Facebook (Still) Letting Housing Advertisers Exclude Users by Race". ProPublica. Retrieved March 29, 2019.
- ^ Robertson, Adi (April 4, 2019). "Facebook's ad delivery could be inherently discriminatory, researchers say". The Verge. Retrieved April 8, 2019.
- ^ Angwin, Julia; Parris, Terry Jr. (October 28, 2016). "Facebook Lets Advertisers Exclude Users by Race". ProPublica. Retrieved March 29, 2019.
- ^ "Improving Enforcement and Promoting Diversity: Updates to Ads Policies and Tools". February 8, 2017. Retrieved March 29, 2019 – via Facebook.
- ^ Statt, Nick (July 24, 2018). "Facebook signs agreement saying it won't let housing advertisers exclude users by race". The Verge. Retrieved March 29, 2019.
- ^ Statt, Nick (August 21, 2018). "Facebook will remove 5,000 ad targeting categories to prevent discrimination". The Verge. Retrieved March 29, 2019.
- ^ "Facebook agrees to overhaul targeted advertising system for job, housing and loan ads after discrimination complaints". The Washington Post. March 19, 2019. Retrieved March 29, 2019.
- ^ Madrigal, Alexis C. (March 20, 2019). "Facebook Does Have to Respect Civil-Rights Legislation, After All". The Atlantic. Retrieved March 29, 2019.
- ^ Yurieff, Kaya (March 28, 2019). "HUD charges Facebook with housing discrimination in ads". CNN. Retrieved March 29, 2019.
- ^ "Facebook: About 83 million accounts are fake". USA Today. August 3, 2012. Retrieved August 4, 2012.
- ^ "Unreal: Facebook reveals 83 million fake profiles". The Sydney Morning Herald. Retrieved August 4, 2012.
- ^ Rushe, Dominic (August 2, 2012). "Facebook share price slumps below $20 amid fake account flap". The Guardian. London. Retrieved August 4, 2012.
- ^ Gupta, Aditi (2017). "Towards detecting fake user accounts in facebook". 2017 ISEA Asia Security and Privacy (ISEASP). pp. 1–6. doi:10.1109/ISEASP.2017.7976996. ISBN 978-1-5090-5942-3. S2CID 37561110.
- ^ "Facebook Takes 4 Years to Remove A Woman's Butthole as a Business Page". HITS 106.1.
- ^ "The Facebook Blog – Moving to the new Facebook". Archived from the original on October 29, 2008.
- ^ "Facebook Newsroom". newsroom.fb.com.
- ^ "Petition against Facebook redesign fails as old version disabled". Archived from the original on September 12, 2012.
- ^ a b c "Facebook's New Privacy Changes: The Good, The Bad, and The Ugly | Electronic Frontier Foundation". Eff.org. December 9, 2009. Retrieved August 7, 2010.
- ^ a b "Gawker.com". Gawker.com. December 13, 2009. Archived from the original on May 17, 2013. Retrieved June 11, 2013.
- ^ "What Does Facebook's Privacy Transition Mean for You? | ACLUNC dotRights". Dotrights.org. December 4, 2009. Archived from the original on December 12, 2009. Retrieved December 13, 2009.
- ^ "Facebook faces criticism on privacy change". BBC News. December 10, 2008. Retrieved December 13, 2009.
- ^ "ACLU.org". Secure.aclu.org. Archived from the original on February 24, 2012. Retrieved June 11, 2013.
- ^ "Facebook CEO's Private Photos Exposed by the New 'Open' Facebook". Gawker.com. Archived from the original on December 14, 2009. Retrieved December 13, 2009.
- ^ McCarthy, Caroline. "Facebook backtracks on public friend lists | The Social – CNET News". CNET. Archived from the original on December 22, 2009. Retrieved December 13, 2009.
- ^ "Mediactive.com". Mediactive.com. December 12, 2009. Retrieved June 11, 2013.
- ^ Oremus, Will. "TheBigMoney.com". TheBigMoney.com. Archived from the original on July 24, 2011. Retrieved June 11, 2013.
- ^ "ReadWriteWeb.com". ReadWriteWeb.com. Archived from the original on January 13, 2010. Retrieved June 11, 2013.
- ^ Benny Evangelista (January 27, 2010). "Home". San Francisco Chronicle. Archived from the original on January 24, 2011. Retrieved February 23, 2014.
- ^ Seetharaman, Deepa (January 11, 2018). "Facebook to Rank News Sources by Quality to Battle Misinformation". The New York Times. Retrieved March 5, 2018.
- ^ a b c Zuckerberg, Mark (January 12, 2018). [2] – via Facebook.
- ^ Isaac, Mike (January 11, 2018). "Facebook Overhauls News Feed to Focus on What Friends and Family Share". The New York Times. Retrieved March 5, 2018.
- ^ Mosseri, Adam (January 11, 2018). "News Feed FYI: Bringing People Closer Together". Facebook newsroom. Retrieved March 5, 2018.
- ^ Engel Bromwich, Jonah; Haag, Matthew (January 12, 2018). "Facebook Is Changing. What Does That Mean for Your News Feed?". The New York Times. Retrieved March 5, 2018.
- ^ a b Bell, Emily (January 21, 2018). "Why Facebook's news feed changes are bad news for democracy". The Guardian. Retrieved March 11, 2018.
- ^ a b c Dojčinović, Stevan (November 15, 2017). "Hey, Mark Zuckerberg: My Democracy Isn't Your Laboratory". The New York Times. Retrieved March 11, 2018.
- ^ Shields, Mike (February 28, 2018). "Facebook's algorithm has wiped out a once flourishing digital publisher". The New York Times. Retrieved March 12, 2018.
- ^ "The top 10 facts about FreeBasics". December 28, 2015. Archived from the original on March 2, 2016.
- ^ "Free Basics by Facebook". Internet.org. August 25, 2015.
- ^ "TRAI Releases the 'Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016'" (PDF). TRAI. February 8, 2016. Archived from the original (PDF) on February 8, 2016.
- ^ "How India Pierced Facebook's Free Internet Program". Wired. Backchannel. February 1, 2016.
- ^ "TRAI letter to Facebook" (PDF). Archived from the original (PDF) on February 19, 2016.
- ^ "Trai to Seek Specific Replies From Facebook Free Basic Supporters". Press Trust of India. December 31, 2015.
- ^ Brühl, Jannis; Tanriverdi, Hakan (2018). "Gut für die Welt, aber nicht für uns" [Good for the world, but not for us]. Süddeutsche Zeitung (in German). ISSN 0174-4917. Retrieved December 10, 2018.
- ^ "Tech bosses grilled over claims of 'harmful' power". BBC News. July 30, 2020. Retrieved July 30, 2020.
- ^ Brian Fung (July 29, 2020). "Congress grilled the CEOs of Amazon, Apple, Facebook and Google. Here are the big takeaways". CNN. Retrieved July 30, 2020.
- ^ "What Did Cambridge Analytica Do During The 2016 Election?". NPR.org. Retrieved April 30, 2022.
- ^ Thompson, Anne (August 1, 2019). "'The Great Hack' Terrified Sundance Audiences, and Then the Documentary Got Even Scarier". IndieWire. Retrieved April 30, 2022.
- ^ Power, Ed. "The Great Hack: The story of Cambridge Analytica, Trump and Brexit". The Irish Times. Retrieved April 30, 2022.
- ^ "Meta starts blocking news in Canada over law on paying publishers". Reuters. August 1, 2023. Archived from the original on August 22, 2023. Retrieved August 24, 2023.
- ^ Lindeman, Tracey (August 4, 2023). "'Disaster': warning for democracy as experts condemn Meta over Canada news ban". The Guardian. Archived from the original on August 26, 2023. Retrieved August 24, 2023.
- ^ a b Ljunggren, David (August 18, 2023). "Canada demands Meta lift news ban to allow wildfire info sharing". Reuters. Archived from the original on August 24, 2023. Retrieved August 24, 2023.
- ^ a b Woolf, Marie; Walsh, Marieke; Smith, Alanna (August 21, 2023). "Trudeau accuses Facebook of prioritizing profits by blocking news access during wildfires". The Globe and Mail. With a report from The Canadian Press. Archived from the original on August 24, 2023. Retrieved August 24, 2023.
- ^ Gillies, Rob (August 21, 2023). "Prime Minister Justin Trudeau slams Facebook for blocking Canada wildfire news". Associated Press. Archived from the original on August 25, 2023. Retrieved August 24, 2023.
- ^ a b Evans, Pete (August 18, 2023). "N.W.T. wildfire evacuees say Facebook's news ban 'dangerous' in emergency situation". CBC News. Archived from the original on August 24, 2023. Retrieved August 24, 2023.
- ^ Alam, Hina (August 22, 2023). "Lack of local media, Meta's news block impact Northwest Territories residents' access to information". The Globe and Mail. The Canadian Press. Archived from the original on August 24, 2023. Retrieved August 24, 2023.
Further reading
- Mims, Christopher (June 1, 2011). "How Facebook Leveraged Publishers' Desperation to Build a Web-Wide Tracking System". Technology Review. Archived from the original on February 9, 2012. Retrieved June 1, 2011.
- "Facebook: Friend or Foe?". LifeIvy. May 15, 2013
- Funk, McKenzie (November 19, 2016). "The Secret Agenda of a Facebook Quiz". The New York Times. Retrieved January 25, 2017.
- How Facebook's tentacles reach further than you think (May 26, 2017), BBC
- Lanchester, John (August 2017), "You Are the Product", London Review of Books, 39 (16): 3–10
- Oremus, Will (April 2018), "Are You Really the Product? The history of a dangerous idea", Slate
- Greenspan, Aaron (January 24, 2019), Reality Check: Facebook, Inc.
External links
Media related to Criticism of Facebook at Wikimedia Commons
Content Moderation and Censorship
Alleged Ideological Bias in Moderation
Critics, predominantly conservatives and Republican lawmakers, have long alleged that Facebook's content moderation disproportionately targets right-leaning viewpoints, enforcing policies that favor liberal perspectives through algorithmic demotion, fact-checking, and account restrictions.[11] These claims intensified after high-profile incidents, such as the platform's decision in October 2020 to limit sharing of a New York Post article detailing Hunter Biden's laptop contents, which was flagged internally following an FBI warning about potential Russian disinformation; Meta CEO Mark Zuckerberg later confirmed the story's distribution was artificially suppressed as a precautionary measure, though he maintained it was not outright censorship.[12] [13] In response to such accusations, Facebook commissioned an external audit in 2019 to investigate claims of anti-conservative bias, prompted by complaints from right-wing users and media outlets that algorithms systematically reduced visibility of conservative news sources compared to liberal ones.[14] Internal documents and whistleblower accounts have further fueled allegations, including revelations of employee political leanings skewing enforcement—such as a 2018 civil rights audit that overlapped with bias reviews but highlighted uneven application of rules against viewpoint discrimination.[15] Prominent conservatives, including former President Donald Trump and commentators like Diamond and Silk, faced temporary suspensions or reduced reach, which proponents of the bias narrative attribute to ideological gatekeeping rather than violations of neutral community standards.[11] Zuckerberg has acknowledged flaws in prior moderation practices, particularly in fact-checking, which he described in January 2025 as prone to expert biases that manifested in selective censorship of legitimate political discourse, leading Meta to phase out third-party fact-checkers in favor of a user-driven Community Notes system aimed at 
incorporating diverse perspectives.[13] He stated that the program "too often became a tool to censor" and reflected over-enforcement that limited political debate, marking an explicit pivot from what he termed mistaken ideological overreach.[13] While some academic analyses, including a 2021 NYU Stern study, found no statistical evidence of anti-conservative algorithmic bias, citing higher engagement for right-leaning content in certain periods, these findings have been contested by critics who argue they overlook opaque moderation decisions and rely on self-reported platform data potentially skewed by left-leaning institutional influences in tech and academia.[11]

Suppression of Political Dissent
In October 2020, Facebook restricted the distribution of a New York Post article alleging corruption involving Hunter Biden based on contents from a laptop purportedly belonging to him, citing concerns over potentially hacked materials following an FBI warning about Russian disinformation campaigns.[12] [16] Mark Zuckerberg later confirmed in 2022 that the platform demoted the story's visibility across its systems due to this advisory, which influenced algorithmic and human moderation decisions.[12] This action drew accusations of suppressing politically sensitive information ahead of the U.S. presidential election, with internal communications revealing debates among executives about balancing fact-checking delays against rapid dissemination risks.[17] During the COVID-19 pandemic, Facebook removed or throttled content deemed misinformation, including posts questioning vaccine efficacy or public health mandates, often in response to external pressures.[18] In a 2024 letter to the U.S. House Judiciary Committee, Zuckerberg disclosed that senior Biden administration officials repeatedly pressured Meta to censor such content, including humorous memes, leading the company to adjust policies and demote posts between 2021 and 2022; he described this interference as "wrong" and expressed regret for complying.[19] [20] Internal documents from congressional investigations further indicated coordination between White House officials and platform executives to prioritize removal of dissenting views on lockdowns and treatments, amplifying claims of viewpoint discrimination against skeptics often aligned with conservative critiques.[21] Broader allegations of suppressing conservative dissent emerged from congressional hearings and leaked internal research, where Republicans cited disparities in content moderation, such as higher removal rates for right-leaning pages on topics like immigration and election integrity.[22] A 2021 Wall Street Journal review of Facebook's internal 
files revealed executives shelved algorithms designed to reduce polarizing content after determining they disproportionately affected conservative-leaning material, prioritizing growth over mitigation of echo chambers.[23] While Facebook maintained no systemic ideological bias in its processes, these disclosures fueled distrust among conservatives: in 2019 surveys, 70% of Republicans said they believed the platform discriminated against their views.[24] Such practices, critics argued, stifled political discourse by enforcing consensus on contested issues, though empirical studies on bias remained divided, with some analyses finding algorithmic preferences anecdotal rather than structural.[25]

Alignment with Government Pressures
Facebook has faced criticism for yielding to pressures from various governments to remove or restrict content, often prioritizing regulatory compliance and market access over commitments to free expression. In a letter dated August 26, 2024, to the U.S. House Judiciary Committee, Meta CEO Mark Zuckerberg acknowledged that senior Biden administration officials repeatedly pressured the company to censor COVID-19-related content, including humorous posts and accurate information on vaccine side effects, during 2021. Zuckerberg noted that officials expressed frustration through aggressive communications and threatened antitrust investigations, leading Meta to temporarily demote such content despite internal reservations.[26] Critics, including members of Congress, argue this alignment enabled government overreach into private moderation decisions, potentially violating First Amendment principles by coercing platforms into suppressing dissenting views on public health policy.[27]

A May 2024 report by the U.S. House Judiciary Committee's Select Subcommittee on the Weaponization of the Federal Government detailed how Biden White House officials coordinated with Facebook and other platforms to influence content moderation on topics like COVID-19 origins, vaccine efficacy, and election integrity, resulting in policy changes that amplified government-favored narratives.[27] The report cited internal communications showing platforms adjusting algorithms in response to federal criticism, with Facebook altering its approach to lab-leak theories after White House rebukes. While Meta maintains it resisted some demands, such as not fully removing satire, detractors contend the company's partial compliance fostered a "censorship-industrial complex" where unelected bureaucrats shaped public discourse indirectly.[18] This dynamic was partially addressed in the 2024 Supreme Court case Murthy v.
Missouri, where plaintiffs alleged unconstitutional jawboning, though the Court remanded on standing grounds without resolving the coercion claims.

Internationally, Facebook's compliance with government takedown requests has drawn scrutiny for enabling authoritarian control. In India, the platform faced accusations of selectively enforcing policies under pressure from Prime Minister Narendra Modi's administration, allowing anti-Muslim hate speech and propaganda to proliferate while restricting critics, as revealed in a September 2023 Washington Post investigation based on internal documents.[28] Meta's compliance rate for Indian government requests for user data rose from 51% in early 2021 to 68% by mid-2022, amid broader demands for content removals targeting opposition voices.[29] Similar patterns emerged in Brazil, where Facebook adhered to judicial orders in 2018 to block accounts spreading election misinformation, and in Turkey, where it restricted content under the 2007 Internet Act, often citing personal rights violations but effectively curbing political dissent.[30] In Vietnam, Meta reportedly complied with 96% of government takedown requests as of 2025, prioritizing operational continuity in high-user markets.[31] Human Rights Watch and the Electronic Frontier Foundation have condemned these practices for lacking transparency and enabling offline harms, such as reduced visibility of protests or abuses, arguing that high compliance rates—often exceeding 90% in sensitive cases—undermine global human rights standards.[32][33] Meta's transparency reports outline a review process balancing local laws with human rights due diligence, but critics assert this framework insufficiently resists overbroad demands, as evidenced by restricted notifications to users and selective publications via databases like Lumen.[34]

Global Censorship Practices
Facebook's global censorship practices have drawn criticism for prioritizing compliance with local government demands over consistent enforcement of free speech, resulting in the removal of content critical of authorities in numerous countries. Meta's transparency reports indicate that the company processes thousands of government requests for content takedowns each period, often complying at rates exceeding 80% in select nations with restrictive speech environments. For example, in Turkey, Meta acceded to about 92% of removal requests in 2020, including blocks on posts criticizing the government during protests.[35] Such high compliance has been faulted by human rights advocates for enabling suppression of dissent without robust internal challenges to authoritarian overreach.[36]

In India, Facebook has faced particular scrutiny for yielding to pressure from the Modi administration, removing 442 pieces of content in the first half of 2021 alone at direct government direction, often targeting criticism of policies like the 2020 farm laws.[37] Reports from leaked internal documents reveal that executives sometimes overrode moderation teams to preserve relations with New Delhi, allowing pro-government propaganda while censoring opposition voices, a pattern critics link to broader influence operations favoring ruling parties.[28] India has consistently topped lists of countries issuing the most takedown requests to the platform since at least 2014.[38]

Brazil provides another case, where judicial mandates have compelled the removal of political posts and account suspensions, especially around the 2018 and 2022 elections; courts ordered blocks on content deemed misinformation by electoral authorities, with Facebook complying under threat of fines or shutdowns.[39] This has led to accusations of facilitating censorship of conservative and Bolsonaro-aligned figures, mirroring patterns elsewhere in Latin America, where platforms have submitted to temporary app blocks amid
political unrest.[40]

Efforts to expand into censored markets have amplified concerns; in 2016, Facebook secretly built a tool to geotarget and suppress posts, ostensibly to re-enter China—where it remains banned—prompting backlash for designing infrastructure tailored to Beijing's surveillance state.[41] Globally, Meta imposed 24 external content restrictions in 2021 across countries like Vietnam and Thailand, blocking access to pages critical of monarchies or communist parties.[29] Detractors argue these localized policies create a fragmented regime where censorship varies by jurisdiction, eroding the platform's claimed commitment to open discourse and aiding regimes in stifling information flows.[37] Even in democratic regions, such as the EU, compliance with laws like the Digital Services Act has fueled internal critiques, with CEO Mark Zuckerberg asserting in 2025 that such regulations enforce undue censorship.[42]

Shifts Toward Reduced Moderation (Post-2020)
In May 2021, Facebook updated its policies to no longer remove content claiming that COVID-19 was man-made or originated from a laboratory, reversing an earlier classification of such claims as misinformation following renewed scrutiny and assessments by U.S. intelligence agencies.[43][44] This adjustment came amid broader debates over the virus's origins and represented an early post-2020 relaxation in enforcement against specific debunked narratives previously targeted for removal. In January 2023, Meta announced the reinstatement of former U.S. President Donald Trump's Facebook and Instagram accounts, which had been suspended indefinitely following the January 6, 2021, Capitol riot, subject to enhanced guardrails to deter repeat violations of community standards.[45] By July 2024, Meta further lifted remaining restrictions on these accounts ahead of the U.S. presidential election, citing equal treatment for political figures and the expiration of prior penalties.[46][47]

A more sweeping overhaul occurred in January 2025, when Meta discontinued its third-party fact-checking program across Facebook, Instagram, and Threads, replacing it with a user-driven Community Notes system modeled after X's approach to crowd-sourced context.[13][48] CEO Mark Zuckerberg described the shift as a return to the platforms' foundational emphasis on free expression, aiming to simplify policies, minimize erroneous removals, and reduce what he termed overreach in prior moderation efforts.[49] These changes, including loosened enforcement on certain content categories, resulted in a reported decline in overall removals, with Meta claiming fewer mistakes without a corresponding rise in harmful material.[50] The 2025 reforms also reversed elements of earlier interventions, such as a 2021 temporary reduction in the visibility of political content implemented post-Capitol riot to curb potential incitement.[51] This evolution followed disclosures, including Zuckerberg's August 2024 letter to
Congress detailing Biden administration pressures to suppress COVID-19 content like humorous critiques of mandates, which Meta had partially accommodated at the time.[26] Critics, including Meta's independent Oversight Board, contended that the 2025 policy pivots were enacted hastily, without adequate evaluation of human rights implications or stakeholder input, potentially exacerbating risks for at-risk groups.[52] Advocacy groups like the Electronic Frontier Foundation warned that diminished proactive moderation could amplify hate speech, misinformation, and harm to vulnerable users, arguing the community notes model lacks the reliability of expert verification.[53] Conversely, proponents viewed the adjustments as corrective responses to documented over-censorship, particularly on politically sensitive topics, aligning with empirical evidence of prior biases in fact-checking partnerships.[54][55]

Privacy and Data Practices
Extensive User Surveillance and Tracking
Facebook (now Meta Platforms) employs extensive tracking mechanisms to monitor user behavior both on and off its platforms, primarily to fuel targeted advertising. This includes the use of cookies, tracking pixels, and device fingerprinting to collect data on browsing habits, location, purchases, and interactions across third-party websites and apps.[56] A 2024 Federal Trade Commission staff report described these practices as "vast surveillance," noting that Meta and similar companies gather far more personal data than users typically understand or consent to, often without clear disclosure.[56] Critics argue this enables behavioral prediction and micro-targeting, commodifying user data in what Harvard professor Shoshana Zuboff terms "surveillance capitalism," where private experiences are extracted for profit without adequate user sovereignty.[57]

A core component is the creation of "shadow profiles" for non-users, compiled from data uploaded by Facebook users (such as contacts) and inferred from third-party sources like public records or partner apps. A 2022 study analyzing a representative sample of U.S.
internet users found that 52% had shadow profiles containing sensitive information, including inferred demographics, political affiliations, and health indicators, regardless of whether they had ever engaged with the platform.[58] Facebook has acknowledged collecting such data for security and fraud prevention but faces criticism for retaining it indefinitely and using it to enhance ad targeting upon potential sign-up.[59] Privacy advocates, including the Electronic Frontier Foundation, contend this erodes anonymity and exposes individuals to risks like identity theft or unwanted profiling without recourse.[60]

Off-platform tracking via the Facebook Pixel—a snippet of code embedded on millions of websites—exacerbates these concerns by transmitting user interactions, such as page views and form submissions, back to Meta servers, often including sensitive details like medical searches or financial data.[61] The FTC has highlighted pixels' invisibility to users, enabling surreptitious data flows that violate expectations of privacy on non-Facebook sites.[61] In 2018 congressional hearings, CEO Mark Zuckerberg admitted the company tracks logged-out users and non-users through these tools, prompting backlash over the lack of granular opt-out controls.[59]

Regulatory scrutiny has intensified, with European Union fines under GDPR for inadequate consent mechanisms in pixel usage, as seen in a 2020 Irish Data Protection Commission ruling against Meta for processing behavioral ad data without valid legal basis.[62] These practices have drawn bipartisan criticism for prioritizing revenue—Meta's advertising accounted for over 97% of its $134.9 billion revenue in 2023—over user autonomy, potentially enabling discriminatory targeting or amplifying echo chambers.[57] While Meta introduced tools like "Off-Facebook Activity" in 2019 to display and disconnect some tracking, independent analyses reveal incomplete coverage, as much data aggregation occurs server-side beyond user
visibility.[59] Detractors, including FTC commissioners, warn that such surveillance threatens democratic freedoms by fostering dependency on opaque algorithms that infer and influence preferences without transparency.[56] Despite pledges to limit third-party data use post-2018 scandals, ongoing FTC monitoring and 2024 reports indicate persistent expansion in data collection scopes.[56]

Major Data Scandals and Breaches
One of the earliest significant data scandals involved Facebook's Beacon advertising program, launched in November 2007, which automatically shared users' purchases and activities from partner websites on their news feeds without explicit, informed consent.[63] The feature affected millions of users, leading to widespread complaints about privacy invasions, including instances where sensitive purchases like holiday gifts were disclosed prematurely.[64] Facebook CEO Mark Zuckerberg issued a public apology on December 5, 2007, acknowledging insufficient privacy controls, and introduced opt-out options, though critics argued the initial implementation prioritized revenue over user autonomy.[63] A class-action lawsuit followed, resulting in Beacon's redesign and eventual shutdown in September 2009, with Facebook agreeing to a $9.5 million settlement for privacy advocacy groups.[65]

The Cambridge Analytica scandal, exposed in March 2018, revealed that the British political consulting firm harvested personal data from up to 87 million Facebook users through a third-party personality quiz app called "thisisyourdigitallife," developed by researcher Aleksandr Kogan.[4] The app, used by approximately 270,000 individuals, exploited Facebook's API to access not only those users' data but also that of their friends, collecting details like likes, statuses, and inferred psychological profiles without clear consent.[66] Cambridge Analytica, linked to the Trump 2016 campaign and Brexit efforts, used this data for targeted political advertising, spending nearly $1 million on collection efforts.[4] Facebook faced accusations of lax oversight of app developers, leading to a $5 billion fine from the U.S.
Federal Trade Commission in July 2019 for privacy violations, the largest such penalty at the time, alongside restrictions on data practices.[2] In December 2022, Meta settled related class-action lawsuits for $725 million.[67]

In April 2019, security firm UpGuard discovered over 540 million Facebook user records exposed on unsecured Amazon cloud servers belonging to third-party apps, including Mexican media company Cultura Colectiva's database containing 146 GB of data comprising 540 million records.[68] The exposed information included names, comments, likes, and Instagram tokens, stemming from misconfigured servers by app developers rather than a direct Facebook hack.[69] Facebook stated it had suspended the implicated apps and worked to secure the data, but the incident highlighted ongoing risks from ecosystem partners.[70]

A related exposure occurred in April 2021, when data from 533 million Facebook users—scraped via a vulnerability in the platform's contact importer tool before its fix in September 2019—was posted for free on a hacking forum.[71] The leaked dataset included phone numbers, full names, locations, birthdates, and bios for users across 106 countries, enabling potential phishing and identity theft.[72] Facebook declined to notify affected users, arguing the breach predated the patch and no evidence showed misuse at the time of scraping, though Ireland's Data Protection Commission fined Meta €265 million in November 2022 for GDPR violations tied to the incident.[73][74] These events underscored persistent criticisms of Facebook's data retention practices and vulnerability to scraping, contributing to eroded user trust and regulatory scrutiny.[75]

Violations of Data Protection Laws
In July 2019, the U.S. Federal Trade Commission (FTC) settled charges against Facebook for deceiving users about their ability to control privacy settings and for violating a 2012 FTC order by sharing user data without consent, resulting in a record $5 billion civil penalty—the largest ever for privacy violations at the time—and mandates for enhanced privacy program oversight, including an independent privacy committee.[2] The settlement stemmed from practices like the Cambridge Analytica incident, where data from up to 87 million users was improperly accessed via third-party apps, though the FTC emphasized broader failures in safeguarding user information and prioritizing growth over privacy promises.[76]

Under the European Union's General Data Protection Regulation (GDPR), Meta Platforms Ireland Limited—Facebook's EU entity—has faced escalating fines for systemic data handling deficiencies. In May 2023, the Irish Data Protection Commission (DPC), following a binding decision from the European Data Protection Board (EDPB), imposed a €1.2 billion fine, the highest under GDPR, for unlawfully transferring Facebook user data from the EU to the U.S. after the 2020 Schrems II ruling invalidated the EU-U.S. Privacy Shield framework; Meta's reliance on standard contractual clauses was deemed inadequate against U.S. surveillance laws, prompting an order to suspend such transfers and delete prior data.[77] This violation affected millions of users by exposing personal data to potential government access without sufficient safeguards.[78] Additional GDPR penalties highlight persistent issues in consent and processing.
In January 2023, the DPC fined Meta €390 million for breaches on Facebook and Instagram, including using pre-ticked consent boxes and contractual terms instead of freely given, specific consent for behavioral advertising, violating GDPR Articles 7 and 8 on consent requirements.[79] In December 2024, another €251 million fine was levied for failures in protecting minors' data and inadequate age verification processes under GDPR's child data protections.[80] Earlier, in 2022, Meta paid €18.6 million for 12 data breach notifications in 2018 that violated GDPR transparency and security obligations.[81] These cumulative fines, exceeding €2 billion since GDPR's 2018 enforcement, underscore regulators' findings of inadequate data minimization, purpose limitation, and accountability, with Meta frequently contesting the rulings through appeals.[82]

End-to-End Encryption Shortcomings
Meta's implementation of end-to-end encryption (E2EE) across its messaging services, including full adoption on WhatsApp since 2016 and default enablement on Facebook Messenger beginning December 2023, has drawn criticism for undermining content moderation and law enforcement capabilities. E2EE ensures that only the sender and recipient can access message contents, preventing Meta from scanning communications for illegal material in real time. Critics argue this creates "going dark" scenarios where platforms cannot proactively detect and report child sexual abuse material (CSAM), with Meta having submitted 27.6 million CSAM reports to the National Center for Missing & Exploited Children (NCMEC) in 2022, accounting for 95% of the organization's total reports that year.[83][84]

Child safety organizations and law enforcement have highlighted the risks to vulnerable users, particularly children, as encrypted channels facilitate undetected distribution of exploitative content. The NCMEC described the Messenger rollout as a "devastating blow to child protection," noting that over 20 million CSAM incidents were reported from Facebook and Messenger in 2022 alone, many of which relied on platform scanning now rendered impossible by default E2EE.[84][85] In the UK, the National Crime Agency warned that E2EE hampers detection of grooming and abuse, while the National Police Chiefs' Council stated it would have a "dangerous impact" by eliminating Meta's ability to identify child victims proactively.[86][87] The FBI similarly urged Meta to delay the changes in April 2023, citing increased risks of child abuse imagery circulating without detection.[88]

Additional technical limitations exacerbate these concerns, as E2EE on Meta platforms does not extend to metadata—such as sender/recipient identities, timestamps, and location data—which remains accessible to the company and authorities via warrants.
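The content-versus-metadata distinction can be sketched with a toy example. This is illustrative only: the XOR-with-hashed-keystream "cipher" stands in for a real E2EE protocol such as the Signal Protocol used by WhatsApp and Messenger, and all field and function names here are hypothetical, not Meta's actual wire format.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a shared key (toy stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(sender: str, recipient: str, plaintext: bytes, shared_key: bytes) -> dict:
    """Build a message envelope: the body is opaque, the routing fields are not."""
    body = bytes(p ^ k for p, k in zip(plaintext, keystream(shared_key, len(plaintext))))
    return {
        "from": sender,            # metadata: visible to the platform
        "to": recipient,           # metadata: visible to the platform
        "timestamp": 1700000000,   # metadata: visible to the platform
        "ciphertext": body.hex(),  # content: unreadable without the shared key
    }

shared_key = secrets.token_bytes(32)   # known only to the two endpoints
envelope = seal("alice", "bob", b"meet at noon", shared_key)

# The server relaying this envelope learns who talked to whom and when,
# but cannot recover the plaintext from the ciphertext alone.
assert bytes.fromhex(envelope["ciphertext"]) != b"meet at noon"
```

Under a warrant, the plaintext "from", "to", and "timestamp" fields in such an envelope would still be producible by the platform; this is exactly the line critics and police agencies draw between encrypted content and unencrypted metadata.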
WhatsApp backups to cloud services like iCloud or Google Drive are often unencrypted by default, potentially exposing message histories unless users manually enable encryption, a setting many overlook.[89] Business accounts and payment details on WhatsApp also bypass full E2EE protections, allowing storage outside the encrypted channel. Critics, including European police chiefs, contend that even with these gaps, the inability to access message content severely impedes investigations into serious crimes like terrorism and child exploitation, prompting calls for legislative solutions to mandate access mechanisms without universal backdoors.[89][90]

Meta has countered by investing in alternative safety tools, such as client-side hashing for CSAM detection and user reporting features that allow forwarding suspicious messages to safety teams, but these measures are viewed by detractors as insufficient substitutes for server-side scanning. Governments, including the UK and EU, have pressured Meta to balance privacy with safety, with ongoing debates over proposals like the UK's Online Safety Bill seeking to compel platforms to scan encrypted traffic for illegal content. Despite resistance from Meta and allies like Apple in legal challenges, the rollout has intensified tensions, as evidenced by WhatsApp's support for encryption in a 2025 UK court case against government data access demands.[91][92][93]

User Psychological and Sociological Effects
Addiction Mechanisms and Usage Dependency
Facebook's platform incorporates design elements that leverage variable reinforcement schedules, akin to slot machines, to promote repeated engagement. Notifications for likes, comments, and messages deliver unpredictable social rewards, triggering dopamine release in the brain's reward pathways and encouraging compulsive checking behaviors.[94][95] This intermittent reinforcement exploits the brain's anticipation of reward, fostering dependency similar to behavioral addictions observed in gambling.[96] Social-communication features, such as messaging and sharing, exhibit the highest addictive potential among Facebook's functions, correlating with elevated "wanting" responses—motivational drives distinct from mere liking—that predict problematic use.[97] Infinite scrolling and algorithmic feeds minimize friction in content consumption, extending session durations by presenting tailored stimuli that capitalize on users' fear of missing out (FOMO) and social validation needs.[98] Peer-reviewed analyses confirm these mechanisms contribute to excessive use, with passive consumption (e.g., browsing without interaction) linked to diminished well-being and heightened dependency compared with active engagement.[98]

Empirical measures of usage dependency include the Bergen Facebook Addiction Scale, which assesses symptoms like salience, mood modification, tolerance, withdrawal, conflict, and relapse, revealing prevalence rates up to 39.7% in certain student populations.[99][100] Longitudinal studies indicate that prolonged exposure alters dopamine pathways, reducing sensitivity to natural rewards and entrenching habitual platform reliance, with users reporting interference in daily functioning and academic performance.[96][101] Personality factors, including low conscientiousness and openness, further moderate vulnerability to this dependency, as meta-analyses show negative correlations with addiction proneness.[102]

Links to Mental Health Decline
Internal research conducted by Facebook in 2019 revealed that Instagram exacerbates body image issues for approximately one in three teenage girls, with 32% of surveyed teen girls reporting that the platform made them feel worse about their bodies.[103] This effect was attributed to features promoting social comparison, such as idealized images and filters, leading to heightened anxiety and depressive symptoms among users vulnerable to such content.[104] The company suppressed these findings for over two years despite awareness of the risks, particularly for adolescent females who comprised a significant portion of Instagram's user base.[105]

Longitudinal studies have established predictive associations between intensive social media use, including on Facebook platforms, and subsequent mental health deterioration. A 2022 analysis of U.S. college students found that the introduction of Facebook correlated with a statistically significant worsening of mental health, including higher rates of depression and anxiety, compared to non-users or campuses without the platform.[106] Similarly, a 2025 cohort study tracking early adolescents showed that greater time spent on social media predicted elevated depressive symptoms over subsequent years, with effects persisting after controlling for baseline mental health and demographics.[107] These patterns align with broader epidemiological trends: U.S. teen depression rates rose 60% from 2007 to 2017, coinciding with widespread adoption of Facebook and Instagram, while emergency visits for mental health crises among youth aged 10-19 doubled in the same period.[108] The U.S.
Surgeon General's 2023 advisory highlighted social media's role in youth mental health harms, noting that up to 95% of adolescents aged 13-17 use platforms like Instagram daily, with heavy use (over 3 hours per day) doubling the risk of anxiety and depression symptoms.[108] Experimental evidence supports causality: randomized trials reducing social media exposure for teens have demonstrated improvements in self-reported well-being and reduced internalizing disorders.[109] Critics of correlational interpretations, such as psychologist Jonathan Haidt, argue that the temporal precedence—sharp mental health declines following post-2010 smartphone proliferation—and international variations (e.g., slower rises in regions delaying social media access) indicate platforms like Facebook contribute causally via mechanisms like displaced sleep, cyberbullying, and addictive algorithms prioritizing engagement over user welfare. Discrepancies in findings, often from self-selected samples or short-term surveys, underscore the need for continued scrutiny, but converging evidence from internal data and rigorous designs points to non-trivial harms, disproportionately affecting girls through envy-inducing content dynamics.[110]

Erosion of Social Connections and Envy Dynamics
Research indicates that extensive Facebook use can erode the depth of interpersonal relationships by prioritizing superficial, low-effort interactions over meaningful face-to-face engagements. A 2018 analysis of social media dynamics found that platforms like Facebook facilitate a higher volume of interactions but diminish their quality, weakening strong ties as users substitute online breadth for offline depth.[111] This displacement effect is supported by longitudinal data showing that increased time on social networking sites correlates with reduced in-person socializing, as virtual connections crowd out opportunities for building robust relational bonds.[112]

Empirical studies further link passive Facebook consumption—such as scrolling through feeds without active engagement—to heightened loneliness and diminished social connectedness. For instance, a 2017 investigation unpacked the "Facebook paradox," revealing that while active communication on the platform may bolster certain ties, passive browsing often exacerbates feelings of isolation by fostering one-sided observations rather than reciprocal exchanges.[113] Communication with weak ties, prevalent on Facebook, yields minimal well-being benefits compared to interactions with strong ties, potentially weakening users' reliance on close relationships in real life.[114]

Facebook's design amplifies envy through mechanisms of social comparison, where users encounter curated portrayals of others' lives, prompting upward evaluations that evoke inferiority.
A 2015 study demonstrated that time spent on Facebook positively predicts social comparison tendencies, which mediate links to envy and subsequent depressive symptoms, with passive use showing the strongest associations.[115] This dynamic aligns with social comparison theory, as evidenced by a meta-analysis confirming that exposure to idealized upward comparisons on social media reliably generates envy, anxiety, and reduced life satisfaction.[10] Users with lower self-esteem are particularly susceptible, experiencing elevated envy from perceived disparities in others' successes, as low self-regard intensifies unflattering self-assessments during browsing.[116]

Critics attribute these envy dynamics to Facebook's algorithmic emphasis on high-engagement content, which surfaces aspirational posts and dilutes authentic sharing, fostering chronic dissatisfaction. A critical review of over 40 studies affirmed that envy on social networking sites like Facebook is pervasive and tied to diminished well-being, often stemming from benign but frequent comparisons to peers' highlight reels.[117] Experimental evidence from quasi-natural settings, such as temporary Facebook deactivations, has shown reductions in envy and improvements in subjective happiness, underscoring the platform's causal role in these emotional costs.[118]

Distortion of Information Ecosystems
Facebook's algorithmic curation of content, designed to maximize user engagement through metrics like likes, shares, and comments, systematically prioritizes sensational and emotionally charged material over factual reporting, thereby distorting the broader information landscape by elevating misinformation and divisive narratives. Empirical analysis of user interactions during the 2020 U.S. presidential election revealed that posts containing misinformation received approximately six times more clicks than factual news, as the platform's recommendation systems amplified content that provoked outrage or novelty, regardless of veracity.[119] This engagement-driven model, rooted in the platform's business incentives, fosters a feedback loop where low-quality sources gain visibility, crowding out balanced discourse and skewing public perception toward extremes.[120]

The prevalence of like-minded content feeds, often termed echo chambers, further exacerbates this distortion, as users are disproportionately exposed to ideologically aligned sources that reinforce preexisting beliefs rather than challenging them with diverse viewpoints. A large-scale study of over 10,000 U.S.
Facebook users in 2020 found that while cross-partisan exposure occurs, the algorithm's personalization results in users encountering primarily homophilous networks, with conservatives and liberals inhabiting distinct news ecosystems that limit corrective information flow.[121] This structural bias, compounded by the platform's scale—reaching hundreds of millions daily—amplifies polarization, as evidenced by heightened affective divides in user interactions, though some analyses debate the causal intensity compared to offline factors.[122] Peer-reviewed research underscores that such dynamics persist despite moderation efforts, with algorithmic tweaks post-2018 intended to reduce emotional content inadvertently sustaining partisan silos.[123] In high-stakes contexts like elections and public health crises, these mechanisms have demonstrably warped information ecosystems, enabling rapid dissemination of falsehoods with real-world consequences. During the COVID-19 pandemic, false claims—such as the efficacy of unproven treatments like hydroxychloroquine—spread virally on Facebook, contributing to vaccine hesitancy and undermining health authority guidance, as internal data later confirmed the platform's role in amplifying unverified narratives over scientific consensus.[6] Similarly, in the 2016 and 2020 U.S. 
elections, algorithmic promotion of fabricated stories outperformed legitimate journalism in reach, with Russian-linked disinformation campaigns exploiting these vulnerabilities to influence voter behavior, as documented in platform transparency reports and subsequent investigations.[124] Moderation inconsistencies, including external pressures to suppress dissenting COVID-19 content (e.g., lab-leak hypotheses later deemed plausible), illustrate how selective enforcement distorts ecosystems by favoring institutional narratives over emergent evidence, a pattern critiqued for reflecting biases in content oversight teams.[26] Academic sources on these effects, often from left-leaning institutions, may underemphasize symmetric misinformation from progressive outlets, yet the empirical data on engagement disparities remain robust across methodologies.[125] Overall, Facebook's information distortion arises from a causal interplay between profit-maximizing algorithms and user behaviors, yielding ecosystems where truth is secondary to virality, as quantified by diffusion models showing false news traveling farther and faster than accurate equivalents. Interventions like fact-checking labels have shown marginal efficacy, reducing shares by only 1-2% in controlled experiments, insufficient to counteract the platform's inherent tilt toward distortion.[126] This systemic issue persists, with recent analyses indicating that even reduced moderation post-2020 has not mitigated amplification of polarizing or erroneous content, highlighting the platform's foundational design flaws.[127]
Business Ethics and Practices
Tax Minimization and Offshore Strategies
Facebook, operating through subsidiaries in Ireland, routed substantial advertising revenues from Europe and other regions through low-tax entities, effectively minimizing its global tax liability. In 2016, for instance, the company channeled €12.6 billion in European revenue via Ireland while paying only €29.5 million in Irish corporate taxes, yielding an effective rate of approximately 0.23%.[128] This structure relied on the "Double Irish" arrangement, a base erosion and profit-shifting (BEPS) technique that involved two Irish-incorporated subsidiaries—one taxed at Ireland's 12.5% rate and another claiming management in the Cayman Islands to access near-zero taxation—often combined with Dutch intermediaries in a "Dutch Sandwich" variant.[129] Critics, including tax policy analysts, argued this eroded tax bases in high-revenue markets, shifting burdens to domestic taxpayers and undermining public finances, though the company maintained compliance with prevailing laws.[130] A core element involved transferring intellectual property rights to an Irish subsidiary in 2010, licensing the technology back to the U.S. parent for royalties that shifted profits offshore. Between 2010 and 2017, this facilitated at least $19 billion in profit shifting, with Facebook reporting $50 billion in pre-tax income but paying just $3.9 billion in U.S. taxes—an effective rate of 8%.[131] The U.S. Internal Revenue Service (IRS) challenged the transfer pricing as undervalued, initially estimating a $7 billion underpayment in 2015 (revised to $20 billion by 2019) and seeking up to $9 billion in additional taxes plus penalties.[131] In a 2025 U.S. 
Tax Court ruling, the IRS's income method for valuing the cost-sharing agreement was upheld as appropriate, though adjustments reduced the determined underpayment to $1.486 billion, enabling potential broader enforcement of commensurate-with-income standards against ongoing offshore profit allocation.[132] Detractors, such as economists at the Institute on Taxation and Economic Policy, highlighted this as emblematic of aggressive avoidance exceeding typical corporate practices, risking reputational damage and shareholder value amid regulatory scrutiny.[133] Additional strategies included exploiting the U.S. stock option deduction loophole, which allowed Facebook to claim $5.8 billion in tax savings from 2010 to 2015 by treating employee stock compensation as deductible expenses at market value rather than cost basis, further lowering its effective U.S. rate to 16.5% on $14.8 billion in domestic profits during that period—half the statutory rate—and resulting in zero U.S. taxes in three of those years.[133] Ireland phased out the Double Irish by 2020 under international pressure, prompting Facebook to liquidate Cayman-linked entities and close controversial holdings, though it set aside €1 billion for potential EU fines and paid €35 million to settle Irish tax disputes in 2021.[129][134] Ongoing probes, such as Italy's 2024 investigation into €887.6 million in alleged digital services tax evasion by Meta Platforms, underscore persistent criticisms of opaque offshore profit funneling despite reforms like the OECD's 15% global minimum tax agreement.[135] Proponents of reform contend these tactics, while legal, exploit mismatches in international tax rules, incentivizing investment distortion and revenue loss estimated in billions annually across jurisdictions.[130]
Employee and Moderator Treatment
Content moderators at Facebook, often employed as contractors through outsourcing firms, are required to review millions of posts daily containing graphic violence, child sexual abuse, and other disturbing material, leading to widespread reports of psychological trauma including post-traumatic stress disorder (PTSD).[136] In a 2020 class-action settlement, Facebook agreed to pay $52 million to over 10,000 U.S.-based moderators who developed PTSD and other conditions from exposure to such content, acknowledging the job's hazards but providing limited ongoing support.[137] Despite company pledges to improve conditions, moderators have reported insufficient mental health resources, with many experiencing nightmares, anxiety, and long-term disability without adequate counseling or breaks from traumatic material.[138][139] Outsourced moderators in regions like Kenya have faced particularly harsh conditions, including reviewing unspeakably graphic videos in under-resourced facilities with minimal protective measures, resulting in over 140 individuals diagnosed with severe PTSD by medical professionals at Kenyatta National Hospital.[140] In December 2024, former Kenyan moderators filed lawsuits against Meta and outsourcing partner Samasource, alleging lifelong trauma, exploitative low wages, and denial of workers' compensation for occupationally induced mental health issues.[141][142] These cases highlight systemic issues in content moderation labor, where roles essential to platform usability are filled by low-paid contractors in developing countries, often without the benefits or oversight afforded to core Meta staff.[143] Broader employee treatment at Meta has drawn criticism for aggressive layoff strategies and performance evaluation practices that prioritize cost-cutting over fairness.
In 2025, Meta conducted multiple rounds of layoffs, including approximately 600 roles in AI divisions and broader cuts targeting "low performers" through mandated low ratings in large teams, which former employees and experts have disputed as arbitrary and damaging to morale.[144][145] Internal memos indicated some positions were eliminated in favor of automation for "routine decisions," leaving affected workers with severance but facing career stigma and abrupt notifications.[146] Critics, including fired staff, have described these processes as a "gutting" of teams, including those monitoring user privacy risks, exacerbating perceptions of ruthless efficiency over employee welfare.[147] Whistleblower accounts, such as those from former product manager Frances Haugen in 2021, have further revealed an internal culture where profit incentives allegedly overshadowed employee concerns about ethical product impacts, though employee reactions to such disclosures were divided.[148]
Anticompetitive Tactics Against Rivals
Facebook has faced allegations of employing acquisitions to neutralize emerging competitors in the social networking space. In April 2012, the company announced its acquisition of Instagram for approximately $1 billion, a platform that had rapidly grown to challenge Facebook's photo-sharing dominance.[149] The U.S. Federal Trade Commission (FTC) later contended in a 2020 lawsuit that this purchase, along with the 2014 acquisition of WhatsApp for $19 billion, was designed to maintain monopoly power by eliminating potential rivals rather than fostering innovation.[149] Internal communications revealed during the ensuing antitrust trial, which began in April 2025, showed CEO Mark Zuckerberg viewing Instagram as a strategic threat due to its mobile-first appeal and independent growth trajectory.[150] Complementing acquisitions, Facebook has been accused of imitating key features from competitors to erode their market position. In August 2016, Instagram launched "Stories," a direct replication of Snapchat's ephemeral photo and video sharing format, which had propelled Snapchat's user growth since 2013.[151] During congressional testimony in July 2020, Zuckerberg acknowledged that Facebook had "adapted features that others have led in," including those from Snapchat, amid claims that such copying, combined with superior distribution via Facebook's vast user base, stifled innovation by rivals.[152] Snapchat's CEO Evan Spiegel reportedly provided evidence to the FTC under "Project Voldemort," highlighting Facebook's internal efforts to clone Snapchat functionalities while allegedly discouraging partnerships.[153] These tactics have drawn regulatory scrutiny beyond acquisitions and imitation.
The FTC's 2020 complaint further alleged that Facebook imposed conditions on third-party apps, such as requiring data-sharing access that disadvantaged non-acquired rivals, thereby reinforcing barriers to entry in personal social networking services.[8] In the European Union, while cases have focused more on data practices and tying (e.g., a €797.72 million fine in November 2024 for bundling Facebook Marketplace abusively), broader antitrust probes have examined Meta's overall conduct in foreclosing competition.[154] As of October 2025, the U.S. trial remains unresolved, with Meta defending its actions as pro-competitive responses to market dynamics rather than monopolistic suppression.[155]
Deceptive Metrics for Advertisers
In September 2016, Facebook disclosed an error in its video viewership metrics, admitting that average viewing times had been overstated by 60 to 80 percent for two years due to a miscalculation in how partial views—those lasting at least three seconds—were factored into averages, leading advertisers to perceive higher engagement than actually occurred.[156][157] The company apologized publicly, attributing the issue to a "bug" in its reporting system, and committed to providing adjusted metrics to affected advertisers while emphasizing that total video views remained accurate under the three-second threshold standard.[158] However, internal documents later revealed in lawsuits indicated that Facebook had been aware of measurement discrepancies with third-party tools for over a year prior to the admission, during which it continued promoting video advertising as a high-engagement alternative to television.[159] This incident extended beyond video metrics; in December 2016, Facebook acknowledged further miscalculations in additional advertising indicators, including web share of sessions and live video views, which understated referral traffic and overstated live engagement rates for advertisers.[160] Critics, including ad industry executives, argued these errors systematically inflated perceptions of platform efficacy to lure budgets from traditional media, with one lawsuit alleging the company prioritized growth over accuracy by delaying corrections.[161] In response to related litigation, Facebook agreed to a proposed $40 million settlement in a class-action suit over video metrics discrepancies, compensating advertisers for alleged overpayments based on unreliable data.[162] A more persistent controversy involves the "Potential Reach" metric, central to Facebook's auction-based advertising system, where a class-action lawsuit filed by advertisers claims Meta systematically inflated this figure by measuring unique logins or accounts rather than unique individuals, incorporating duplicates, fake profiles, and bot activity to exaggerate audience size by up to 400 percent in some cases.[163][164] Court documents from 2021 revealed internal awareness of the metric's flaws, including executive dismissal of proposals to refine it for greater accuracy, resulting in advertisers paying premium prices for placements on an ostensibly larger but misleadingly quantified audience.[165] In March 2024, a U.S. appeals court upheld the case's progression to class-action status against Meta, rejecting arguments that advertisers should have independently verified metrics and estimating potential damages exceeding $7 billion for U.S. clients using tools like Ads Manager from 2015 onward.[166][167] Meta has defended the metric as a standard industry estimate of deliverable ad impressions, not a guarantee of unique human reach, though plaintiffs contend this distinction enabled overcharging without transparency on known inflationary factors like non-human accounts.[168]
Technical Reliability and User Experience
Account Policies and Arbitrary Bans
Facebook's account policies, outlined in its Community Standards, permit temporary restrictions, shadowbans, or permanent suspensions for perceived violations including spam, misinformation, hate speech, and coordination of harmful activities. Enforcement relies heavily on automated AI systems supplemented by human reviewers, which, Meta reports, process billions of content actions quarterly, with proactive action rates exceeding 95% for certain violations such as child exploitation. Critics argue this framework enables opaque and inconsistent application, as users often receive automated notifications citing vague policy breaches without specific evidence or post references, complicating appeals.[169] Widespread complaints emerged in 2025 regarding arbitrary bans, particularly during "ban waves" triggered by algorithmic overreach or technical glitches, with a significant surge in wrongful account disables across Facebook and Instagram largely due to errors in AI-powered content moderation. This surge affected personal, business, and ad accounts, with users reporting sudden disables accompanied by vague violation claims, such as child exploitation, often without clear explanations. On June 24, 2025, Meta erroneously suspended or deleted thousands of Facebook groups due to a technical error, prompting user backlash and Meta's acknowledgment of the issue with promises of restoration. A subsequent wave in July 2025 affected individual accounts across Facebook and Instagram, with users reporting permanent disables without prior violations; over 25,000 signed a petition decrying the pattern as a systemic problem impacting personal and business pages. Media outlets documented dozens to hundreds of such cases, including U.S. broadcasters receiving 77 complaints of wrongful closures in the prior six months.[170][171][172][173] The appeal process has drawn particular scrutiny for its inefficacy, as it is often limited to automated reviews that yield generic denials with rare human intervention.
Meta provides a standard 180-day window to appeal a suspended or disabled account on Facebook and Instagram; if no appeal is submitted within 180 days or the appeal fails, the account becomes permanently disabled, a policy that remained unchanged through 2025 and into 2026.[174] Businesses reliant on the platforms, such as Montreal-based music ventures, reported month-long suspensions without explanation, leading to revenue losses until external media pressure prompted reversals. Meta admitted errors in targeted crackdowns, such as a 2025 initiative banning 600,000 accounts linked to predatory behavior toward teens, where legitimate users were collateral damage. While Meta claimed a 50% reduction in enforcement mistakes by Q1 2025 through AI refinements, ongoing user reports, the persistence of issues into early 2026, and the absence of independent audits suggest persistent over-enforcement, disproportionately affecting smaller creators without resources for escalation; these developments prompted reviews by Meta's Oversight Board.[175][176][13][177] Critics, including digital rights advocates, highlight how policy ambiguity—such as broad definitions of "coordinated inauthentic behavior"—facilitates discretionary enforcement, potentially amplifying biases in training data or moderator guidelines, though Meta denies viewpoint discrimination. High-profile restorations, often requiring journalistic intervention rather than internal mechanisms, underscore the lack of due process, with some users regaining access only after public campaigns. These issues have fueled calls for regulatory oversight on moderation transparency, as erroneous bans erode user trust and platform utility without verifiable recourse.[178][179]
Outages, Bugs, and Support Deficiencies
Facebook has experienced multiple large-scale outages that disrupted access for millions of users worldwide. On October 4, 2021, a routine maintenance command inadvertently severed backbone connections across Meta's data centers, causing Facebook, Instagram, and WhatsApp to be unavailable for approximately six hours and affecting an estimated 3.5 billion users.[180][181] Similar backbone routing failures recurred in subsequent years, including a March 2024 incident attributed to backend misconfigurations that halted services for several hours.[182] More recently, on December 11, 2024, a technical issue rendered Facebook and Instagram inaccessible for users globally, with reports peaking on monitoring sites indicating login and feed-loading failures.[183] These events highlight recurring vulnerabilities in Meta's centralized infrastructure, where single points of failure amplify downtime impacts on business communications and personal interactions.[184] Software bugs have compounded reliability concerns, often exposing user data or impairing core functions. In June 2018, a glitch in the platform's privacy controls automatically switched new posts from "only me" or "friends" settings to public visibility for up to 14 million users over several days before detection.[185][186] Persistent glitches reported in 2024-2025 include overlapping ad audio with video playback, erratic posting restrictions that flag non-violative content, and moderation errors inconsistently applied across devices.[187] Such issues stem from rapid feature rollouts prioritizing growth over stability, leading to degraded user experience as evidenced by elevated complaint volumes on outage trackers.[188] Customer support deficiencies exacerbate these technical shortcomings, with users frequently unable to obtain timely human assistance.
Meta provides no direct telephone support for individual accounts, directing complaints to self-service Help Center forms or automated in-app reporting, which often yield generic responses without resolution.[189][190] For business users, tools like Meta Business Suite suffer from unresolved bugs met with vague acknowledgments and no accountability timelines, fostering frustration among advertisers reliant on platform uptime.[187] This structure, designed for scalability over personalization, results in prolonged account lockouts or payment disputes, as users navigate opaque appeal processes without escalation options, underscoring a systemic prioritization of operational efficiency over user remediation.[191]
Enabling Harassment and Safety Failures
Facebook has faced substantial criticism for its inadequate content moderation systems, which have permitted widespread harassment, bullying, and threats to user safety. Internal documents and whistleblower testimonies reveal that despite billions invested in safety measures, the platform's algorithms and human moderators often fail to detect or remove abusive content in a timely manner, exacerbating harms such as cyberbullying, stalking, and coordinated attacks. For instance, in May 2025, Meta reported a rise in the prevalence of bullying and harassment following adjustments to its content enforcement policies, attributing the increase partly to reduced proactive removals, even as enforcement errors decreased by 50%. Critics argue these changes reflect a prioritization of user engagement over safety, allowing harmful interactions to proliferate unchecked.[192][193] A prominent area of failure involves the platform's handling of live-streamed violence and atrocities, where delays in intervention have enabled real-time broadcasts of crimes. Launched in 2016, Facebook Live facilitated at least 45 documented incidents of violence by mid-2017, including murders, assaults, and torture sessions viewed by thousands before removal. Notable cases include the April 2017 livestreamed killing of Robert Godwin Sr. by Steve Stephens in Cleveland, Ohio, which garnered over 2 million views in the two hours it remained online, and the January 2017 torture of a mentally disabled teenager in Chicago, streamed live in a racially charged attack that resulted in hate crime charges against four perpetrators.
These events underscore systemic delays, with content often persisting for hours due to reliance on user reports rather than real-time AI detection, prompting calls for mandatory review delays on live features.[194][195][196] Child safety represents another critical domain of lapses, with whistleblowers testifying to Meta's knowledge of vulnerabilities exploited for grooming, exploitation, and exposure to harmful content. Internal research and warnings from researchers revealed that approximately 500,000 children experienced inappropriate or predatory interactions daily on Instagram and Facebook, indicating significant shortcomings in protecting minors from exploitation despite internal awareness.[197] In September 2025, former Meta employees, including those from VR divisions, detailed during U.S. Senate Judiciary Committee hearings how the company ignored internal warnings about predators using platforms to target minors, including failures to implement age verification or restrict direct messaging for under-16 users. Arturo Bejar, a former safety executive, testified in November 2023 that executives dismissed data showing millions of young users encountering unwanted sexual advances, prioritizing growth metrics over protective tools. These revelations contributed to multibillion-dollar lawsuits alleging the platforms' designs addict and endanger children, with internal research indicating that features like Instagram's algorithmic feeds amplify bullying and self-harm promotion among teens.[198][199][200] Broader harassment issues persist, particularly for vulnerable groups, where moderation inconsistencies allow hate speech and doxxing to evade removal. Reports from 2024 highlighted Meta's failure to promptly address extreme targeted abuse, such as on political figures' pages, with some hate speech reports lingering unresolved for over three days despite violations of community standards. 
While Meta's transparency reports claim removal of over 90% of detected hate speech proactively, independent audits and user complaints indicate under-detection in non-English languages and algorithmic biases that permit coordinated campaigns, as seen in delayed responses to anti-Semitic or misogynistic harassment spikes. These shortcomings have drawn regulatory scrutiny, with U.S. senators in January 2024 confronting CEO Mark Zuckerberg over platform accountability for youth suicides linked to unchecked bullying.[201][202][203]
Advertising Flaws and Fraud
Facebook's advertising platform has faced scrutiny for systemic flaws in ad delivery and measurement, leading to advertiser complaints of wasted spend on invalid traffic such as bot-generated clicks and views. In 2017, the company admitted to its tenth measurement error since September of the previous year, including overcharging advertisers for link-based video carousel ads on mobile sites that were clicked inadvertently.[204] A 2012 class-action lawsuit accused Facebook of failing to prevent and disclose invalid clicks from competitors or fraudulent sources, arguing that the platform's practices eroded trust in ad performance metrics.[205] Advertisers have reported ongoing issues with click fraud, where bots inflate engagement, distort return on ad spend (ROAS), and contaminate lead data, with some estimating up to 70% of clicks as non-human in certain campaigns as of 2024.[206] While Meta maintains a policy of not charging for detected invalid traffic, critics contend that detection lags behind sophisticated fraud, resulting in unrecovered budgets and skewed algorithmic learning.[207] A more pervasive criticism centers on the platform's facilitation of fraudulent advertising, particularly scam ads that proliferate due to inadequate pre-approval and moderation processes. In the first half of 2023, over 50% of consumer fraud losses reported to the FTC originated from social media investment scams, many advertised on Facebook. 
By 2022, FTC data showed consumers losing more than $1.2 billion to fraud originating on social media, exceeding other channels, prompting regulatory orders for platforms like Facebook to detail scam prevention efforts.[208] A July 2025 Wall Street Journal investigation revealed that 70% of new ads on Meta platforms promoted scams, low-quality products, or illicit goods, with internal reluctance to impose stricter ad-buying barriers amid a 22% revenue surge from advertising.[209] This issue escalated in political advertising, where an October 2025 Tech Transparency Project report identified 63 scam operators running over 150,000 deepfake-laden ads on Facebook and Instagram, spending $49 million while evading prohibitions on deceptive content.[210] Legal actions underscore allegations that Meta knowingly profits from fraudulent ads through lax enforcement. In October 2025, a class-action lawsuit filed by Scott+Scott accused Meta of deriving revenue from scam impersonation ads on Facebook, Instagram, and WhatsApp, claiming the company failed to implement adequate safeguards despite awareness of the schemes.[211] Another September 2025 suit argued that Meta's systems encourage scammers by approving fraudulent ads that deceive users, seeking remedies under false advertising laws.[212] A parallel October 2025 class action targeted Meta for permitting ads impersonating financial professionals to lure victims into fraudulent investments, asserting the platform's automated tools prioritize speed over scrutiny.[213] Internal documents revealed in Reuters investigations from late 2025 indicated that Meta projected approximately 10% of its 2024 advertising revenue—around $16 billion—derived from ads for scams and banned goods.
The company developed playbooks to fend off regulatory pressures for crackdowns, including strategies to make high-risk scam ads less detectable during searches, and expanded tolerance for fraud originating from Chinese ad agencies to protect billions in revenue despite acknowledged risks.[214][215][216] These cases highlight a causal tension: Meta's ad revenue model, reliant on high-volume approvals, incentivizes tolerance of fraud until post-harm detection, as evidenced by the platform's removal of millions of violating ads quarterly but persistent epidemic-scale complaints from employees and regulators.[217]
Platform Integrity Issues
Proliferation of Fake Accounts and Bots
Meta, the parent company of Facebook, has reported removing billions of fake accounts annually, underscoring the persistent proliferation of such entities on the platform. In the third quarter of 2024, for example, the company actioned 1.1 billion fake accounts, a figure down slightly from 1.2 billion in the prior quarter, with the majority detected proactively through automated systems before user reports.[218][219] These removals reflect a high volume of account creation attempts, often involving automated scripts or coordinated networks that evade initial safeguards, leading to temporary infiltration before detection. Fake accounts frequently impersonate legitimate users, organizations, or public figures to facilitate scams, spam, and deceptive engagement. During the first half of 2025, Meta dismantled approximately 10 million profiles masquerading as major content producers, alongside over 21,000 pages and accounts posing as customer support to perpetrate fraud.[220][221] In the same period, efforts targeted 8 million scam-oriented accounts specifically exploiting older adults through phishing and financial schemes.[221] Such proliferation has been linked to lax identity verification, where basic email or phone sign-ups enable rapid scaling of inauthentic profiles, often from regions with low enforcement overhead. Bots, automated fake accounts driven by software, amplify this issue by generating synthetic interactions at scale, including likes, shares, and comments to boost visibility or manipulate algorithms. 
Research on Facebook content from reliable news sources has identified bot-generated comments at a prevalence of roughly one in ten, contributing to distorted discourse and the amplification of low-quality or misleading material.[222] While Meta estimates fake accounts comprise about 5% of monthly active users—a figure stable since around 2019 despite massive removals—critics contend this understates the problem, as undetected accounts may inflate user metrics and ad performance, deceiving advertisers who bid on ostensibly genuine reach.[223][224][1] The economic structure of the platform incentivizes proliferation, as fake engagement sustains ad revenue without proportional content moderation costs, prompting accusations that Meta tolerates a baseline of inauthenticity to maintain growth narratives.[225] Independent analyses highlight the lack of verifiable prevalence data, with calls for external auditing to address uncertainties in self-reported figures, which may overlook sophisticated bots using AI-generated profiles or human-like behavior patterns.[226] In coordinated campaigns, such as those tied to foreign influence operations or commercial fraud, bots and fakes have evaded takedowns long enough to seed scams and misinformation, eroding user trust and platform utility.[227] Despite iterative AI-driven defenses, the sheer scale—evidenced by quarterly spam removals exceeding 165 million pieces in Q2 2025—indicates systemic challenges in stemming creation rates.[228]
Facilitation of Fraudulent Engagement
Facebook's platform has been criticized for enabling networks of inauthentic accounts that generate artificial likes, comments, shares, and views to inflate content popularity, often for political or commercial gain. These operations exploit the site's algorithms, which boost visibility based on raw engagement signals without stringent verification of authenticity, thereby amplifying fraudulent interactions and distorting user perceptions of genuine support. Data scientist Sophie Zhang, employed on Facebook's Site Integrity fake engagement team until her dismissal in August 2020, documented dozens of such campaigns across more than 25 countries, highlighting systemic delays in enforcement that allowed manipulation to persist.[229][230] In Honduras, Zhang identified a network of fake accounts bolstering President Juan Orlando Hernández's page with 59,100 fabricated likes—78% of its total—over six weeks in 2018, contributing to coordinated inauthentic behavior that evaded detection for nearly a year until July 2019. Similar tactics proliferated elsewhere: in Azerbaijan, operatives generated 2.1 million harassing comments over 90 days in 2019 to suppress opposition; Mexico hosted over 10,000 fake accounts by August 2020; and India featured at least 4,000 inauthentic profiles. These networks leveraged loopholes, such as creating fake Pages to mimic legitimate ones, which Facebook initially permitted despite internal proposals to ban the practice. Critics, including Zhang, contend that the company's prioritization of U.S.-centric threats and resource excuses—despite $70.7 billion in 2019 revenue—facilitated ongoing abuse, as executives like Guy Rosen dismissed civic manipulation as lower priority compared to commercial spam.[229][231] The platform's design further exacerbates the issue by rewarding volume over veracity; studies indicate bots can elevate overall engagement metrics but diminish substantive discourse, as artificial signals crowd out authentic interactions. 
Facebook reported disabling 448 million fake or duplicate accounts in Q2 2020 alone, yet the inclusion of such entities in advertiser reach estimates—which in 2017 claimed access to more U.S. teenagers than the actual U.S. teenage population—has drawn accusations of deceptive practices that indirectly sustain fraudulent ecosystems. While Meta has filed lawsuits against fake engagement providers, such as four individuals sued in October 2020 for selling likes and followers via automation tools, detractors argue these reactive steps fail to address root causes like algorithmic amplification, allowing bad actors to continually adapt and profit from the platform's scale.[232][226][233][234]
Regulatory and Legal Repercussions
Antitrust Lawsuits and Monopoly Probes
In December 2020, the U.S. Federal Trade Commission (FTC), joined by 46 states and the District of Columbia, filed an antitrust lawsuit against Facebook (now Meta Platforms, Inc.), alleging that the company had unlawfully maintained a monopoly in the "personal social networking services" market through a series of anticompetitive acquisitions and exclusionary practices.[8] The complaint specifically targeted Facebook's 2012 acquisition of Instagram for $1 billion and its 2014 purchase of WhatsApp for $19 billion, claiming these deals eliminated nascent threats to Facebook's dominance and prevented competitors from emerging.[8] The FTC further accused Meta of imposing restrictive terms on software developers and hardware partners, such as limiting access to APIs unless they refrained from building competing social networking features, thereby entrenching its market power.[8] Meta moved to dismiss the case twice, first in June 2021 and again in April 2024, arguing that the FTC failed to adequately define a relevant antitrust market and demonstrate actual consumer harm or monopoly power under the Sherman Act.[235] U.S. District Judge James E. Boasberg denied both motions, ruling in January 2022 and November 2024 that the allegations plausibly stated claims of monopolization, though he narrowed the case by excluding certain pre-2012 conduct due to statutes of limitations.[236]
The trial commenced on April 14, 2025, in the U.S. District Court for the District of Columbia, featuring testimony from Meta CEO Mark Zuckerberg, who defended the acquisitions as pro-competitive moves that enhanced user experience and innovation amid uncertain startup viability.[237] Prosecutors presented evidence of internal Meta documents discussing competitive threats from Instagram and WhatsApp, while Meta countered with data showing robust competition from platforms like TikTok and Snapchat, which have since captured significant market share among younger users.[238] The trial concluded in late May 2025 without a definitive ruling from Judge Boasberg as of October 2025, though expert analyses have highlighted weaknesses in the FTC's case, including failure to empirically prove monopoly pricing, reduced innovation, or consumer harm—key elements required under antitrust precedents like United States v. Microsoft.[235] Economic evidence introduced during the proceedings, such as user surveys and market share metrics, suggested that network effects and user preferences naturally favor incumbents like Meta rather than indicating illegal exclusion, with TikTok's rapid growth after 2018 illustrating that barriers to entry were lower than alleged.[239] A potential remedy, if the FTC prevails, could involve divestiture of Instagram and WhatsApp, though Meta has vowed to appeal any adverse decision, citing risks to platform interoperability and user privacy.[240]
Beyond the FTC action, U.S. Department of Justice (DOJ) scrutiny of Meta has focused less on a social networking monopoly and more on adjacent areas, such as a 2023 settlement resolving behavioral remedies from prior merger reviews rather than a standalone monopoly suit.[241] Internationally, the European Union's Digital Markets Act (DMA), effective March 2024, designated Meta a "gatekeeper" platform, subjecting it to ex-ante probes into self-preferencing practices across Facebook, Instagram, and WhatsApp, with ongoing investigations into compliance failures that could yield fines of up to 10% of global revenue.[242] These probes echo U.S. concerns by targeting Meta's ecosystem lock-in but emphasize interoperability mandates over divestiture, reflecting a regulatory emphasis on curbing gatekeeper abuses without proving retrospective monopolization.[235] Critics of Meta's position argue that such probes underscore systemic advantages from data moats and ad targeting, while defenders contend they overlook voluntary user retention and the absence of supra-competitive pricing in digital markets.[243]
Fines and International Regulatory Actions
In response to repeated violations of data protection laws, Meta Platforms, Inc. (formerly Facebook, Inc.) has faced substantial fines from European regulators under the General Data Protection Regulation (GDPR). On May 22, 2023, the Irish Data Protection Commission (DPC), Meta's lead EU privacy regulator, imposed a record €1.2 billion penalty for unlawfully transferring personal data of EU Facebook users to the United States without adequate safeguards, relying on standard contractual clauses invalidated by the Schrems II court ruling in 2020.[77] This fine, the largest under the GDPR to date, stemmed from an investigation initiated in 2020 and was finalized after a binding decision by the European Data Protection Board (EDPB) overrode the Irish DPC's initially more lenient position. Meta appealed the decision to the Court of Justice of the European Union, arguing it exceeded proportionality, but the penalty remains in effect pending resolution.[77] Earlier GDPR enforcement included a €405 million fine on September 5, 2022, by the Irish DPC for Meta's handling of minors' data on Facebook and Instagram, such as default public settings for accounts under 18 and insufficient age verification, violating principles of data minimization and lawful processing.[244] In January 2023, the same authority levied €390 million—€210 million against Facebook and €180 million against Instagram—for breaches in personalized advertising, including processing special category data such as inferred sexual orientation or political views without explicit consent or necessity.[245] These actions reflect coordinated EU scrutiny, with other national authorities such as France's CNIL pushing for stricter penalties via EDPB interventions, highlighting systemic concerns over Meta's data handling amid Schrems litigation exposing U.S. surveillance risks.[82]
Beyond privacy, the European Commission fined Meta €200 million in April 2025 under the Digital Markets Act (DMA) for non-compliance with gatekeeper obligations, including failure to enable effective user choice over the use of their data for advertising.[246] This followed probes into Meta's "pay or consent" model, under which users must either pay for ad-free access or consent to tracking, deemed potentially coercive and under review for further daily fines of up to 5% of global turnover if unresolved.[247] In the United States, the Federal Trade Commission (FTC) imposed a $5 billion civil penalty on July 24, 2019—the largest ever for privacy violations—against Facebook for deceiving users about data controls and failing to comply with a 2012 consent order, exacerbated by incidents such as the Cambridge Analytica scandal involving unauthorized data sharing with third parties.[2] The settlement mandated sweeping reforms, including an independent privacy committee and biennial audits, though critics noted it lacked structural remedies for ongoing data practices. Separate antitrust litigation persists, with the FTC alleging monopolistic acquisitions of Instagram and WhatsApp to stifle competition, but as of October 2025 no additional fines beyond the privacy penalty have materialized.[248]
| Date | Regulator | Amount | Primary Violation |
|---|---|---|---|
| July 24, 2019 | U.S. FTC | $5 billion | Privacy misrepresentations and consent order breaches[2] |
| September 5, 2022 | Irish DPC (EU GDPR) | €405 million | Minors' data handling on Facebook/Instagram[244] |
| January 2023 | Irish DPC (EU GDPR) | €390 million | Personalized ads without valid basis[245] |
| May 22, 2023 | Irish DPC/EDPB (EU GDPR) | €1.2 billion | Unlawful EU-U.S. data transfers[77] |
| April 2025 | European Commission (DMA) | €200 million | Antitrust gatekeeper non-compliance[246] |
