Facebook

Facebook is an American social media and social networking service owned by the American technology conglomerate Meta. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities.

Since 2006, anyone at least 13 years old has been able to register, except in a handful of countries where the minimum age is 14.[6] As of December 2023, Facebook claimed almost 3.07 billion monthly active users worldwide.[7] As of July 2025, Facebook ranked as the third-most-visited website in the world, with 23% of its traffic coming from the United States.[8] It was the most downloaded mobile app of the 2010s.[9]

Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing personal information about themselves. They can post text, photos and multimedia which are shared with any other users who have agreed to be their friend or, with different privacy settings, publicly. Users can also communicate directly with each other with Messenger, edit messages (within 15 minutes after sending),[10][11] join common-interest groups, and receive notifications on the activities of their Facebook friends and the pages they follow.

Facebook has often been criticized over issues such as user privacy (as with the Facebook–Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections) and mass surveillance.[12] The company has also been subject to criticism over its psychological effects such as addiction and low self-esteem, and over content such as fake news, conspiracy theories, copyright infringement, and hate speech.[13] Commentators have accused Facebook of willingly facilitating the spread of such content, as well as exaggerating its number of users to appeal to advertisers.[14]

History

Mark Zuckerberg, co-creator of Facebook, in his Harvard dorm room, November 2005

The history of Facebook traces its growth from a college networking site to a global social networking service.[15]

While attending Phillips Exeter Academy in the early 2000s, Zuckerberg met Kris Tillery, a one-time project collaborator who created a school-based social networking project called Photo Address Book. Photo Address Book was a digital face book built on a linked database of student information drawn from the official records of the Exeter Student Council, with entries linking names, dorm-specific landline numbers, and student headshots.[16]

Mark Zuckerberg built a website called "Facemash" in 2003 while attending Harvard University. The site was comparable to Hot or Not and used photos from online face books, asking users to choose the "hotter" person.[17] Zuckerberg was reported and faced expulsion, but the charges were dropped.[17]

A "face book" is a student directory featuring photos and personal information. In January 2004, Zuckerberg coded a new site known as "TheFacebook", stating, "It is clear that the technology needed to create a centralized Website is readily available ... the benefits are many." Zuckerberg met with Harvard student Eduardo Saverin, and each agreed to invest $1,000.[18] On February 4, 2004, Zuckerberg launched "TheFacebook".[19]

Membership was initially restricted to students of Harvard College. Dustin Moskovitz, Andrew McCollum, and Chris Hughes joined Zuckerberg to help manage the growth of the site.[20] It became available successively to most universities in the US and Canada.[21][22] In 2004, Napster co-founder Sean Parker became company president[23] and the company moved to Palo Alto, California.[24] PayPal co-founder Peter Thiel gave Facebook its first investment.[25][26] In 2005, the company dropped "the" from its name after purchasing the domain name Facebook.com.[27]

In 2006, Facebook opened to everyone at least 13 years old with a valid email address.[28][29][30] Facebook introduced key features like the News Feed, which became central to user engagement. By late 2007, Facebook had 100,000 pages on which companies promoted themselves.[31] Facebook subsequently surpassed MySpace in global traffic and became the world's most popular social media platform. Microsoft announced that it had purchased a 1.6% share of Facebook for $240 million ($364 million in 2024 dollars[32]), giving Facebook an implied value of around $15 billion ($22.7 billion in 2024 dollars[32]). Facebook focused on generating revenue through targeted advertising based on user data, a model that drove its rapid financial growth. In 2012, Facebook went public with one of the largest IPOs in tech history. Acquisitions played a significant role in Facebook's dominance: in 2012 it purchased Instagram, followed by WhatsApp and Oculus VR in 2014, extending its influence beyond social networking into messaging and virtual reality.

The Facebook–Cambridge Analytica data scandal in 2018 revealed misuse of user data to influence elections, sparking global outcry and leading to regulatory fines and hearings. Facebook's role in global events, including its use in organizing movements like the Arab Spring and its impact on events like the Rohingya genocide in Myanmar, highlighted its dual nature as a tool for both empowerment and harm. In 2021, Facebook rebranded as Meta, reflecting its shift toward building the "metaverse" and focusing on virtual reality and augmented reality technologies.

Features


Facebook does not officially publish a maximum character limit for posts; user posts can be lengthy, with unofficial sources suggesting a high character limit. Posts may also include images and videos. According to Facebook's official business documentation, videos can be up to 240 minutes long and 10 GB in file size, with supported resolutions up to 1080p.[33]
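The documented video ceilings can be expressed as a simple pre-upload check. The function below is a hypothetical illustration, not Facebook code; only the three limits (240 minutes, 10 GB, 1080p) come from the documentation cited above.

```python
# Hypothetical pre-upload validator based on the documented limits
# (240 minutes, 10 GB, up to 1080p). Not actual Facebook code.

MAX_DURATION_SECONDS = 240 * 60        # 240 minutes
MAX_FILE_SIZE_BYTES = 10 * 1024 ** 3   # 10 GB
MAX_HEIGHT_PIXELS = 1080               # 1080p

def video_upload_allowed(duration_s: int, size_bytes: int, height_px: int) -> bool:
    """Return True if the video fits within all three documented limits."""
    return (duration_s <= MAX_DURATION_SECONDS
            and size_bytes <= MAX_FILE_SIZE_BYTES
            and height_px <= MAX_HEIGHT_PIXELS)

# A 60-minute, 2 GB, 720p clip is within limits:
print(video_upload_allowed(60 * 60, 2 * 1024 ** 3, 720))    # True
# A 5-hour recording exceeds the duration ceiling:
print(video_upload_allowed(5 * 3600, 1 * 1024 ** 3, 1080))  # False
```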

Users can "friend" other users; both sides must agree to being friends.[34] Posts can be set to be visible to everyone (public), to friends, to people in a certain group, or to selected friends (private).[35] Users can join groups,[36] which are composed of people with shared interests; for example, members might attend the same sporting club, live in the same suburb, own the same breed of pet, or share a hobby.[36] Posts made in a group can be seen only by group members, unless set to public.[37]
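The audience model described above (public, friends, group, or selected friends) amounts to a simple visibility check. The sketch below is illustrative only; the function and field names are invented, not Facebook's API.

```python
# Hypothetical sketch of the post-audience rules described above.
# Audience levels: "public", "friends", "group", "custom" (selected friends).

def can_view(post, viewer_id, friends_of_author, group_members=frozenset()):
    """Decide whether viewer_id may see a post, given its audience setting."""
    audience = post["audience"]
    if audience == "public":
        return True                                  # visible to everyone
    if audience == "friends":
        return viewer_id in friends_of_author        # both sides agreed to be friends
    if audience == "group":
        return viewer_id in group_members            # group posts stay inside the group
    if audience == "custom":
        return viewer_id in post["allowed_viewers"]  # selected friends only
    return False

post = {"audience": "friends", "allowed_viewers": set()}
print(can_view(post, "bob", friends_of_author={"bob", "carol"}))  # True
print(can_view(post, "mallory", friends_of_author={"bob"}))       # False
```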

Users are able to buy, sell, and swap things on Facebook Marketplace or in a Buy, Swap and Sell group.[38][39] Facebook users may advertise events, which can be offline, on a website other than Facebook, or on Facebook.[40]

Website

Profile shown on Thefacebook in 2005
Former Facebook logo in use from August 23, 2005, until July 1, 2015

Technical aspects


The site's primary color is blue because Zuckerberg is red–green colorblind, something he realized after taking a test around 2007.[41][42] Facebook was initially built using PHP, a popular scripting language designed for web development.[43] PHP was used to create dynamic content and manage data on the server side of the application. Zuckerberg and his co-founders chose PHP for its simplicity and ease of use, which let them quickly develop and deploy the initial version of Facebook. As Facebook grew in user base and functionality, the company encountered scalability and performance challenges with PHP. In response, Facebook engineers developed tools and technologies to optimize PHP performance, the most significant being the HipHop Virtual Machine (HHVM), which markedly improved the performance and efficiency of PHP code execution on Facebook's servers.

The site upgraded from HTTP to the more secure HTTPS in January 2011.[44]

2012 architecture


Facebook is developed as one monolithic application. According to an interview in 2012 with Facebook build engineer Chuck Rossi, Facebook compiles into a 1.5 GB binary blob which is then distributed to the servers using a custom BitTorrent-based release system. Rossi stated that it takes about 15 minutes to build and 15 minutes to release to the servers. The build and release process has zero downtime. Changes to Facebook are rolled out daily.[45]

Facebook used a platform based on HBase to store data across distributed machines. Using a tailing architecture, events are stored in log files, and the logs are tailed. The system rolls these events up and writes them to storage. The user interface then pulls the data out and displays it to users. Facebook handles requests as AJAX behavior. These requests are written to a log file using Scribe (developed by Facebook).[46]

Data is read from these log files using Ptail, an internally built tool to aggregate data from multiple Scribe stores. It tails the log files and pulls data out. Ptail data are separated into three streams and sent to clusters in different data centers (Plugin impression, News feed impressions, Actions (plugin + news feed)). Puma is used to manage periods of high data flow (Input/Output or IO). Data is processed in batches to lessen the number of times needed to read and write under high demand periods. (A hot article generates many impressions and news feed impressions that cause huge data skews.) Batches are taken every 1.5 seconds, limited by memory used when creating a hash table.[46]
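The tailing pipeline above, in which events are appended to logs, tailed, and rolled up in short time-bounded batches, can be sketched as follows. This is a simplified illustration in the spirit of the Scribe/Ptail/Puma description; the class and stream names are invented, not Facebook's internal APIs, and the 1.5-second window is the batch interval cited above.

```python
import time
from collections import Counter

# Illustrative sketch of a tailing/batching rollup. Events tailed from a log
# are counted in memory and flushed as a batch roughly every 1.5 seconds,
# trading latency for far fewer writes under high demand.

class BatchAggregator:
    def __init__(self, batch_window_s=1.5):
        self.batch_window_s = batch_window_s  # Puma-style batch window
        self.counts = Counter()               # in-memory rollup (hash table)
        self.batch_start = time.monotonic()

    def on_event(self, event):
        # Roll up one tailed log event (e.g. a plugin or news-feed impression).
        self.counts[(event["stream"], event["key"])] += 1
        if time.monotonic() - self.batch_start >= self.batch_window_s:
            self.flush()

    def flush(self):
        # In the real system this batch would be written out to storage (HBase).
        batch, self.counts = self.counts, Counter()
        self.batch_start = time.monotonic()
        return batch

agg = BatchAggregator()
for _ in range(3):
    agg.on_event({"stream": "news_feed_impressions", "key": "story42"})
print(agg.flush()[("news_feed_impressions", "story42")])  # 3
```

Batching like this is why a "hot" story that generates a flood of impressions produces one aggregated write per window instead of one write per impression.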

Data is then output in PHP format. The backend is written in Java. Thrift is used as the messaging format so PHP programs can query Java services. Caching solutions display pages more quickly. The data is then sent to MapReduce servers where it is queried via Hive. This serves as a backup as the data can be recovered from Hive.[46]

Content delivery network (CDN)


Facebook uses its own content delivery network or "edge network" under the domain fbcdn.net for serving static data.[47][48] Until the mid-2010s, Facebook also relied on Akamai for CDN services.[49][50][51]

Hack programming language


On March 20, 2014, Facebook announced a new open-source programming language called Hack. Before public release, a large portion of Facebook was already running and "battle tested" using the new language.[52]

User profile/personal timeline

Facebook login/signup screen

Each registered user on Facebook has a personal profile that shows their posts and content.[53] The format of individual user pages was revamped in September 2011 and became known as "Timeline", a chronological feed of a user's stories,[54][55] including status updates, photos, interactions with apps and events.[56] The layout let users add a "cover photo".[56] Users were given more privacy settings.[56] In 2007, Facebook launched Facebook Pages for brands and celebrities to interact with their fanbases.[57][58] In June 2009, Facebook introduced a "Usernames" feature, allowing users to choose a unique nickname used in the URL for their personal profile, for easier sharing.[59][60]

In February 2014, Facebook expanded the gender setting, adding a custom input field that allows users to choose from a wide range of gender identities. Users can also set which set of gender-specific pronouns should be used in reference to them throughout the site.[61][62][63] In May 2014, Facebook introduced a feature to allow users to ask for information not disclosed by other users on their profiles. If a user does not provide key information, such as location, hometown, or relationship status, other users can use a new "ask" button to send a message asking about that item to the user in a single click.[64][65]

News Feed


News Feed appears on every user's homepage and highlights information including profile changes, upcoming events and friends' birthdays.[66] This enabled spammers and other users to manipulate these features by creating illegitimate events or posting fake birthdays to attract attention to their profile or cause.[67] Initially, the News Feed caused dissatisfaction among Facebook users; some complained it was too cluttered and full of undesired information, others were concerned that it made it too easy for others to track individual activities (such as relationship status changes, events, and conversations with other users).[68] Zuckerberg apologized for the site's failure to include appropriate privacy features. Users then gained control over what types of information are shared automatically with friends. Users are now able to prevent user-set categories of friends from seeing updates about certain types of activities, including profile changes, Wall posts and newly added friends.[69]

On February 23, 2010, Facebook was granted a patent[70] on certain aspects of its News Feed. The patent covers News Feeds in which links are provided so that one user can participate in the activity of another user.[71] The sorting and display of stories in a user's News Feed is governed by the EdgeRank algorithm.[72] The Photos application allows users to upload albums and photos.[73] Each album can contain 200 photos.[74] Privacy settings apply to individual albums. Users can "tag", or label, friends in a photo. The friend receives a notification about the tag with a link to the photo.[75] This photo tagging feature was developed by Aaron Sittig, now a Design Strategy Lead at Facebook, and former Facebook engineer Scott Marlette back in 2006 and was only granted a patent in 2011.[76][77]
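As publicly described, EdgeRank scored each story by summing, over the story's edges (likes, comments, and so on), the product of the viewer's affinity for the edge's creator, a per-edge-type weight, and a time decay. The sketch below follows that public description; the specific weights and decay rate are invented for illustration, not Facebook's actual parameters.

```python
# Simplified sketch of the publicly described EdgeRank formula:
#   score = sum over edges of (affinity * edge_weight * time_decay)
# Weights and decay rate here are invented for illustration.

EDGE_WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 6.0}

def edgerank(edges, now_hours, decay_rate=0.1):
    score = 0.0
    for edge in edges:
        affinity = edge["affinity"]                  # viewer-creator closeness
        weight = EDGE_WEIGHTS[edge["type"]]          # richer actions count more
        age = now_hours - edge["created_hours"]
        time_decay = 1.0 / (1.0 + decay_rate * age)  # older edges count less
        score += affinity * weight * time_decay
    return score

edges = [
    {"type": "like", "affinity": 0.5, "created_hours": 0},
    {"type": "comment", "affinity": 0.8, "created_hours": 0},
]
# 0.5*1.0*1.0 + 0.8*4.0*1.0 = 3.7
print(round(edgerank(edges, now_hours=0), 2))  # 3.7
```

Stories would then be sorted by this score, so a fresh comment from a close friend outranks an old like from a distant acquaintance.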

On June 7, 2012, Facebook launched its App Center to help users find games and other applications.[78]

On May 13, 2015, Facebook in association with major news portals launched "Instant Articles" to provide news on the Facebook news feed without leaving the site.[79][80] In January 2017, Facebook launched Facebook Stories for iOS and Android in Ireland. The feature, following the format of Snapchat and Instagram stories, allows users to upload photos and videos that appear above friends' and followers' News Feeds and disappear after 24 hours.[81]

On October 11, 2017, Facebook introduced the 3D Posts feature to allow for uploading interactive 3D assets.[82] On January 11, 2018, Facebook announced that it would change News Feed to prioritize friends/family content and de-emphasize content from media companies.[83] In February 2020, Facebook announced it would spend $1 billion ($1.21 billion in 2024 dollars[32]) to license news material from publishers for the next three years; a pledge coming as the company falls under scrutiny from governments across the globe over not paying for news content appearing on the platform. The pledge would be in addition to the $600 million ($729 million in 2024 dollars[32]) paid since 2018 through deals with news companies such as The Guardian and Financial Times.[84][85][86]

In March and April 2021, in response to Apple announcing changes to its iOS device's Identifier for Advertisers policy, which included requiring app developers to directly request to users the ability to track on an opt-in basis, Facebook purchased full-page newspaper advertisements attempting to convince users to allow tracking, highlighting the effects targeted ads have on small businesses.[87] Facebook's efforts were ultimately unsuccessful, as Apple released iOS 14.5 in late April 2021, containing the feature for users in what has been deemed "App Tracking Transparency". Moreover, statistics from Verizon Communications subsidiary Flurry Analytics show 96% of all iOS users in the United States are not permitting tracking at all, and only 12% of worldwide iOS users are allowing tracking, which some news outlets deem "Facebook's nightmare", among similar terms.[88][89][90][91] Despite the news, Facebook stated that the new policy and software update would be "manageable".[92]

Like button

The Facebook "like" button

The "like" button, stylized as a "thumbs up" icon, was first enabled on February 9, 2009,[93] and enables users to easily interact with status updates, comments, photos and videos, links shared by friends, and advertisements. Once clicked by a user, the designated content is more likely to appear in friends' News Feeds.[94][95] The button displays the number of other users who have liked the content.[96] The like button was extended to comments in June 2010.[97] In February 2016, Facebook expanded Like into "Reactions", allowing users to choose from five pre-defined emotions: "Love", "Haha", "Wow", "Sad", or "Angry".[98][99][100][101] In late April 2020, during the COVID-19 pandemic, a new "Care" reaction was added.[102]

Instant messaging


Facebook Messenger is an instant messaging service and software application. It began as Facebook Chat in 2008,[103] was revamped in 2010[104] and eventually became a standalone mobile app in August 2011, while remaining part of the user page on browsers.[105] Complementing regular conversations, Messenger lets users make one-to-one[106] and group[107] voice[108] and video calls.[109] Its Android app has integrated support for SMS[110] and "Chat Heads", which are round profile photo icons appearing on-screen regardless of what app is open,[111] while both apps support multiple accounts,[112] conversations with optional end-to-end encryption[113] and "Instant Games".[114] Some features, including sending money[115] and requesting transportation,[116] are limited to the United States.[115] In 2017, Facebook added "Messenger Day", a feature that lets users share photos and videos in a story-format with all their friends with the content disappearing after 24 hours;[117] Reactions, which lets users tap and hold a message to add a reaction through an emoji;[118] and Mentions, which lets users in group conversations type @ to give a particular user a notification.[118]

In April 2020, Facebook began rolling out a new feature called Messenger Rooms, a video chat feature that allows users to chat with up to 50 people at a time.[119] In July 2020, Facebook added a new feature in Messenger that lets iOS users use Face ID or Touch ID to lock their chats. The feature is called App Lock and is part of several changes in Messenger regarding privacy and security.[120][121] On October 13, 2020, the Messenger application introduced cross-app messaging with Instagram, which was launched in September 2021.[122] In addition to the integrated messaging, the application announced the introduction of a new logo, an amalgamation of the Messenger and Instagram logos.[123]

Businesses and users can interact through Messenger with features such as tracking purchases and receiving notifications, and interacting with customer service representatives. Third-party developers can integrate apps into Messenger, letting users enter an app while inside Messenger and optionally share details from the app into a chat.[124] Developers can build chatbots into Messenger, for uses such as news publishers building bots to distribute news.[125] Businesses like respond.io, Twilio, and Manychat also used the APIs to develop chatbots and automation platforms for commercial use.[126]

The M virtual assistant (U.S.) scans chats for keywords and suggests relevant actions, such as its payments system when users mention money.[127][128] Group chatbots appear in Messenger as "Chat Extensions". A "Discovery" tab lets users find bots, and special branded QR codes, when scanned, take the user to a specific bot.[129]

Privacy policy


Facebook's data policy outlines its policies for collecting, storing, and sharing users' data.[130] Facebook enables users to control access to individual posts and their profile[131] through privacy settings.[132] The user's name and profile picture (if applicable) are public.

Facebook's revenue depends on targeted advertising, which involves analyzing user data to decide which ads to show each user. Facebook buys data from third parties, gathered from both online and offline sources, to supplement its own data on users. Facebook maintains that it does not share data used for targeted advertising with the advertisers themselves.[133] The company states:

"We provide advertisers with reports about the kinds of people seeing their ads and how their ads are performing, but we don't share information that personally identifies you (information such as your name or email address that by itself can be used to contact you or identifies who you are) unless you give us permission. For example, we provide general demographic and interest information to advertisers (for example, that an ad was seen by a woman between the ages of 25 and 34 who lives in Madrid and likes software engineering) to help them better understand their audience. We also confirm which Facebook ads led you to make a purchase or take an action with an advertiser."[130]

As of October 2021, Facebook claims it uses the following policy for sharing user data with third parties:

Apps, websites, and third-party integrations on or using our Products.

When you choose to use third-party apps, websites, or other services that use, or are integrated with, our Products, they can receive information about what you post or share. For example, when you play a game with your Facebook friends or use a Facebook Comment or Share button on a website, the game developer or website can receive information about your activities in the game or receive a comment or link that you share from the website on Facebook. Also, when you download or use such third-party services, they can access your public profile on Facebook, and any information that you share with them. Apps and websites you use may receive your list of Facebook friends if you choose to share it with them. But apps and websites you use will not be able to receive any other information about your Facebook friends from you, or information about any of your Instagram followers (although your friends and followers may, of course, choose to share this information themselves). Information collected by these third-party services is subject to their own terms and policies, not this one.

Devices and operating systems providing native versions of Facebook and Instagram (i.e. where we have not developed our own first-party apps) will have access to all information you choose to share with them, including information your friends share with you, so they can provide our core functionality to you.

Note: We are in the process of restricting developers' data access even further to help prevent abuse. For example, we will remove developers' access to your Facebook and Instagram data if you haven't used their app in 3 months, and we are changing Login, so that in the next version, we will reduce the data that an app can request without app review to include only name, Instagram username and bio, profile photo and email address. Requesting any other data will require our approval.[130]

Facebook also shares data with law enforcement when required to do so.[130]

Facebook's policies have changed repeatedly since the service's debut, amid a series of controversies covering everything from how well it secures user data, to what extent it allows users to control access, to the kinds of access given to third parties, including businesses, political campaigns and governments. These facilities vary according to country, as some nations require the company to make data available (and limit access to services), while the European Union's GDPR regulation mandates additional privacy protections.[134]

Bug Bounty Program

A Facebook "White Hat" debit card, given to researchers who report security bugs, May 2014

On July 29, 2011, Facebook announced its Bug Bounty Program that paid security researchers a minimum of $500 ($699.00 in 2024 dollars[32]) for reporting security holes. The company promised not to pursue "white hat" hackers who identified such problems.[135][136] This led researchers in many countries to participate, particularly in India and Russia.[137]

Reception


Userbase


Facebook grew rapidly from the moment it became available, and growth continued through 2018 before its user numbers began to decline in some markets. Facebook passed 100 million registered users in 2008,[138] and 500 million in July 2010.[139] According to the company's data at the July 2010 announcement, half of the site's membership used Facebook daily, for an average of 34 minutes, while 150 million users accessed the site by mobile.[140]

In October 2012, Facebook's monthly active users passed one billion,[141][142] with 600 million mobile users, 219 billion photo uploads, and 140 billion friend connections.[143] The 2 billion user mark was crossed in June 2017.[144][145] In November 2015, after skepticism about the accuracy of its "monthly active users" measurement, Facebook changed its definition to a logged-in member who visits the Facebook site through the web browser or mobile app, or uses the Facebook Messenger app, in the 30-day period prior to the measurement. This excluded the use of third-party services with Facebook integration, which was previously counted.[146]

From 2017 to 2019, the percentage of the U.S. population over the age of 12 who use Facebook declined from 67% to 61% (a decline of some 15 million U.S. users), with a sharper drop-off among younger Americans: the percentage of U.S. 12- to 34-year-olds who are users fell from 58% in 2015 to 29% in 2019.[147][148] The decline coincided with an increase in the popularity of Instagram, which is also owned by Meta.[147][148] The number of daily active users experienced a quarterly decline for the first time in the last quarter of 2021, down to 1.929 billion from 1.930 billion,[149] but increased again the next quarter despite being banned in Russia.[150]

Historically, commentators have offered predictions of Facebook's decline or end, based on causes such as a declining user base;[151] the legal difficulties of being a closed platform, inability to generate revenue, inability to offer user privacy, inability to adapt to mobile platforms, or Facebook ending itself to present a next generation replacement;[152] or Facebook's role in Russian interference in the 2016 United States elections.[153]

Facebook's active users (in millions) grew from about one million in 2004 to 2.8 billion in 2020.[134]

Demographics


As of April 2023, the largest numbers of Facebook users are from India and the United States, followed by Indonesia, Brazil, Mexico and the Philippines.[155] By region, the highest number of users in 2018 came from Asia-Pacific (947 million), followed by Europe (381 million) and US-Canada (242 million); the rest of the world accounted for 750 million users.[156]

Over the 2008–2018 period, the percentage of users under 34 declined to less than half of the total.[134]

Censorship

Map showing the countries that are either currently blocking or have blocked Facebook in the past
  Currently blocked
  Formerly blocked

In many countries, Facebook's website and mobile apps have been blocked temporarily, intermittently, or permanently, including in Brazil,[157] China,[158] Iran,[159] Vietnam,[160] Pakistan,[161] Syria,[162] and North Korea. In May 2018, the government of Papua New Guinea announced that it would ban Facebook for a month while it considered the impact of the website on the country, though no ban has since occurred.[163] In 2019, Facebook announced it would start enforcing its ban on users, including influencers, promoting any vape, tobacco products, or weapons on its platforms.[164]

Criticisms and controversies


"I'm here today because I believe Facebook's products harm children, stoke division, and weaken our democracy. The company's leadership knows how to make Facebook and Instagram safer, but won't make the necessary changes because they have put their astronomical profits before people."

Frances Haugen, condemning lack of transparency around Facebook at a US congressional hearing (2021).[165]

"I don't believe private companies should make all of the decisions on their own. That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I've written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition."

—Mark Zuckerberg, responding to Frances Haugen's revelations (2021).[166]

Facebook's importance and scale has led to criticisms in many domains. Issues include Internet privacy, excessive retention of user information,[167] its facial recognition software, DeepFace[168][169] its addictive quality[170] and its role in the workplace, including employer access to employee accounts.[171]

Facebook has been criticized for electricity usage,[172] tax avoidance,[173] real-name user requirement policies,[174] censorship[175][176] and its involvement in the United States PRISM surveillance program.[177] According to The Express Tribune, Facebook "avoided billions of dollars in tax using offshore companies".[178]

Facebook is alleged to have harmful psychological effects on its users, including feelings of jealousy[179][180] and stress,[181][182] a lack of attention[183] and social media addiction.[184][185] According to Kaufmann et al., mothers' motivations for using social media are often related to their social and mental health.[186] European antitrust regulator Margrethe Vestager stated that Facebook's terms of service relating to private data were "unbalanced".[187]

Facebook has been criticized for allowing users to publish illegal or offensive material. Specifics include copyright and intellectual property infringement,[188] hate speech,[189][190] incitement of rape[191] and terrorism,[192][193] fake news,[194][195][196] and crimes, murders, and livestreamed violent incidents.[197][198][199] Commentators have accused Facebook of willingly facilitating the spread of such content.[200][201][202] Sri Lanka temporarily blocked both Facebook and WhatsApp in May 2019, after anti-Muslim riots that were the worst violence in the country since the Easter Sunday bombings earlier that year, as a measure to maintain peace.[203][204] Facebook removed 3 billion fake accounts during the last quarter of 2018 and the first quarter of 2019 alone;[205] in comparison, the social network reported 2.39 billion monthly active users.[205]

In late July 2019, the company announced it was under antitrust investigation by the Federal Trade Commission.[206]

The consumer advocacy group Which? reported that individuals were still using Facebook to arrange fraudulent five-star reviews for products. The group identified 14 communities that exchange reviews for either money or complimentary items such as watches, earbuds, and sprinklers.[207]

Privacy concerns

Details of information collected via PRISM

Facebook has experienced a steady stream of controversies over how it handles user privacy, repeatedly adjusting its privacy settings and policies.[208] Since 2009, Facebook has participated in the secret PRISM program, sharing audio, video, photographs, e-mails, documents, and connection logs from user profiles with the US National Security Agency, along with other social media services.[209][210]

On November 29, 2011, Facebook settled Federal Trade Commission charges that it deceived consumers by failing to keep privacy promises.[211] In August 2013, High-Tech Bridge published a study showing that links included in Facebook messaging service messages were being accessed by Facebook.[212] In January 2014, two users filed a lawsuit against Facebook alleging that their privacy had been violated by this practice.[213]

On June 7, 2018, Facebook announced that a bug had resulted in about 14 million Facebook users having their default sharing setting for all new posts set to "public".[214] Its data-sharing agreement with Chinese companies such as Huawei came under the scrutiny of US lawmakers, although the information accessed was not stored on Huawei servers and remained on users' phones.[215] On April 4, 2019, half a billion records of Facebook users were found exposed on Amazon cloud servers, containing information about users' friends, likes, groups, and checked-in locations, as well as names, passwords and email addresses.[216]

The phone numbers of at least 200 million Facebook users were found exposed in an open online database in September 2019. They included 133 million US users, 18 million from the UK, and 50 million from Vietnam. After duplicates were removed, the 419 million records were reduced to 219 million. The database went offline after TechCrunch contacted the web host. The records are thought to have been amassed using a tool that Facebook disabled in April 2018 after the Cambridge Analytica controversy. A Facebook spokeswoman said in a statement: "The dataset is old and appears to have information obtained before we made changes last year...There is no evidence that Facebook accounts were compromised."[217]

Facebook's privacy problems led companies such as Viber Media and Mozilla to discontinue advertising on Facebook's platforms.[218][219] A January 2024 study by Consumer Reports found that, among a self-selected group of volunteer participants, each user was tracked by more than 2,000 companies on average. LiveRamp, a San Francisco-based data broker, was responsible for 96 percent of the data. Other companies, such as Home Depot, Macy's, and Walmart, were involved as well.[220]

In March 2024, a court in California released documents detailing Facebook's 2016 "Project Ghostbusters". The project was aimed at helping Facebook compete with Snapchat and involved Facebook trying to develop decryption tools to collect, decrypt, and analyze traffic that users generated when visiting Snapchat and, eventually, YouTube and Amazon. The company eventually used its tool Onavo to initiate man-in-the-middle attacks and read users' traffic before it was encrypted.[221]

Racial bias


Facebook was accused of "systemic" racial bias in complaints filed with the Equal Employment Opportunity Commission by three rejected job candidates and a current employee, an operations manager at Facebook. The complainants accused the firm of discriminating against Black people, and the EEOC opened an investigation into the case in March 2021.[222]

Shadow profiles


A "shadow profile" refers to the data Facebook collects about individuals without their explicit permission. For example, the "like" button that appears on third-party websites allows the company to collect information about an individual's internet browsing habits, even if the individual is not a Facebook user.[223][224] Data can also be collected by other users: a Facebook user who links an email account to Facebook to find friends on the site allows the company to collect the email addresses of users and non-users alike.[225] Over time, countless data points about an individual are collected; no single data point may identify the individual, but together they allow the company to form a unique "profile".
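The aggregation described above can be illustrated with a short sketch. This is a hypothetical, simplified model of cross-site tracking signals, not Facebook's actual schema or code: individually weak attributes, once canonicalized and hashed together, yield an identifier that is stable for one person and distinct across people.

```python
import hashlib

# Hypothetical observations gathered across third-party sites
# (illustrative field names only, not a real tracking schema).
observations = {
    "browser": "Firefox 128",
    "timezone": "UTC-05:00",
    "language": "en-US",
    "screen": "1920x1080",
    "sites_visited": "news.example,shop.example",
}

def fingerprint(points: dict) -> str:
    """Combine individually non-identifying signals into one stable ID.

    Sorting the keys makes the result independent of the order in
    which the data points happened to be collected.
    """
    canonical = "|".join(f"{k}={points[k]}" for k in sorted(points))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Each field alone matches millions of people, but the combined hash is effectively unique, which is how a profile can follow someone who never registered with the service.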

This practice has been criticized by those who believe people should be able to opt out of involuntary data collection. Additionally, while Facebook users can download and inspect the data they provide to the site, data from a user's "shadow profile" is not included, and non-users of Facebook have no access to this tool at all. The company has also been unclear about whether it is possible for a person to revoke Facebook's access to their "shadow profile".[223]

Cambridge Analytica


Facebook customer Global Science Research sold information on over 87 million Facebook users to Cambridge Analytica, a political data analysis firm led by Alexander Nix.[226] While approximately 270,000 people used the app, Facebook's API permitted data collection from their friends without their knowledge.[227] Facebook at first downplayed the significance of the breach and suggested that Cambridge Analytica no longer had access, then issued a statement expressing alarm and suspended the firm. Review of documents and interviews with former Facebook employees suggested that Cambridge Analytica still possessed the data.[228] This violated Facebook's consent decree with the Federal Trade Commission, potentially carrying a penalty of $40,000 ($50,087 in 2024 dollars[32]) per occurrence, totaling trillions of dollars.[229]

According to The Guardian, both Facebook and Cambridge Analytica threatened to sue the newspaper if it published the story. After publication, Facebook claimed that it had been "lied to". On March 23, 2018, the English High Court granted an application by the Information Commissioner's Office for a warrant to search Cambridge Analytica's London offices, ending a standoff between Facebook and the Information Commissioner over responsibility.[230]

On March 25, Facebook published a statement by Zuckerberg in major UK and US newspapers apologizing over a "breach of trust".[231]

You may have heard about a quiz app built by a university researcher that leaked Facebook data of millions of people in 2014. This was a breach of trust, and I'm sorry we didn't do more at the time. We're now taking steps to make sure this doesn't happen again.

We've already stopped apps like this from getting so much information. Now we're limiting the data apps get when you sign in using Facebook.

We're also investigating every single app that had access to large amounts of data before we fixed this. We expect there are others. And when we find them, we will ban them and tell everyone affected.

Finally, we'll remind you which apps you've given access to your information – so you can shut off the ones you don't want anymore.

Thank you for believing in this community. I promise to do better for you.

On March 26, the Federal Trade Commission opened an investigation into the matter.[232] The controversy led Facebook to end its partnerships with data brokers who aid advertisers in targeting users.[208]

On April 24, 2019, Facebook said it could face a fine of between $3 billion ($3.69 billion in 2024 dollars[32]) and $5 billion ($6.15 billion in 2024 dollars[32]) as the result of an investigation by the Federal Trade Commission.[233] On July 24, 2019, the FTC fined Facebook $5 billion, the largest penalty ever imposed on a company for violating consumer privacy. Additionally, Facebook had to implement a new privacy structure, follow a 20-year settlement order, and allow the FTC to monitor Facebook.[234] Cambridge Analytica's CEO and a developer faced restrictions on future business dealings and were ordered to destroy any personal information they collected. Cambridge Analytica filed for bankruptcy.[235] Facebook also implemented additional privacy controls and settings,[236] in part to comply with the European Union's General Data Protection Regulation (GDPR), which took effect in May 2018.[237] Facebook also ended its active opposition to the California Consumer Privacy Act.[238]

Some, such as Meghan McCain, have drawn an equivalence between the use of data by Cambridge Analytica and by Barack Obama's 2012 campaign, which, according to Investor's Business Daily, "encouraged supporters to download an Obama 2012 Facebook app that, when activated, let the campaign collect Facebook data both on users and their friends."[239][240][241] Carol Davidsen, the Obama for America (OFA) former director of integration and media analytics, wrote that "Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realised that was what we were doing".[240][241] PolitiFact has rated McCain's statements "Half-True", on the basis that "in Obama's case, direct users knew they were handing over their data to a political campaign" whereas with Cambridge Analytica, users thought they were only taking a personality quiz for academic purposes, and while the Obama campaign only used the data "to have their supporters contact their most persuadable friends", Cambridge Analytica "targeted users, friends and lookalikes directly with digital ads."[242]

DataSpii


In July 2019, cybersecurity researcher Sam Jadali exposed a catastrophic data leak known as DataSpii, involving data provider DDMR and marketing intelligence company Nacho Analytics (NA).[243][244] Branding itself as the "God mode for the internet", NA, through DDMR, provided its members access to private Facebook photos and Facebook Messenger attachments, including tax returns.[245] DataSpii harvested data from millions of Chrome and Firefox users through compromised browser extensions.[246] The NA website stated it collected data from millions of opt-in users. Jadali, along with journalists from Ars Technica and The Washington Post, interviewed impacted users, including a Washington Post staff member; according to the interviews, the impacted users did not consent to such collection.

DataSpii demonstrated how a compromised user exposed the data of others, including the private photos and Messenger attachments belonging to a Facebook user's network of friends.[245]

DataSpii exploited Facebook's practice of making private photos and Messenger attachments publicly accessible via unique URLs. To mitigate this, Facebook appends query strings to the URLs that limit how long each link remains accessible.[245] Nevertheless, NA provided real-time access to these unique URLs, which were intended to be secure, allowing NA members to reach the private content within the restricted time frame designated by Facebook.

The Washington Post's Geoffrey Fowler, in collaboration with Jadali, opened Fowler's private Facebook photo in a browser with a compromised browser extension.[243] Within minutes, they anonymously retrieved the "private" photo. To validate this proof of concept, they searched for Fowler's name using NA, which yielded his photo as a search result. In addition, Jadali discovered that Fowler's Washington Post colleague, Nick Mourtoupalas, was directly impacted by DataSpii. Jadali's investigation showed how DataSpii disseminated private data to additional third parties, including foreign entities, within minutes of the data being acquired. In doing so, he identified the third parties who were scraping, storing, and potentially enabling facial recognition of individuals in photos furnished by DataSpii.[247]

Breaches


On September 28, 2018, Facebook experienced a major security breach exposing the data of 50 million users. The breach started in July 2017 and was discovered on September 16, 2018.[248] Facebook notified users affected by the exploit and logged them out of their accounts.[249][250] In March 2019, Facebook confirmed that a password compromise affecting millions of Facebook Lite users also affected millions of Instagram users. The cause cited was the storage of passwords as plain text rather than encrypted, which allowed them to be read by employees.[251]

On December 19, 2019, security researcher Bob Diachenko discovered a database containing more than 267 million Facebook user IDs, phone numbers, and names that were left exposed on the web for anyone to access without a password or any other authentication.[252] In February 2020, Facebook encountered a major security breach in which its official Twitter account was hacked by a Saudi Arabia-based group called "OurMine". The group has a history of actively exposing high-profile social media profiles' vulnerabilities.[253]

In April 2021, The Guardian reported that approximately half a billion users' data had been stolen, including birthdates and phone numbers. Facebook said it was "old data" from a problem fixed in August 2019, even though the data was released only in 2021, a year and a half later; the company declined to speak with journalists, had apparently not notified regulators, called the problem "unfixable", and said it would not be advising users.[254] In September 2024, Meta paid a $101 million fine for storing up to 600 million passwords of Facebook and Instagram users in plain text. The practice was discovered in 2019, though reports indicate passwords had been stored in plain text since 2012.[255]

Phone data and activity

Facebook acquired Onavo's virtual private network to harvest usage data on its competitors.

After acquiring Onavo in 2013, Facebook used its Onavo Protect virtual private network (VPN) app to collect information on users' web traffic and app usage. This allowed Facebook to monitor its competitors' performance, and motivated Facebook's acquisition of WhatsApp in 2014.[256][257][258] Media outlets classified Onavo Protect as spyware.[259][260][261] In August 2018, Facebook removed the app in response to pressure from Apple, which asserted that it violated Apple's guidelines.[262][263] The Australian Competition and Consumer Commission sued Facebook on December 16, 2020, for "false, misleading or deceptive conduct" over the company's unauthorized use for business purposes of personal data obtained from Onavo, in contrast to Onavo's privacy-oriented marketing.[264][265]

In 2016, Facebook Research launched Project Atlas, offering some users between the ages of 13 and 35 up to $20 per month ($26.00 in 2024 dollars[32]) in exchange for their personal data, including their app usage, web browsing history, web search history, location history, personal messages, photos, videos, emails and Amazon order history.[266][267] In January 2019, TechCrunch reported on the project. This led Apple to temporarily revoke Facebook's Enterprise Developer Program certificates for one day, preventing Facebook Research from operating on iOS devices and disabling Facebook's internal iOS apps.[267][268][269]

Ars Technica reported in April 2018 that the Facebook Android app had been harvesting user data, including phone calls and text messages, since 2015.[270][271][272] In May 2018, several Android users filed a class action lawsuit against Facebook for invading their privacy.[273][274] In January 2020, Facebook launched the Off-Facebook Activity page, which allows users to see information collected by Facebook about their non-Facebook activities.[275] The Washington Post columnist Geoffrey A. Fowler found that this included what other apps he used on his phone, even while the Facebook app was closed, what other websites he visited on his phone, and what in-store purchases he made from affiliated businesses, even while his phone was completely off.[276]

In November 2021, Fairplay, Global Action Plan and Reset Australia published a report detailing accusations that Facebook was continuing to run its ad-targeting system on data collected from teen users.[277] The accusations followed announcements by Facebook in July 2021 that it would cease targeting ads at children.[278][279]

Public apologies


The company first apologized for its privacy abuses in 2009.[280]

Facebook apologies have appeared in newspapers, on television, in blog posts and on Facebook itself.[281] In May 2010, Zuckerberg apologized for discrepancies in privacy settings.[281] On March 25, 2018, leading US and UK newspapers published full-page ads with a personal apology from Zuckerberg, who also issued a verbal apology on CNN.[282]

Facebook previously spread its privacy settings across 20 pages; it has since consolidated them onto a single page, which makes it more difficult for third-party apps to access users' personal information.[208] In addition to publicly apologizing, Facebook has said that it will review and audit thousands of apps that display "suspicious activities" in an effort to ensure that this breach of privacy does not happen again.[283] A 2010 research project on privacy noted that little information is available regarding the consequences of what people disclose online, so what is available often comes only from reports in the popular media.[284] In 2017, a former Facebook executive went on the record to discuss how social media platforms have contributed to the unraveling of the "fabric of society".[285]

Content disputes and moderation


Facebook relies on its users to generate the content that binds them to the service. The company has come under criticism both for allowing objectionable content, including conspiracy theories and fringe discourse,[286] and for prohibiting other content that it deems inappropriate.

Misinformation and fake news

Criticism of Facebook during the Hands Off protests in Minneapolis on April 5, 2025

Facebook has been criticized as a vector for fake news, and has been accused of bearing responsibility for the conspiracy theory that the United States created ISIS,[287] false anti-Rohingya posts being used by Myanmar's military to fuel genocide and ethnic cleansing,[288][289] enabling climate change denial[290][291][292] and Sandy Hook Elementary School shooting conspiracy theorists,[293] and anti-refugee attacks in Germany.[294][295][296] The government of the Philippines has also used Facebook as a tool to attack its critics.[297]

In 2017, Facebook partnered with fact checkers from the Poynter Institute's international fact-checking network to identify and mark false content, though most ads from political candidates are exempt from this program.[298][299] As of 2018, Facebook had over 40 fact-checking partners across the world, including The Weekly Standard.[300] Critics of the program have accused Facebook of not doing enough to remove false information from its website.[300][301]

Facebook has repeatedly amended its content policies. In July 2018, it stated that it would "downrank" articles that its fact-checkers determined to be false, and remove misinformation that incited violence.[302] Facebook stated that content that receives "false" ratings from its fact-checkers can be demonetized and suffer dramatically reduced distribution. Specific posts and videos that violate community standards can be removed on Facebook.[303] In May 2019, Facebook banned a number of "dangerous" commentators from its platform, including Alex Jones, Louis Farrakhan, Milo Yiannopoulos, Paul Joseph Watson, Paul Nehlen, David Duke, and Laura Loomer, for allegedly engaging in "violence and hate".[304][305]

In May 2020, Facebook agreed to a preliminary settlement of $52 million ($63.2 million in 2024 dollars[32]) to compensate U.S.-based Facebook content moderators for their psychological trauma suffered on the job.[306][307] Other legal actions around the world, including in Ireland, await settlement.[308] In September 2020, the Government of Thailand utilized the Computer Crime Act for the first time to take action against Facebook and Twitter for ignoring requests to take down content and not complying with court orders.[309]

According to a report by Reuters, beginning in 2020, the United States military ran a propaganda campaign to spread disinformation about the Sinovac Chinese COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law.[310] The campaign was described as "payback" for COVID-19 disinformation by China directed against the U.S.[311] In summer 2020, Facebook asked the military to remove the accounts, stating that they violated Facebook's policies on fake accounts and on COVID-19 information.[310] The campaign continued until mid-2021.[310]

Threats and incitement


Professor Ilya Somin reported that he had been the subject of death threats on Facebook in April 2018 from Cesar Sayoc, who threatened to kill Somin and his family and "feed the bodies to Florida alligators". Somin's Facebook friends reported the comments to Facebook, which did nothing except dispatch automated messages.[312] Sayoc was later arrested for the October 2018 United States mail bombing attempts directed at Democratic politicians.

Terrorism


Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019) was a case alleging that Facebook profited from its recommendations of Hamas content. In 2019, the U.S. Court of Appeals for the Second Circuit held that Section 230 bars civil terrorism claims against social media companies and internet service providers, making it the first federal appellate court to do so.

Hate speech


In October 2020, Pakistani Prime Minister Imran Khan urged Mark Zuckerberg, through a letter posted on the government's Twitter account, to ban Islamophobic content on Facebook, warning that it encouraged extremism and violence.[313] The same month, the company announced that it would ban Holocaust denial.[314]

In October 2022, Media Matters for America published a report that Facebook and Instagram were still profiting off advertisements using the slur "groomer" for LGBT people.[315] The article reported that Meta had previously confirmed that the use of this word for the LGBT community violates its hate speech policies.[315] The story was subsequently picked up by other news outlets such as the New York Daily News, PinkNews, and LGBTQ Nation.[316][317][318]

Violent erotica


Ads on Facebook and Instagram have contained sexually explicit content, descriptions of graphic violence, and content promoting acts of self-harm. Many of the ads are for webnovel apps backed by the tech giants ByteDance and Tencent.[319]

InfoWars


Facebook was criticized for allowing InfoWars to publish falsehoods and conspiracy theories.[303][320][321][322][323] Facebook defended its actions in regard to InfoWars, saying "we just don't think banning Pages for sharing conspiracy theories or false news is the right way to go."[321] Facebook provided only six cases in which it fact-checked content on the InfoWars page over the period September 2017 to July 2018.[303] In 2018, InfoWars falsely claimed that the survivors of the Parkland shooting were "actors". Facebook pledged to remove InfoWars content making the claim, although InfoWars videos pushing the false claims were left up even after Facebook had been contacted about them; Facebook stated that the videos never explicitly called the survivors actors.[303] Facebook also allowed InfoWars videos sharing the Pizzagate conspiracy theory to survive, despite specific assertions that it would purge Pizzagate content.[303] In late July 2018, Facebook suspended the personal profile of InfoWars head Alex Jones for 30 days.[324] In early August 2018, Facebook banned the four most active InfoWars-related pages for hate speech.[325]

Political manipulation

Graffiti in Berlin of Facebook founder Mark Zuckerberg; the caption is a reference to George Orwell's novel Nineteen Eighty-Four, December 2008

As a dominant social-web service with massive reach, Facebook has been used by identified and unidentified political operatives to affect public opinion. Some of these activities, whether scripted or paid, have violated platform policies, constituting "coordinated inauthentic behavior" in support of or against particular targets. Various such abusive campaigns have been revealed in recent years, the best known being the Russian interference in the 2016 United States elections. In 2021, Sophie Zhang, a former Facebook analyst on its Spam and Fake Engagement teams, reported more than 25 political subversion operations and criticized Facebook's generally slow reaction time and lax, laissez-faire oversight.[326][327][328]

Influence operations and coordinated inauthentic behavior


In 2018, Facebook said it had identified "coordinated inauthentic behavior" that year in "many Pages, Groups and accounts created to stir up political debate, including in the US, the Middle East, Russia and the UK."[329]

Campaigns operated by the British intelligence agency unit called the Joint Threat Research Intelligence Group have broadly fallen into two categories: cyber attacks and propaganda efforts. The propaganda efforts use "mass messaging" and the "pushing [of] stories" via social media sites like Facebook.[330][331] Israel's Jewish Internet Defense Force, the Chinese Communist Party's 50 Cent Party and Turkey's AK Trolls also focus their attention on social media platforms like Facebook.[332][333][334] In July 2018, Samantha Bradshaw, co-author of a report from the Oxford Internet Institute (OII) at Oxford University, said that "The number of countries where formally organised social media manipulation occurs has greatly increased, from 28 to 48 countries globally. The majority of growth comes from political parties who spread disinformation and junk news around election periods."[335] In October 2018, The Daily Telegraph reported that Facebook "banned hundreds of pages and accounts that it says were fraudulently flooding its site with partisan political content – although they came from the United States instead of being associated with Russia."[336]

In December 2018, The Washington Post reported that "Facebook has suspended the account of Jonathon Morgan, the chief executive of a top social media research firm" New Knowledge, "after reports that he and others engaged in an operation to spread disinformation" on Facebook and Twitter during the 2017 United States Senate special election in Alabama.[337][338] In January 2019, Facebook said it has removed 783 Iran-linked accounts, pages and groups for engaging in what it called "coordinated inauthentic behaviour".[339] In March 2019, Facebook sued four Chinese firms for selling "fake accounts, likes and followers" to amplify Chinese state media outlets.[340]

In May 2019, Tel Aviv-based private intelligence agency Archimedes Group was banned from Facebook for "coordinated inauthentic behavior" after Facebook found fake users in countries in sub-Saharan Africa, Latin America and Southeast Asia.[341] Facebook investigations revealed that Archimedes had spent some $1.1 million ($1.35 million in 2024 dollars[32]) on fake ads, paid for in Brazilian reais, Israeli shekels and US dollars.[342] Facebook gave examples of Archimedes Group political interference in Nigeria, Senegal, Togo, Angola, Niger and Tunisia.[343] The Atlantic Council's Digital Forensic Research Lab said in a report that "The tactics employed by Archimedes Group, a private company, closely resemble the types of information warfare tactics often used by governments, and the Kremlin in particular."[344][345]

On May 23, 2019, Facebook released its Community Standards Enforcement Report, highlighting that it had identified fake accounts through artificial intelligence and human monitoring. Over the six-month period from October 2018 to March 2019, the social media website removed a total of 3.39 billion fake accounts, more than the reported 2.4 billion real people on the platform.[346]

In July 2019, Facebook advanced its measures to counter deceptive political propaganda and other abuse of its services, removing more than 1,800 accounts and pages that were being operated from Russia, Thailand, Ukraine and Honduras.[347] After Russia's invasion of Ukraine in February 2022, Russia's internet regulator announced that it would block access to Facebook.[348] On October 30, 2019, Facebook deleted several accounts of employees of the Israeli NSO Group, stating that the accounts were "deleted for not following our terms". The deletions came after WhatsApp sued the Israeli surveillance firm for targeting 1,400 devices with spyware.[349]

In 2020, Facebook helped found American Edge, an anti-regulation lobbying group formed to fight antitrust probes.[350] The group runs ads that "fail to mention what legislation concerns them, how those concerns could be fixed, or how the horrors they warn of could actually happen", and it does not clearly disclose that it is funded by Facebook.[351]

In 2020, the government of Thailand forced Facebook to take down a Facebook group called Royalist Marketplace, with one million members, after potentially illegal posts were shared there; the authorities also threatened Facebook with legal action. In response, Facebook said it planned to take legal action against the Thai government for suppression of freedom of expression and violation of human rights.[352] Also in 2020, during the COVID-19 pandemic, Facebook found that troll farms from North Macedonia and the Philippines had pushed coronavirus disinformation; the publisher that used content from these farms was banned.[353]

In the run-up to the 2020 United States elections, Eastern European troll farms operated popular Facebook pages showing content related to Christians and Blacks in America. They included more than 15,000 pages combined and were viewed by 140 million US users per month. This was in part due to how Facebook's algorithm and policies allow unoriginal viral content to be copied and spread in ways that still drive up user engagement. As of September 2021, some of the most popular pages were still active on Facebook despite the company's efforts to take down such content.[354]

In February 2021, Facebook removed the main page of the Myanmar military after two protesters were shot and killed during the anti-coup protests; Facebook said the page breached its guidelines prohibiting the incitement of violence.[355] On February 25, Facebook announced a ban on all accounts of the Myanmar military, along with "Tatmadaw-linked commercial entities". Citing the "exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar", the tech giant also implemented the move on its subsidiary, Instagram.[356] In March 2021, The Wall Street Journal's editorial board criticized Facebook's decision to fact-check its op-ed titled "We'll Have Herd Immunity by April", written by surgeon Marty Makary, calling it "counter-opinion masquerading as fact checking."[357]

Facebook's guidelines allow users to call for the death of public figures; they also allow praise of mass killers and "violent non-state actors" in some situations.[358][359]

In 2021, Facebook was cited as having played a role in fomenting the 2021 United States Capitol attack.[360][361]

Russian interference


In 2018, Special Counsel Robert Mueller indicted 13 Russian nationals and three Russian organizations for "engaging in operations to interfere with U.S. political and electoral processes, including the 2016 presidential election."[362][363][364]

Mueller contacted Facebook following the company's disclosure that, before the 2016 United States presidential election, it had sold more than $100,000 ($131,018 in 2024 dollars[32]) worth of ads to the Internet Research Agency, a company owned by Russian businessman Yevgeniy Prigozhin with links to the Russian intelligence community.[365][366] In September 2017, Facebook's chief security officer Alex Stamos wrote that the company "found approximately $100,000 in ad spending from June 2015 to May 2017 – associated with roughly 3,000 ads – that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia."[367] By comparison, the Clinton and Trump campaigns spent $81 million ($106 million in 2024 dollars[32]) on Facebook ads.[368]

The company pledged full cooperation in Mueller's investigation and provided all information about the Russian advertisements.[369] Members of the House and Senate Intelligence Committees have claimed that Facebook withheld information that could illuminate the Russian propaganda campaign.[370] Russian operatives used Facebook to polarize American public discourse, organizing both Black Lives Matter rallies[371][372] and anti-immigrant rallies on U.S. soil,[373] as well as anti-Clinton rallies[374] and rallies both for and against Donald Trump.[375][376] Facebook ads were also used to exploit divisions over black political activism and Muslims by simultaneously sending contrary messages to different users, based on their political and demographic characteristics, in order to sow discord.[377][378][379] Zuckerberg has stated that he regrets having dismissed concerns over Russian interference in the 2016 U.S. presidential election.[380]

Russian-American billionaire Yuri Milner, who befriended Zuckerberg[381] between 2009 and 2011, had Kremlin backing for his investments in Facebook and Twitter.[382] In January 2019, Facebook removed 289 pages and 75 coordinated accounts linked to the Russian state-owned news agency Sputnik which had misrepresented themselves as independent news or general interest pages.[383][384] Facebook later identified and removed an additional 1,907 accounts linked to Russia found to be engaging in "coordinated inauthentic behaviour".[385] In 2018, a UK Department for Digital, Culture, Media and Sport (DCMS) select committee report had criticized Facebook for its reluctance to investigate abuse of its platform by the Russian government, and for downplaying the extent of the problem, referring to the company as 'digital gangsters'.[386][387][388]

"Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised 'dark adverts' from unidentifiable sources, delivered through the major social media platforms we use every day," Damian Collins, DCMS Committee Chair[388]

In February 2019, Glenn Greenwald wrote that a cybersecurity company New Knowledge, which is behind one of the Senate reports on Russian social media election interference, "was caught just six weeks ago engaging in a massive scam to create fictitious Russian troll accounts on Facebook and Twitter in order to claim that the Kremlin was working to defeat Democratic Senate nominee Doug Jones in Alabama. The New York Times, when exposing the scam, quoted a New Knowledge report that boasted of its fabrications..."[389][390]

Anti-Rohingya propaganda

In 2018, Facebook took down 536 Facebook pages, 17 Facebook groups, 175 Facebook accounts, and 16 Instagram accounts linked to the Myanmar military. Collectively these were followed by over 10 million people.[391] The New York Times reported that:[392]

after months of reports about anti-Rohingya propaganda on Facebook, the company acknowledged that it had been too slow to act in Myanmar. By then, more than 700,000 Rohingya had fled the country in a year, in what United Nations officials called "a textbook example of ethnic cleansing".

Anti-Muslim propaganda and Hindu nationalism in India

A 2019 book titled The Real Face of Facebook in India, co-authored by the journalists Paranjoy Guha Thakurta and Cyril Sam, alleged that Facebook helped enable and benefited from the rise of Narendra Modi's Hindu nationalist Bharatiya Janata Party (BJP) in India.[393] Ankhi Das, Facebook's policy director for India and South and Central Asia, apologized publicly in August 2020 for sharing a Facebook post which called Muslims in India a "degenerate community". She said she shared the post "to reflect my deep belief in celebrating feminism and civic participation".[394] She is reported to have prevented action by Facebook against anti-Muslim content[395][396] and supported the BJP in internal Facebook messages.[397][398]

In 2020, Facebook executives overrode their employees' recommendations that the BJP politician T. Raja Singh should be banned from the site for hate speech and rhetoric that could lead to violence. Singh had said on Facebook that Rohingya Muslim immigrants should be shot and had threatened to destroy mosques. Current and former Facebook employees told The Wall Street Journal that the decision was part of a pattern of favoritism by Facebook toward the BJP as it seeks more business in India.[396] Facebook also took no action after BJP politicians made posts accusing Muslims of intentionally spreading COVID-19, an employee said.[399]

In 2020, the Delhi Assembly began investigating whether Facebook bore blame for the 2020 religious riots in the city, claiming it had found Facebook "prima facie guilty of a role in the violence".[400][401] Following a summons by a Delhi Assembly Committee, Facebook India vice-president and managing director Ajit Mohan moved the Supreme Court,[402] which granted him relief and ordered a stay to the summons.[403][404][405] The Central government later backed the decision, and submitted in the court that Facebook could not be made accountable before any state assembly and the committee formed was unconstitutional.[406][407] Following a fresh notice by the Delhi Assembly panel in 2021 for failing to appear before it as a witness, Mohan challenged it saying that the 'right to silence' is a virtue in present 'noisy times' and the legislature had no authority to examine him in a law and order case.[408] In July 2021, the Supreme Court refused to quash the summons and asked Facebook to appear before the Delhi assembly panel.[409]

On September 23, 2023, it was reported that Facebook had delayed for about a year the removal of a network of accounts run by India's Chinar Corps that spread disinformation endangering Kashmiri journalists; the network was taken down in 2021 without public disclosure. The delay, and the decision not to publicize the takedown, were attributed to fears that Facebook's local employees would be targeted by authorities and that the action would hurt its business prospects in the country.[410]

Company governance

Early Facebook investor and former Zuckerberg mentor Roger McNamee described Facebook as having "the most centralized decision-making structure I have ever encountered in a large company."[411] Nathan Schneider, a professor of media studies at the University of Colorado Boulder argued in 2018 for transforming Facebook into a platform cooperative owned and governed by the users.[412]

Facebook co-founder Chris Hughes stated in 2019 that CEO Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. He called for the breakup of Facebook in an op-ed in The New York Times. Hughes says he is concerned that Zuckerberg has surrounded himself with a team that does not challenge him and that as a result, it is the U.S. government's job to hold him accountable and curb his "unchecked power".[413] Hughes also said that "Mark's power is unprecedented and un-American."[414] Several U.S. politicians agree with Hughes.[415] EU Commissioner for Competition Margrethe Vestager has stated that splitting Facebook should only be done as "a remedy of the very last resort", and that splitting Facebook would not solve Facebook's underlying problems.[416]

Customer support

Facebook has been criticized for its lack of human customer support.[417] When users' personal and business accounts are breached, many are forced to go through small claims court to regain access and seek restitution.[418]

Litigation

The company has been subject to repeated litigation.[419][420][421][422] Its most prominent case addressed allegations that Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named "HarvardConnection" social network in 2004.[423][424][425]

On March 6, 2018, BlackBerry sued Facebook and its Instagram and WhatsApp subsidiaries for copying key features of its messaging app.[426] In October 2018, a Texan woman sued Facebook, claiming she had been recruited into the sex trade at the age of 15 by a man who "friended" her on the social media network. Facebook responded that it works both internally and externally to ban sex traffickers.[427][428]

In 2019, British solicitors representing a bullied Syrian schoolboy sued Facebook over false claims, alleging that Facebook protected prominent figures from scrutiny instead of removing content that violates its rules, and that the special treatment was financially driven.[429][430] The Federal Trade Commission and a coalition of New York state and 47 other state and regional governments filed separate suits against Facebook on December 9, 2020, seeking antitrust action based on its acquisitions of Instagram and WhatsApp, among other companies, and calling these practices anticompetitive. The suits also assert that in acquiring these products, Facebook weakened privacy protections for their users. In addition to fines, the suits seek to unwind the acquisitions.[431][432]

On January 6, 2022, France's data privacy regulator CNIL fined Facebook 60 million euros (alongside a similar fine against Google) for not allowing internet users an easy way to refuse cookies.[433] On December 22, 2022, the Quebec Court of Appeal approved a class-action lawsuit on behalf of Facebook users who claim they were discriminated against because the platform allows advertisers to target both job and housing advertisements based on factors including age, gender, and race.[434] The lawsuit centers on the platform's practice of "micro-targeting" ads, claiming ads are ensured to appear only in the feeds of people who belong to certain targeted groups. Women, for example, would not see ads targeting men, while older men would not see an ad aimed at people between 18 and 45.[434]

The class action could include thousands of Quebec residents who had used the platform since as early as April 2016 while seeking jobs or housing.[434] Facebook has 60 days after the court's December 22 ruling to decide whether to appeal the case to the Supreme Court of Canada; if it does not appeal, the case returns to the Quebec Superior Court.[434] On September 21, 2023, the California Courts of Appeal ruled that Facebook could be sued for discriminatory advertising under the Unruh Civil Rights Act.[435]

Impact

Facebook at ad:tech 2010 in London

Scope

A commentator in The Washington Post noted that Facebook constitutes a "massive depository of information that documents both our reactions to events and our evolving customs with a scope and immediacy of which earlier historians could only dream".[436] Especially for anthropologists, social researchers, and social historians—and subject to proper preservation and curation—the website "will preserve images of our lives that are vastly crisper and more nuanced than any ancestry record in existence".[436]

Economy

Economists have noted that Facebook offers many non-rivalrous services, which benefit as many users as are interested without forcing users to compete with each other. By contrast, most goods are available to a limited number of users: if one user buys a phone, no other user can buy that phone. Three areas contribute the most economic impact: platform competition, the marketplace, and user behavior data.[437] Facebook began to reduce its carbon impact after Greenpeace attacked it for its long-term reliance on coal and the resulting carbon footprint.[438] In 2021, Facebook announced that its global operations were supported by 100 percent renewable energy and that it had reached net zero emissions, a goal set in 2018.[439][440]

Facebook provides a development platform for many social gaming, communication, feedback, review, and other applications related to online activities. This platform spawned many businesses and added thousands of jobs to the global economy. Zynga Inc., a leader in social gaming, is an example of such a business. An econometric analysis found that Facebook's app development platform added more than 182,000 jobs in the U.S. economy in 2011. The total economic value of the added employment was about $12 billion ($16.8 billion in 2024 dollars[32]).[441]

Society

Facebook was one of the first large-scale social networks. In The Facebook Effect, David Kirkpatrick argued that Facebook's structure makes it difficult to replace because of its "network effects". As of 2016, an estimated 44% of Americans got news through Facebook.[442] A 2023 study published by Frontiers Media found more polarization among Facebook's user base than even on far-right social networks like Gab.[443]

Mental and emotional health

Studies have associated social networks with positive[444] and negative impacts[445][446][447][448][449] on emotional health.

Studies have associated Facebook with feelings of envy, often triggered by vacation and holiday photos. Other triggers include posts by friends about family happiness and images of physical beauty—such feelings leave people dissatisfied with their own lives. A joint study by two German universities discovered that one out of three people were more dissatisfied with their lives after visiting Facebook,[450][451] and another study by Utah Valley University found that college students felt worse about themselves following an increase in time on Facebook.[451][452][453]

Positive effects include signs of "virtual empathy" with online friends and helping introverted persons learn social skills.[454] A 2020 experimental study in the American Economic Review found that deactivating Facebook led to increased subjective well-being.[455] In a blog post in December 2017, the company highlighted research that has shown "passively consuming" the News Feed, as in reading but not interacting, left users with negative feelings, whereas interacting with messages pointed to improvements in well-being.[456]

Politics

In February 2008, a Facebook group called "One Million Voices Against FARC" organized an event in which hundreds of thousands of Colombians marched in protest against the Revolutionary Armed Forces of Colombia (FARC).[457] In August 2010, one of North Korea's official government websites and the country's official news agency, Uriminzokkiri, joined Facebook.[458]

A man during the 2011 Egyptian protests carrying a card saying "Facebook,#jan25, The Egyptian Social Network"

During the Arab Spring, many journalists claimed that Facebook played a major role in the 2011 Egyptian revolution.[459][460] On January 14, the Facebook page "We are all Khaled Said", started by Wael Ghonim, invited the Egyptian people to "peaceful demonstrations" on January 25. In Tunisia and Egypt, Facebook became the primary tool for connecting protesters, leading the Egyptian government to ban it, Twitter and other sites.[461] After 18 days, the uprising forced President Hosni Mubarak to resign.

During the Bahraini uprising that started on February 14, 2011, Facebook was used by the Bahraini regime and its loyalists to identify, capture and prosecute citizens involved in the protests. A 20-year-old woman named Ayat Al Qurmezi was identified as a protester using Facebook and imprisoned.[462] In 2011, Facebook filed paperwork with the Federal Election Commission to form a political action committee under the name FB PAC.[463] In an email to The Hill, a Facebook spokesman said "Facebook Political Action Committee will give our employees a way to make their voice heard in the political process by supporting candidates who share our goals of promoting the value of innovation to our economy while giving people the power to share and make the world more open and connected."[464]

During the Syrian civil war, the YPG, a libertarian army in Rojava, recruited Westerners through Facebook for its fight against ISIL.[465] Dozens joined its ranks. The Facebook page's name, "The Lions of Rojava", comes from a Kurdish saying that translates as "A lion is a lion, whether female or male", reflecting the organization's feminist ideology.[466]

In recent years, Facebook's News Feed algorithms have been identified as a cause of political polarization, for which it has been criticized.[467][468] It has likewise been accused of amplifying the reach of "fake news" and extreme viewpoints, as when it may have enabled conditions that led to the 2015 Rohingya refugee crisis.[469][470] Facebook first played a role in the American political process in January 2008, shortly before the New Hampshire primary, when it teamed up with ABC and Saint Anselm College to let users give live feedback about the back-to-back January 5 Republican and Democratic debates.[471][472][473] Facebook users took part in debate groups organized around specific topics, voter registration, and question submission.[474]

Over a million people installed the Facebook application "US Politics on Facebook" in order to take part; the application measured responses to specific comments made by the debating candidates.[475] A poll by CBS News, UWIRE and The Chronicle of Higher Education claimed to illustrate how the "Facebook effect" had affected youthful voters, increasing voting rates, support of political candidates, and general involvement.[476] The new social media, such as Facebook and Twitter, connected hundreds of millions of people, and by 2008 politicians and interest groups were experimenting with systematic use of social media to spread their message.[477][478] By the 2016 election, political advertising to specific groups had become normalized, and Facebook offered the most sophisticated targeting and analytics platform.[479] ProPublica noted that the system enabled advertisers to direct their pitches to almost 2,300 people who had expressed interest in the topics "Jew hater", "How to burn Jews", or "History of 'why Jews ruin the world'".[480]

Facebook has used several initiatives to encourage its users to register to vote and vote. An experiment in 2012 involved showing Facebook users pictures of their friends who reported that they had voted; users who were shown the pictures were about 2% more likely to report that they had also voted compared to the control group, which was not encouraged to vote.[481] In 2020, Facebook announced the goal of helping four million voters register in the US, saying that it had registered 2.5 million by September.[482]

The Cambridge Analytica data scandal offered another example of the perceived attempt to influence elections.[228][483] The Guardian claimed that Facebook knew about the security breach for two years but did nothing to stop it until it became public.[484] Facebook banned political ads ahead of the November 2020 U.S. election to prevent the manipulation of voters. Propaganda experts said there are other ways for misinformation to reach voters on social media platforms and that blocking political ads would not serve as a proven solution.[485]

In March 2024, former US President Donald Trump said that getting rid of TikTok would allow Facebook, which he called the "enemy of the people", to double its business. He spoke after President Biden said he was ready to sign legislation that would require TikTok owner ByteDance to sell the video platform or face a ban in the US.[486]

India

Ahead of the 2019 general elections in India, Facebook removed 103 pages, groups and accounts from its Facebook and Instagram platforms that originated in Pakistan. Facebook said its investigation found a Pakistani military link, along with a mix of real accounts of ISPR employees and a network of fake accounts they had created, which operated military fan pages and general-interest pages while posting content about Indian politics and trying to conceal their identity.[487] For the same reasons, Facebook also removed 687 pages and accounts linked to the Indian National Congress for coordinated inauthentic behavior on the platform.[488]

Culture

Facebook parade float in San Francisco Pride 2014

Facebook and Zuckerberg have been the subject of music, books, film and television. The 2010 film The Social Network, directed by David Fincher and written by Aaron Sorkin, stars Jesse Eisenberg as Zuckerberg and went on to win three Academy Awards and four Golden Globes.

In 2008, Collins English Dictionary declared "Facebook" as its new Word of the Year.[489] In December 2009, the New Oxford American Dictionary declared its word of the year to be the verb "unfriend", defined as "To remove someone as a 'friend' on a social networking site such as Facebook".[490]

Internet.org

In August 2013, Facebook founded Internet.org in collaboration with six other technology companies to plan and help build affordable Internet access in underdeveloped and developing countries.[491] The service, called Free Basics, includes various low-bandwidth applications such as AccuWeather, BabyCenter, BBC News, ESPN, and Bing.[492][493] There was severe opposition to Internet.org in India, where the service, launched in partnership with Reliance Communications in 2015, was banned a year later by the Telecom Regulatory Authority of India (TRAI). In 2018, Zuckerberg claimed that "Internet.org efforts have helped almost 100 million people get access to the internet who may not have had it otherwise."[492]

Environment

Facebook announced in 2021 that it would make an effort to stop disinformation about climate change. The company said it would use George Mason University, the Yale Program on Climate Change Communication and the University of Cambridge as sources of information, and would expand its climate information hub to 16 countries, directing users in other countries to the site of the United Nations Environment Programme.[494]

from Grokipedia
Facebook is an American social networking service founded on February 4, 2004, by Mark Zuckerberg along with Harvard undergraduates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, initially under the name "TheFacebook" and restricted to Harvard students. The platform rapidly expanded to other schools, then universities nationwide, and opened to the general public in September 2006, facilitating user profiles, friend connections, status updates, photo sharing, and later features like the News Feed introduced in 2006 and the "Like" button in 2009. By 2012, it achieved one billion monthly active users, a milestone reflecting its explosive growth driven by network effects and viral adoption. As of October 2025, Facebook reports approximately 3.07 billion monthly active users globally, representing about 37% of the world's population and sustaining its position as the dominant social media platform despite competition from newer services. Owned by Meta Platforms, Inc. (formerly Facebook, Inc., which rebranded in 2021 to emphasize broader technological ambitions including the metaverse), the service derives nearly all its revenue from digital advertising, leveraging vast user data for targeted placements that have reshaped online commerce and information dissemination. Facebook's scale has enabled unprecedented global connectivity, empowering movements through real-time information sharing, yet it has also precipitated profound controversies, including systemic data privacy failures such as the 2018 Cambridge Analytica scandal, in which millions of users' information was harvested without consent for political targeting, culminating in a record $5 billion penalty from the U.S. Federal Trade Commission in 2019 for deceptive practices. Content moderation efforts, often reliant on third-party fact-checkers until a 2025 pivot to community-driven notes amid criticisms of overreach and viewpoint bias, have drawn accusations of suppressing dissenting narratives while amplifying others, exacerbating divisions in public discourse.
These issues underscore causal tensions between the platform's profit model, which incentivizes engagement via algorithmic amplification, and demands for neutral, transparent governance.

History

Founding and Initial Launch (2004–2006)

Mark Zuckerberg, then a Harvard sophomore, launched TheFacebook.com on February 4, 2004, from his dormitory room as a social networking site initially limited to Harvard students. The platform was developed by Zuckerberg along with fellow Harvard students Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, who contributed to its early coding and promotion efforts. Inspired by earlier campus directories and the need for a digital space to connect students, TheFacebook allowed users to create profiles including personal details, photographs, and connections to classmates, emphasizing verified student email addresses for exclusivity. The site gained rapid traction at Harvard, with over two-thirds of undergraduates registering within weeks of launch, driven by word-of-mouth and the novelty of online social graphing among peers. This early success stemmed from its simple interface and focus on real-world social ties, in contrast with broader networks that suffered from technical glitches. By March 2004, Zuckerberg expanded access to other schools including Yale, Stanford, and Columbia, followed by additional U.S. universities, marking the beginning of a controlled geographic and institutional rollout. By December 2004, TheFacebook had amassed over one million registered users across more than 800 college networks, prompting the team to relocate operations from Harvard to Palo Alto, California, to facilitate full-time development and proximity to talent. In 2005, the domain was simplified to Facebook.com, dropping "The" to reflect its evolving identity, while features like photo uploads and wall postings were introduced to enhance user interaction. Revenue remained negligible, at roughly $0.4 million for the year, generated sporadically through minor ads, as the priority centered on user growth over monetization.
Into 2006, Facebook continued expanding to high schools and international universities, culminating in September 2006 with openness to anyone over 13 with a valid email address, broadening beyond its college-centric origins and accelerating user acquisition to approximately 12 million by year's end. This shift was enabled by improved server infrastructure to handle surging traffic, though early challenges included server crashes from overload and Zuckerberg's hands-on coding to maintain uptime. The platform's emphasis on authentic identity verification contributed to its organic virality, setting it apart from pseudonymous alternatives.

Expansion and Key Milestones (2007–2012)

In 2007, Facebook accelerated its user base expansion, growing from approximately 20 million monthly active users in April to 30 million by July, surpassing MySpace to become the world's most popular social networking site by global traffic. The platform extended its reach internationally by launching localized versions in multiple languages and partnering with mobile operators for broader accessibility. That November, Facebook introduced Beacon, an advertising system designed to track user purchases on partner sites like Overstock.com and automatically share them in friends' news feeds without explicit opt-in consent, prompting immediate backlash over privacy violations. CEO Mark Zuckerberg publicly apologized in December 2007, acknowledging errors in implementation and offering users the ability to opt out, though Beacon's opt-out model persisted until its full discontinuation in 2009 amid ongoing complaints and lawsuits. Facebook's acquisition strategy intensified during this period to bolster technical capabilities and eliminate competition. In July 2007, it acquired Parakey, a web-desktop application developer, for an undisclosed sum to enhance platform interoperability. The company settled a lawsuit with rival ConnectU in June 2008 by acquiring its assets for around $31 million in cash and stock, effectively absorbing a Harvard-originated competitor. In 2008, Facebook hired Sheryl Sandberg as chief operating officer; she played a pivotal role in scaling advertising revenue and operations. User growth continued rapidly, reaching 500 million by July 2010, with significant international adoption driving the establishment of its first overseas headquarters in Dublin, Ireland, in 2008 to support European expansion. By 2011, monthly active users exceeded 750 million in July and approached 800 million by September, fueled by features like the September launch of Timeline, which restructured user profiles into a chronological narrative of life events.
The platform hit one trillion page views in June 2011, underscoring its dominance in online engagement. In April 2012, Facebook acquired Instagram for $1 billion in cash and stock, integrating the photo-sharing app to capture mobile-first younger demographics amid rising smartphone usage. The period culminated in Facebook's initial public offering (IPO) on May 18, 2012, pricing 421 million shares at $38 each to raise $16 billion, valuing the company at $104 billion and marking the largest U.S. tech IPO at the time, though shares initially declined due to technical glitches and market skepticism. By October 2012, monthly active users reached one billion, reflecting sustained global scaling despite privacy and competitive pressures.

Public Offering and Scaling Challenges (2013–2020)

Facebook's IPO on May 18, 2012, priced shares at $38 and raised approximately $16 billion, but the debut faced significant technical glitches on Nasdaq, delaying trading and contributing to an initial 11% drop from the opening price. In the year following, shares fell to a low of $26.25 by mid-2013 amid concerns over mobile monetization and slowing growth projections, marking it as one of the largest IPO disappointments relative to hype, though the company settled related lawsuits without admitting wrongdoing. Post-IPO pressures as a public entity intensified scrutiny on quarterly performance, with Zuckerberg retaining voting control through dual-class shares to prioritize long-term scaling over short-term shareholder demands. By 2013, Facebook had 1.11 billion monthly active users (MAU), expanding to 2.74 billion by 2020 through organic growth and strategic acquisitions, while revenue surged from $7.87 billion in 2013 to $85.96 billion in 2020, driven primarily by advertising amid a pivot to mobile platforms, which comprised over 90% of usage by mid-decade. Key acquisitions bolstered scaling: WhatsApp in February 2014 for $19 billion integrated 450 million users into Facebook's ecosystem, enhancing messaging capabilities; Oculus VR in March 2014 for $2 billion laid groundwork for virtual reality investments; and smaller buys like Onavo in 2013 provided analytics for user behavior insights. These moves addressed competitive threats but drew antitrust scrutiny, with regulators questioning whether they stifled innovation in social networking and messaging markets. Technical infrastructure demands escalated with the user surge, requiring innovations in data centers, custom hardware, and software to handle petabyte-scale data and maintain 99.99% uptime across global servers. Challenges included optimizing for real-time features like Live video, which scaled to billions of views by 2016 through edge caching, and managing explosive data growth that necessitated proprietary tools for static analysis and fault-tolerant systems.
Economic pressures emerged in 2020 amid the COVID-19 pandemic, prompting Facebook to defer up to $3 billion in capital expenditures for data centers while pausing construction to adapt to reduced physical event reliance. Regulatory and privacy hurdles compounded scaling efforts, as revelations of data mishandling, such as the 2018 Cambridge Analytica scandal exposing 87 million users' data, led to a 2019 settlement imposing a $5 billion penalty and new oversight for violations of a 2012 Federal Trade Commission consent decree. These issues stemmed from lax third-party app controls and inadequate user consent mechanisms, eroding trust and inviting global probes into practices like data sharing with partners, though Facebook maintained such integrations were standard industry tools for growth. By late 2020, mounting antitrust actions in the U.S. and elsewhere targeted Facebook's dominance, alleging that acquisitions like Instagram (pre-IPO but integral to the post-IPO empire) eliminated rivals, forcing defensive investments in compliance amid ambitions to interconnect its apps under a unified framework.

Rebranding to Meta and Strategic Shifts (2021–Present)

On October 28, 2021, at its Connect conference, Facebook, Inc. announced a rebranding of its parent company to Meta Platforms, Inc., with CEO Mark Zuckerberg stating the change reflected a shift toward building the "metaverse"—a vision of interconnected virtual reality (VR), augmented reality (AR), and social experiences beyond traditional social networking. The rebrand occurred amid revelations from whistleblower Frances Haugen, a former product manager, who on October 3, 2021, disclosed internal documents to U.S. regulators and media outlets alleging the company prioritized growth and profits over mitigating harms like misinformation, mental-health impacts on teens, and content moderation failures; Haugen testified before Congress on October 5, 2021, claiming these issues were systemic despite public statements to the contrary. Meta maintained the apps—Facebook, Instagram, WhatsApp—retained their names, but emphasized Reality Labs, its VR/AR division, as central to future revenue, projecting metaverse opportunities to eventually rival its social apps in scale. The strategy involved aggressive investments in hardware like Quest VR headsets and software ecosystems, but Reality Labs reported cumulative operating losses exceeding $60 billion by mid-2025, including a record $17.7 billion in 2024 and $4.97 billion in Q4 alone, despite generating under $1.1 billion in quarterly sales. These losses stemmed from high R&D costs for unproven technologies, with adoption lagging: Quest headset sales remained niche, and metaverse user engagement failed to materialize at scale, prompting investor skepticism and a 70% stock drop from 2021 peaks by late 2022. In response, Meta initiated a "Year of Efficiency" in 2023, cutting costs through layoffs totaling over 21,000 roles by mid-2023, including middle management and non-core teams, to fund metaverse bets while stabilizing advertising revenue, which comprised 97% of income. By March 2023, Zuckerberg declared AI as Meta's "single largest investment," signaling a pivot from metaverse primacy, with resources redirected to generative AI tools like Llama models, AI-driven ad targeting, and infrastructure enhancements, contributing to a near-tripling of Meta's stock price in 2023.
This shift accelerated in 2024–2025, with AI infrastructure spending projected at $64–72 billion annually and aggressive acquisitions of AI talent, though metaverse efforts persisted amid ongoing Reality Labs losses of $4.2–$4.5 billion per quarter in 2025. Layoffs continued, including a 5% workforce reduction (about 3,600 roles) in February 2025 focused on performance and non-essential teams, and 600 AI-specific cuts in October 2025 to streamline research amid economic pressures. Despite these pivots, Meta's core social platforms grew to over 3.2 billion daily active users across its apps by 2025, underscoring the resilience of the advertising business relative to speculative ventures.

Technical Infrastructure

Core Architecture and Programming Languages

Facebook's core architecture centers on a distributed system optimized for the social graph, comprising billions of vertices (user objects) and edges (associations like friendships). The TAO (The Associations and Objects) layer serves as the primary graph store, providing low-latency reads and writes by abstracting a write-through cache over sharded databases, with cache tiers handling hot data for frequent accesses. TAO partitions data geographically across data centers, using consistent routing for load balancing and asynchronous replication for non-critical updates to prioritize availability under the high read-to-write ratios typical of social workloads. The underlying persistent storage relies heavily on MySQL, initially with the InnoDB engine for transactions on core social data, later augmented by custom optimizations like MyRocks—a RocksDB-based storage engine—for improved compression and write efficiency on flash storage. Additional systems, such as HBase, support high-write scenarios like messaging logs, while the overall stack evolved from the LAMP (Linux, Apache, MySQL, PHP) foundation to incorporate custom runtimes for scalability. Server-side development predominantly uses Hack, a statically typed dialect of PHP developed by Meta for the HipHop Virtual Machine (HHVM), which enables gradual typing and seamless interoperability with legacy code while compiling to efficient bytecode or native executables. Hack powers much of the application logic, reconciling PHP's rapid iteration with static type checking to reduce runtime errors in a massive codebase. Complementary languages include C++ for performance-intensive components like caching and query execution, Python for data processing and internal tools, and Rust for emerging systems requiring memory safety without garbage collection overhead. Erlang and Haskell handle specific services, such as real-time messaging and backend APIs, reflecting a polyglot approach that balances developer productivity with operational demands.
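The write-through caching pattern described above can be illustrated with a minimal sketch. All class and method names here are hypothetical, and the sharding is a toy hash-by-key scheme rather than TAO's actual geographic partitioning:

```python
# Sketch of a TAO-style write-through cache over sharded storage.
# Names and sharding scheme are illustrative, not Meta's implementation.

class ShardedStore:
    """Persistent tier: records partitioned across shards by key hash."""
    def __init__(self, num_shards=4):
        self.shards = [{} for _ in range(num_shards)]

    def _shard(self, key):
        return self.shards[hash(key) % len(self.shards)]

    def read(self, key):
        return self._shard(key).get(key)

    def write(self, key, value):
        self._shard(key)[key] = value


class GraphCache:
    """Write-through cache: reads fill the cache, writes update both tiers."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def get(self, key):
        if key not in self.cache:        # cache miss -> read-through to storage
            self.cache[key] = self.store.read(key)
        return self.cache[key]

    def put(self, key, value):           # write-through: storage first, then cache
        self.store.write(key, value)
        self.cache[key] = value


store = ShardedStore()
graph = GraphCache(store)
# An "edge" keyed by (source object, association type, target object):
graph.put(("user:1", "friend", "user:2"), {"since": 2008})
assert graph.get(("user:1", "friend", "user:2"))["since"] == 2008
```

Because writes go through the cache to storage, a subsequent read never observes stale data from its own tier, which is what lets a read-heavy social workload serve most traffic from cache.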

Scalability, CDN, and Performance Optimizations

Facebook's scalability relies on distributed systems engineered to manage vast social graphs and user interactions, processing datasets with trillions of edges using frameworks like Apache Giraph, scaled in 2013 to handle graph algorithms across massive datasets. Data processing infrastructure, including the Hive-based data warehouse, expanded to 300 petabytes by 2014 through compressed storage formats that optimized on-disk efficiency for raw data handling. Cluster orchestration via Twine (formerly Tupperware) enables stateful service scaling, addressing challenges in managing large fleets of servers for web and mobile workloads. The content delivery network (CDN), termed FBCDN, incorporates advanced caching to accelerate media delivery, minimizing latency for photos and videos while cutting backbone traffic costs. FBCDN operates through domains such as scontent-*.fbcdn.net, routing content via location-aware servers, and leverages Facebook Network Appliances (FNAs) deployed across approximately 1,689 global nodes as of 2018 to edge-cache static assets closer to users. Proactive prefetching and jitter minimization in media routing further enhance CDN reliability for high-volume traffic. Performance optimizations span runtime environments and binary-level tweaks, with HHVM providing just-in-time compilation for PHP and Hack code to sustain web service throughput at scale. BOLT, an LLVM-based post-link optimizer, applies sample profiling to reorder binaries, yielding measurable speedups for server-side applications. Mobile optimizations include Hermes, a lightweight JavaScript engine reducing app startup times in React Native environments. Network-level enhancements, such as those discussed in 2023 engineering talks, target large-scale traffic routing to bolster overall system responsiveness. By 2025, infrastructure scaling incorporates AI-driven demands, with the 10X Backbone evolving connectivity topologies to support exponential compute growth without compromising core platform performance.
This layered approach—combining sharded storage, edge caching, and profiled optimizations—sustains daily operations for billions of users across Meta's ecosystem, including Instagram and WhatsApp.
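Mapping a content key to a stable edge-cache node is commonly done with consistent hashing, a standard CDN technique; the sketch below illustrates the idea under assumed node names (FBCDN's actual routing is not public in this form):

```python
# Consistent hashing sketch: keys map to edge nodes, and adding/removing a
# node only remaps a small fraction of keys. Node names are hypothetical.
import hashlib
from bisect import bisect


class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Place `vnodes` virtual points per node on a hash ring
        self.ring = []
        for node in nodes:
            for i in range(vnodes):
                h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
                self.ring.append((h, node))
        self.ring.sort()

    def node_for(self, key):
        # Walk clockwise from the key's hash to the next virtual point
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        idx = bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]


ring = ConsistentHashRing(["fna-eu-1", "fna-us-1", "fna-ap-1"])
# The same asset key always routes to the same edge node:
assert ring.node_for("photo/12345.jpg") == ring.node_for("photo/12345.jpg")
```

Virtual nodes smooth the key distribution across appliances, and because removing one node only reassigns the keys that hashed to its ring segments, cache churn stays low when the fleet changes.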

Core Features and Functionality

User Profiles, Timelines, and Personalization

Facebook user profiles serve as the central hub for individual accounts, enabling users to share personal information, photos, videos, and life events with selected audiences. Profiles include sections for basic details such as name, profile picture, cover photo, and an "About" area where users can list education, work history, interests, and relationship status, with visibility controls allowing customization of audience reach from public to friends-only. In 2021, Facebook introduced a refreshed profile layout for desktop users. In 2022, professional mode was added to personal profiles, allowing users to enable followers in addition to friends, with the follower count displayed prominently; no redesign for 2026 has been confirmed. Since its inception in 2004, profiles have enforced a real-name policy requiring users to register with the name they use in everyday life to represent their authentic identity, a rule intended to foster trust but criticized for endangering vulnerable groups like activists, domestic violence survivors, and LGBTQ+ individuals who fear real-name disclosure. The Timeline feature, rolled out in September 2011, restructured user profiles into a chronological record of posts, photos, and milestones dating back to account creation or earlier via manual entries for events like births or schools attended. This replaced the previous Wall-style profile format, allowing users to highlight key moments with a "Featured" section and curate visibility by editing, hiding, or deleting entries to shape the presented history. Users can manage Timeline content through tools like activity logs to review and adjust past posts, ensuring control over the digital autobiography displayed to visitors. Personalization of profiles and Timelines emphasizes user agency in privacy and presentation, with options to toggle professional mode for analytics and monetization tools on personal profiles, or adjust feed inputs to prioritize certain content types.
Privacy settings enable granular control, such as limiting who sees tagged photos or updates, while features like link history and activity logs support reviewing and refining personalized experiences. In 2022, Facebook introduced options for users to manually curate Timeline feeds by selecting "show more" or "show less" for specific friends or pages, aiming to enhance relevance amid algorithmic defaults. Additionally, the "Take a Break" feature allows users to reduce visibility of a specific friend's content in their feed and limit what that friend sees of theirs, without unfriending, blocking, or notifying the friend, maintaining the friendship while creating soft distance. These tools reflect ongoing efforts to balance platform-driven personalization with user-directed customization, though reliance on self-reported data and policy enforcement has drawn scrutiny for inconsistencies in application.

News Feed, Algorithm, and Content Ranking

The News Feed, introduced on September 5, 2006, aggregates updates from users' connections, groups, and pages into a personalized stream, fundamentally transforming Facebook from a static directory of profiles into a dynamic platform for real-time social interaction. Initially presented in reverse-chronological order, the feature faced user backlash for its perceived invasiveness in surfacing private activities without consent, prompting privacy adjustments but establishing it as central to user engagement. Over time, the Feed evolved to prioritize algorithmic curation over strict chronology to combat information overload, as the volume of potential posts grew exponentially with Facebook's user base surpassing 1 billion by 2012. Early ranking relied on EdgeRank, a simplified model weighting three factors: affinity (user-poster relationship strength, derived from interaction history), weight (content type and engagement potential, e.g., photos over text), and time decay (favoring recent posts exponentially). This model, publicly detailed around 2010, scored each "edge" (an interaction such as a like or comment) and summed the results—EdgeRank = Σ (affinity × weight × time decay)—surfacing higher-scoring content first, though Facebook later confirmed it as an approximation rather than the full system. By the mid-2010s, EdgeRank gave way to multilayer machine learning models processing thousands of signals, predicting engagement probabilities to filter the "inventory" of eligible posts down to a manageable ranked feed.
Contemporary ranking, as of 2025, operates in four stages per Meta's disclosures: (1) compilation of all potential content from followed sources and recommendations; (2) signals extraction, including over 1,000 variables like recency, poster-user ties, content format (e.g., video over links), and past interactions; (3) predictions via neural networks forecasting metrics such as click-through rates, shares, or dwell time; and (4) relevancy scoring to finalize order, demoting low-quality or spammy posts based on user feedback like hides or reports. Key factors emphasize relationships (stronger ties to friends and family boost visibility over pages), content type (Reels and original videos prioritized after 2022 adjustments to compete with TikTok), timeliness (decay halves within hours), and engagement quality (sustained comments over passive likes, with 2025 updates weighting saves and private shares higher than follower counts). Milestone changes reflect responses to engagement-driven issues, such as 2018's pivot to "meaningful interactions," reducing Page reach by favoring personal content amid well-being concerns, and 2022's video-centric overhaul increasing the distribution of algorithmic short-form video to 20–30% of feeds. These shifts, while boosting retention—evidenced by average session times rising to 30+ minutes daily—have drawn scrutiny for amplifying divisive content, as engagement maximization inherently favors emotionally charged material, per internal analyses leaked in 2021 showing algorithmic contributions to polarization. Meta counters that human moderators and demotion rules mitigate harms, with over 90% of violating content removed proactively via ML classifiers trained on billions of examples. Nonetheless, third-party studies attribute disproportionate visibility to rage-inducing posts, underscoring causal trade-offs in profit-oriented feed design.
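The classic EdgeRank weighting can be sketched in a few lines. The affinity and weight values below are illustrative placeholders, not Facebook's actual parameters, and the real modern system uses learned models rather than this closed form:

```python
# Toy EdgeRank: score a post as the sum over its edges (interactions)
# of affinity * weight * exponential time decay. Values are illustrative.

def edgerank_score(edges, now, half_life_hours=6.0):
    total = 0.0
    for e in edges:
        age_hours = (now - e["time"]) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)  # halves every half-life
        total += e["affinity"] * e["weight"] * decay
    return total


now = 1_000_000.0
# A fresh comment (high weight) vs. a day-old like (low weight, decayed):
fresh_comment = [{"affinity": 0.9, "weight": 2.0, "time": now}]
stale_like = [{"affinity": 0.9, "weight": 1.0, "time": now - 24 * 3600}]
assert edgerank_score(fresh_comment, now) > edgerank_score(stale_like, now)
```

The exponential decay term is why timeliness "halves within hours": with a six-hour half-life, a day-old interaction contributes only about 6% of its original score.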

Video Playback and Audio Controls

Facebook videos autoplay muted by default in web browsers such as Chrome, Firefox, and Edge. To unmute, users play the video and click the speaker icon, typically featuring a slash or line through it, located in the bottom-right corner of the video player. If the icon does not appear or sound remains absent, users can right-click the browser tab and select "Unmute site" from the context menu. Additionally, verifying system volume settings is recommended, such as ensuring the browser's volume is enabled in tools like Windows Volume Mixer. Automatic sound playback for Feed videos is unavailable in browsers and is supported only in the Facebook mobile app.

Messaging, Groups, and Community Tools

Facebook's messaging functionality originated with the launch of Facebook Chat on April 14, 2008, enabling real-time text-based communication integrated into the web platform for connected users. This feature initially supported one-on-one chats and was expanded in 2010 with improved mobile integration and threaded conversations. In August 2011, Facebook released dedicated iOS and Android apps under the name Messenger, initially as companions to the main app. By April 2014, Messenger became a standalone application, requiring separate downloads and logins, which facilitated the addition of advanced features such as voice calling in 2015, video calling later that year, and end-to-end encryption for select "secret" conversations introduced in 2016. As of 2025, Facebook Messenger reports approximately 1 billion monthly active users, with daily message volumes exceeding 100 billion, underscoring its role in personal and business communications including chatbots and payments in supported regions. Facebook Groups, first appearing in rudimentary form around mid-2005 as basic interest-based lists, evolved significantly with a major redesign launched on October 6, 2010. The updated system allowed any member to manage content, initiate group chats, edit collaborative wikis, and send bulk emails to members, shifting from admin-only control to distributed moderation. Privacy options range from public to closed and secret settings (later reorganized as public or private with visible or hidden discoverability), with tools for scheduling posts, polls, event integration, and file libraries. Posts in groups, whether anonymous or not, do not appear on the poster's personal profile, timeline, or activity log, and do not notify the poster's friends about the activity. Groups facilitate niche discussions, from hobbyist communities to professional networks, and by 2020 encompassed over 1.8 billion users worldwide, though exact current figures remain undisclosed by Meta.
Administrative features emphasize member engagement metrics, such as post reach and interaction rates, to prioritize active groups in algorithmic recommendations. Community tools on Facebook extend beyond direct messaging and groups to include Pages and Events, which support organized interaction and real-world coordination. Pages, introduced in November 2007, enable public entities, brands, and figures to cultivate follower-based communities with features like pinned posts, insights analytics, and advertising integration, distinct from personal profiles in lacking friend requests in favor of open follows. Events, launched in fall 2007, allow users and Pages to create virtual or in-person gatherings with RSVP tracking, guest lists, and co-hosting, integrating with Groups for targeted invitations and notifications. These tools collectively foster scalable community building, with Pages amassing billions of followers globally and Events facilitating coordination for protests, meetups, and conferences, though usage has declined amid platform shifts toward algorithmic feeds. Integration across these features, such as embedding Messenger chats in Groups or Pages, enhances retention by enabling seamless transitions between private discussions and public announcements.

Marketplace, Advertising, and E-Commerce Integration

Facebook Marketplace, launched on October 3, 2016, enables users to buy and sell items locally through a dedicated section integrated into the Facebook app and website, initially rolling out to users over 18 in the United States, the United Kingdom, Australia, and New Zealand. The platform emphasizes community-based transactions, allowing listings with photos, prices, and descriptions, while prohibiting certain categories like animals and weapons to mitigate risks associated with peer-to-peer sales. By 2025, Marketplace attracts an estimated 491 million monthly shoppers, representing about 16% of Facebook's user base, with over 1 billion monthly users engaging overall since Meta's last official figure in 2021. Advertising within Marketplace integrates directly with Meta's broader ad ecosystem, where businesses use Ads Manager to create and target promotions, including boosted listings that appear prominently in users' feeds and search results. Sellers can promote individual posts by setting budgets and selecting placements, leveraging Facebook's audience data for local targeting, which has driven Marketplace's projected annual revenue to $30 billion by 2024 through transaction facilitation and ad monetization. This model relies on algorithmic recommendations to match ads with user interests, though it faces scrutiny for enabling scams, with Meta reporting removal of millions of violating listings annually via automated detection and human review. E-commerce integration expanded with Facebook Shops in May 2020, allowing merchants to create customizable storefronts linked to product catalogs uploaded via integrations with platforms like Shopify, enabling browsing, tagging products in posts, and initially native checkout within the app. By 2025, Meta shifted away from in-app checkout for Shops on Facebook and Instagram, directing purchases to merchants' external websites to support custom branding, payment options, and loyalty programs, while retaining features like product syncing and ad-driven traffic.
Partnerships with e-commerce tools facilitate API-based catalog management, boosting sales through dynamic ads that retarget users based on browsing behavior across Facebook's properties. This evolution positions Marketplace and Shops as feeders into Meta's $164.5 billion annual advertising revenue in 2024, primarily from targeted e-commerce promotions, though effectiveness varies with platform algorithm changes and competition from dedicated marketplaces.

Business Model and Operations

Revenue Generation and Advertising Ecosystem

Meta Platforms, Inc., the parent company of Facebook, derives nearly all of its revenue from digital advertising across its family of apps, including Facebook, Instagram, and WhatsApp. In 2024, Meta reported total revenue of $164.50 billion, with advertising accounting for $160.63 billion, or approximately 97.6% of the total. This marked a 21.74% increase in ad revenue from $131.95 billion in 2023, driven by expanded AI-powered targeting and higher ad impressions. The Family of Apps segment, encompassing Facebook's core operations, generated $162.4 billion in revenue for the year, predominantly from ads displayed to its over 3 billion monthly active users. The advertising ecosystem operates through a real-time auction system that determines ad placement for each user impression. Advertisers bid on ad space using formats like cost-per-click or cost-per-thousand-impressions, with the auction evaluating three primary factors: the bid amount, the estimated action rate (likelihood of a user action such as a click or conversion), and ad quality (relevance and user feedback signals). The winning ad is the one that maximizes overall value to both users and advertisers, rather than solely the highest bid, which helps optimize for user experience and reduces costs for high-quality campaigns. This system processes billions of auctions daily across Facebook's feed, stories, and other placements. Targeting relies on extensive user data, including demographics, interests inferred from behavior, and cross-platform activity, enabling precise audience segmentation. Tools like Custom Audiences (using uploaded customer lists) and Lookalike Audiences (expanding reach to similar users) enhance efficiency, while AI models predict user responses to refine delivery. Advertisers access performance metrics via Meta's Ads Manager, allowing iterative optimization, though the system's opacity about exact algorithms has drawn scrutiny for potential biases in ad prioritization.
Within this ecosystem, content creators monetize primarily through in-stream ads in videos, advertisements on Reels, Stars—a virtual tipping system in which viewers purchase and send Stars to support creators—and sponsored content partnerships with brands, sharing in ad revenue to incentivize engaging content production. Non-ad revenue, such as from hardware sales in Reality Labs, remains marginal at under 3% of the total, underscoring advertising's dominance.
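The auction logic described above—winner by total value rather than raw bid—can be sketched as follows. The combining formula (bid × estimated action rate + quality) and all numbers are illustrative; Meta does not publish the exact weighting:

```python
# Toy ad auction: rank candidates by "total value" = bid * estimated
# action rate + quality, so a relevant low bid can beat a blunt high bid.
# The formula and all values are illustrative, not Meta's actual weighting.

def total_value(bid, est_action_rate, quality):
    return bid * est_action_rate + quality


candidates = [
    {"name": "high_bid_low_quality", "bid": 5.00, "ear": 0.01, "quality": 0.1},
    {"name": "low_bid_high_quality", "bid": 2.00, "ear": 0.04, "quality": 0.5},
]
winner = max(
    candidates,
    key=lambda a: total_value(a["bid"], a["ear"], a["quality"]),
)
# The relevant, well-received ad wins despite bidding less:
assert winner["name"] == "low_bid_high_quality"
```

This is why the text notes the system "helps optimize for user experience and reduces costs for high-quality campaigns": raising predicted relevance lets an advertiser win impressions without raising the bid.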

Acquisitions, Integrations, and Corporate Governance

Facebook, Inc., rebranded as Meta Platforms, Inc. in October 2021, has pursued an aggressive acquisition strategy to expand its ecosystem, acquiring over 90 companies since 2005, with a focus on photo sharing, messaging, virtual reality, and emerging technologies. Key deals include Instagram for $1 billion in April 2012, which bolstered photo-sharing capabilities; WhatsApp for $19 billion in February 2014, adding 450 million users to its messaging portfolio; and Oculus VR for $2 billion in March 2014, entering virtual reality hardware. More recent acquisitions encompass Giphy for $400 million in May 2020 to enhance GIF integration across platforms, and in 2025, a $14.8 billion stake in Scale AI for AI data labeling capabilities, alongside WaveForms for audio AI models.
Acquisition | Date | Value | Purpose
Instagram | April 2012 | $1 billion | Photo and video sharing expansion
WhatsApp | February 2014 | $19 billion | Cross-platform messaging
Oculus VR | March 2014 | $2 billion | Virtual reality hardware entry
Giphy | May 2020 | $400 million | Media content integration
Scale AI (49% stake) | June 2025 | $14.8 billion | AI training data access
Post-acquisition integrations have varied, prioritizing operational autonomy for user-facing products while leveraging shared infrastructure for advertising, data analytics, and AI. Instagram and WhatsApp retained independent apps and teams but adopted Facebook's ad systems and backend tools, enabling cross-promotion and unified monetization; for instance, Giphy's library was embedded directly into Instagram Stories and Messenger after 2020. Oculus evolved into Meta's Quest lineup, integrating social features from Facebook accounts for multiplayer experiences. Regulatory scrutiny, including EU mandates, has limited data sharing between WhatsApp and Facebook, preserving user privacy silos despite shared ownership. Corporate governance at Meta Platforms centers on founder control via a dual-class share structure established at its 2012 IPO, granting Mark Zuckerberg approximately 58–61% of voting power through Class B shares despite his holding about 14% of the economic interest. This setup classifies Meta as a "controlled company" under Nasdaq rules, exempting it from certain independence requirements for board committees. Zuckerberg serves as chairman and CEO, directing strategy with board support, including members like Sheryl Sandberg (COO until 2022) and independent directors focused on audit and compensation. Critics, including shareholder groups, argue this structure entrenches management and reduces accountability, prompting 2024–2025 proposals for reforms like time-based sunset clauses on super-voting shares, though Zuckerberg's control has blocked implementation.

User Base and Engagement

Global Reach and Growth Metrics

As of the second quarter of 2025, Facebook reported 3.07 billion monthly active users (MAUs) worldwide. This figure represents a year-over-year increase of approximately 3%, or roughly 100 million additional users from the prior year, though overall growth has stagnated compared to earlier decades. Daily active users (DAUs) for the platform hovered around 2.1 billion in late 2023, with subsequent quarterly reports indicating sustained engagement near this mark and a DAU/MAU ratio of roughly 65–70%, signaling consistent but not accelerating daily usage. Facebook's expansion traces a trajectory of exponential early growth followed by deceleration. Launched in 2004, the platform reached 100 million users by August 2008, surpassed 1 billion by September 2012, and climbed to 2.91 billion by 2020 before plateauing due to market saturation in mature regions and regulatory pressures on data practices. The following table summarizes key historical MAU milestones:
Year | MAUs (billions) | Year-over-Year Growth (%)
2008 | 0.10 | N/A
2012 | 1.00 | ~150
2016 | 1.86 | ~21
2020 | 2.91 | ~11
2023 | 3.00 | ~3
2025 | 3.07 | ~3
User distribution skews heavily toward emerging markets, with Asia-Pacific accounting for the largest share—over 50% of total MAUs—driven by high penetration in countries like India (378 million users) and Indonesia (119 million). North America contributes about 9.7% (roughly 221 million users, led by the United States with 194 million), while Europe and the remaining regions each represent around 20–25% of the base, reflecting slower adoption in privacy-conscious or saturated demographics. Growth persists in regions with rising Internet access, such as sub-Saharan Africa and Southeast Asia, offsetting declines or flatlines in the U.S. and Europe, where alternatives like Instagram (also Meta-owned) and TikTok capture younger cohorts. As of early 2025, Facebook reported approximately 3.07 billion MAUs worldwide, with 2.11 billion DAUs, representing a DAU-to-MAU ratio of about 68.7%. These figures reflect steady global penetration, with 54.3% of the world's Internet users accessing the platform monthly. Demographically, Facebook's user base skews toward adults rather than adolescents, with the largest age cohort being 25- to 34-year-olds, comprising 31.1% of users globally. Men constitute 56.7% of the global audience, compared to 43.3% women, though U.S. users show a reversal, with women at 53.8%. Geographically, India leads with the highest absolute number of users, followed by the United States, where penetration exceeds 82% of the population. In the U.S., usage is highest among 30- to 49-year-olds at 77%, declining among those under 30 as younger cohorts migrate to platforms like TikTok. Usage patterns indicate habitual engagement, with global users averaging 30 to 32 minutes per day on the platform, ranking it behind TikTok and Instagram but ahead of X (formerly Twitter) in time spent. In the U.S., 70% of adults report daily access, often via mobile devices, with 64% of users engaging via mobile per April 2024 data carrying into 2025 trends. Core activities include scrolling News Feed (primary in 80% of sessions), messaging via Messenger (194 million U.S. users), and Marketplace browsing, with ad-driven interactions peaking during evenings in high-density regions.
Retention trends show resilience among older users but erosion among youth, with overall DAU growth at 5.5% year-over-year as of mid-2025, down from prior peaks due to saturation in mature markets. Platform retention stands at 69.6%, higher than the 39.1% reported for some rival apps, driven by network effects and family connections that sustain logins among 55+ demographics (3.4% of the ad audience but loyal). However, churn accelerates among 18- to 24-year-olds, with only 23% representation, as algorithmic shifts and privacy concerns prompt moves to decentralized alternatives; MAU growth has stalled since 2021 in some analyses, stabilizing at 3 billion amid regulatory pressures. This bifurcated retention—strong for utility-focused adults, weaker for entertainment-seeking youth—underpins Meta's pivot toward AI-enhanced feeds to boost session stickiness.
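The DAU-to-MAU "stickiness" ratio cited in this section is simple arithmetic on the reported figures, as a quick check shows:

```python
# Stickiness = daily active users as a share of monthly active users.
def stickiness(dau_billions, mau_billions):
    return round(100 * dau_billions / mau_billions, 1)


# Figures cited above: 2.11 billion DAUs against 3.07 billion MAUs (early 2025)
ratio = stickiness(2.11, 3.07)
assert ratio == 68.7  # matches the ~68.7% DAU-to-MAU ratio in the text
```

A ratio near 69% means roughly seven in ten monthly users return on any given day, which is why the text describes engagement as habitual rather than occasional.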

Content Moderation and Policies

Evolution of Moderation Framework

Facebook's content moderation framework originated with basic user-reporting mechanisms and prohibitions against spam, nudity, and illegal activities shortly after its launch, relying primarily on automated filters and limited human review to manage a small user base. By the early 2010s, as membership surpassed 1 billion active users in 2012, the company formalized Community Standards, expanding rules to cover hate speech, harassment, and graphic violence, with enforcement scaling through partnerships with contractors for human moderation. The 2016 U.S. presidential election prompted a significant escalation, with Facebook acknowledging the platform's role in amplifying misinformation and announcing in December 2016 plans to hire 3,000 additional reviewers to address fake news and divisive content proactively. This led to the introduction of third-party fact-checking partnerships in April 2017 under the International Fact-Checking Network, enabling reduced distribution of flagged false content rather than outright removal, alongside algorithmic demotions for violating material. Enforcement metrics grew rapidly; by 2018, the platform removed over 2.5 million pieces of terrorist propaganda quarterly and invested in AI tools to detect 99% of ISIS-related content before user reports. In response to ongoing scandals, including the 2018 Cambridge Analytica data misuse revelation, Facebook established the Oversight Board in September 2019 as an independent entity funded by a trust but structurally separate, with operations commencing in late 2020 to hear appeals and adjudicate high-profile content removal decisions, aiming to inject external accountability into policy application. The board, comprising 20 global experts, has since reviewed cases involving political speech and hate content, overturning some Meta decisions while endorsing others, though critics noted its limited scope, handling fewer than 1% of appeals annually.
The COVID-19 pandemic accelerated reliance on proactive moderation, with policies updated in March 2020 to remove health misinformation deemed harmful by WHO partners, resulting in over 20 million pieces of violating content actioned monthly by mid-2020; human moderators numbered over 15,000 by 2021, supplemented by AI classifiers trained on billions of data points. However, reports of errors—estimated at 300,000 daily in 2020—highlighted scalability issues, prompting refinements like nuanced labeling over blanket bans. By 2024–2025, amid internal reviews and external pressures including U.S. political shifts, Meta pivoted toward reduced intervention, announcing on January 7, 2025, the termination of U.S. third-party fact-checking in favor of a Community Notes system modeled on X (formerly Twitter), prioritizing user-contributed context and algorithmic transparency to minimize over-removal while maintaining core prohibitions on violence and illegality. This framework evolution reflects a transition from reactive, user-driven reporting to hybrid AI-human proactive systems, quasi-independent oversight, and latterly a de-emphasis on viewpoint-based demotions, with quarterly transparency reports documenting that over 90% of removals are now AI-initiated.

Technologies, Human Review, and Enforcement Metrics

Meta employs machine learning-based systems to proactively detect content violating its Community Standards, analyzing text, images, videos, and user behavior patterns to flag or remove material before user reports. These systems achieve proactive action rates exceeding 90% across 12 of 13 policy areas, including spam, nudity, and terrorism-related content, by training on labeled datasets and iterating models for accuracy. For nuanced or high-risk cases, such as contextual hate speech or graphic violence, AI escalates content to review queues prioritized by severity and potential harm. In May 2025, Meta outlined plans to automate approximately 90% of risk assessment processes—covering privacy reviews, youth protections, and integrity evaluations—replacing human reviewers with advanced models to scale efficiency amid growing content volumes. Human review involves global teams applying discretionary judgment to AI-flagged items, informed by regional cultural contexts and guidelines, though exact staffing numbers remain undisclosed in 2025 reports, fueling transparency critiques. Historically, Meta maintained around 15,000 moderators as of 2024, handling millions of daily reviews in outsourced and in-house operations, but shifts toward AI augmentation have reduced human involvement in routine tasks. Moderators face reported challenges including exposure to traumatic content and inconsistent training, with some facilities employing 150 staff in specialized hubs as of April 2025. Quarterly Community Standards Enforcement Reports detail enforcement scale: in Q1 2025, actions decreased across categories like dangerous organizations due to policy refinements reducing over-enforcement, with U.S. mistake rates halved from Q4 2024 levels. By Q2 2025, weekly enforcement errors had dropped over 75% since January, reflecting AI improvements and deprioritization of low-severity violations; proactive detection dominated high-priority areas, yielding over 2 million child exploitation reports to NCMEC.
Violation prevalence remained low, with upper bounds of 0.05% of views for terrorism-related content and 0.07–0.09% for violent and graphic content on Facebook, though slight upticks occurred from measurement adjustments and reduced interventions. Spam and fake accounts constituted the bulk of actions, underscoring AI's efficacy in volume-based categories over subjective ones such as hate speech.
| Category | Q1 2025 Proactive Focus | Q2 2025 Key Metric |
| --- | --- | --- |
| Child Exploitation | High-severity priority | >2M NCMEC reports |
| Violent/Graphic Content | Escalated for context | Prevalence ~0.09% of views |
| Spam/Fake Accounts | Dominant enforcement volume | Adjustments increased Instagram actions |
| Enforcement Errors (U.S.) | ~50% reduction | >75% weekly drop since January |

Policy Shifts Toward Reduced Intervention (2024–2025)

In January 2025, Meta announced a series of policy changes aimed at reducing proactive content interventions on Facebook, Instagram, and Threads, emphasizing free expression over prior moderation frameworks. CEO Mark Zuckerberg stated that the company would end its third-party fact-checking program, which had involved partnerships with external organizations to label or demote content deemed misleading, and replace it with a user-driven "Community Notes" system modeled after X's approach. This shift eliminated fact-checker-imposed visibility reductions and labels, which Zuckerberg described as forms of censorship that prioritized expert judgments often influenced by political biases. The changes also included simplifying enforcement policies to minimize errors in content removals, such as erroneous takedowns of legitimate speech, and reducing the overall volume of proactive moderation actions. Meta reported that between January and March 2025, it removed 3.4 million pieces of content for hateful conduct—a decline from prior quarters—while noting fewer enforcement mistakes overall. These adjustments aligned with recommendations from free speech advocates, including the Foundation for Individual Rights and Expression (FIRE), which had critiqued Meta's prior rules for overreach in viewpoint discrimination. Zuckerberg attributed the pivot to lessons from government pressures during the Biden administration, including reported demands to censor content on COVID-19 and elections, which he later acknowledged as oversteps. Implementation led to measurable reductions in intervention rates but also prompted concerns about rising harmful content. Meta's May 2025 transparency report indicated slight increases in reported bullying, harassment, and graphic material, though the company argued these did not broadly undermine platform safety and reflected an accepted trade-off for broader speech protections. Critics, including Meta's independent Oversight Board, faulted the rollout as hasty and insufficiently assessed for risks, potentially exacerbating misinformation in a post-2024 U.S. election environment. Proponents, such as U.S.
House Judiciary Committee Chair Jim Jordan, praised the moves as correcting long-standing biases aligned with left-leaning institutional pressures. Empirical data from the period showed no surge in viral hoaxes attributable to the policy, though third-party analyses questioned the neutrality of Community Notes given user demographics skewed toward established viewpoints.

Data Practices and Privacy

Data Collection, Usage, and User Controls

Facebook collects extensive user data directly from platform interactions, such as posts, comments, likes, shares, and messages, as well as device and network information including IP addresses, location data, browser types, and operating systems. Additional data sources encompass third-party integrations, such as advertiser-shared information and off-platform activity tracked via the Facebook Pixel and cookies embedded on over 30% of the top million websites, enabling inference of browsing habits even for non-logged-in users or those without accounts. Metadata from photos, videos, and connections (e.g., friend lists, group memberships) further supplements this, with collection occurring continuously to build comprehensive profiles for ad targeting. This data is primarily used to personalize user experiences, such as curating the News Feed and recommendations on Facebook and integrated platforms like Instagram, while also powering targeted advertising, which relies on behavioral signals to match ads to inferred interests, demographics, and purchase intents. For instance, interactions like viewing products or engaging with pages inform ad delivery, with Meta's systems analyzing patterns to optimize relevance and measure effectiveness through metrics like click-through rates. Secondary uses include safety enforcement (e.g., detecting spam via pattern recognition), internal analytics for product improvement, and research initiatives, such as aggregating anonymized data for public health studies; however, advertising remains the dominant application, as evidenced by Meta's reliance on user profiling to sustain its ad auction model. Starting December 16, 2025, data from users' interactions with Meta AI will also inform personalization of features and ads. Users retain several controls to manage these practices, accessible via the Privacy Center, including granular settings for post visibility (e.g., friends-only or custom audiences) and profile information exposure.
The "Off-Facebook Activity" tool allows viewing collected from external sites and apps, with options to disconnect future sharing or clear historical logs, though this does not retroactively erase already processed for ads. access features enable downloading a portable copy of personal information, including posts, messages, and ad interactions, via the "Your Facebook Information" section, while ad preferences settings permit hiding specific categories or opting out of certain targeting based on partners' . Account deactivation temporarily hides the profile from view and suspends most account activity and processing, but the underlying data remains stored on servers, and the account can be reactivated by logging back in. Permanent deletion, in contrast, initiates a process to remove the account and associated data after a 30-day grace period during which the deletion can be canceled, though copies may persist in backups or for legal compliance following the grace period. Despite these mechanisms, independent analyses indicate persistent tracking challenges, as signals like IP addresses and device fingerprints can still link activities across sessions, limiting full evasion without broader measures like browser extensions.

Major Breaches, Shadow Profiles, and Incident Responses

In September 2018, a security vulnerability in Facebook's "View As" feature was exploited, allowing hackers to obtain access tokens for up to 50 million user accounts, potentially enabling control over those accounts and further data extraction; the company invalidated the tokens, reset logins for 90 million affected users, and found no evidence of broader misuse in its investigation. In April 2019, data from 540 million user records was exposed through unsecured databases maintained by the third-party apps Cultura Colectiva and At the Pool, including comments, likes, and account names, stemming from lax oversight of app-stored data; Facebook worked with the developers to delete the databases and notified affected users where possible. The most significant incident occurred in 2019, when a vulnerability in Facebook's contact importer tool—patched in August 2019—allowed scraping of data from 533 million users, including phone numbers, full names, locations, and birthdates, which was then posted on a hacking forum in April 2021; this stemmed from features designed to help users find contacts but lacked sufficient safeguards against bulk extraction. Facebook maintains shadow profiles—collections of data on individuals without accounts—by aggregating information from users' uploaded contacts, hashed identifiers, device signals, and third-party sources, which can include photos, email addresses, and phone numbers not explicitly provided by the subject; this practice, intended to enhance friend suggestions and ad targeting, has persisted despite concerns raised since at least 2011. During 2018 congressional hearings following the Cambridge Analytica scandal, CEO Mark Zuckerberg acknowledged that shadow profiles exist for non-users, derived from data shared by connected users, but emphasized users' control over their own data without addressing recourse for non-users. Incidents involving shadow profiles include a 2013 experiment in which Facebook deanonymized non-users via hashed contact data, and ongoing revelations that such profiles fuel advertising inferences even for opted-out individuals, highlighting causal links between user-growth incentives and unintended non-user profiling.
Facebook's responses to breaches have typically involved rapid technical fixes, such as patching vulnerabilities and invalidating compromised tokens, coupled with notifications to regulators and affected parties under laws like the GDPR; however, critics note delays in public disclosure and a defensive posture, as in the 2021 scraping incident, where the company argued the data was "old" and that no action like password resets was needed, prioritizing takedown requests over proactive user alerts. On shadow profiles, Facebook has introduced tools like "Off-Facebook Activity" to show data received from partners and allow limited deletions, but has not eliminated the underlying collection, citing benefits for platform functionality; regulatory scrutiny, including fines, has prompted partial restrictions on contact uploads, though empirical evidence of reduced shadow-profile growth remains limited. Overall, incident handling has emphasized engineering solutions over systemic redesigns, with post-breach audits revealing persistent risks from legacy features designed for growth over containment.

Regulatory Compliance and Policy Evolutions

Facebook has faced extensive regulatory scrutiny globally, particularly regarding data privacy, antitrust practices, and obligations under frameworks such as the European Union's General Data Protection Regulation (GDPR), enacted in 2018, and the U.S. Federal Trade Commission's (FTC) enforcement actions. Compliance efforts intensified following high-profile incidents, including the 2018 Cambridge Analytica scandal, which prompted Meta (Facebook's parent) to overhaul internal governance, establishing a dedicated privacy committee and enhancing user data controls. In the U.S., the 2019 FTC settlement imposed a $5 billion penalty—the largest ever for privacy violations—and mandated structural reforms, such as independent privacy audits and restrictions on facial recognition data use without affirmative consent. In the EU, Meta has incurred cumulative fines exceeding €3 billion by late 2024, reflecting repeated violations in data transfers, security breaches, and personalized-advertising consent mechanisms. Notable enforcement includes a €1.2 billion fine in May 2023 for unlawful EU–U.S. data transfers relying on standard contractual clauses invalidated by the Schrems II ruling, leading Meta to suspend transatlantic data flows temporarily and pivot to the EU–U.S. Data Privacy Framework adopted in July 2023 for adequacy. Additional penalties encompassed €414 million in January 2023 for breaching GDPR's consent rules in ad targeting and €251 million in December 2024 for failures in securing email addresses and phone numbers exposed in a 2018 breach affecting 29 million users. These actions compelled policy shifts, including granular consent toggles for data processing and the introduction of "Off-Facebook Activity" tools allowing users to disconnect external data sources. Under the EU's Digital Services Act (DSA), effective from 2024, and Digital Markets Act (DMA), Meta was designated a very large online platform, imposing obligations for transparency in algorithmic recommendations, risk assessments for systemic harms, and interoperability with rivals.
Non-compliance yielded a €200 million DMA fine in April 2025 for violating rules on combining personal data across Facebook and Instagram, announced alongside a separate penalty for Apple, prompting Meta to adjust its "pay or consent" model for ad-free subscriptions to align with consent requirements. Antitrust probes evolved similarly; by October 2025, Meta neared settlements with the European Commission on two DMA-related cases to avert escalating fines, following commitments to open up access for advertisers and competitors. In the U.S., ongoing FTC antitrust suits, including a 2020 monopoly-maintenance case, saw Meta contest evidence handling in October 2025, while state-level actions under laws like the California Consumer Privacy Act (CCPA) drove enhancements in opt-out mechanisms and deletion requests. Policy evolutions from 2020 to 2025 emphasized reactive adaptations, such as the 2022 Privacy Policy rewrite for clarity on data collection and sharing, and a January 2025 Terms of Service update expanding Meta's rights to use content for AI training while mandating compliance with emerging privacy laws in eight jurisdictions. These changes, often litigated—Meta has challenged several GDPR fines in Irish and EU courts—reflect a pattern of minimal voluntary overhauls until penalized, with empirical audits showing persistent gaps in enforcement efficacy despite billions invested in compliance infrastructure.

Political Influence and Manipulation Claims

Allegations of Election Interference and Foreign Operations

In 2016, Russian operatives affiliated with the Internet Research Agency (IRA) purchased approximately 3,500 advertisements on Facebook, spending about $100,000, which generated content viewed by an estimated 10 million users, though the company later revised the potential reach to up to 126 million impressions across posts from IRA-linked accounts and pages. These efforts, detailed in congressional testimonies and the Mueller Report, involved creating divisive content on topics like immigration and race to sow discord, but empirical analyses, such as one by economists Hunt Allcott and Matthew Gentzkow, found that fake news shared on social media influenced only a small fraction—around 0.04 percentage points—of the vote margin in key states, suggesting limited causal impact on the election outcome. Facebook responded by enhancing ad transparency requirements and sharing data with investigators, though critics from both parties alleged the platform's algorithms amplified polarizing content without sufficient early detection. Allegations extended beyond Russia, with claims of Iranian influence operations using Facebook to promote anti-American narratives during the same cycle, though on a smaller scale than Russian efforts. In response to such foreign activities, Facebook (later Meta) has dismantled numerous coordinated inauthentic behavior networks; for instance, between 2017 and 2020, it removed operations originating from Russia and Iran targeting U.S. audiences, including 70 Facebook pages and 65 accounts linked to the Russian IRA in 2018. By 2022, Meta took down networks from Russia and China promoting state interests through fake accounts, and in 2023, it removed nearly 9,000 accounts tied to a Chinese "Spamouflage" campaign amplifying propaganda on global issues. These removals, often proactive via AI and human review, numbered in the dozens annually, with Russia and Iran consistently ranking as primary sources of such operations per Meta's transparency reports. For the 2020 U.S.
election, allegations focused less on foreign actors exploiting the platform—though Meta continued removals—and more on domestic misinformation, including claims of voter fraud that persisted post-election. Meta CEO Mark Zuckerberg stated in November 2020 that the company had built systems to detect and limit interference, labeling thousands of posts and removing content violating its policies, while a 2020 internal report highlighted improvements in combating false claims about voting processes. However, in 2024, Zuckerberg acknowledged pressure from the Biden administration to censor COVID-19-related content, some of which intersected with election narratives, raising questions about external influence on platform decisions. By 2023, Meta rolled back restrictions, permitting political ads to reference unproven claims that the 2020 election was stolen, a shift from earlier suppression policies that critics argued disproportionately targeted conservative viewpoints without equivalent action against left-leaning misinformation. Internationally, foreign operations have targeted elections in multiple countries via Facebook; for example, Russian-linked networks influenced discourse in Ukraine prior to the 2016 U.S. events, and Chinese campaigns have aimed at democracies such as Taiwan and the United States. Meta's enforcement has scaled accordingly, removing three foreign influence operations in Q3 2023 alone—two Chinese and one Russian—demonstrating ongoing mitigation efforts amid persistent vulnerabilities in open platforms. While allegations of systemic interference by Facebook itself lack direct evidence, the platform's scale has made it a vector for exploitation, prompting debates over algorithmic amplification versus user-driven virality as the primary causal factor.

Bias in Moderation and Viewpoint Discrimination Debates

Debates over bias in Facebook's content moderation have centered on allegations of systematic viewpoint discrimination against conservative and right-leaning perspectives, with critics citing specific instances of suppression and internal inconsistencies in enforcement. In October 2020, Facebook limited the distribution of a New York Post article detailing contents from Hunter Biden's laptop, citing concerns over hacked materials and potential misinformation, an action later scrutinized in congressional investigations as contributing to election-related censorship. Mark Zuckerberg acknowledged in a 2024 letter to the House Judiciary Committee that the platform had erred by overly restricting such content based on FBI warnings about foreign interference, though he maintained the decision was precautionary rather than politically motivated. Further evidence emerged from leaked internal documents, including the Facebook Papers released in 2021, which revealed that company executives prioritized avoiding perceptions of conservative bias while grappling with algorithmic amplification of polarizing content, often leading to uneven application of rules favoring left-leaning narratives on issues like COVID-19 origins and election integrity. Whistleblower Frances Haugen's 2021 testimony highlighted internal research showing Facebook's failure to consistently curb misinformation from all ideological sides, but subsequent analyses of her disclosures pointed to disproportionate scrutiny of right-wing claims. These revelations fueled claims that moderation teams, influenced by a predominantly left-leaning internal culture, applied "misinformation" and "hate speech" labels more readily to conservative posts, as evidenced by disparities in removal rates for similar content across political spectrums. Empirical studies have yielded mixed findings, with some, like a 2021 NYU Stern report, asserting no bias against conservatives and even suggesting amplification of right-wing voices, though critics noted the study's reliance on platform-provided data potentially masking enforcement biases.
Conversely, user surveys indicate widespread perception of censorship: 73% of Americans in a 2020 Pew Research poll believed social media sites intentionally suppress political viewpoints they deem objectionable, a view critics saw substantiated by the post-January 6, 2021, suspensions of former President Trump's accounts under vague "incitement" policies not equally applied to analogous left-leaning rhetoric. In response to ongoing scrutiny, Meta announced in January 2025 the discontinuation of its third-party fact-checking programs—criticized by Zuckerberg as ideologically skewed—and the adoption of a Community Notes model to reduce top-down intervention and mitigate perceived biases. Congressional hearings, including those by the House Judiciary Committee, have documented communications between Facebook and the Biden administration pressuring content demotion on COVID-19 topics, with Zuckerberg expressing regret in 2024 for yielding to such influence, underscoring causal links between external political demands and moderation decisions. These episodes highlight broader tensions: some enforcement metrics show higher suspension rates for conservative-leaning accounts, yet debates persist over whether this reflects genuine rule-breaking at higher frequencies or discriminatory enforcement. Overall, while Facebook maintains its policies aim for neutrality, accumulated evidence from leaks, admissions, and policy reversals has been cited to substantiate claims of viewpoint discrimination favoring progressive viewpoints, prompting shifts toward less interventionist approaches by 2025.

International Cases: Propaganda and Geopolitical Tensions

In Myanmar, Facebook's algorithms amplified anti-Rohingya hate speech prior to and during the 2017 military crackdown, contributing to ethnic violence that displaced over 700,000 Rohingya Muslims. A 2022 Amnesty International report detailed how the platform's recommendation systems prioritized inflammatory content from military-affiliated accounts, with internal Facebook documents revealing awareness of the risks but inadequate Burmese-language moderation resources—only about 200 content reviewers for a population of 50 million users. United Nations investigators described the platform as a "useful instrument" in what the UN termed a textbook example of ethnic cleansing, while Facebook acknowledged in 2018 that the site had been used to incite offline violence, leading to the removal of over 20 million posts between 2018 and 2021. Rohingya victims filed lawsuits in 2021 seeking $150 billion in damages, alleging Meta's profit-driven expansion exacerbated the crisis despite warnings from rights groups. India's government exerted significant pressure on Facebook to relax enforcement against propaganda and hate speech favoring the ruling Bharatiya Janata Party (BJP), particularly during election periods, amid rising communal tensions. Leaked internal documents from 2021 showed Facebook identified coordinated operations praising Indian military actions against Pakistan but hesitated to act for fear of regulatory backlash, including potential bans similar to those imposed on rivals like TikTok. In 2024, Meta approved AI-generated political ads on Facebook and Instagram that incited violence and spread disinformation about opposition leaders, violating its own policies, as verified by fact-checkers who flagged over 100 such instances. This deference contributed to the proliferation of anti-Muslim narratives, with one study estimating that junk news comprised 20–30% of election-related content shared on the platform, heightening friction between India's Hindu-nationalist policies and minority protections.
Russian state-linked networks exploited Facebook to undermine support for Ukraine during the 2022 invasion and beyond, evading Meta's bans through spoofed pages and ads that reached millions despite U.S. and EU sanctions prohibiting Kremlin-linked business. A 2025 report identified operations like "Doppelganger," which purchased over 10,000 ads promoting narratives of Ukrainian corruption and aggression, generating 200 million impressions across Europe and the U.S. before detection. Meta disrupted hundreds of such clusters in 2024 alone, removing accounts mimicking news outlets to sow division on topics from Ukraine to the Gaza conflict, though critics noted persistent gaps in AI detection for non-English content. These efforts intensified geopolitical tensions by amplifying disinformation, with one analysis linking platform exposure to shifted public opinion in swing regions. In Ethiopia, Facebook's shortcomings fueled ethnic hatred during the 2020–2022 Tigray conflict, where incitement campaigns contributed to violence that killed thousands and displaced millions. Reports documented the platform's failure to curb Amhara–Tigray hate speech, with algorithms boosting viral posts from militias despite user flags, leading to real-world attacks on civilians. Meta's limited local moderation—fewer than 100 moderators for 120 million users—exacerbated the issue, prompting calls for reparations akin to the Myanmar case and highlighting broader tensions in moderating the internal propaganda of authoritarian-leaning regimes.

Societal and Economic Impacts

Economic Contributions and Job Creation Effects

Meta Platforms, Inc., the parent company of Facebook, generated $164.5 billion in revenue in 2024, primarily from advertising, contributing significantly to the global technology sector's economic output. This revenue stream, exceeding the GDP of 136 countries, underscores Facebook's role in digital markets, where it captures a dominant share through targeted ad placements on its platforms. Facebook's advertising ecosystem supports over 200 million businesses worldwide, with approximately 3 million actively purchasing ads, enabling these entities to reach targeted audiences and drive sales growth. Meta's internal research attributes more than $360 billion in annual global business spend to its platforms, fostering revenue generation for small and medium enterprises that leverage Facebook for customer acquisition and expansion. These tools have been linked to enhanced return on investment for advertisers, with 40% of businesses reporting the highest returns from Facebook ads compared to other channels. Direct employment at Meta stood at 74,067 full-time employees as of 2024, spanning engineering, sales, and operations across global offices. Indirect job-creation effects are substantially larger: Meta's 2024 analysis estimates that platform-dependent supply chains in the United States generated $548 billion in economic activity and supported 3.4 million jobs, including roles in agencies, app development, and commerce facilitated by Facebook. Similar patterns appear internationally, with personalized ads on Facebook and Instagram associated with €213 billion in European economic value and 1.44 million jobs in 2024.
| Region | Economic Activity Linked (2024) | Jobs Supported |
| --- | --- | --- |
| United States | $548 billion | 3.4 million |
| Europe | €213 billion | 1.44 million |
These figures derive from Meta-commissioned studies modeling dependencies and multipliers, though independent verification remains limited; causal attribution to Facebook specifically requires isolating platform effects from broader digital-economy trends. Nonetheless, empirical tracking of ad-driven business expansions indicates positive network externalities, where increased platform usage amplifies economic spillovers through developer ecosystems and third-party services.

Social Connectivity and Positive Network Externalities

Facebook's platform facilitates social connectivity by enabling users to maintain and expand personal networks, leveraging network externalities whereby the platform's value escalates as more individuals join, creating a self-reinforcing cycle of participation. Empirical analyses confirm that the value of the service rises with user scale, as each additional participant enhances opportunities for interaction and information exchange across diverse groups. This dynamic underpins Facebook's dominance, with approximately 79% of users engaging the platform multiple times daily, fostering sustained connectivity. Studies demonstrate that Facebook usage correlates positively with the formation and maintenance of social capital, particularly bridging ties that connect disparate social circles. For instance, among college students, intensive Facebook engagement predicts higher levels of both bonding social capital—strengthening close relationships—and bridging social capital—expanding weak ties for broader support networks. This effect manifests in reduced loneliness through mechanisms like status updates, where increased posting over a week directly lowers perceived isolation by reinforcing relational bonds. In long-distance contexts, Facebook serves as a critical tool for relational upkeep, allowing couples and acquaintances to share updates, assess mutual sentiments, and sustain closeness despite physical separation. Research on geographically distant romantic relationships highlights how platform features enable ongoing partner awareness and communication, mitigating the decay of ties that distance often induces. Network externalities amplify these benefits, as users' incentives to remain active grow with the density of their connections, evidenced by structural models showing persistent usage driven by perceived relational value. Beyond interpersonal links, these externalities extend to collective outcomes, such as improved labor-market outcomes via expanded professional networks formed on the platform.
One analysis estimates that prolonged college-era Facebook access boosts cohort earnings by 0.62 percentiles on average, attributing the gains to information diffusion and opportunity matching through social ties. Overall, empirical syntheses affirm a consistent positive association between Facebook engagement and micro-level social capital, underscoring the platform's role in augmenting societal interconnectivity without supplanting offline interactions.

Mental Health, Addiction, and Empirical Causality Assessments

Numerous empirical studies have identified associations between Facebook usage and adverse mental health outcomes, including increased symptoms of depression, anxiety, and diminished subjective well-being, particularly among adolescents and young adults. For instance, a 2022 study analyzing college students found a significant correlation between the introduction of Facebook on campuses and rises in anxiety and depression rates, with effect sizes indicating a modest but detectable impact. Meta-analyses similarly report small positive correlations between social media engagement, including platforms like Facebook, and depressive symptoms (r ≈ 0.23), anxiety (r ≈ 0.10), and social comparison tendencies (r ≈ 0.33). These associations are often stronger for problematic or excessive use, defined by metrics such as time spent or compulsive checking, which correlate with heightened loneliness and fear of missing out (r ≈ 0.31 for anxiety). Facebook's addictive design features, such as variable reward schedules from notifications and likes, contribute to habitual checking behaviors akin to behavioral addictions, with self-reported addiction scales showing links to real-world functional impairments like reduced productivity. Internal Facebook research from 2019–2021, leaked via whistleblower Frances Haugen, revealed that Instagram—a Meta platform closely integrated with Facebook—exacerbated body image issues for approximately one in three teen girls, with 32% reporting worsened perceptions after exposure to idealized content. However, the same studies noted that for the majority of affected teens, Instagram had neutral or positive effects on body image, challenging blanket harm narratives. Addiction metrics from these internals indicated teens spending up to 3 hours daily, with algorithms prioritizing engagement over well-being, though causal pathways remain inferred from usage logs rather than controlled trials. 
Assessing causality requires distinguishing correlation from causation, a challenge due to confounders like preexisting mental health vulnerabilities driving heavier use (reverse causality) and bidirectional influences. Experimental evidence, such as randomized deactivations, yields mixed results: a 2019 study of 2,800+ users found quitting Facebook for a month improved well-being slightly (e.g., +0.06 standard deviations in life satisfaction), but effects were short-term and small, comparable to other low-impact interventions. Conversely, meta-analyses of abstinence interventions report no significant changes in affect or satisfaction, suggesting displacement activities or selection biases inflate perceived benefits. Critics like Orben and Przybylski highlight that effect sizes are minuscule—akin to minor dietary factors—and often fail to survive rigorous specification-curve analyses accounting for measurement variance. Longitudinal and quasi-experimental designs, including difference-in-differences around platform rollouts, provide stronger causal evidence but reveal heterogeneity: harms appear more pronounced for vulnerable subgroups (e.g., girls with body image concerns) than population averages, with no universal decline attributable to Facebook alone. Internal Meta data, while proprietary and potentially biased toward self-justification, corroborates targeted risks like algorithm-driven exposure to harmful content, yet external replications struggle with endogeneity. Overall, while empirical data supports probabilistic risks from heavy, unmoderated use, claims of deterministic causality overstate the evidence, as null or positive findings in balanced reviews underscore the role of individual agency and usage patterns over inherent platform toxicity.

Cultural Shifts, Innovation Enablement, and Long-Term Influence

Facebook facilitated a transition from private, one-to-one communication—such as email and phone calls—to public broadcasting of personal updates via status feeds and photo tagging, fundamentally altering social-interaction norms by emphasizing casual, network-wide sharing over formal replies. This shift normalized oversharing, where users routinely disclose intimate details like family milestones or emotional states, creating digital legacies that persist beyond individual lifetimes, as evidenced by adolescent users viewing profiles as enduring diaries. Empirical studies indicate cross-cultural variations in these practices, with American users more likely to emphasize facial images in posts compared to East Asians, reflecting underlying norms of individualism versus collectivism that platforms like Facebook amplify rather than originate. The platform enabled reconnections with geographically dispersed contacts, such as childhood acquaintances or estranged relatives, addressing modern fragmentation from mobility while fostering support networks for major life events. However, this has contributed to evolving privacy expectations, where initial enthusiasm for visibility often collides with later concerns over data exposure, though users persist due to perceived social benefits outweighing risks in subjective assessments. On May 24, 2007, Facebook launched its developer platform, opening APIs (later including the Graph API) to third-party creators and enabling the integration of social applications directly into user feeds, which spurred innovations in gaming and content sharing. This ecosystem attracted over 95,000 applications by 2010, including viral hits that leveraged network effects for rapid scaling, and supported initiatives like the fbFund, which granted funds to developers from 2007 to 2009 to build platform-dependent businesses. Such tools democratized app development, allowing non-Facebook entities to innovate within its infrastructure, though dependency on platform policies later constrained autonomy for some creators.
Over two decades, Facebook's introduction of features like the News Feed in 2006 and photo tagging set precedents for algorithmic curation and visual storytelling across social media, influencing successors such as Instagram and TikTok in prioritizing real-time engagement over chronological posting. Its scale—reaching 1 billion monthly active users by September 2012 and 2.11 billion daily users by late 2023—has entrenched data-driven personalization as a core paradigm, enabling economic models in which targeted advertising fuels revenues exceeding $40 billion quarterly by 2023, though this has reshaped media ecosystems by diverting traffic from traditional publishers. Long-term, these dynamics have accelerated cultural diffusion through global trends such as viral challenges, while empirical analyses suggest platforms reinforce rather than unilaterally cause norm shifts, with usage patterns adapting to pre-existing cultural contexts like high-context versus low-context communication styles.
