Case study: Facebook–Cambridge Analytica data breach scandal

Cambridge Analytica was a political data analytics, marketing, and consulting firm based in London, UK, accused of illegally obtaining Facebook data and using it to influence a variety of political campaigns. These campaigns include those of American Senator Ted Cruz and, to an extent, Donald Trump, as well as the Leave.EU Brexit campaign, which contributed to the UK’s withdrawal from the EU. The Facebook–Cambridge Analytica data scandal broke in 2018, when it emerged that Cambridge Analytica had collected the private data of millions of Facebook profiles without the users’ permission and used it for political advertising. It was described as a watershed moment in the public understanding of personal data, prompting a seventeen (17) per cent drop in Facebook’s share price and calls for stricter laws governing tech companies’ use of private data.

Background Information

Fotis International Law Firm aims to provide our readers with a brief overview of the Facebook data breach. In 2014, a large number of people took a survey that collected not only each user’s personally identifiable information but also the data of the user’s Facebook friends, data that ultimately ended up with the company that worked on Donald Trump’s 2016 campaign. This is where Cambridge Analytica (CA) entered the picture, partnering with Aleksandr Kogan, a UK research academic who was using Facebook for research purposes. Kogan’s survey, which appeared innocuous and included over 100 personality statements with which surveyees could agree or disagree, was sent to around three lakh (300,000) Americans.

But there was a catch: to take the survey, surveyees had to log in to or sign up for Facebook, giving Kogan access to each user’s profile, birth date, and location. Kogan created a psychometric model, which is similar to a personality profile, by combining the survey results with the user’s Facebook data. The data were then combined with voter records and sent to CA by Kogan. CA claimed that the results of this survey, combined with the personal traits of various users and its models, were crucial in profiling a user’s neuroticism and other susceptible traits.

In only a few months, two lakh twenty thousand (220,000) people took part in Kogan’s survey, and data from up to 87 million Facebook user profiles were harvested, accounting for nearly a quarter of all Facebook users in the United States. The goal was to use the data to target users with political messaging that would aid Trump’s campaign strategy, though the campaign denied this. Even though Kogan’s work was for academic research, he shared the resulting data with CA, which is against Facebook’s policy. In response to the violation, Facebook CEO Mark Zuckerberg stated that it was not a data breach, because no passwords were stolen and no systems were infiltrated, but rather a breach of the terms of service between Facebook and its users. The US Federal Trade Commission then took up the investigation.

Facebook Data Breach

CA’s illegitimate procurement of personally identifiable data was first revealed in December 2015 by Harry Davies, a Guardian journalist. According to Davies, CA was working for US Senator Ted Cruz and had obtained data from millions of Facebook accounts without the account holders’ permission. Facebook declined to comment on the story other than to say that it was looking into it. The scandal finally blew up in March 2018 when Christopher Wylie, an ex-CA employee, came forward as a whistleblower. Wylie had been an unnamed source for Carole Cadwalladr’s 2017 article “The Great British Brexit Robbery”. That report was well received, but it was also met with scepticism in some quarters, prompting sceptical responses in publications such as The New York Times. In March 2018, the news organisations released their stories concurrently, causing a massive public outcry that wiped more than $100 billion off Facebook’s market capitalisation in a matter of days. Lawmakers in the US and the UK demanded answers from Facebook CEO Mark Zuckerberg, and following the scandal he agreed to testify before the US Congress.

Summary of the Case

CA’s parent company, Strategic Communication Laboratories (SCL) Group, was a private British behavioural research and strategic communication corporation. In the US and other countries, SCL sparked public outrage by obtaining data through data mining and data analysis of Facebook users. It did so with the help of a university researcher, Aleksandr Kogan, who was tasked with developing an app called “This is your digital life” and with creating a survey of the behavioural patterns of the Facebook users whose data he had obtained. The data were used for electoral/political purposes without the approval of Facebook or its users, since they were detailed enough to build profiles implying which type of advertisement would be most effective in influencing each person. Based on the findings, messaging could be carefully targeted at key audience segments to change behaviour in line with SCL’s clients’ objectives, resulting in a breach of trust between Facebook and its users.

Legal Implications

As a result, the Facebook CEO was questioned, and the stock price dropped by seventeen (17) per cent. He was also asked to enforce strict regulations on the protection of users’ data. Users were afterwards told that the access they had granted to various applications had been revoked and could be reviewed in their settings, and that audit trails would support breach investigations. Meanwhile, Facebook vowed to create a tool that would allow users to delete all of their Facebook web search data. CA, for its part, maintained that it had been the subject of multiple baseless allegations in the preceding years and that, despite its efforts to correct the record, it had been chastised for activities that are not only legal but also generally acknowledged as a routine component of online advertising in both the political and commercial sectors.

Julian Malins, a third-party auditor, was appointed by CA to look into the allegations of wrongdoing. According to the firm, the inquiry determined that the charges were not supported by the facts. Despite CA’s constant belief that its employees had acted ethically and lawfully, a belief the company said was fully supported by Mr Malins’ report, CA’s clients and suppliers were driven away almost entirely as a result of the media coverage. Consequently, in May 2018, it was decided that continuing to operate the business was no longer viable, leaving CA with no practical alternative but to place the firm into administration.

The General Data Protection Regulation (GDPR), which came into effect in May 2018, establishes consistent data protection rules across Europe. It applies to all companies that process personal data about people in the EU, regardless of where those companies are situated. Processing is a comprehensive term that covers everything done with personal data, including how a company collects, stores, uses, and destroys it.

While many of the GDPR’s requirements are based on earlier EU data protection rules, the GDPR has a greater reach, more precise standards, and heavier penalties. For example, it requires a higher level of consent for the use of certain types of data and enhances people’s rights to request and port their data. Failure to comply with the GDPR can result in significant penalties, including fines of up to 4% of worldwide annual turnover for the most serious infringements. In terms of Facebook’s own policy changes, data may now only be accessed by others, including developers, when permissions are explicitly granted; default data settings are stricter; and tools that query user data face closer scrutiny.
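To make the penalty ceiling concrete: under Article 83(5) of the GDPR, the upper tier of fines is the greater of EUR 20 million or 4% of worldwide annual turnover. The helper below is a minimal sketch of that rule; the turnover figures are purely illustrative.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine ceiling (Art. 83(5)): the greater of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 5 billion turnover faces a ceiling of EUR 200 million,
# while a small firm is still exposed to the EUR 20 million floor.
large = max_gdpr_fine(5_000_000_000)   # 200_000_000.0
small = max_gdpr_fine(10_000_000)      # 20_000_000.0
```

For a company of Facebook’s size, 4% of turnover dwarfs the fixed floor, which is why the percentage-based formulation attracted so much attention.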

No matter how many changes or updates are made to specific applications, users of a platform should be aware of the types of personal data and apps to which rights are granted. In addition, maintaining checks, such as reviewing account activity, revoking access for illegitimate applications, and monitoring settings at regular intervals, is critical to keeping data safe and understanding the repercussions of a breach. The CA case sets the precedent. Countries should create legal frameworks that severely restrict the operations of firms like CA and prevent the globally uncontrolled exploitation of personal data on social media. No one can guarantee that a government would resist the temptation to use such technology for its own ends; it is quite probable that this is going on right now.


Facebook data breach: what happened and why it’s hard to know if your data was leaked


Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University

Disclosure statement

Paul Haskell-Dowland does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Edith Cowan University provides funding as a member of The Conversation AU.


Over the long weekend reports emerged of an alleged data breach, impacting half a billion Facebook users from 106 countries.

And while this figure is staggering, there’s more to the story than 533 million sets of data. This breach once again highlights how many of the systems we use aren’t designed to adequately protect our information from cyber criminals.

Nor is it always straightforward to figure out whether your data have been compromised in a breach or not.

What happened?

More than 500 million Facebook users’ details were published online on an underground website used by cyber criminals.

It quickly became clear this was not a new data breach, but an older one which had come back to haunt Facebook and the millions of users whose data are now available to purchase online.

The data breach is believed to relate to a vulnerability which Facebook reportedly fixed in August 2019. While the exact source of the data can’t be verified, it was likely acquired through the misuse of legitimate functions in the Facebook systems.

Such misuses can occur when a seemingly innocent feature of a website is used for an unexpected purpose by attackers, as was the case with a PayID attack in 2019.


Read more: PayID data breaches show Australia's banks need to be more vigilant to hacking

In the case of Facebook, criminals can mine Facebook’s systems for users’ personal information by using techniques which automate the process of harvesting data.

This may sound familiar. In 2018 Facebook was reeling from the Cambridge Analytica scandal. This too was not a hacking incident, but a misuse of a perfectly legitimate function of the Facebook platform.

While the data were initially obtained legitimately (at least as far as Facebook’s rules were concerned), they were then passed on to a third party without the appropriate consent from users.

Read more: We need to talk about the data we give freely of ourselves online and why it's useful

Were you targeted?

There’s no easy way to determine if your details were exposed in the recent leak. If the website concerned is acting in your best interest, you should at least receive a notification. But this isn’t guaranteed.

Even a tech-savvy user would be limited to hunting for the leaked data themselves on underground websites.

The data being sold online contain plenty of key information. According to haveibeenpwned.com, most of the records include names and genders, with many also including dates of birth, location, relationship status and employer.

However, it has been reported that only a small proportion of the stolen records contained a valid email address (about 2.5 million).

This is important since a user’s data are less valuable without the corresponding email address. It’s the combination of date of birth, name, phone number and email which provides a useful starting point for identity theft and exploitation.

If you’re not sure why these details would be valuable to a criminal, think about how you confirm your identity over the phone with your bank, or how you last reset a password on a website.

Haveibeenpwned.com creator and web security expert Troy Hunt has said a secondary use for the data could be to enhance phishing and SMS-based spam attacks.

How to protect yourself

Given the nature of the leak, there is very little Facebook users could have done proactively to protect themselves from this breach. As the attack targeted Facebook’s systems, the responsibility for securing the data lies entirely with Facebook.

On an individual level, while you can opt to withdraw from the platform, for many this isn’t a simple option. That said, there are certain changes you can make to your social media behaviours to help reduce your risk from data breaches.

1) Ask yourself if you need to share all your information with Facebook

There are some bits of information we inevitably have to forfeit in exchange for using Facebook, including mobile numbers for new accounts (as a security measure, ironically). But there are plenty of details you can withhold to retain a modicum of control over your data.

2) Think about what you share

Apart from the leak being reported, there are plenty of other ways to harvest user data from Facebook. If you use a fake birth date on your account, you should also avoid posting birthday party photos on the real day. Even our seemingly innocent photos can reveal sensitive information.

3) Avoid using Facebook to sign in to other websites

Although the “sign-in with Facebook” feature is potentially time-saving (and reduces the number of accounts you have to maintain), it also increases potential risk to you — especially if the site you’re signing into isn’t a trusted one. If your Facebook account is compromised, the attacker will have automatic access to all the linked websites.

4) Use unique passwords

Always use a different password for each online account, even if it is a pain. Installing a password manager will help with this (and this is how I have more than 400 different passwords). While it won’t stop your data from ever being stolen, if your password for a site is leaked it will only work for that one site.
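For instance, a few lines of Python can generate a strong, independent password per site. This is a minimal sketch of what a password manager automates (real managers add encrypted storage and syncing on top); the site names are illustrative.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Cryptographically secure choice from letters, digits and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One independent password per account: if one site leaks its password,
# the credential cannot be replayed against the others.
vault = {site: generate_password() for site in ("socialsite", "bank", "email")}
```

The key design point is using the `secrets` module rather than `random`, since `secrets` draws from the operating system’s cryptographically secure source of randomness.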

If you really want a scare, you can always download a copy of all the data Facebook has on you. This is useful if you’re considering leaving the platform and want a copy of your data before closing your account.

Read more: New evidence shows half of Australians have ditched social media at some point, but millennials lag behind



TechRepublic


Facebook data privacy scandal: A cheat sheet


A decade of apparent indifference to data privacy at Facebook has culminated in revelations that organizations harvested user data for targeted advertising, particularly political advertising, to apparent success. While the most well-known offender is Cambridge Analytica, the political consulting and strategic communication firm behind the pro-Brexit Leave.EU campaign as well as Donald Trump’s 2016 presidential campaign, other companies have likely used similar tactics to collect personal data of Facebook users.

TechRepublic’s cheat sheet about the Facebook data privacy scandal covers the ongoing controversy surrounding the illicit use of profile information. This article will be updated as more information about this developing story comes to the forefront. It is also available as a download, Cheat sheet: Facebook Data Privacy Scandal (free PDF).

SEE: Navigating data privacy (ZDNet/TechRepublic special feature) | Download the free PDF version (TechRepublic)

What is the Facebook data privacy scandal?

The Facebook data privacy scandal centers around the collection of personally identifiable information of “up to 87 million people” by the political consulting and strategic communication firm Cambridge Analytica. That company, and others, were able to gain access to personal data of Facebook users due to the confluence of a variety of factors, broadly including inadequate safeguards against companies engaging in data harvesting, little to no oversight of developers by Facebook, developer abuse of the Facebook API, and users agreeing to overly broad terms and conditions.

SEE: Information security policy (TechRepublic Premium)

In the case of Cambridge Analytica, the company was able to harvest personally identifiable information through a personality quiz app called thisisyourdigitallife, based on the OCEAN personality model (the acronym stands for openness, conscientiousness, extraversion, agreeableness, and neuroticism). Information gathered via this app is useful in building a “psychographic” profile of users. Adding the app to your Facebook account to take the quiz gave the creator of the app access to profile information and user history for the user taking the quiz, as well as for all of that user’s Facebook friends. This data includes all of the items that users and their friends have liked on Facebook.

Researchers associated with Cambridge University claimed in a paper that Facebook like data “can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender,” using a model the researchers developed that combines dimensionality reduction with logistic/linear regression to infer this information about users.

The model, according to the researchers, is effective because of the relationship of likes to a given attribute, even though most likes are not explicitly indicative of the attributes they predict. The researchers note that “less than 5% of users labeled as gay were connected with explicitly gay groups,” but that liking “Juicy Couture” and “Adam Lambert” is indicative of gay men, while “WWE” and “Being Confused After Waking Up From Naps” are likes indicative of straight men. Other such connections are peculiarly lateral, with “curly fries” being an indicator of high IQ, “sour candy” being an indicator of not smoking, and “Gene Wilder” being an indicator that the user’s parents had not separated by the time the user was 21.
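The pipeline described above (reduce a sparse user-by-likes matrix to a few components, then fit a regression on those components) can be sketched in a few lines. This is a toy illustration on synthetic data, not the researchers’ actual model or dataset; every page, label, and number below is invented.

```python
import numpy as np

# Synthetic stand-in for a user-by-page "likes" matrix (all data invented).
rng = np.random.default_rng(0)
n_users, n_pages = 200, 50
labels = rng.integers(0, 2, n_users)          # hidden binary attribute to predict
likes = rng.random((n_users, n_pages)) < 0.2  # baseline: each page liked ~20% of the time
# Users with the attribute are far more likely to like the first five pages.
likes[:, :5] |= (rng.random((n_users, 5)) < 0.6) & (labels[:, None] == 1)
X = likes.astype(float)

# Step 1: dimensionality reduction via truncated SVD of the centred like matrix.
k = 10
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :k] * S[:k]                          # each user reduced to k components

# Step 2: logistic regression on the components, fit by plain gradient descent.
w, b = np.zeros(k), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))    # predicted probability per user
    w -= 0.5 * (Z.T @ (p - labels)) / n_users
    b -= 0.5 * (p - labels).mean()

pred = (1.0 / (1.0 + np.exp(-(Z @ w + b)))) > 0.5
accuracy = (pred == labels).mean()
```

On real data one would evaluate on held-out users; the point here is only the shape of the pipeline: a likes matrix goes in, and a prediction about a sensitive attribute comes out, without any user ever stating that attribute directly.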

SEE: Can Russian hackers be stopped? Here’s why it might take 20 years (TechRepublic cover story) | download the PDF version

Additional resources

  • How a Facebook app scraped millions of people’s personal data (CBS News)
  • Facebook reportedly thinks there’s no ‘expectation of privacy’ on social media (CNET)
  • Cambridge Analytica: ‘We know what you want before you want it’ (TechRepublic)
  • Average US citizen had personal information stolen at least 4 times in 2019 (TechRepublic)
  • Facebook: We’ll pay you to track down apps that misuse your data (ZDNet)
  • Most consumers do not trust big tech with their privacy (TechRepublic)
  • Facebook asks permission to use personal data in Brazil (ZDNet)

What is the timeline of the Facebook data privacy scandal?

Facebook has more than a decade-long track record of incidents highlighting inadequate and insufficient measures to protect data privacy. While the severity of these individual cases varies, the sequence of repeated failures paints a larger picture of systemic problems.

SEE: All TechRepublic cheat sheets and smart person’s guides

In 2005, researchers at MIT created a script that downloaded publicly posted information of more than 70,000 users from four schools. (Facebook only began to allow search engines to crawl profiles in September 2007.)

In 2007, activities that users engaged in on other websites were automatically added to Facebook user profiles as part of Beacon, one of Facebook’s first attempts to monetize user data. As an example, Beacon indicated on the Facebook News Feed the titles of videos that users rented from Blockbuster Video, which was a violation of the Video Privacy Protection Act. A class action suit was filed, and as part of a settlement agreement Facebook paid $9.5 million into a fund for privacy and security.

SEE: The Brexit dilemma: Will London’s start-ups stay or go? (TechRepublic cover story)

In 2011, following an FTC investigation, the company entered into a consent decree, promising to address concerns about how user data was tracked and shared. That investigation was prompted by an incident in December 2009 in which information thought private by users was being shared publicly, according to contemporaneous reporting by The New York Times.

In 2013, Facebook disclosed details of a bug that exposed the personal details of six million accounts over approximately a year. When users downloaded their own Facebook history, they obtained in the same action not just their own address book, but also the email addresses and phone numbers of friends that other people had stored in their address books. The data that Facebook exposed had not been given to Facebook by the affected users to begin with; it had been vacuumed up from the contact lists of other Facebook users who happened to know those people. This phenomenon has since been described as “shadow profiles.”

The Cambridge Analytica portion of the data privacy scandal starts in February 2014. A spate of reviews on the Turkopticon website, a third-party review site for users of Amazon’s Mechanical Turk, detail a task posted by Aleksandr Kogan asking users to complete a survey in exchange for money. The survey required users to add the thisisyourdigitallife app to their Facebook account, which is in violation of Mechanical Turk’s terms of service. One review quotes the request as requiring users to “provide our app access to your Facebook so we can download some of your data–some demographic data, your likes, your friends list, whether your friends know one another, and some of your private messages.”

In December 2015, Facebook learned for the first time that the data set Kogan generated with the app was shared with Cambridge Analytica. Facebook founder and CEO Mark Zuckerberg claims “we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.”

According to Cambridge Analytica, the company took legal action in August 2016 against GSR (Kogan) for licensing “illegally acquired data” to the company, with a settlement reached that November.

On March 17, 2018, an exposé was published by The Guardian and The New York Times, initially reporting that 50 million Facebook profiles had been harvested by Cambridge Analytica; the figure was later revised to “up to 87 million” profiles. The exposé relied on information provided by Christopher Wylie, a former employee of SCL Elections and of Global Science Research, the creator of the thisisyourdigitallife app. Wylie claimed that the data from that app was sold to Cambridge Analytica, which used it to develop “psychographic” profiles of users and to target users with pro-Trump advertising, a claim that Cambridge Analytica denied.

On March 16, 2018, Facebook threatened to sue The Guardian over publication of the story, according to a tweet by Guardian reporter Carole Cadwalladr. Campbell Brown, a former CNN journalist who now works as head of news partnerships at Facebook, said it was “not our wisest move,” adding, “If it were me I would have probably not threatened to sue The Guardian.” Similarly, Cambridge Analytica threatened to sue The Guardian for defamation.

On March 20, 2018, the FTC opened an investigation to determine if Facebook had violated the terms of the settlement from the 2011 investigation.

In April 2018, reports indicated that Facebook had granted Zuckerberg and other high-ranking executives powers over their personal information on the platform that are not available to normal users. Messages Zuckerberg had sent to other users were remotely deleted from those users’ inboxes, which the company claimed was part of a corporate security measure following the 2014 Sony Pictures hack. Facebook subsequently announced plans to make an “unsend” capability available “to all users in several months,” and said that Zuckerberg would be unable to unsend messages until the feature rolled out. Facebook added the feature 10 months later, on February 6, 2019. The public feature permits users to delete messages up to 10 minutes after they were sent; in the controversy prompting the feature to be added, Zuckerberg had deleted messages months after they were sent.

On April 4, 2018, The Washington Post reported that Facebook announced “malicious actors” abused the search function to gather public profile information of “most of its 2 billion users worldwide.”

In a CBS News/YouGov poll published on April 10, 2018, 61% of Americans said Congress should do more to regulate social media and tech companies. This sentiment was echoed in a CBS News interview with Box CEO Aaron Levie and YML CEO Ashish Toshniwal, who called on Congress to regulate Facebook. According to Levie, “There are so many examples where we don’t have modern ways of either regulating, controlling, or putting the right protections in place in the internet age. And this is a fundamental issue that, that we’re gonna have to grapple with as an industry for the next decade.”

On April 18, 2018, Facebook updated its privacy policy.

On May 2, 2018, SCL Group, which owns Cambridge Analytica, was dissolved. In a press release, the company indicated that “the siege of media coverage has driven away virtually all of the Company’s customers and suppliers.”

On May 15, 2018, The New York Times reported that Cambridge Analytica is being investigated by the FBI and the Justice Department. A source indicated to CBS News that prosecutors are focusing on potential financial crimes.

On May 16, 2018, Christopher Wylie testified before the Senate Judiciary Committee. Among other things, Wylie noted that Cambridge Analytica, under the direction of Steve Bannon, sought to “exploit certain vulnerabilities in certain segments to send them information that will remove them from the public forum, and feed them conspiracies and they’ll never see mainstream media.” Wylie also noted that the company targeted people with “characteristics that would lead them to vote for the Democratic party, particularly African American voters.”

On June 3, 2018, a report in The New York Times indicated that Facebook had maintained data-sharing partnerships with mobile device manufacturers, specifically naming Apple, Amazon, BlackBerry, Microsoft, and Samsung. Under the terms of this personal information sharing, device manufacturers were able to gather information about users in order to deliver “the Facebook experience,” the Times quotes a Facebook official as saying. Additionally, the report indicates that this access allowed device manufacturers to obtain data about a user’s Facebook friends, even if those friends had configured their privacy settings to deny information sharing with third parties.

The same day, Facebook issued a rebuttal to the Times report indicating that the partnerships were conceived because “the demand for Facebook outpaced our ability to build versions of the product that worked on every phone or operating system,” at a time when the smartphone market included BlackBerry’s BB10 and Windows Phone operating systems, among others. Facebook claimed that “contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends. We are not aware of any abuse by these companies.” The distinction being made is partially semantic, as Facebook does not consider these partnerships a third party in this case. Facebook noted that changes to the platform made in April began “winding down” access to these APIs, and that 22 of the partnerships had already been ended.

On June 5, 2018, The Washington Post and The New York Times reported that the Chinese device manufacturers Huawei, Lenovo, Oppo, and TCL were granted access to user data under this program. Huawei, along with ZTE, is facing scrutiny from the US government over unsubstantiated accusations that products from these companies pose a national security risk.

On July 2, 2018, The Washington Post reported that the US Securities and Exchange Commission, Federal Trade Commission, and Federal Bureau of Investigation had joined the Department of Justice inquiry into the Facebook/Cambridge Analytica data scandal. In a statement to CNET, Facebook indicated that “We’ve provided public testimony, answered questions, and pledged to continue our assistance as their work continues.” On July 11th, the Wall Street Journal reported that the SEC was separately investigating whether Facebook had adequately warned investors in a timely manner about the possible misuse and improper collection of user data. The same day, the UK assessed a £500,000 fine against Facebook, the maximum permitted by law, over its role in the data scandal. The UK’s Information Commissioner’s Office is also preparing to launch a criminal probe into SCL Elections over its involvement in the scandal.

On July 3, 2018, Facebook acknowledged a “bug” that unblocked people whom users had blocked between May 29 and June 5.

On July 12, 2018, a CNBC report indicated that a privacy loophole was discovered and closed. A Chrome plug-in intended for marketing research called Grouply.io allowed users to access the list of members for private Facebook groups. Congress sent a letter to Zuckerberg on February 19, 2019 demanding answers about the data leak, stating in part that “labeling these groups as closed or anonymous potentially misled Facebook users into joining these groups and revealing more personal information than they otherwise would have,” and “Facebook may have failed to properly notify group members that their personal health information may have been accessed by health insurance companies and online bullies, among others.”

Fallout from a confluence of factors in the Facebook data privacy scandal came to a head in the last week of July 2018. On July 25th, Facebook announced that daily active user counts had fallen in Europe, and that growth had stagnated in the US and Canada. The following day, Facebook suffered the worst single-day market value decrease for a public company in the US, dropping $120 billion, or 19%. On July 28th, Reuters reported that shareholders were suing Facebook, Zuckerberg, and CFO David Wehner for “making misleading statements about or failing to disclose slowing revenue growth, falling operating margins, and declines in active users.”

On August 22, 2018, Facebook removed the Facebook-owned security app Onavo from the App Store for violating privacy rules. Data collected through the Onavo app is shared with Facebook.

In testimony before the Senate on September 5, 2018, COO Sheryl Sandberg conceded that the company “[was] too slow to spot this and too slow to act” on privacy protections. Sandberg and Twitter CEO Jack Dorsey faced questions focusing on user privacy, election interference, and political censorship. Senator Mark Warner of Virginia even said that “The era of the wild west in social media is coming to an end,” which seems to presage coming legislation.

On September 6, 2018, a spokesperson indicated that Joseph Chancellor was no longer employed by Facebook. Chancellor was a co-director of Global Science Research, the firm which improperly provided user data to Cambridge Analytica. An internal investigation was launched in March in part to determine his involvement. No statement was released indicating the result of that investigation.

On September 7, 2018, Zuckerberg stated in a post that fixing issues such as “defending against election interference by nation states, protecting our community from abuse and harm, or making sure people have control of their information and are comfortable with how it’s used,” is a process which “will extend through 2019.”

On September 26, 2018, WhatsApp co-founder Brian Acton stated in an interview with Forbes that “I sold my users’ privacy” as a result of the messaging app being sold to Facebook in 2014 for $22 billion.

On September 28, 2018, Facebook disclosed details of a security breach which affected 50 million users. The vulnerability originated from the “view as” feature which can be used to let users see what their profiles look like to other people. Attackers devised a way to export “access tokens,” which could be used to gain control of other users’ accounts.

A CNET report published on October 5, 2018, details the existence of an “Internet Bill of Rights” drafted by Rep. Ro Khanna (D-CA). The bill is likely to be introduced in the event the Democrats regain control of the House of Representatives in the 2018 elections. In a statement, Khanna noted that “As our lives and the economy are more tied to the internet, it is essential to provide Americans with basic protections online.”

On October 11, 2018, Facebook deleted over 800 pages and accounts in advance of the 2018 elections for violating rules against spam and “inauthentic behavior.” The same day, it disabled accounts for a Russian firm called “Social Data Hub,” which claimed to sell scraped user data. A Reuters report indicates that Facebook will ban false information about voting in the midterm elections.

On October 16, 2018, rules requiring public disclosure of who pays for political advertising on Facebook, as well as identity verification of users paying for political advertising, were extended to the UK. The rules were first rolled out in the US in May.

On October 25, 2018, Facebook was fined £500,000 by the UK’s Information Commissioner’s Office for its role in the Cambridge Analytica scandal. The fine is the maximum amount permitted by the Data Protection Act 1998. The ICO indicated that the fine was final. A Facebook spokesperson told ZDNet that the company “respectfully disagreed,” and has filed an appeal.

The same day, Vice published a report indicating that Facebook’s advertiser disclosure policy was trivial to abuse. Reporters from Vice submitted advertisements for approval attributed to Mike Pence, DNC Chairman Tom Perez, and Islamic State, which were approved by Facebook. Further, the contents of the advertisements were copied from Russian advertisements. A spokesperson for Facebook confirmed to Vice that the copied content does not violate rules, though the false attribution does. According to Vice, the only denied submission was attributed to Hillary Clinton.

On October 30, 2018, Vice published a second report in which it claimed that it successfully applied to purchase advertisements attributed to all 100 sitting US Senators, indicating that Facebook had yet to fix the problem reported in the previous week. According to Vice, the only denied submission in this test was attributed to Mark Zuckerberg.

On November 14, 2018, the New York Times published an exposé on the Facebook data privacy scandal, citing interviews of more than 50 people, including current and former Facebook executives and employees. In the exposé, the Times reports:

  • In the Spring of 2016, a security expert employed by Facebook informed Chief Security Officer Alex Stamos of Russian hackers “probing Facebook accounts for people connected to the presidential campaigns,” which Stamos, in turn, reported to general counsel Colin Stretch.
  • A group called “Project P” was assembled by Zuckerberg and Sandberg to study false news on Facebook. By January 2017, this group “pressed to issue a public paper” about their findings, but was stopped by board members and Facebook vice president of global public policy Joel Kaplan, who had formerly worked in former US President George W. Bush’s administration.
  • In Spring and Summer of 2017, Facebook was “publicly claiming there had been no Russian effort of any significance on Facebook,” despite an ongoing investigation into the extent of Russian involvement in the election.
  • Sandberg “and deputies” insisted that the post drafted by Stamos to publicly acknowledge Russian involvement for the first time be made “less specific” before publication.
  • In October 2017, Facebook expanded their engagement with Republican-linked firm Definers Public Affairs to discredit “activist protesters.” That firm worked to link people critical of Facebook to liberal philanthropist George Soros, and “[lobbied] a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”
  • Following comments critical of Facebook by Apple CEO Tim Cook, a spate of articles critical of Apple and Google began appearing on NTK Network, an organization which shares an office and staff with Definers. Other articles appeared on the website downplaying the Russians’ use of Facebook.

On November 15, 2018, Facebook announced it had terminated its relationship with Definers Public Affairs, though it disputed that either Zuckerberg or Sandberg was aware of the “specific work being done.” Further, a Facebook spokesperson indicated “It is wrong to suggest that we have ever asked Definers to pay for or write articles on Facebook’s behalf, or communicate anything untrue.”

On November 22, 2018, Sandberg acknowledged that work produced by Definers “was incorporated into materials presented to me and I received a small number of emails where Definers was referenced.”

On November 25, 2018, the founder of Six4Three, while on a business trip to London, was compelled by Parliament to hand over documents relating to Facebook. Six4Three obtained these documents during the discovery process relating to an app developed by the startup that used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. Reports indicate that Parliament sent an official to the founder’s hotel with a warning that noncompliance could result in fines or imprisonment. When the founder still refused to comply, he was escorted to Parliament, where he turned over the documents.

A report in the New York Times published on November 29, 2018, indicates that Sheryl Sandberg personally asked Facebook communications staff in January to “research George Soros’s financial interests in the wake of his high-profile attacks on tech companies.”

On December 5, 2018, documents obtained in the probe of Six4Three were released by Parliament. Damian Collins, the MP who issued the order compelling the handover of the documents in November, highlighted six key points from the documents:

  • Facebook entered into whitelisting agreements with Lyft, Airbnb, Bumble, and Netflix, among others, allowing those groups full access to friends data after Graph API v1 was discontinued. Collins indicates “It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.”
  • According to Collins, “increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.”
  • Data reciprocity between Facebook and app developers was a central focus for the release of Platform v3, with Zuckerberg discussing charging developers for API access to friend lists.
  • Internal discussions of changes to the Facebook Android app acknowledge that requesting permissions to collect calls and texts sent by the user would be controversial, with one project manager stating it was “a pretty high-risk thing to do from a PR perspective.”
  • Facebook used data collected through Onavo, a VPN service the company acquired in 2013, to survey the use of mobile apps on smartphones. According to Collins, this occurred “apparently without [users’] knowledge,” and was used by Facebook to determine “which companies to acquire, and which to treat as a threat.”
  • Collins contends that “the files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.” Documents disclosed specifically indicate Facebook revoked API access to video sharing service Vine.

In a statement, Facebook claimed, “Six4Three… cherrypicked these documents from years ago.” Zuckerberg responded separately to the public disclosure on Facebook, acknowledging, “Like any organization, we had a lot of internal discussion and people raised different ideas.” He called the Facebook scrutiny “healthy given the vast number of people who use our services,” but said it shouldn’t “misrepresent our actions or motives.”

On December 14, 2018, a vulnerability was disclosed in the Facebook Photo API that existed between September 13 and 25, 2018, exposing private photos of 6.8 million users. The Photo API bug affected people who use Facebook to log in to third-party services.

On December 18, 2018, The New York Times reported on special data sharing agreements that “[exempted] business partners from its usual privacy rules,” naming Microsoft’s Bing search engine, Netflix, Spotify, Amazon, and Yahoo as partners in the report. Partners were capable of accessing data including friend lists and private messages, “despite public statements it had stopped that type of sharing years earlier.” Facebook claimed the data sharing was about “helping people,” and that this was not done without user consent.

On January 17, 2019, Facebook disclosed that it removed hundreds of pages and accounts controlled by Russian propaganda organization Sputnik, including accounts posing as politicians from primarily Eastern European countries.

On January 29, 2019, a TechCrunch report uncovered the “Facebook Research” program, which paid users aged 13 to 35 up to $20 per month to install a VPN application similar to Onavo that allowed Facebook to gather practically all information about how their phones were used. On iOS, this was distributed using Apple’s Developer Enterprise Program, for which Apple briefly revoked Facebook’s certificate as a result of the controversy.

Facebook initially indicated that “less than 5% of the people who chose to participate in this market research program were teens,” and on March 1, 2019 amended the statement to “about 18 percent.”

On February 7, 2019, the German antitrust office ruled that Facebook must obtain consent before collecting data on non-Facebook members, following a three-year investigation.

On February 20, 2019, Facebook added new location controls to its Android app that allow users to limit background data collection when the app is not in use.

The same day, ZDNet reported that Microsoft’s Edge browser contained a secret whitelist allowing Facebook to run Adobe Flash, bypassing the click-to-play policy that other websites are subject to for Flash objects over 398×298 pixels. The whitelist was removed in the February 2019 Patch Tuesday update.

On March 6, 2019, Zuckerberg announced a plan to rebuild services around encryption and privacy, “over the next few years.” As part of these changes, Facebook will make messages between Facebook, Instagram, and WhatsApp interoperable. Former Microsoft executive Steven Sinofsky, who left the company after the poor reception of Windows 8, called the move “fantastic,” comparing it to Microsoft’s Trustworthy Computing initiative in 2002.

CNET and CBS News Senior Producer Dan Patterson noted on CBSN that Facebook can benefit from this consolidation by making the messaging platforms cheaper to operate, as well as profiting from users sending money through the messaging platform, in a business model similar to Venmo.

On March 21, 2019, Facebook disclosed a lapse in security that resulted in hundreds of millions of passwords being stored in plain text, affecting users of Facebook, Facebook Lite, and Instagram. Facebook claimed that “these passwords were never visible to anyone outside of Facebook and we have found no evidence to date that anyone internally abused or improperly accessed them.”

Though Facebook’s post does not provide specifics, a report by veteran security reporter Brian Krebs claimed “between 200 million and 600 million” users were affected, and that “more than 20,000 Facebook employees” would have had access.

On March 22, 2019, a court filing by the attorney general of Washington DC alleged that Facebook knew about the Cambridge Analytica scandal months prior to the first public reports in December 2015. Facebook claimed that employees knew of rumors relating to Cambridge Analytica, but the claims relate to a “different incident” than the main scandal, and insisted that the company did not mislead anyone about the timeline of the scandal.

Facebook is seeking to have the case filed in Washington DC dismissed, as well as to seal a document filed in that case.

On March 31, 2019, The Washington Post published an op-ed by Zuckerberg calling for governments and regulators to take a “more active role” in regulating the internet. Shortly after, Facebook introduced a feature that explains why content is shown to users on their news feeds.

On April 3, 2019, over 540 million Facebook-related records were found on two improperly protected AWS servers. The data was collected by Cultura Colectiva, a Mexico-based online media platform, using Facebook APIs. Amazon deactivated the associated account at Facebook’s request.

On April 15, 2019, it was discovered that Oculus, a company owned by Facebook, shipped VR headsets with internal etchings including text such as “Big Brother is Watching.”

On April 18, 2019, Facebook disclosed the “unintentional” harvesting of email contacts belonging to approximately 1.5 million users over the course of three years. Affected users were asked to provide email address credentials to verify their identity.

On April 30, 2019, at Facebook’s F8 developer conference , the company unveiled plans to overhaul Messenger and re-orient Facebook to prioritize Groups instead of the timeline view, with Zuckerberg declaring “The future is private.”

On May 9, 2019, Facebook co-founder Chris Hughes called for Facebook to be broken up by government regulators, in an editorial in The New York Times. Hughes, who left the company in 2007, cited concerns that Zuckerberg has surrounded himself with people who do not challenge him. “We are a nation with a tradition of reining in monopolies, no matter how well-intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American,” Hughes said.

Proponents of a Facebook breakup typically point to unwinding the social network’s purchase of Instagram and WhatsApp.

Zuckerberg dismissed Hughes’ appeal for a breakup in comments to France 2, stating in part that “If what you care about is democracy and elections, then you want a company like us to invest billions of dollars a year, like we are, in building up really advanced tools to fight election interference.”

On May 24, 2019, a report from Motherboard claimed “multiple” staff members of Snapchat used internal tools to spy on users.

On July 8, 2019, Apple co-founder Steve Wozniak warned users to get off Facebook.

On July 18, 2019, lawmakers in a House Committee on Financial Services hearing expressed mistrust of Facebook’s Libra cryptocurrency plan due to its “pattern of failing to keep consumer data private.” Lawmakers had previously issued a letter to Facebook requesting the company pause development of the project.

On July 24, 2019, the FTC announced a $5 billion settlement with Facebook over user privacy violations. Facebook agreed to conduct an overhaul of its consumer privacy practices as part of the settlement. Access to friend data by Microsoft and Sony was “immediately” restricted as part of this settlement, according to CNET. Separately, the FTC settled with Aleksandr Kogan and former Cambridge Analytica CEO Alexander Nix, “restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC announced a lawsuit against Cambridge Analytica the same day.

Also on July 24, 2019, Netflix released “The Great Hack,” a documentary about the Cambridge Analytica scandal.

In early July 2020, Facebook admitted to sharing user data with an estimated 5,000 third-party developers after its access to that data was supposed to expire.

Zuckerberg testified before Congress again on July 29, 2020, as part of an antitrust hearing that included Amazon’s Jeff Bezos, Apple’s Tim Cook, and Google’s Sundar Pichai. The hearing didn’t touch on Facebook’s data privacy scandal, and was instead focused on Facebook’s purchase of Instagram and WhatsApp, as well as its treatment of other competing services.

  • Facebook knew of illicit user profile harvesting for 2 years, never acted (CBS News)
  • Facebook’s FTC consent decree deal: What you need to know (CNET)
  • Australia’s Facebook investigation expected to take at least 8 months (ZDNet)
  • Election tech: The truth about Cambridge Analytica’s political big data (TechRepublic)
  • Google sued by ACCC for allegedly linking data for ads without consent (ZDNet)
  • Midterm elections, social media and hacking: What you need to know (CNET)
  • Critical flaw revealed in Facebook Fizz TLS project (ZDNet)
  • CCPA: What California’s new privacy law means for Facebook, Twitter users (CNET)

What are the key companies involved in the Facebook data privacy scandal?

In addition to Facebook, these are the companies connected to this data privacy story.

SCL Group (formerly Strategic Communication Laboratories) is at the center of the privacy scandal, though it has operated primarily through subsidiaries. Nominally, SCL was a behavioral research/strategic communication company based in the UK. The company was dissolved on May 1, 2018.

Cambridge Analytica and SCL USA are offshoots of SCL Group, primarily operating in the US. Registration documentation indicates the pair formally came into existence in 2013. As with SCL Group, the pair were dissolved on May 1, 2018.

Global Science Research was a market research firm based in the UK from 2014 to 2017. It was the originator of the thisisyourdigitallife app. The personal data derived from the app (if not the app itself) was sold to Cambridge Analytica for use in campaign messaging.

Emerdata is the functional successor to SCL and Cambridge Analytica. It was founded in August 2017, with registration documents listing several people associated with SCL and Cambridge Analytica, as well as the same address as that of SCL Group’s London headquarters.

AggregateIQ is a Canadian consulting and technology company founded in 2013. The company produced Ripon, the software platform for Cambridge Analytica’s political campaign work, which leaked publicly after being discovered in an unprotected GitLab repository.

Cubeyou is a US-based data analytics firm that also operated surveys on Facebook, and worked with Cambridge University from 2013 to 2015. It was suspended from Facebook in April 2018 following a CNBC report.

Six4Three was a US-based startup that created an app that used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. The company sued Facebook in April 2015 after the app became inoperable when access to this data was revoked with the discontinuation of the original version of Facebook’s Graph API.

Onavo is an analytics company that develops mobile apps. It created Onavo Extend and Onavo Protect, which are VPN services for data protection and security, respectively. Facebook purchased the company in October 2013. Data from Onavo is used by Facebook to track usage of non-Facebook apps on smartphones.

The Internet Research Agency is a St. Petersburg-based organization with ties to Russian intelligence services. The organization engages in politically-charged manipulation across English-language social media, including Facebook.

  • If your organization advertises on Facebook, beware of these new limitations (TechRepublic)
  • Data breach exposes Cambridge Analytica’s data mining tools (ZDNet)
  • Was your business’s Twitter feed sold to Cambridge Analytica? (TechRepublic)
  • US special counsel indicts 13 members of Russia’s election meddling troll farm (ZDNet)

Who are the key people involved in the Facebook data privacy scandal?

Nigel Oakes is the founder of SCL Group, the parent company of Cambridge Analytica. A report from Buzzfeed News unearthed a quote from 1992 in which Oakes stated, “We use the same techniques as Aristotle and Hitler. … We appeal to people on an emotional level to get them to agree on a functional level.”

Alexander Nix was the CEO of Cambridge Analytica and a director of SCL Group. He was suspended following reports detailing a video in which Nix claimed the company “offered bribes to smear opponents as corrupt,” and that it “campaigned secretly in elections… through front companies or using subcontractors.”

Robert Mercer is a conservative activist, computer scientist, and a co-founder of Cambridge Analytica. A New York Times report indicates that Mercer invested $15 million in the company. His daughters Jennifer Mercer and Rebekah Anne Mercer serve as directors of Emerdata.

Christopher Wylie is the former director of research at Cambridge Analytica. He provided information to The Guardian for its exposé of the Facebook data privacy scandal. He has since testified before committees in the US and UK about Cambridge Analytica’s involvement in this scandal.

Steve Bannon is a co-founder of Cambridge Analytica, as well as a founding member and former executive chairman of Breitbart News, an alt-right news outlet. Breitbart News has reportedly received funding from the Mercer family as far back as 2010. Bannon left Breitbart in January 2018. According to Christopher Wylie, Bannon is responsible for testing phrases such as “drain the swamp” at Cambridge Analytica, which were used extensively on Breitbart.

Aleksandr Kogan is a Senior Research Associate at Cambridge University and co-founder of Global Science Research, which created the data-harvesting thisisyourdigitallife app. He worked as a researcher and consultant for Facebook in 2013 and 2015. Kogan also received Russian government grants and is an associate professor at St. Petersburg State University, though he claims this is an honorary role.

Joseph Chancellor was a co-director of Global Science Research, which created the data-harvesting thisisyourdigitallife app. Around November 2015, he was hired by Facebook as a “quantitative social psychologist.” A spokesperson indicated on September 6, 2018, that he was no longer employed by Facebook.

Michal Kosinski, David Stillwell, and Thore Graepel are the researchers who proposed and developed the model to “psychometrically” analyze users based on their Facebook likes. At the time this model was published, Kosinski and Stillwell were affiliated with Cambridge University, while Graepel was affiliated with the Cambridge-based Microsoft Research. (None have an association with Cambridge Analytica, according to Cambridge University.)

Mark Zuckerberg is the founder and CEO of Facebook. He founded the website in 2004 from his dorm room at Harvard.

Sheryl Sandberg is the COO of Facebook. She left Google to join the company in March 2008. She became the eighth member of the company’s board of directors in 2012 and is the first woman in that role.

Damian Collins is a Conservative Party politician based in the United Kingdom. He currently serves as the Chair of the House of Commons Digital, Culture, Media and Sport Committee. Collins is responsible for issuing orders to seize documents from the American founder of Six4Three while he was traveling in London, and for releasing those documents publicly.

Chris Hughes is one of Facebook’s co-founders; he originally handled beta testing and feedback for the website until leaving the company in 2007. Hughes is the first co-founder to call for Facebook to be broken up by regulators.

  • Facebook investigates employee’s ties to Cambridge Analytica (CBS News)
  • Aleksandr Kogan: The link between Cambridge Analytica and Facebook (CBS News)
  • Video: Cambridge Analytica shuts down following data scandal (CBS News)

How have Facebook and Mark Zuckerberg responded to the data privacy scandal?

Each time Facebook finds itself embroiled in a privacy scandal, the general playbook seems to be the same: Mark Zuckerberg delivers an apology, with oft-recycled lines, such as “this was a big mistake,” or “I know we can do better.” Despite repeated controversies regarding Facebook’s handling of personal data, it has continued to gain new users. This is by design: founding president Sean Parker indicated at an Axios conference in November 2017 that the guiding question in building Facebook features was “How do we consume as much of your time and conscious attention as possible?” Parker also likened the design of Facebook to “exploiting a vulnerability in human psychology.”

On March 16, 2018, Facebook announced that SCL and Cambridge Analytica had been banned from the platform. The announcement indicated, correctly, that “Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time,” but that passing the information to a third party violated platform policies.

The following day, the announcement was amended to state:

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

On March 21, 2018, Mark Zuckerberg posted his first public statement about the issue, stating in part that:

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again.”

On March 26, 2018, Facebook placed full-page ads stating: “This was a breach of trust, and I’m sorry we didn’t do more at the time. We’re now taking steps to ensure this doesn’t happen again,” in The New York Times, The Washington Post, and The Wall Street Journal, as well as The Observer, The Sunday Times, Mail on Sunday, Sunday Mirror, Sunday Express, and Sunday Telegraph in the UK.

In a blog post on April 4, 2018, Facebook announced a series of changes to data handling practices and API access capabilities. Foremost among these is a limit on the Events API, which is no longer able to access the guest list or wall posts. Additionally, Facebook removed the ability to search for users by phone number or email address and made changes to the account recovery process to fight scraping.

On April 10, 2018, and April 11, 2018, Mark Zuckerberg testified before Congress. Details about his testimony are in the next section of this article.

On April 10, 2018, Facebook announced the launch of its data abuse bug bounty program. While Facebook has an existing security bug bounty program, this is targeted specifically to prevent malicious users from engaging in data harvesting. There is no limit to how much Facebook could potentially pay in a bounty, though to date the highest amount the company has paid is $40,000 for a security bug.

On May 14, 2018, “around 200” apps were banned from Facebook as part of an investigation into whether companies abused APIs to harvest personal information. The company declined to provide a list of offending apps.

On May 22, 2018, Mark Zuckerberg testified, briefly, before the European Parliament about the data privacy scandal and Cambridge Analytica. The format of the testimony has been the subject of derision, as all of the questions were posed to Zuckerberg before he answered. Guy Verhofstadt, an EU Parliament member representing Belgium, said, “I asked you six ‘yes’ and ‘no’ questions, and I got not a single answer.”

What did Mark Zuckerberg say in his testimony to Congress?

In his Senate testimony on April 10, 2018, Zuckerberg reiterated his apology, stating that “We didn’t take a broad enough view of our responsibility, and that was a big mistake. And it was my mistake. And I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here,” adding in a response to Sen. John Thune that “we try not to make the same mistake multiple times… in general, a lot of the mistakes are around how people connect to each other, just because of the nature of the service.”

Sen. Amy Klobuchar asked if Facebook had determined whether Cambridge Analytica and the Internet Research Agency were targeting the same users. Zuckerberg replied, “We’re investigating that now. We believe that it is entirely possible that there will be a connection there.” According to NBC News, this was the first suggestion there is a link between the activities of Cambridge Analytica and the Russian disinformation campaign.

On June 11, 2018, nearly 500 pages of new testimony from Zuckerberg were released following promises of a follow-up to questions for which he did not have sufficient information to address during his Congressional testimony. The Washington Post notes that the release, “in some instances sidestepped lawmakers’ questions and concerns,” but that the questions being asked were not always relevant, particularly in the case of Sen. Ted Cruz, who attempted to bring attention to Facebook’s donations to political organizations, as well as how Facebook treats criticism of “Taylor Swift’s recent cover of an Earth, Wind and Fire song.”

  • Facebook gave Apple, Samsung access to data about users — and their friends (CNET)
  • Zuckerberg doubles down on Facebook’s fight against fake news, data misuse (CNET)
  • Tech execs react to Mark Zuckerberg’s apology: “I think he’s sorry he has to testify” (CBS News)
  • On Facebook, Zuckerberg gets privacy and you get nothing (ZDNet)
  • 6 Facebook security mistakes to fix on Data Privacy Day (CNET)
  • Zuckerberg takes Facebook data apology tour to Washington (CNET)
  • Zuckerberg’s Senate hearing highlights in 10 minutes (CNET via YouTube)
  • Russian politicians call on Facebook’s Mark Zuckerberg to testify on privacy (CNET)

What is the 2016 US presidential election connection to the Facebook data privacy scandal?

In December 2015, The Guardian broke the story of Cambridge Analytica being contracted by Ted Cruz’s campaign for the Republican Presidential Primary. Despite Cambridge Analytica CEO Alexander Nix’s claim in an interview with TechRepublic that the company is “fundamentally politically agnostic and an apolitical organization,” the primary financier of the Cruz campaign was Cambridge Analytica co-founder Robert Mercer, who donated $11 million to a pro-Cruz super PAC. Following Cruz’s withdrawal from the campaign in May 2016, the Mercer family began supporting Donald Trump.

In January 2016, Facebook COO Sheryl Sandberg told investors that the election was “a big deal in terms of ad spend,” and that through “using Facebook and Instagram ads you can target by congressional district, you can target by interest, you can target by demographics or any combination of those.”

In October 2017, Facebook announced changes to its advertising platform, requiring identity and location verification and prior authorization in order to run electoral advertising. In the wake of the fallout from the data privacy scandal, further restrictions were added in April 2018, making "issue ads" regarding topics of current interest similarly restricted.

In secretly recorded conversations by an undercover team from Channel 4 News, Cambridge Analytica’s Nix claimed the firm was behind the “defeat crooked Hillary” advertising campaign, adding, “We just put information into the bloodstream of the internet and then watch it grow, give it a little push every now and again over time to watch it take shape,” and that “this stuff infiltrates the online community, but with no branding, so it’s unattributable, untrackable.” The same exposé quotes Chief Data Officer Alex Tayler as saying, “When you think about the fact that Donald Trump lost the popular vote by 3 million votes but won the electoral college vote, that’s down to the data and the research.”

  • How Cambridge Analytica used your Facebook data to help elect Trump (ZDNet)
  • Facebook takes down fake accounts operated by ‘Roger Stone and his associates’ (ZDNet)
  • Facebook, Cambridge Analytica and data mining: What you need to know (CNET)
  • Civil rights auditors slam Facebook stance on Trump, voter suppression (ZDNet)
  • The Trump campaign app is tapping a “gold mine” of data about Americans (CBS News)

What is the Brexit tie-in to the Facebook data privacy scandal?

AggregateIQ was retained by the Vote Leave campaign during the Brexit referendum, and both The Guardian and the BBC claim that the Canadian company is connected to Cambridge Analytica and its parent organization SCL Group. UpGuard, the organization that found a public GitLab instance with code from AggregateIQ, has extensively detailed its connection to Cambridge Analytica and its involvement in Brexit campaigning.

Additionally, The Guardian quotes Wylie as saying the company “was set up as a Canadian entity for people who wanted to work on SCL projects who didn’t want to move to London.”

  • Brexit: A cheat sheet (TechRepublic)
  • Facebook suspends another data analytics firm, AggregateIQ (CBS News)
  • Lawmakers grill academic at heart of Facebook scandal (CBS News)

How is Facebook affected by the GDPR?

Like any organization providing services to users in European Union countries, Facebook is bound by the EU General Data Protection Regulation (GDPR). Due to the scrutiny Facebook is already facing over the Cambridge Analytica scandal, and because the social media giant’s product is by its nature personal information, its strategy for GDPR compliance is receiving a great deal of attention from users and from other companies looking for a model of compliance.

While in theory the GDPR applies only to people residing in the EU, Facebook will require all users to review their data privacy settings. According to a ZDNet article, Facebook users will be asked if they want to see advertising based on partner information–in practice, websites that feature Facebook’s "Like" buttons. Users globally will be asked if they wish to continue sharing political, religious, and relationship information, while users in Europe and Canada will be given the option of switching automatic facial recognition back on.

Facebook members outside the US and Canada have heretofore been governed by the company’s terms of service in Ireland. This was reportedly changed before GDPR enforcement began, as Ireland’s status as an EU member would otherwise seemingly make Facebook liable for damages to users internationally.

  • Google, Facebook hit with serious GDPR complaints: Others will be soon (ZDNet)
  • Facebook rolls out changes to comply with new EU privacy law (CBS News)
  • European court strikes down EU-US Privacy Shield user data exchange agreement as invalid (ZDNet)
  • GDPR security pack: Policies to protect data and achieve compliance (TechRepublic Premium)
  • IT pro’s guide to GDPR compliance (free PDF) (TechRepublic)

What are Facebook “shadow profiles?”

“Shadow profiles” are stores of information that Facebook has obtained about other people–who are not necessarily Facebook users. The existence of “shadow profiles” was discovered as a result of a bug in 2013. When a user downloaded their Facebook history, that user would obtain not just his or her address book, but also the email addresses and phone numbers of their friends that other people had stored in their address books.

Facebook described the issue in an email to the affected users. This is an excerpt of the email, according to security site Packet Storm:

When people upload their contact lists or address books to Facebook, we try to match that data with the contact information of other people on Facebook in order to generate friend recommendations. Because of the bug, the email addresses and phone numbers used to make friend recommendations and reduce the number of invitations we send were inadvertently stored in their account on Facebook, along with their uploaded contacts. As a result, if a person went to download an archive of their Facebook account through our Download Your Information (DYI) tool, which included their uploaded contacts, they may have been provided with additional email addresses or telephone numbers.

Because of the way that Facebook synthesizes data in order to attribute collected data to existing profiles, information about people who do not have Facebook accounts congeals into dossiers, popularly called "shadow profiles." It is unclear what other sources of input feed these profiles; "shadow profile" is not a term Facebook itself uses, according to Zuckerberg in his Senate testimony.
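Mechanically, the matching-and-accumulation process described above lends itself to a few lines of code. The sketch below is purely illustrative, assuming a trivial in-memory store; every name and data structure in it is invented, not drawn from Facebook’s actual systems.

```python
from collections import defaultdict

# Hypothetical sketch of the contact-matching step described above.
# All names, data structures, and storage details are invented.

registered_users = {"alice@example.com": "Alice"}  # known account holders

# Contact details keyed by email for people with no account: a "shadow profile"
shadow_profiles = defaultdict(set)

def ingest_address_book(contacts):
    """Match uploaded contacts against existing accounts; details of anyone
    who doesn't match accumulate in a shadow record instead."""
    matches = []
    for contact in contacts:
        email = contact["email"]
        if email in registered_users:
            # Known user: feed the friend-recommendation engine
            matches.append(registered_users[email])
        else:
            # Non-user: their details congeal into a dossier anyway
            shadow_profiles[email].add((contact["name"], contact.get("phone")))
    return matches

friends = ingest_address_book([
    {"email": "alice@example.com", "name": "Alice A."},
    {"email": "carol@example.com", "name": "Carol C.", "phone": "555-0100"},
])
# Alice matches a registered account; Carol, who never joined,
# ends up with a shadow record anyway.
```

The point of the sketch is the `else` branch: contact matching only works if unmatched contacts are retained somewhere, and that retained remainder is precisely how people who never signed up acquire records.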

  • Shadow profiles: Facebook has information you didn’t hand over (CNET)
  • Finally, the world is getting concerned about data privacy (TechRepublic)
  • Firm: Facebook’s shadow profiles are ‘frightening’ dossiers on everyone (ZDNet)

What are the possible implications for enterprises and business users?

Business users and business accounts should be aware that they are as vulnerable as consumers to data exposure: because Facebook harvests and shares metadata–including SMS and voice call records–across the company’s mobile applications, a business user’s risk profile is the same as a consumer’s. The stakes for businesses and employees could be higher, given that incidental or accidental data exposure could expose the company to liability, IP theft, extortion attempts, and other cybercrime.

Though deleting or deactivating Facebook applications won’t prevent the company from creating so-called advertising "shadow profiles," it will prevent the company from capturing geolocation and other sensitive data. For actionable best practices, contact your company’s legal counsel.

  • Social media policy (TechRepublic Premium)
  • Want to attain and retain customers? Adopt data privacy policies (TechRepublic)
  • Hiring kit: Digital campaign manager (TechRepublic Premium)
  • Photos: All the tech celebrities and brands that have deleted Facebook (TechRepublic)

How can I change my Facebook privacy settings?

According to Facebook, in 2014 the company removed the ability for apps that friends use to collect information about an individual user. If you wish to disable third-party use of Facebook altogether–including Login With Facebook and apps that rely on Facebook profiles such as Tinder–this can be done in the Settings menu under Apps And Websites. The Apps, Websites And Games field has an Edit button–click that, and then click Turn Off.

Facebook has been proactively notifying users who had their data collected by Cambridge Analytica, though users can manually check whether their data was shared by going to this Facebook Help page.

Facebook is also developing a Clear History button, which the company says will clear what it describes as "their database record of you." CNET and CBS News Senior Producer Dan Patterson noted on CBSN that "there aren’t a lot of specifics on what that clearing of the database will do, and of course, as soon as you log back in and start creating data again, you set a new cookie and you start the process again."

To gain a better understanding of how Facebook handles user data, including what options can and cannot be modified by end users, it may be helpful to review Facebook’s Terms of Service, as well as its Data Policy and Cookies Policy.

  • Ultimate guide to Facebook privacy and security (Download.com)
  • Facebook’s new privacy tool lets you manage how you’re tracked across the web (CNET)
  • Securing Facebook: Keep your data safe with these privacy settings (ZDNet)
  • How to check if Facebook shared your data with Cambridge Analytica (CNET)

Note: This article was written and reported by James Sanders and Dan Patterson. It was updated by Brandon Vigliarolo.



By Louise Matsakis and Issie Lapowsky

Everything We Know About Facebook's Massive Security Breach


Facebook’s privacy problems severely escalated Friday when the social network disclosed that an unprecedented security issue, discovered September 25, impacted almost 50 million user accounts. Unlike the Cambridge Analytica scandal, in which a third-party company improperly accessed data that a then-legitimate quiz app had siphoned up, this vulnerability allowed attackers to directly take over user accounts.

The bugs that enabled the attack have since been patched, according to Facebook. The company says that the attackers could see everything in a victim's profile, although it's still unclear if that includes private messages or if any of that data was misused. As part of that fix, Facebook automatically logged out 90 million Facebook users from their accounts Friday morning, accounting both for the 50 million that Facebook knows were affected, and an additional 40 million that potentially could have been. Later Friday, Facebook also confirmed that third-party sites that those users logged into with their Facebook accounts could also be affected .

Facebook says that affected users will see a message at the top of their News Feed about the issue when they log back into the social network. "Your privacy and security are important to us," the update reads. "We want to let you know about recent action we've taken to secure your account." The message is followed by a prompt to click and learn more details. If you were not logged out but want to take extra security precautions, you can check this page to see the places where your account is currently logged in, and log them out.

Facebook has yet to identify the hackers, or where they may have originated. “We may never know,” Guy Rosen, Facebook’s vice president of product, said on a call with reporters Friday. The company is now working with the Federal Bureau of Investigation to identify the attackers. A Taiwanese hacker named Chang Chi-yuan had earlier this week promised to live-stream the deletion of Mark Zuckerberg's Facebook account, but Rosen said Facebook was "not aware that that person was related to this attack."

“If the attacker exploited custom and isolated vulnerabilities, and the attack was a highly targeted one, there simply might be no suitable trace or intelligence allowing investigators to connect the dots,” says Lukasz Olejnik, a security and privacy researcher and member of the W3C Technical Architecture Group.

On the same call, Facebook CEO Mark Zuckerberg reiterated previous statements he has made about security being an “arms race.”

“This is a really serious security issue, and we’re taking it really seriously,” he said. “I’m glad that we found this, and we were able to fix the vulnerability and secure the accounts, but it definitely is an issue that it happened in the first place.”

The social network says its investigation into the breach began on September 16, when it saw an unusual spike in users accessing Facebook. On September 25, the company’s engineering team discovered that hackers appear to have exploited a series of bugs related to a Facebook feature that lets people see what their own profile looks like to someone else. The "View As" feature is designed to allow users to experience how their privacy settings look to another person.

The first bug prompted Facebook's video upload tool to mistakenly show up on the "View As" page. The second caused the uploader to generate an access token—what allows you to remain logged into your Facebook account on a device, without having to sign in every time you visit—that had the same sign-in permissions as the Facebook mobile app. Finally, when the video uploader did appear in "View As" mode, it generated that access token for whoever the hacker was looking up, rather than for the hacker's own account.
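The logical flaw in that chain can be modeled abstractly. The sketch below is a hypothetical simplification: the function, field names, and permission scopes are all invented, since the actual code was never published; it captures only the core error of minting a full-permission token for the wrong user.

```python
from dataclasses import dataclass

# Hypothetical model of the "View As" token bug chain. Names and
# scopes are invented; this is not Facebook's real implementation.

@dataclass
class AccessToken:
    user_id: str  # whose account the token controls
    scope: str    # what the token is allowed to do

def issue_preview_token(viewer_id: str, viewed_as_id: str,
                        buggy: bool = True) -> AccessToken:
    """Model of the flaw: the video uploader, wrongly shown on the
    'View As' page, minted a token with mobile-app permissions.
    Because the page renders the profile *as* another user, the token
    was issued for the looked-up user, not the person looking."""
    if buggy:
        # Bug: full permissions, and bound to the viewed user
        return AccessToken(user_id=viewed_as_id, scope="full_mobile_app")
    # Corrected behavior (modeled): a preview grants the viewer
    # nothing beyond a read-only view of their own profile
    return AccessToken(user_id=viewer_id, scope="read_only_preview")

stolen = issue_preview_token("attacker", "victim")
# The attacker now holds a token that acts as the victim.
```

In this model, the fix amounts to never minting such a token from the preview path, plus invalidating any token that could already have been minted that way, which is what the mass logout described below accomplished.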


“This is a complex interaction of multiple bugs,” Rosen said, adding that the hackers likely required some level of sophistication.

That also explains Friday morning's logouts; they served to reset the access tokens of both those directly affected and any additional accounts “that have been subject to a View As look-up” in the last year, Rosen said. Facebook has temporarily turned off "View As," as it continues to investigate the issue.

“It’s easy to say that security testing should have caught this, but these types of security vulnerabilities can be extremely difficult to spot or catch since they rely on having to dynamically test the site itself as it’s running,” says David Kennedy, the CEO of the cybersecurity firm TrustedSec.

The vulnerability couldn’t have come at a worse time for Facebook, whose executives are still reeling from a series of scandals that unfolded in the wake of the 2016 US presidential election. A widespread Russian disinformation campaign leveraged the platform unnoticed, followed by revelations that third-party companies like Cambridge Analytica had collected user data without their knowledge.

The social network already faces multiple federal investigations into its privacy and data-sharing practices, including one probe by the Federal Trade Commission and another conducted by the Securities and Exchange Commission. Both have to do with its disclosures around Cambridge Analytica.

It also faces the specter of more aggressive regulation from Congress, on the heels of a series of occasionally contentious hearings about data privacy. After Facebook’s announcement Friday, Senator Mark Warner (D-Virginia), who serves as vice chairman of the Senate Intelligence Committee, called for a “full investigation” into the breach. “Today’s disclosure is a reminder about the dangers posed when a small number of companies like Facebook or the credit bureau Equifax are able to accumulate so much personal data about individual Americans without adequate security measures,” Warner said in a statement. “This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users.”

Facebook may also face unprecedented scrutiny in Europe, where the new General Data Protection Regulation, or GDPR, requires companies to disclose a breach to a European regulator within 72 hours of becoming aware of it. In cases of high risk to users, the regulation also requires that they be notified directly. Facebook says it has notified the Irish Data Protection Commission about the issue.

This is the second security vulnerability that Facebook has disclosed in recent months. In June, the company announced it had discovered a bug that made up to 14 million people’s posts publicly viewable to anyone for days. This is the first time in Facebook’s history, though, that users’ entire accounts may have been compromised by outside hackers. Its response to this vulnerability—and the speed and comprehensiveness of its disclosures going forward—will likely matter a great deal. Once again, all eyes are on Mark Zuckerberg.

Additional reporting by Lily Hay Newman.



MIT Technology Review


What you need to know about the Facebook data leak

The data trove, uncovered by security researcher Alon Gal, includes phone numbers, email addresses, hometowns, full names, and birth dates.

By Charlotte Jee


The news: The personal data of 533 million Facebook users in more than 106 countries was found to be freely available online last weekend. The data trove, uncovered by security researcher Alon Gal, includes phone numbers, email addresses, hometowns, full names, and birth dates. Initially, Facebook claimed that the data leak had been reported on in 2019 and that it had patched the vulnerability that caused it that August. But in fact, it appears that Facebook did not properly disclose the breach at the time. The company finally acknowledged it on Tuesday, April 6, in a blog post by product management director Mike Clark.

How it happened: In the blog post, Clark said that Facebook believes the data was scraped from people’s profiles by "malicious actors" using its contact importer tool, which uses people’s contact lists to help them find friends on Facebook. It isn’t clear exactly when the data was scraped, but Facebook says it was "prior to September 2019." One complicating factor is that cybercriminals commonly combine different data sets and sell them off in separate chunks, and Facebook has had many different data breaches over the years (most famously the Cambridge Analytica scandal).

Why the timing matters:  The General Data Protection Regulation came into force in European Union countries in May 2018. If this breach happened after that, Facebook could be liable for fines and enforcement action because it failed to disclose the breach to the relevant regulators within 72 hours, as the GDPR stipulates. Ireland’s Data Protection Commission is investigating the breach. In the US, Facebook  signed a deal two years ago  that gave it immunity from Federal Trade Commission fines for breaches before June 2019, so if the data was stolen after that, it could face action there too.
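The timing argument above reduces to simple date arithmetic. In the sketch below, the cutoff dates and the sample breach date are assumptions for illustration only, since the actual scrape date is unknown ("prior to September 2019") and the FTC cutoff is approximated from the article's "before June 2019" description.

```python
from datetime import datetime, timedelta

# Back-of-the-envelope check of the two liability windows described
# above. All specific dates here are illustrative assumptions.

GDPR_EFFECTIVE = datetime(2018, 5, 25)        # GDPR enforcement begins
FTC_IMMUNITY_CUTOFF = datetime(2019, 6, 30)   # approximate, assumed

def liability_windows(breach_date, disclosure_date):
    """Return which regulatory exposures a given breach/disclosure
    pair would trigger under this simplified model."""
    exposures = []
    if breach_date >= GDPR_EFFECTIVE:
        # GDPR requires disclosure to a regulator within 72 hours
        if disclosure_date - breach_date > timedelta(hours=72):
            exposures.append("GDPR: late disclosure (>72h)")
    if breach_date >= FTC_IMMUNITY_CUTOFF:
        # Settlement immunity covers only breaches before the cutoff
        exposures.append("FTC: outside immunity window")
    return exposures

# Example: a hypothetical mid-2019 scrape, acknowledged in April 2021
print(liability_windows(datetime(2019, 7, 1), datetime(2021, 4, 6)))
```

The sketch makes the article's point concrete: everything hinges on where the scrape date falls relative to the two cutoffs, which is exactly why "prior to September 2019" is such an unsatisfying answer for regulators.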




What Leaked Internal Documents Reveal About The Damage Facebook Has Caused


Terry Gross

WSJ reporter Jeff Horwitz says Facebook executives often choose to boost engagement at the expense of tackling misinformation and mental health problems, which are rampant on their platforms.

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Internal Facebook documents were leaked by a whistleblower and acquired by my guest Jeff Horwitz, a technology reporter for The Wall Street Journal. He's the lead reporter for The Journal's new series of articles called "The Facebook Files." This series details how Facebook executives are aware of the ways the platform causes harm, but executives often lack the will or the ability to address them. The series reveals how a separate set of rules has been applied to VIP users like celebrities and politicians, allowing them to at least briefly escape restrictions and penalties that are applied to other users.

Facebook's own researchers are aware that Instagram, which is owned by Facebook, has negative effects on the self-image and mental health of many teenage girls. Internal documents also reveal that Facebook researchers have warned the company's executives that the platform is used in developing countries for human trafficking, drug-dealing and to promote ethnic violence.

The company's CEO, Mark Zuckerberg, has made it a goal to promote the COVID-19 vaccine, but his researchers have pointed out that that effort is being undermined by commenters spreading misinformation. At least some of the leaked internal documents have been turned over by the whistleblower to the Securities and Exchange Commission and to Congress.

Jeff Horwitz, welcome to FRESH AIR. Congratulations on the series, which isn't over yet (laughter). You're still - there's more to come. So what are these internal documents that were leaked?

JEFF HORWITZ: So this is a collection of internal research notes, executive presentations, in some cases company audits of its own practices that provide a pretty clear sense of how Facebook sees itself and the company's awareness of its own problems. And I think that's something that sort of separates it from a lot of other really good reporting on the company, which is that instead of this being outside voices asking questions about whether or not Facebook is being detrimental to the world, this is Facebook asking those questions and answering them and sometimes finding that the answer is very much yes.

GROSS: And what you're talking about is researchers from Facebook who report to executives and tell them what's going on. And often what they've told them is that this platform is backfiring. It's causing harm for these and these reasons.

HORWITZ: Yeah, exactly. I think it's important to draw a distinction between sort of irate watercooler chat and people letting off steam about things that don't really involve them at the company versus this stuff, which is these are the people that Facebook has hired to inform it of reality and to help it address problems. And in many cases, they are finding some really unpleasant things and then running into obstacles in trying to fix them.

GROSS: Now, are the obstacles a lack of will? Or are the obstacles that Facebook is so big and there are so many users that it is hard to control, even if you want to?

HORWITZ: I think that the premise that the company is just too big to be - to regulate itself isn't correct. There are - yes, having nearly 3 billion users is quite a lot of users to have to be in charge of. But what our reporting seems to indicate is that the company's complexity has become a big problem, as well as just kind of a lack of will and lack of interest in some instances. So it's not that a platform couldn't be made to work for this many users in a sort of simpler and safer way. It's that you can't have all the bells and whistles, and you can't maximize engagement in the way that Facebook would like to and not have that come at a cost.

GROSS: Let's look at the first program you reported on, which is a VIP program called XCheck. This is a program that basically created separate rules for VIPs and for everybody else who uses Facebook. What VIPs have been exempt from certain rules? What kinds of people?

HORWITZ: Oh, a lot of them. So Facebook has talked in the past about providing some - a little bit of extra leeway for politicians and fact-checking and misinformation - right? - the idea being that, you know, in an election, candidates should have the right to say whatever they want to say even if those things aren't strictly true. And the thing we found is that the protections Facebook offers to powerful users go far, far beyond that.

So they include celebrities. They include journalists. I have no doubt that you should qualify. I most certainly should qualify. They include athletes and just sort of people who are famous for being famous, influencers. They include animal influencers. So you know, just, like, literally, the account Doug the Pug is actually covered by XCheck, which was the program.

So basically, the idea is - the commonality among all these people and entities and animals is that they are big enough and prominent enough, they could cause problems for the platform. The way that this program was designed very explicitly internally was to avoid, quote-unquote, "PR fires." And I think that's something that kind of sticks out in general in this reporting, is that the thing that makes Facebook scared more so than harm that it might be causing is the risk of public embarrassment.

GROSS: What kind of public embarrassment? What kind of PR fire?

HORWITZ: So this can be everything from making a mistake and tangling with, you know, the singer Rihanna's account because she posted a risque French magazine cover to, you know, making an error on something Donald Trump said to, you know, anything that basically would result in the company receiving widespread public criticism. And I think this is something that is kind of - exists throughout the series, is that Facebook really likes to stay in the background. They really would like to be kind of viewed as this neutral platform in which just kind of life plays out online. And as you know, what our reporting tends to show is that that is not the case. The company is actively making a lot of choices, is determining which interests benefit and at what expense. And I think XCheck is kind of a perfect example of that, which is that the whole idea is to never publicly tangle with anyone who is influential enough to do you harm.

GROSS: Can you give us an example of a post that caused harm or could potentially cause harm that was allowed to stay up for a long time or a brief time because this person was a VIP?

HORWITZ: Sure. And there are - so there are a lot of them. Facebook's own analysis of XCheck found that 16.4 billion views of violating content occurred solely because of the lag time in taking down stuff from VIPs that shouldn't have been up in the first place. But I think the example I would give for how this program can cause harm and does sort of run against Facebook's sort of basic ethos of fairness is the Brazilian soccer player Neymar, who, in 2019, was accused by a Brazilian woman of rape. And he, to defend himself, took to Instagram and took to Facebook in a live video. And he showed pictures of this - of his WhatsApp chats with this woman, his messages with this woman. And those messages included not just her name, but also nude photos of her that she had shared with him.

And this is just a complete no-go on Facebook. You are not allowed to broadcast people's naked pictures without their consent. It is called nonconsensual nude imagery at Facebook. It's called revenge porn everywhere else. And the appropriate response, per Facebook's own rules, is to immediately take down the post and delete the account that posted it. So that was kind of what would have happened. A Facebook employee did catch this, you know, pretty early on and tried to delete it. But the problem was Neymar's account was cross-checked. So it didn't come down. In fact, it stayed up for 36 hours, during which it racked up 56 million views. And this resulted in extensive harassment of the woman who had accused him of sexual assault. There were thousands and thousands of impersonators of her. And the video was reposted just all over the Internet. And basically, Facebook acknowledged internally that it had just completely failed to protect this woman. And this happened because of XCheck.

Now, I think another part of the program that is important is that it really does and is intentionally designed to allow executives, communications and sort of public affairs people to weigh in on punishments that would otherwise be doled out. And that's what happened in this instance is that Neymar, who is one of the top 20 accounts on Instagram - like, this is a guy who is probably more famous for social media than he is for soccer. Facebook just simply wasn't willing to lose him. And so this got bumped all the way up to senior leadership of the company. And they determined that rather than removing him from the platform, even though that was the absolute standard rule for this situation, they were going to kind of let it slide. So they took down the post in the end. But they didn't punish his account in the way they normally would. And I think it's kind of representative of the dual-class - or even more than dual-class system that Facebook created, in some ways, reinforcing power structures that, you know, the company has said it was supposed to kind of overthrow.

GROSS: There was a 2019 internal review of the XCheck program. What did that review say?

HORWITZ: I think people inside Facebook did have, on a long-term basis, a sense that exempting users from enforcement and from punishment on the platform was just, like, clearly not the right thing to do. This is not what Facebook was set to do. This isn't democratic. It isn't fair. And in 2019, an internal review of the XCheck program found a few things. The first one is that it was completely widespread, that there were dozens and dozens of teams that were enrolling users in various protections and that, in fact, pretty much any employee had been allowed to enter people into the XCheck program in the first place.

The second thing is that it was just deeply mismanaged and unorganized. And no one really even knew how these lists were getting pulled together. They weren't being reviewed by lawyers. There was just sort of, kind of this ad hoc process where people would just put in names. And the final thing is that they found that this was just completely indefensible. This was a breach of trust with users. It was putting users at risk of harm. And it was clearly unfair. And as they noted, this was publicly indefensible and simply something that, you know, was completely at odds with the company's own sense of its legitimacy as an overseer of its own platform.

GROSS: What were Facebook executives' reactions after getting this report?

HORWITZ: Facebook - I mean, no one disputed that XCheck was a mess and that the program was unseemly and was in, you know, direct conflict with what the company had said publicly its rules are. That said, they really weren't willing to take on the mess of just simply doing away with it, particularly with the 2020 election coming up. I think this is something that - you know, over the period of time that the documents we reviewed cover, this company was paranoid about the possibility that it might be blamed for something in relation to the 2020 election. And so they desperately wanted to keep a low profile. And there was no way that they were going to rein the program in because this was kind of one of their main methods of trying to avoid criticism from high-profile people.

GROSS: Let's talk about anti-vax posts on Facebook. Mark Zuckerberg has made it a priority to promote vaccines and facts about vaccines. But at the same time, Facebook has been used widely to convey anti-vax falsehoods. And you found that internal documents reveal that the anti-vax comments were mostly coming not from the original post, but from commenters. Would you describe what happened with that?

HORWITZ: Sure. And I think an important place to start here is what you said about Mark Zuckerberg and his goals. This is something - fighting COVID was something that Facebook was, perhaps, uniquely inclined and positioned to do. They early on recognized the threat of the public health crisis back when a lot of other people were pooh-poohing the possibility of the global pandemic. They sent all their moderators home, content moderators home, with pay. You know, they sort of really reframed and sort of sprinted to provide new tools, to provide information, to, you know, help out with public health efforts. They really were focused on this. And this was something that came from Mark Zuckerberg personally. I mean, this was kind of going to be Facebook's moment.

And I think the interesting thing about this is that there were, you know, sort of all these resources and good intentions put into it, and yet also this kind of failure by the company to recognize the risks that its own platform could pose. And it's not as if Facebook hadn't had plenty of warnings that the anti-vaccine movement was very active on its platform. If you remember the, you know, measles outbreaks back in 2019 at Disneyland and things like that, there was a very, very aggressive community of anti-vaccine activists that had been active on the platform and had gotten really sophisticated in terms of their methods and their approach. And so the company sort of focused on the positive and all the things it could do that would be helpful and really didn't pay much attention to the, I think, fairly obvious threat that a small band of people who were extremely dedicated could pose if they correctly harnessed Facebook's tools, which they did.

GROSS: Well, let's take a short break here. And then we'll talk some more. If you're just joining us, my guest is Jeff Horwitz, who is the lead reporter for The Wall Street Journal's new and ongoing series of articles called "The Facebook Files." We'll be right back after a short break. This is FRESH AIR.

(SOUNDBITE OF OF MONTREAL SONG, "GRONLANDIC EDIT")

GROSS: This is FRESH AIR. Let's get back to my interview with Jeff Horwitz, who is the lead reporter on a new and ongoing Wall Street Journal series called "The Facebook Files," based on a series of leaked documents from Facebook. These documents detail how Facebook executives are aware of the ways the platform causes harm, but executives often lack the will or the ability to address them. Is it harder to oversee or to apply rules to commenters than it is with people doing the original posts on Facebook?

HORWITZ: This was a bit of a blind spot for the company. They hadn't really ever put that many resources into trying to understand comments, which is kind of funny because Facebook really did engineer its platform to produce a ton of comments. And they - what they realized early in 2021 was that, you know, as the vaccine was rolling out - was that all of the authoritative sources of information about it - right? - the World Health Organization, UNICEF and so on - all of their posts were just getting swamped by anti-vaccine advocates who were, you know, producing, at extremely high volume, content in the form of comments that was kind of just hitchhiking around.

And I think the company understood this, to its credit, at that point as being a real threat because, you know, it's one thing to see something authoritative from UNICEF, and it's another thing to see that same thing and then a whole bunch of people saying don't believe it, right? And that's kind of the style of comment that was rising to the top of Facebook's own systems. So they realized that basically all of the things they were doing to try to promote authoritative information were in some ways being harnessed by the people who were trying to promote the exact opposite.

GROSS: Internal documents also show that Facebook knew - that it was really a small group responsible for most of the COVID misinformation on Facebook. So what was Facebook's response to this research that was delivered to executives?

HORWITZ: Yeah. So the initial response was just basically horror because they realized that, you know, there were just a very high proportion, not only of comments but also posts in general, that seemed to be - vaccine-hesitant was the company's phrase - so not necessarily straight misinformation - you know, false things like saying vaccines cause autism or make you sterile - but people who simply were exercising their right to speak on the platform as often as possible and in just extremely coordinated, almost cut-and-paste-style ways. And they were creating, basically, a false sense that there was a large public debate about the safety of vaccines, when there really isn't.

So the initial response was just, uh-oh, this is a huge problem. We've got to fix it. And then the second response was, OK, how do we do that because they didn't really have the tools in place. They hadn't planned for this. And so they had to kind of make do with a whole bunch of kind of ad hoc interventions and try to sort of start getting public discourse to be at least somewhat representative - right? - so that any time someone who was, you know, encouraging about vaccinations wouldn't just get dogpiled by a - you know, a very, very dedicated group of anti-vaccine advocates.

GROSS: Were these changes effective in stopping misinformation about the vaccine?

HORWITZ: I think it's kind of too soon to tell how well they did. Certainly in terms of preventing this stuff from getting traction in the first place, they failed - right? I mean, the whole problem and the thing that kicked this - kicked Facebook's response into gear was that public debate on the platform about this thing was skewed. It was getting sort of manipulated by anti-vaccine advocates. And, I mean, the fact that this was happening in 2021, as the vaccine was getting rolled out, you know, from, you know, the initial sort of first responders and medical officials to the broader population, certainly seems like it could have had an impact.

And I think, you know, the company would note that it's not the only source of vaccine misinformation in the world by any means, right? There's plenty of stuff on cable TV that would have you believe bad things about the efficacy, safety and utility of the vaccine. But certainly, it's a remarkable thing for a company that really saw itself as being, you know, in the vanguard of solving a public health crisis that, you know, they're basically having to go back and fight with this highly active, somewhat ridiculous community that is just spamming their platform with bad information.

GROSS: Let's take another break here, and then we'll talk some more. If you're just joining us, my guest is Jeff Horwitz, a technology reporter for The Wall Street Journal who's the lead reporter for The Journal's new series of articles called "The Facebook Files," based on internal Facebook documents that were leaked to The Journal. We'll be back after we take a short break.

I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF CHARLIE HUNTER AND LEON PARKER'S "THE LAST TIME")

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with Jeff Horwitz, a technology reporter for The Wall Street Journal who's the lead reporter for the Journal's new series of articles called "The Facebook Files," which detail how Facebook executives are aware of the ways the platform causes harm but executives often lack the will or the ability to address them. The series is based on internal Facebook documents that were leaked by a whistleblower to Jeff Horwitz.

Let's talk about Instagram, which is owned by Facebook. Internal research from Facebook shows that Instagram could have a very damaging impact on teenage girls' self-image, their anxiety, depression. Why does Instagram sometimes have that effect on teenage girls? - 'cause you write that the algorithms on Instagram create a perfect storm for many teenage girls.

HORWITZ: Yeah. So body image issues and social comparison obviously didn't originate with the internet. That said, Facebook's own research found that Instagram had some uniquely harmful features in terms of encouraging young women in particular to compare themselves with others and to think about the flaws of their bodies in relation to others.

And, you know, this wasn't intentional. The company certainly hadn't meant to design something that did this. But, you know, there was no question in their own findings that, you know, compared to even other social media products, Instagram was worse in this respect - that it was very focused on the body as opposed to the face or performance and that, for users who arrived at the platform in not the best mental place, it could really have a big impact on them.

GROSS: What is the way in which algorithms create a perfect storm for teenagers? - 'cause you say that in the article.

HORWITZ: Right, right. So I think there's some core product mechanics here, which is that Instagram will always show you the most popular and successful posts from your friends and the people you follow and - whereas you're comparing that to your regular posts and your regular life. So there's kind of this kind of highlight reel ethos to it that tends to lead users to think that everyone else is living their best life while, you know, they're not.

And so that's part of it. Another part of it is just simply that people tend to be attracted to content that sort of really resonates with them. And if you have body image issues already, Instagram - and you are engaged with sort of looking at people who are prettier than you are on the platform, Instagram's going to keep on doing that. If you have concerns about diet and fitness and you think you might be overweight, Instagram is likely going to pick up on that and feed you a ton of dieting and fitness content.

And so there's kind of this feedback loop that the platform can create. And it turns out for people who are in a vulnerable place in the first place, it can be really damaging and, in some ways, lead to almost addictive-type behavior per Instagram's own analysis.

GROSS: So what you've just described is reported in documents that were written by Facebook researchers and then delivered to Facebook executives. So executives knew what you just told us, right?

HORWITZ: Absolutely. And Adam Mosseri, who's the head of Instagram, in fact, commissioned a lot of this research in the first place. So, you know, I think there's some credit that should go to the company for determining that - given the extensive external criticism of the company on these fronts, that perhaps it should at least get to the bottom of them. And it did. I mean, I think there's no question that what it found, you know, was convincing. As the company's own presentation - one of the presentations to executives notes, we make body image issues worse in 1 in 3 teen girls.

GROSS: But you write that this represents one of the clearest gaps revealed in these internal documents, gaps between Facebook's understanding of itself and its public position.

HORWITZ: Yeah. Look; I can understand why someone in corporate communications isn't eager to make the sentence, we make body image issues worse in 1 in 3 teen girls, public, much less some of the other things in these findings which included that young women who had thought about self-harm or suicide in the last month - that a not-tiny fraction of them traced those feelings directly back to Instagram's platform. So think potentially life-threatening effects.

And I can understand why the company wouldn't want to acknowledge that publicly, you know, or wouldn't want to talk about it much. I think what's interesting is the company did talk about these issues. They just didn't say that. What they said is that there were perhaps small effects, that the research was inconclusive, that, you know, there wasn't any, you know - that, you know, if there was an issue, it was bidirectional, so it was good for some users and bad for some users - basically really downplayed the clarity that they had internally about what was going on and the effect of their product.

GROSS: What was Facebook's reaction to your article about teenagers and Instagram?

HORWITZ: They defended the research and keeping the research private as necessary for, you know, honest internal discussion. And they, I think, tried to argue a bit with whether or not the conclusions of causality that seem to be very present within their own - how their own researchers discussed this stuff even with management - they sort of tried to undermine, you know, the certainty that it really sort of feels like pervades the presentations that the company's researchers gave to executives.

But, you know, I don't think they disagree with the issues. They sort of defended the things that they have said previously about there being relatively small effects. And, you know, they've noted that for many users and users who are in sort of a healthy emotional place, Instagram is a lot more beneficial than it is harmful, all of which is true. None of that is wrong. It's just that the question is, at what cost to vulnerable users?

GROSS: Well, let's take another short break here. If you're just joining us, my guest is Jeff Horwitz, who is the lead reporter for The Wall Street Journal's new series of articles called "The Facebook Files." We'll be right back after a break. This is FRESH AIR.

(SOUNDBITE OF SOLANGE SONG, "WEARY")

GROSS: This is FRESH AIR. Let's get back to my interview with Jeff Horwitz, a technology reporter for The Wall Street Journal. He's the lead reporter for The Journal's new series of articles called "The Facebook Files," which detail how Facebook executives are aware of the ways the platform causes harm, but executives often lack the will or the ability to address them. The series is based on internal Facebook documents that were leaked by a whistleblower to Jeff Horwitz.

One of the articles in the series is headlined "Facebook Tried To Make Its Platform A Healthier Place. It Got Angrier Instead." And this article is about a change that was made in 2018 that rewarded outrage. What was the change?

HORWITZ: Facebook promoted something in 2018 called meaningful social interaction. And the idea was that passively scrolling through content wasn't good for people - you know, it just turned them into zombies - and that what Facebook should be doing is encouraging people to sort of connect and engage with each other and with Facebook content more often. And there were two parts to this. One part was promoting content from people's friends and families, which was kind of a throwback to kind of an earlier era of Facebook where it was much more about that stuff than it was about kind of a constant stream of information and content.

The second part, though, was rewarding content that did really well on engagement, meaning things that got a lot of likes, but even more important than likes, things that got a lot of emoji responses, comments, re-shares, direct message shares and things like that - so basically things that made users kind of pound the keyboard a bit and, you know, share and engage as much as possible. And you know, nothing about that seems, you know, atrocious in sort of a general high-level view. But it turns out, as Facebook realized down the road, that the effect that had was privileging angry, incendiary conflict because there is nothing more engaging than a fight.
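The reweighting Horwitz describes - reactions, comments and re-shares counting for much more than likes - can be sketched as a toy scoring function. The weights and field names below are purely illustrative assumptions, not Facebook's actual formula; they just show how this kind of engagement-weighted ranking ends up privileging posts that provoke replies and shares over posts that are merely liked:

```python
# Toy sketch of engagement-weighted ranking ("meaningful social
# interaction" style). All weights and field names are hypothetical
# illustrations, not Facebook's real values.

def msi_score(post: dict) -> int:
    weights = {
        "likes": 1,
        "reactions": 5,       # emoji responses weighted above likes
        "comments": 15,
        "reshares": 30,
        "message_shares": 30,
    }
    return sum(weights[kind] * post.get(kind, 0) for kind in weights)

# A calm post with many likes vs. an incendiary post that sparks
# comments and re-shares:
calm_post = {"likes": 200, "comments": 5}
incendiary_post = {"likes": 50, "reactions": 40, "comments": 60, "reshares": 20}

print(msi_score(calm_post), msi_score(incendiary_post))  # 275 1750
```

Under a scheme like this, the post that starts a fight outranks the post with four times as many likes, which is the dynamic the article describes: the metric rewards whatever makes users "pound the keyboard."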

GROSS: And news publications, as a result, found that a lot of their traffic was decreasing dramatically. What was the connection?

HORWITZ: So there was some element of this where they were just kind of reducing news overall in feed - in other words, to boost the stuff from friends and family. But I think the type of content that succeeded changed. And one thing we found was that BuzzFeed's - the head of BuzzFeed, Jonah Peretti, who is - you know, no one could accuse this guy of being unsophisticated when it comes to social media - actually figured out that something had changed materially when Facebook rolled out this stuff and that, essentially, the type of content that was succeeding on the platform was, like, sensationalistic, incendiary. Gross medical stuff was doing well - you know, things that sort of got a response. And you know, his point to Facebook when he got in touch was that, look, like, you guys are forcing us to produce worse content.

And the same thing was true of political parties. They also picked up on what had changed, and they started adjusting accordingly. And so parties told Facebook that because of, literally, this algorithm change - like, some reweighting, some math - that they were shifting not just their communication strategy for the internet but, in some instances, their actual platform.

GROSS: Once this was reported to Facebook executives, what actions did the executives take?

HORWITZ: Facebook's attraction to meaningful social interaction as a metric wasn't just that they thought it would be good for people. It's also - they thought it would be good for Facebook. They really needed people to be engaging with content more because they'd been in decline in commenting and interaction in a way that was threatening to the future of a social network dependent on user-generated content. And so this had been really successful in terms of getting engagement back up and getting people to comment more. And the problem was that doing the things that researchers said would be necessary to sort of correct the amplified anger issue was going to come at the expense of some of the growth metrics that Facebook was pursuing. And that's always a hard sell inside that company.

GROSS: What was Facebook's response to this article?

HORWITZ: So Facebook noted that they had made some changes, which is true. I think the thing that we were very focused on is that people up to and including Mark Zuckerberg kind of resisted anything that was going to cause sacrifices in user growth numbers and in user engagement numbers for the purpose of improving the quality of discourse on the platform. So they told us on this one that basically any engagement-based ranking system or any ranking system is going to have problems - right? - that yes, they acknowledged that incendiary content did benefit from what they'd done, but, you know, that's not to say that there aren't disadvantages to other systems as well.

GROSS: So one of your articles in The Journal reports that in developing countries, Facebook was often used by drug cartels, human traffickers, used to promote violence against ethnic groups. And developing countries are actually very important to Facebook now. And why is that?

HORWITZ: People in poorer countries - they don't provide Facebook much money, but they do provide it with a lot of growth. Facebook has basically stalled out in developed economies. I mean, there isn't much in the way of new user growth to be achieved in the U.S., Canada, Europe and wealthier nations. So this is kind of where pretty much all of the company's growth has been coming in recent years. And you know, that makes them kind of - places like India are sort of the company's future.

And at the same time, though, Facebook has never really invested much in safety in those environments. And you know, they had, for example, a team of just a few people trying to focus on human trafficking across the globe. That includes sex trafficking, labor trafficking, organ trafficking. And they were clearly overwhelmed. And there were some, I think, serious issues of the company just simply not really caring all that much.

I think one instance we found was that the company had identified sort of wide-scale human trafficking occurring, in which people from the Philippines and Africa were kind of indenturing themselves into domestic labor in the Gulf states. And they were - once there, kind of lost all autonomy. They could literally be resold without their permission. And Facebook actually had - first of all, had allowed this for a long time. Like, up until 2019, it was actually OK for people to be sold on Facebook so long as the selling was happening through brick-and-mortar establishments, as long as, you know, there was - it was in a country where this was allowed. And then I think more broadly, Facebook had just kind of turned a blind eye to this whole practice. One thing, you know, that I think was - really stood out to me just in terms of demonstrating the company's lack of will on some of these things is that Facebook, while it had identified widespread human trafficking, hadn't done anything about it - and in some instances for years.

The thing that Facebook - moved Facebook in 2019 to take aggressive action on this was Apple. You know, the maker of the iPhone told Facebook that it was going to take away - it was going to remove Instagram and Facebook from its App Store, basically make it so that people couldn't download the apps unless Facebook got its human trafficking problem under control. And boom, that was it, right? Actually, understanding human trafficking was happening on its platform wasn't enough to get Facebook's attention - what did was the threat that Apple might take an action that would severely damage its business. So Facebook, literally within days, was just pulling down content all over the place. And the crisis passed. And then, as we found, things went back to normal. And normal means that human trafficking is happening on a pretty widespread scale on the platform.

GROSS: Another obstacle that you report is Facebook doesn't have enough people monitoring posts who speak the dialect needed to identify dangerous or criminal uses of Facebook.

HORWITZ: Yeah. And this is something that I think - look; like, I think we're all familiar with Facebook's apologies right now, right? Like every couple of months or weeks or days, depending on how closely you're monitoring it, the company ends up saying that it's sorry that something happened. And particularly overseas, it seems like there's just this kind of succession of inadvertent oversights that come with large human consequences. And the thing we found is that these aren't accidents. These aren't due to the company, you know, just simply having too much to possibly do. These are issues of direct neglect. So for example, with Arabic, it's the third - world's third most commonly spoken language. It has many dialects that are mutually incomprehensible. Facebook literally can't - doesn't have anyone who can speak most of them or can understand most of them in terms of sort of the vernacular. And it also doesn't have a system to route content in those dialects to the right people.

So when something happens like the Israeli-Palestinian violence earlier this year, the company is just sort of woefully unprepared to deal with it. They can't process content. They don't have people on staff. And, I mean, one of the things that's kind of tragic that we could see inside the documents was that you had all of these people who work for Facebook with Middle Eastern backgrounds who were just desperately trying to, like, kick in ad hoc to try to, like, help steer the company in a better direction because it was just screwing up so much at a time that was, like, so crucial on its platform.

GROSS: Nick Clegg, who's the Facebook vice president of global affairs, recently published a blog post saying that The Wall Street Journal articles have contained deliberate mischaracterizations of what Facebook is trying to do and conferred egregiously false motives to Facebook's leadership and employees. What's your reaction to that?

HORWITZ: My reaction is that Facebook has the right to say whatever they would like to say in response to our reporting. I think the more useful reaction to that isn't mine. It's that there actually have been in recent days a large number of former Facebook employees who have directly taken issue with what Mr. Clegg and what the company has said on these subjects. And I mean, these are people who actually were doing the work. Like, there are names that are popping up on Twitter that are the names that were sort of protagonists, I suppose, in some of the stories I could see playing out inside of the company.

And what they've said very clearly is that - you know, one, that the things that we're raising are pretty much correct and, two, that there is, in fact, this history of kind of disregarding the work of the people Facebook's asked to do integrity work - integrity just being platform safety and content quality stuff. And so, you know, I think there's something really encouraging about some of these voices coming to the fore because these are people who sort of pioneered not just the ways to measure problems on the platform, but also ways to address them. And so the idea that they might be able to come out and talk more about the work they did is, I think, really interesting to me and, in some ways, would be very healthy for the company.

GROSS: My guest is Jeff Horwitz, who is the lead reporter for The Wall Street Journal's new and ongoing series called "The Facebook Files." This is FRESH AIR.

(SOUNDBITE OF YO LA TENGO'S "WEATHER SHY")

GROSS: This is FRESH AIR. Let's get back to my interview with Jeff Horwitz, a technology reporter for The Wall Street Journal, who's the lead reporter for the Journal's new series of articles called "The Facebook Files." The series details how Facebook executives are aware of the ways the platform causes harm. But the series also says executives have often lacked the will or the ability to address those problems. The series is based on internal Facebook documents that were leaked by a whistleblower to Jeff Horwitz. What are some of the suggestions current or former Facebook employees have made, that you're aware of, of how to improve some of the problems that you've reported on?

HORWITZ: Yeah, I think Facebook tends to treat social media as if it's - you know, Facebook is the only way in which it could possibly exist - right? - kind of a love-it-or-leave-it approach. And that, for their own - per their own employees, is absolutely not true. There are a number of things that can be changed, right? So in addition to just simply the question of resources, which would address a lot of problems, there are also ways in which the platform perhaps has grown too complex to be safe. So, for example, in developing countries, is it really a good idea for things to be able to go viral in a matter of minutes? Maybe that's not good if you're worried about information quality. So virality restrictions is one thing.

There's other work that I think seems like it would be really promising, such as trying to give more prominence to voices that seem to have respectful conversations. It's the - the concept is called earned voice. And rather than just sort of rewarding the biggest loudmouth, this would reward people who tend to be able to have conversations with people who aren't necessarily exactly like them that are nonetheless respectful and, you know, mutually satisfying. Now, that's not, of course, the way you get the most engagement, but it is something that could potentially provide a different style of conversation that would be, I think, recognized by most people outside the company as healthier.

GROSS: Recently, Facebook created what's been described as a Supreme Court for Facebook, an outside entity of experts who would help Facebook make complicated decisions about content. How has that been actually functioning?

HORWITZ: So this came up in the XCheck story that we did about the sort of special protections for VIPs. Facebook spent $130 million creating the Oversight Board, with the stated purpose of providing transparency and accountability into its operations. And one of the powers it gave the Oversight Board was the ability to ask Facebook questions that Facebook would then have to answer, assuming that they were relevant. And in the case of XCheck, the board asked the right questions. In relation to Donald Trump's suspension from the platform, the board asked, very specifically, for data about the XCheck program and about protections for VIP users. And Facebook said it didn't exist. And this is obviously awkward, given the stuff we've seen, because, you know, we can actually see there were internal dashboards of metrics as well as just voluminous documentation of the program's problems, of the number of accounts, of how many bad views of content occurred as a result of the lag in review times. You know, this is a pretty well-documented program internally, and Facebook told its supposed overseers that it just simply didn't have the information and couldn't possibly gather it.

And the Oversight Board has, at this point, issued some pretty strong statements of discontent with that situation. But I think it does seem like a bit of a crisis in the sense that, you know, oversight does imply the ability to actually see what's going on inside the company. And I think the Oversight Board has, to its credit, recognized that that isn't something that Facebook is readily willing to provide. So what their role is, I think, going forward is going to be an interesting question, because they're - you know, they're kind of being asked to play a self-regulatory role for Facebook. At the same time, they are fully independent, and they also seem to not have much trust in Facebook and whether Facebook's going to give them the truth about what Facebook is itself doing.

GROSS: Well, Jeff Horwitz, thank you for your reporting, and thank you for coming on our show.

HORWITZ: Thank you so much, Terry.

GROSS: Jeff Horwitz is the lead reporter on The Wall Street Journal series "The Facebook Files." If you'd like to catch up on FRESH AIR interviews you missed, like this week's interviews with B.J. Novak, who played Ryan in "The Office" and has a new TV series, or Max Chafkin, author of a new book about the controversial co-founder of PayPal, Peter Thiel, check out our podcast. You'll find lots of FRESH AIR interviews.

(SOUNDBITE OF JOHN COLTRANE'S "GIANT STEPS")

GROSS: FRESH AIR'S executive producer is Danny Miller. Our technical director and engineer is Audrey Bentham. Our interviews and reviews are produced and edited by Amy Salit, Phyllis Myers, Roberta Shorrock, Sam Briger, Lauren Krenzel, Heidi Saman, Ann Marie Baldonado, Thea Chaloner, Seth Kelley and Kayla Lattimore. Our digital media producer is Molly Seavy-Nesper. Therese Madden directed today's show. I'm Terry Gross.

Copyright © 2021 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.


Facebook data leak: details from 533 million users found on website for hackers

  • Information appears to be several years old
  • Facebook says leak stems from problem fixed in 2019

Details from more than 500 million Facebook users have been found available on a website for hackers.

The information appears to be several years old, but it is another example of the vast amount of information collected by Facebook and other social media sites, and of the limits to how secure that information is.

The availability of the data set was first reported by Business Insider. According to that publication, it contains information from 106 countries including phone numbers, Facebook IDs, full names, locations, birthdates and email addresses.

Facebook has been grappling with data security issues for years. In 2018, the social media giant disabled a feature that allowed users to search for one another via phone numbers, following revelations that the political firm Cambridge Analytica had accessed information on up to 87 million users without their knowledge or consent.

In December 2019, a Ukrainian security researcher reported finding a database with the names, phone numbers and unique user IDs of more than 267 million Facebook users – nearly all US-based – on the open internet. It is unclear if the current data dump is related to this database.

The Menlo Park, California-based company did not immediately respond to a request for comment. In a statement provided to other publications, Facebook said the leak was old and stemmed from a problem that had been fixed in 2019.


Facebook and Data Privacy in the Age of Cambridge Analytica

April 30, 2018

Iga Kozlowska


In recent weeks, the world has been intently following the Cambridge Analytica revelations: millions of Facebook users’ personal data was used, without their knowledge, to aid the political campaigns of conservative candidates in the 2016 election, including Donald Trump’s. While not exactly a data breach, it is clear from the public response to this incident that the vast majority of Facebook users did not knowingly consent to having their personal information used in this way.

What is certain is that Facebook, the world’s largest social network platform, serving over two billion customers globally, is facing public scrutiny like never before. Although data breaches, ransomware attacks, and identity theft are a regular occurrence in this digitally driven economy, this event is different. For the first time, we see the mishandling of social data for political purposes on a mass scale. [1] It remains to be seen whether this will be a watershed moment for rethinking how we use personal data in the modern age. It is also unclear whether this experience will change companies’ and consumers’ privacy practices forever. For now, however, Facebook users and investors, American and foreign governments, and numerous regulatory bodies are paying attention.

Cambridge Analytica and Facebook

In 2013, University of Cambridge psychology professor Dr. Aleksandr Kogan created an application called “thisisyourdigitallife.” This app, offered on Facebook, provided users with a personality quiz. After a Facebook user downloaded the app, it would start collecting that person’s personal information, such as profile information and Facebook activity (e.g., what content was “liked”). Around 300,000 people downloaded the app. But the data collection didn’t stop there. Because the app also collected information about those users’ friends who had their privacy settings set to allow it, the app collected data from about 87 million people. [2]

Next, Dr. Kogan passed this data on to Strategic Communication Laboratories (SCL), which owns Cambridge Analytica (CA), a political consulting firm that uses data to determine voter personality traits and behavior. [3] It then uses this data to help conservative campaigns target online advertisements and messaging. It is precisely at this point of data transfer from Dr. Kogan to other third parties like CA that Dr. Kogan violated Facebook’s terms of service, which prohibit the transfer or sale of data “to any ad network, data broker or other advertising or monetization-related service.” [4]

When Facebook learned about this in 2015, it removed Kogan’s app and demanded certifications from Kogan and CA that they had deleted the data. Both certified to Facebook that they had destroyed the data. However, copies of the data remained beyond Facebook’s control. While Alexander Nix, the CEO of CA, has told lawmakers that the company does not have Facebook data, “a former employee said that he had recently seen hundreds of gigabytes on CA servers, and that the files were not encrypted,” reports the New York Times. [5]

In 2015, Facebook did not make any public statements regarding the incident, nor did it inform those users whose data was shared with CA. [6] Neither did Facebook report the incident to the Federal Trade Commission, the US agency that oversees privacy-related issues. As Mark Zuckerberg, Facebook’s CEO, said during his two-day Congressional hearing on April 9 and 10, 2018, once it received CA’s attestation that the data had been deleted and was no longer being used, Facebook considered the “case closed.” [7]

When the story broke on March 17, 2018 in The Guardian [8] and the New York Times [9], Facebook learned that the data had in fact never been purged. The fallout from this incident has been unprecedented. Facebook is facing numerous lawsuits; US, UK, and EU governmental inquiries; a #DeleteFacebook boycott campaign; and a sharp drop in share price that erased nearly $50 billion of the company’s market capitalization within a mere three days of the news breaking [10].

This is not the first time, however, that Facebook has faced issues related to its data collection and processing. [11] Nor is it the first time that it has faced regulatory scrutiny. For example, in 2011, the FTC settled a 20-year consent decree with Facebook, having found that Facebook routinely deceived its users by sharing personal data with third parties that users thought was private. [12] It is only now that Facebook’s irresponsible behavior is receiving widespread public scrutiny. Warnings from privacy and security professionals have to date largely fallen on deaf ears; why is this event capturing the attention of consumers, companies, and governments the world over?

We have seen international data breach cases at this scale before. Indeed, data breaches, identity theft, ransomware, and other cybersecurity attacks have become ubiquitous in a digital global economy that runs on data. [13] In the last five years, we have witnessed the 2013 Snowden revelations of mass global government surveillance and the 2014 North Korean attack on Sony, a US corporation. [14] The average consumer has been hit hard as well. The 2013 Target data breach resulted in 40 million compromised payment cards. [15] The 2016 Yahoo attack compromised 500 million accounts [16] and the 2017 Equifax hack compromised 143 million. [17] It doesn’t help that, at the same time as the Cambridge Analytica incident, Facebook discovered a vulnerability in its search and account recovery features that may have allowed bad actors to harvest the public profile information of most of its two billion users. [18] It seems that the public feels that enough is enough.

Beyond the scale of the event, the Cambridge Analytica incident involves arguably the most serious misuse and mishandling of consumer data we’ve yet seen. The purpose for which the data was illegally harvested is new, and it hits a nerve with an American society that is already politically divided and where political emotions run high. Funded by Robert Mercer, a prominent Republican donor, and Stephen Bannon, Trump’s former political adviser, CA was using the data for explicit political purposes – to help conservative campaigns in the 2016 election, including Donald Trump’s campaign. [19] Neither the 300,000 Facebook users who downloaded the app nor their 87 million friends anticipated that their personal data could be used for these political purposes. It’s one thing if customer data is used to serve bothersome ads, or a hacker steals credit card information for economic gain, but it’s another if the world’s largest social network was taken advantage of to help elect the president of the United States. So what exactly is Facebook’s accountability in all this?

From Data Breach to Breach of Trust

Was this incident a data breach? Facebook first responded on March 17, 2018 in a Facebook post by Paul Grewal, VP & Deputy General Counsel, who wrote that, “The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.” [20] That same day, Alex Stamos, Facebook’s Chief Security Officer, tweeted (and later deleted the tweet) that, “Kogan did not break into any systems, bypass any technical controls, or use a flaw in our software to gather more data than allowed. He did, however, misuse that data after he gathered it, but that does not retroactively make it a ‘breach.’” [21]

This is true. According to the International Organization for Standardization and the International Electrotechnical Commission – two bodies that govern global security best practices – the definition of a data breach is as follows: “a compromise of security that leads to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to protected data transmitted, stored or otherwise processed.” [22] Because Facebook’s systems were not penetrated and the data was mishandled by a third party in explicit violation of Facebook’s terms of service, the incident does not qualify as a data breach as understood by the global cybersecurity community. But what about everyone else?

Facebook quickly understood, however, that to millions of users whose data was mishandled, this incident felt like a data breach. [23] Despite the fact that technically all 87 million Facebook users consented to Kogan’s app collecting their personal data by not changing their privacy settings accordingly, the public outcry reveals that they do not feel they authorized the app to access their data, let alone share it with a third party like CA. Facebook’s defense – that it does provide users with controls to determine what types of data they want to share with which apps, and what can be shared with apps that their friends use – rang hollow for customers, who are largely unaware of these controls because Facebook does not make them easy to find. Moreover, Facebook’s privacy settings are by default not set for privacy. This is, at least in part, because, as was made clear in the Congressional hearings this month, Facebook’s business model relies on app developers’ access to their users’ data for targeted advertising, which makes up over 90% of Facebook’s revenue. In other words, Facebook’s business model conflicts with privacy-friendly policies. [24]

Quickly recognizing this, Facebook pivoted, took some responsibility, and rather than argue the fine points of data breach definitions, apologized for what customers experienced as a breach of trust. Only five days after the story broke, Zuckerberg wrote in a Facebook post, “This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.” [25] That week Facebook took out full-page ads in nine major US and international newspapers with the message: “This was a breach of trust and I’m sorry we didn’t do more at the time. I promise to do better for you.” [26] Recognizing the complex digital ecosystem, Zuckerberg said in his opening remarks at the Congressional hearing that, “We didn’t take a broad enough view of what our responsibility is. That was a huge mistake, and it was my mistake.” [27]

This “apology tour,” as Senator Blumenthal dubbed it, will be meaningless without concrete policy changes. [28] Facebook has already instituted some changes. For example, it has tightened some of the APIs that allow apps to harvest data like information about which events a user hosts or attends, the groups to which they belong, and page posts and comments. Apps that have not been used in more than three months will no longer be able to collect user data. [29] In addition, Facebook will now be authorizing those who want to place political or issue ads on Facebook’s platform by validating their identity and location. [30] These ads will be marked as ads and will show who has paid for them. Also, in June, Facebook plans to launch a public and searchable political ads archive. [31] Finally, Facebook has started a partnership with scholars who will work out a new model for academics to gain access to social media data for research purposes. The plan is to “form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.” [32] This should not only allow scholars greater access to social data but also safeguard against its misuse, as in the case of Dr. Kogan, by clearly distinguishing between data use for scholarly research and data use for advertising and other secondary purposes.

It remains to be seen just how extensive and impactful Facebook’s policy changes will be. Zuckerberg’s performance at the Congressional hearings was reported positively by the media and Facebook’s stock price regained much of the value it lost since the Cambridge Analytica story broke. However, this is in part because the Senators did not ask specific and pointed questions on what compliance policies Facebook will actually implement. [33] For example, the conversation around the balance between short privacy notices that are reader-friendly and longer and more comprehensive notices written in “legalese” resulted in Zuckerberg signaling that he knows that this debate among privacy professionals exists but did not lead to a commitment by Facebook to make their privacy policies more transparent. [34]

When Zuckerberg did mention specific policy changes, not all of them were new changes responding to this incident. For example, Zuckerberg announced Facebook’s application of the European General Data Protection Regulation (GDPR) to all Facebook customers, not just Europeans, as a heroic move of self-regulation. [35] However, it should not have taken Facebook this long to announce this position. Limiting the GDPR to EU citizens only is not only shortsighted, as the GDPR becomes the de facto global privacy standard, but also unfair to non-EU citizens, who would enjoy fewer privacy protections. In other words, while the Congressional hearing and Facebook’s initial policy changes are a good start, this should only be the beginning of Facebook’s journey toward improved transparency and data protection.

Lessons Learned

What are the lessons learned from the Cambridge Analytica incident for consumers, for companies, and for governments?

Consumers must recognize that their data has value. Consumers should educate themselves on how companies, especially ones that offer free services like Facebook and Google, use their personal data to drive their businesses. Consumers should read privacy notices and take advantage of the in-product user controls that most tech companies offer. Consumers should take advantage of their rights to request that a company let them view, edit, and delete their personal data, because, after all, consumers own their data, not companies. When companies engage in fraudulent or deceitful data handling practices, consumers should file complaints with the FTC or other appropriate regulatory bodies. Finally, consumers should advocate for more transparency and controls from companies and demand that their elected officials do more to protect privacy.

Companies that electronically process personal data – which is now practically every company in the world – must learn to better balance privacy risks with privacy controls. The riskier the data use, the more user controls are required. The more sensitive the data, the more protections should be put in place. Controls can include explicit consent, reader-friendly and prominent privacy notices, and privacy-friendly default settings. Company leaders should do more than just follow the letter of the law by putting themselves in their customers’ shoes. How do customers expect their data to be used when they hand it over? Is consent given? And is it truly freely given, specific, informed, and unambiguous? Moreover, as Facebook learned the hard way, there will always be bad actors. When sharing data with third parties, companies would do well to go the extra mile and ensure that those companies are meeting the company’s privacy requirements by investing in independent audits. When receiving data from third parties, companies should confirm that the data was collected in a compliant manner – not by taking their vendors’ word for it, but again, by conducting periodic audits.

And finally, governments, in this digitally connected global marketplace, must reform outdated legislation so that it addresses the modern complexities of international data usage and transfers. The European Union, for example, is setting a global example through the General Data Protection Regulation that comes into effect May 25, 2018. Seven years in the making, this is a comprehensive piece of legislation that (1) expands data subjects’ rights, (2) enforces 72-hour data breach notifications, (3) expands accountability measures, and (4) improves enforcement capabilities through levying fines of up to 4% of global revenue. Although applicable only to European residents and citizens, most multinational tech companies like Facebook, Google, and Microsoft are implementing these standards for all of their customers. However, it is high time the US Congress found the political will to pass similar privacy protections for US consumers so that everyone can take advantage of the opportunities that come with the 21st-century digital economy.

[1] For an account of Facebook’s role in undermining democracy see: Vaidhyanathan, Siva. 2018. Antisocial Media : How Facebook Disconnects Us And Undermines Democracy . Oxford University Press. See also Heilbing, Dirk et al . 2017. “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American . https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/ Accessed 4/22/2018.

[2] Kang, Cecilia and Sheera Frenkel. “Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users.” The New York Times . April 4, 2018. https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html Accessed 4/26/18.

[3] Rosenberg, Matthew et al . “How Trump Consultants Exploited the Facebook Data of Millions.” The New York Times . March 17, 2018. https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html Accessed 4/26/18.

[4] Granville, Kevin. “Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens.” The New York Times . March 19, 2018. https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html Accessed 4/15/18.

[5] Rosenberg, 2018.

[6] Rosenberg, 2018.

[7] “Facebook CEO Mark Zuckerberg Hearing on Data Privacy and Protection.” C-SPAN. April 10, 2018. https://www.c-span.org/video/?443543-1/facebook-ceo-mark-zuckerberg-testifies-data-protection Accessed 4/26/18.

[8] Cadwalladr, Carole and Emma Graham-Harrison. “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” The Guardian . March 17, 2018. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election Accessed 4/26/18.

[9] Rosenberg, 2018.

[10]  Mola, Rani. “Facebook has lost nearly $50 billion in market cap since the data scandal.” Recode. March 20, 2018. https://www.recode.net/2018/3/20/17144130/facebook-stock-wall-street-billion-market-cap Accessed 4/26/18

[11] For one of the earliest analyses of Facebook’s privacy policies see Jones, Harvey and Jose Hiram Soltren. 2005. Facebook: Threats to Privacy . http://groups.csail.mit.edu/mac/classes/6.805/student-papers/fall05-papers/facebook.pdf Accessed 4/22/18. See also Fuchs, Christian. 2014. “Facebook: A Surveillance Threat to Privacy?” in Social Media: A Critical Introduction . London: Sage.

[12] “FTC Approves Final Settlement With Facebook.” Federal Trade Commission. August, 10, 2012. https://www.ftc.gov/news-events/press-releases/2012/08/ftc-approves-final-settlement-facebook Accessed 4/15/18.

[13] For more on security and privacy see Schneier, Bruce. 2016. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World . New York. W. W. Norton & Company.

[14] “The Interview: A guide to the cyber attack on Hollywood.” BBC. December 29, 2014. http://www.bbc.com/news/entertainment-arts-30512032 Accessed 4/27/18.

[15] “Target cyberattack by overseas hackers may have compromised up to 40 million cards.” The Washington Post . December 20, 2013. https://www.washingtonpost.com/business/economy/target-cyberattack-by-overseas-hackers-may-have-compromised-up-to-40-million-cards/2013/12/20/2c2943cc-69b5-11e3-a0b9-249bbb34602c_story.html?noredirect=on&utm_term=.2d3d9c763c06 Accessed 4/27/18.

[16] Fiegerman, Seth. “Yahoo says 500 million accounts stolen.” CNN. September 23, 2016.   http://money.cnn.com/2016/09/22/technology/yahoo-data-breach/index.html Accessed 4/27/18.

[17] Siegel Bernard, Tara et al . “Equifax Says Cyberattack May Have Affected 143 Million Users in the U.S.” The New York Times. September 7, 2017. https://www.nytimes.com/2017/09/07/business/equifax-cyberattack.html Accessed 4/27/18.

[18] Kang and Frenkel, 2018.

[19] Rosenberg, 2018.

[20] Grewal, Paul. “Suspending Cambridge Analytica and SCL Group from Facebook.” March 16, 2018. Facebook Newsroom. https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/ Accessed 4/15/18.

[21] Wagner, Kurt. “How Did Facebook Let Cambridge Analytica Get 50M Users’ Data?” Newsfactor. March 21, 2018. https://newsfactor.com/story.xhtml?story_id=113000078MBA Accessed 4/15/18.

[22] ISO/IEC 27040: 2015. International Organization for Standardization. https://www.iso.org/obp/ui/#iso:std:iso-iec:27040:ed-1:v1:en Accessed 4/12/18.

[23] On the ethics of social media data collection see Richterich, Annika. 2018. The Big Data Agenda: Data Ethics and Critical Data Studies (Critical Digital and Social Media Studies Series). University of Westminster Press.

[24] “Facebook CEO Mark Zuckerberg Hearing on Data Privacy and Protection.” C-SPAN. April 10, 2018. https://www.c-span.org/video/?443543-1/facebook-ceo-mark-zuckerberg-testifies-data-protection Accessed 4/26/18.

[25] Zuckerberg, Mark. Facebook Post. March 21, 2018. https://www.facebook.com/zuck/posts/10104712037900071 Accessed 4/15/18.

[26] “Facebook Apologizes for Cambridge Analytica Scandal in Newspaper Ads.” March 25, 2018. TIME . time.com/5214935/facebook-cambridge-analytica-apology-ads/ Accessed 4/15/18.

[27] “Facebook CEO Mark Zuckerberg Hearing on Data Privacy and Protection.” C-SPAN. April 10, 2018.  https://www.c-span.org/video/?443543-1/facebook-ceo-mark-zuckerberg-testifies-data-protection Accessed 4/15/18 .

[28] Dennis, Steven T. and Sarah Frier. “Zuckerberg Defends Facebook’s Value While Senators Question Apology.” Bloomberg. April 10, 2018. https://www.bloomberg.com/news/articles/2018-04-10/facebook-s-zuckerberg-warned-by-senators-of-privacy-nightmare Accessed 4/27/18 .

[29] Schroepfer, Mike. “An Update on Our Plans to Restrict Data Access on Facebook.” Facebook Newsroom. April 4, 2018. https://newsroom.fb.com/news/2018/04/restricting-data-access/ Accessed 4/22/2018.

[30] For a broader discussion of social media and political advertising see Napoli, Philip M. and Caplan, Robyn. 2016. “When Media Companies Insist They’re Not Media Companies and Why It Matters for Communications Policy” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2750148 Accessed 4/22/18.

[31] Goldman, Rob and Alex Himel. “Making Ads and Pages More Transparent.” Facebook Newsroom. April 6, 2018. https://newsroom.fb.com/news/2018/04/transparent-ads-and-pages/ Accessed 4/22/2018.

[32] King, Gary and Nathaniel Persily. Working Paper. “A New Model for Industry-Academic Partnerships.” April 9, 2018. https://gking.harvard.edu/partnerships Accessed 4/22/2018.

[33] Member of the House of Representatives took a more aggressive line of questioning with Mark Zuckerberg. For example, Representative Joe Kennedy III poked holes in Facebook’s persistent claim that Facebook users “own” their data by pointing to the massive amount of metadata that Facebook generates (beyond what the user directly generates) and then sells to advertisers. See Madrigal, Alexis C. “The Most Important Exchange of the Zuckerberg Hearing.” The Atlantic . April 11, 2018. https://www.theatlantic.com/technology/archive/2018/04/the-most-important-exchange-of-the-zuckerberg-hearing/557795/ Accessed 4/27/18.

[34] For the evolution of Facebook’s privacy policy see Shore, Jennifer and Jill Steinman. 2015. “Did You Really Agree to That? The Evolution of Facebook’s Privacy Policy” Technology Science. https://techscience.org/a/2015081102/ Accessed 4/22/18. For a broader conversation around privacy and human behavior see Acquisti, Alessandro. 2015. “Privacy and Human Behavior in the Age of Information” Science . Vol. 347. Pp. 509-514.

[35] For more on European privacy law see Voss, W. Gregory. 2017. “European Union Data Privacy Law Reform: General Data Protection Regulation, Privacy Shield, and the Right to Delisting” Business Lawyer , Vol. 72. Pp. 221-233.

This publication was made possible in part by a grant from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author.

About the Author

Dr. Iga Kozlowska is a sociologist and a privacy professional currently working in the technology industry. Iga's expertise in international technology issues is grounded in the unique perspective of a scholar and practitioner. Fascinated by the global digital economy and information governance, Iga is also interested in cybersecurity and is an Associate of the International Information System Security Certification Consortium, the world's leading cybersecurity and IT security professional organization. Iga completed her PhD in sociology at Northwestern University in 2017. Her dissertation research focused on the transnational diffusion of historical memories as it has impacted European integration since 2000. Iga received the US Fulbright Award (Poland 2015-2016) in recognition of the contributions of her research to the burgeoning field of transnationalism studies and to policymakers interested in fostering international cooperation and mutual understanding. Her prior research at the intersections of public policy and nationalism has been published in Nations and Nationalism.



Facebook is garnering headlines for another data leak putting users' privacy at risk. The latest incident involves the personal information of 533 million Facebook users from 106 different countries, apparently discovered by Alon Gal, co-founder and CTO of cybercrime intelligence firm Hudson Rock.

In an April 3 tweet, Gal said the data, which includes Facebook members' account creation date, bio, birthdate, Facebook ID, full name, location, past location and relationship status, has been made available free to members of a hacking forum.

In a January 14 post, he said an early 2020 vulnerability that exposed the phone numbers linked to every Facebook account had been exploited and that a hacker had advertised a paid bot that would allow users to query the database. Facebook claims the data must have been scraped prior to September 2019, before the vulnerability was addressed.

Facebook has no plans to notify individuals whose information was exposed because the company claims it does not know who was affected. Despite the patch in September 2019, 419 million records containing user IDs and phone numbers were leaked that same month. Then in December 2019, a Ukrainian researcher discovered a database on the open Internet which included the personal information of more than 267 million Facebook users.

Interestingly, in July 2019, the FTC announced that it had completed a year-long investigation and concluded that Facebook had "used deceptive disclosures and settings to undermine users' privacy preferences" in violation of a 2012 FTC order. Specifically, third-party apps were allowed to collect the personal information of Facebook members whose friends had downloaded the apps.

According to the new 20-year settlement order:


  • Facebook must pay a $5 bn fine which the FTC claims is unprecedented.
  • Facebook's board must form an independent privacy committee "removing unfettered control by Facebook's CEO Mark Zuckerberg over decisions affecting user privacy."
  • Zuckerberg and Facebook compliance officers must independently file certifications with the FTC quarterly, which state the company is complying with the order.
  • A third-party assessor must make biennial assessments of Facebook's privacy program to identify any gaps and report to the new privacy board on a quarterly basis.
  • The FTC can monitor Facebook's compliance using discovery tools provided by the Federal Rules of Civil Procedure .
  • Every new or modified Facebook, Instagram, or WhatsApps product, service or practice must undergo a privacy review before it's implemented.
  • If the data of 500 or more users has been compromised by a breach, the incident must be documented and shared with the FTC and the assessor within 20 days of the incident.
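
Taken together, the incident-reporting terms above amount to a simple threshold-and-deadline rule. The sketch below is a hypothetical illustration only: the 500-user threshold and 20-day window come from the list above, and the function names are invented for this example.

```python
from datetime import date, timedelta

# Hypothetical illustration of the order's incident-reporting terms:
# breaches affecting 500 or more users must be documented and shared
# with the FTC and the assessor within 20 days of the incident.
USER_THRESHOLD = 500
REPORTING_DEADLINE_DAYS = 20

def must_report(affected_users: int) -> bool:
    """Return True if the breach meets the reporting threshold."""
    return affected_users >= USER_THRESHOLD

def reporting_deadline(incident_date: date) -> date:
    """Latest date by which the FTC and the assessor must be notified."""
    return incident_date + timedelta(days=REPORTING_DEADLINE_DAYS)

incident = date(2019, 9, 4)
print(must_report(419_000_000))      # True
print(reporting_deadline(incident))  # 2019-09-24
```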

Other requirements can be found here, but yet another database of Facebook user information was just discovered.

Data privacy is a serious issue that organizations need to address proactively. While behemoths like Facebook can weather a $5 bn fine, far smaller fines could be fatal to smaller organizations. A responsible approach to privacy should include:

  • Privacy by design so the right guardrails are built into products and services.
  • Penetration testing to identify weak areas.
  • Patching to avoid unnecessary vulnerabilities.
  • Board-level oversight to ensure that privacy is given the attention it deserves.
  • One or more compliance officers, depending on the size of the company, responsible for ensuring compliance.
  • Data governance to avoid data misuse.
  • Continuous monitoring to prevent or minimize data exfiltration.
  • Scenario planning in case a breach occurs.
  • A plan to notify affected victims and law enforcement should a PII leak occur.
  • Ongoing security awareness training for IT and non-technical personnel to reduce the risk of inadvertent mistakes.
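
As one concrete instance of the "privacy by design" guardrail above, an application can mask identifiers such as phone numbers and email addresses before they ever reach logs or analytics, so that a leaked log exposes nothing usable. A minimal sketch, assuming simple placeholder masking (the patterns and placeholder format are illustrative, not a prescribed standard):

```python
import re

# Minimal privacy-by-design guardrail: scrub phone numbers and email
# addresses from text before it is written to logs or analytics.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(text: str) -> str:
    """Replace email addresses and phone numbers with fixed placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(scrub("User alice@example.com called from +1 415-555-0100"))
# User [EMAIL] called from [PHONE]
```

Running the scrubber at the logging boundary, rather than trusting every caller to omit PII, is the kind of built-in guardrail "privacy by design" refers to.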

Millions of customers’ data found on dark web in latest AT&T data breach

An AT&T store in New York. The telecommunications company said Saturday that a data breach has compromised the information tied to 7.6 million current customers.

Richard Drew / AP

AT&T announced on Saturday it is investigating a data breach involving the personal information of more than 70 million current and former customers leaked on the dark web.

According to information about the breach on the company's website, 7.6 million current account holders and 65.4 million former account holders have been impacted. An AT&T press release said the breach occurred about two weeks ago, and that the incident has not yet had a "material impact" on its operations.

AT&T said the information included in the compromised data set varies from person to person. It could include social security numbers, full names, email and mailing addresses, phone numbers, and dates of birth, as well as AT&T account numbers and passcodes.

The company has so far not identified the source of the leak, at least publicly.

"Based on our preliminary analysis, the data set appears to be from 2019 or earlier," the company said. "Currently, AT&T does not have evidence of unauthorized access to its systems resulting in theft of the data set."

The company said it is "reaching out to all 7.6 million impacted customers and have reset their passcodes," via email or letter, and that it plans to communicate with both current and former account holders with compromised sensitive personal information. It said it plans to offer "complimentary identity theft and credit monitoring services" to those affected by the breach.

External cybersecurity experts have been brought in to help investigate, it added.

NPR reached out to a few AT&T stores. The sales representatives in all cases said they were as yet unaware of the breach.

On its website, the telecommunications company encouraged customers to closely monitor their account activity and credit reports.

"Consumers impacted should prioritize changing passwords, monitor other accounts and consider freezing their credit with the three credit bureaus since social security numbers were exposed," Carmen Balber, executive director of the consumer advocacy group Consumer Watchdog, told NPR.

An industry rife with data leaks

AT&T has experienced multiple data breaches over the years.

In March 2023, for instance, the company notified 9 million wireless customers that their customer information had been accessed in a breach of a third-party marketing vendor.

In August 2021, in an incident AT&T said is not connected to the latest breach, a hacking group claimed it was selling data relating to more than 70 million AT&T customers. At the time, AT&T disputed the source of the data. It was re-leaked online earlier this month. According to a Mar. 22 TechCrunch article, a new analysis of the leaked dataset points to the AT&T customer data being authentic. "Some AT&T customers have confirmed their leaked customer data is accurate," TechCrunch reported. "But AT&T still hasn't said how its customers' data spilled online."

AT&T is by no means the only U.S. telecommunications provider with a history of compromised customer data. The issue is rife across the industry. A 2023 data breach affected 37 million T-Mobile customers. Just last month, a data leak at Verizon impacted more than 63,000 people, the majority of them Verizon employees.

A 2023 report from cyber intelligence firm Cyble said that U.S. telecommunications companies are a lucrative target for hackers. The study attributed the majority of recent data breaches to third-party vendors. "These third-party breaches can lead to a larger scale supply-chain attacks and a greater number of impacted users and entities globally," the report said.

Government rules adapt

Meanwhile, last December, the Federal Communications Commission (FCC) updated its 16-year-old data breach notification rules to ensure that telecommunications providers adequately safeguard sensitive customer information. According to a press release, the rules aim to "hold phone companies accountable for protecting sensitive customer information, while enabling customers to protect themselves in the event that their data is compromised."

"What makes no sense is leaving our policies stuck in the analog era," said FCC Chairwoman Jessica Rosenworcel in a statement regarding the changes. "Our phones now know so much about where we go and who we are, we need rules on the books that make sure carriers keep our information safe and cybersecure."

Copyright 2024 NPR. To see more, visit https://www.npr.org.
