Week 9 (30 March/1 April): Recommendations and Manipulations (The Social Dilemma)

Lead: Team 2
Blog: Team 5

Required Readings and Viewings (for everyone):

Optional Additional Readings/Viewings:

Response prompts:
Post a response to the readings that does at least one of the following by 10:59pm on Sunday, 28 March (Team 3 and Team 5) or 5:59pm on Monday, 29 March (Team 1 and Team 4):

  1. What do you think of the quote, “if you’re not paying for the product, you are the product”? How do you feel about the tradeoff between having a “free” website that includes targeted ads versus an ad-free website where you would have to pay to view the site? Are there other ways for “free” websites to make revenue?

  2. In Facebook’s response to The Social Dilemma’s portrayal of its recommendation system, Facebook suggests that the platform “uses algorithms to improve the experience for people using [their] apps”, just like other consumer-facing apps such as Netflix. Netflix’s release of the film has drawn criticism, since Netflix is considered a pioneer of the modern recommendation system. What are the ethical issues involved in Netflix’s decision to release this documentary? What is your stance on the documentary’s omission of Netflix’s own role in the development of these recommendation systems?

  3. What do you make of Facebook’s seven-point response to the film? Whose arguments do you find more convincing, the film’s or Facebook’s?

  4. Respond to something in one of the readings that you found interesting or surprising.

  5. Identify something in one of the viewings/readings that you disagree with, and explain why.

  6. Respond constructively to something someone else posted.

Discussion

Class Meetings

Led by Team 2

Slides for Week 9 [PDF]

Blog Summary

Team 5

Tuesday 30 March

We began Tuesday’s class by acknowledging that the harmful effects of social media had been studied long before the release of the Netflix documentary. Criticisms of the documentary were presented, along with several notable Internet/media scholars who have long studied this topic: Safiya Noble, Sarah T. Roberts, and Siva Vaidhyanathan.

The Social Dilemma was then summarized as an intentional narrative that reveals the consequences of society’s dependence on social media through the life of a teenager, narrated by experts, investors, doctors, researchers, and former employees of big tech companies.

The class then did an activity in which every table played the Social Dilemma game, a bingo board of confessions about social media usage. Each student circled the confessions that applied to them before discussing the results with the group.

We then discussed these questions:

  1. If you are on social media (Facebook, Instagram, TikTok, etc.), were you aware of the data privacy concerns brought up in the film? Has your concern over data privacy on these platforms changed after watching the film?

  2. 80% of students mistake “sponsored content” ads for legitimate news. [NPR, Stanford, 2016]. From what sources do you typically get news and information? Do these feel trustworthy? How do you know that is the case?

Many students in the class thought the questions were revealing but not surprising. Most were already aware of the data privacy concerns, but grew more concerned about these platforms after watching the Netflix documentary. Many students typically get their news and information from social media, though quite a few said they would seek out reputable news sources to learn more.

Facebook’s response to the Netflix documentary was then analyzed by the presenting group. They pointed out that Facebook attempts to shift the blame by arguing that the documentary oversimplifies a very complex and nuanced set of issues. Facebook says it has addressed addiction concerns by making changes to how News Feed works and by working with mental health experts to understand social media’s impact. Facebook claims that users are not the product and that its ad model is what enables the platform’s high level of accessibility. Facebook also criticizes the documentary’s depiction of algorithms as completely incorrect, claims to take user data privacy very seriously, and says it is conducting research to understand its contributions to polarization, misinformation, and influence on elections.

The following questions were posed to guide the students’ discussion regarding Facebook’s response:

  1. What do you make of Facebook’s seven-point response to the film? Whose arguments do you find more convincing, the film’s or Facebook’s?
  2. Of the seven claims made by Facebook, which do you feel are the most important that they stand by? Do you think they followed through with their claims?
  3. Do you have any suggestions that could make Facebook a safer platform overall?

The class found Facebook’s response to be unconvincing and misleading. The claims are relatively vague and fail to address the documentary’s pointed arguments about the company’s harms to society. Legal regulation and antitrust efforts could make Facebook a safer platform by forcing the company to adhere to a certain standard, thereby realigning its financial interest away from profiting off the feedback loop of user engagement, data collection, and advertising.

Thursday 1 April

(Starts on Slide 24)

We started by discussing the homework questions, which entailed talking about what content users get and how it is recommended. Students tried going incognito and searching for hoaxes; they found that YouTube’s algorithm tries to debunk certain hoaxes, such as claims that the coronavirus is not real.

Next we looked into YouTube’s recommendation algorithm. YouTube’s recommendation system uses two neural networks: a candidate-generation network that narrows the full video corpus down to a small set of candidates, and a ranking network that scores and orders those candidates. The study “Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization” (2019) found that the algorithm does not promote radicalization after a user views polarizing content. Facebook, by comparison, uses a multitask model and incorporates both integrity processes and a contextual pass in its recommendation algorithm.
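To make the two-stage structure concrete, here is a minimal sketch of a candidate-generation-plus-ranking pipeline. It is not YouTube’s actual model: the random embeddings, dot-product scoring, and names below are illustrative assumptions, and a production system would use learned embeddings, an approximate-nearest-neighbor index, and a far richer ranking model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy corpus: random vectors standing in for learned video embeddings.
    NUM_VIDEOS, DIM = 10_000, 32
    video_embeddings = rng.normal(size=(NUM_VIDEOS, DIM))

    def generate_candidates(user_embedding, k=200):
        """Stage 1 (candidate generation): cut the full corpus down to the
        k videos most similar to the user, by dot product."""
        scores = video_embeddings @ user_embedding
        return np.argpartition(-scores, k)[:k]   # top-k indices, unordered

    def rank(user_embedding, candidate_ids):
        """Stage 2 (ranking): re-score only the candidates and order them.
        A real ranker would use many more features than similarity alone."""
        scores = video_embeddings[candidate_ids] @ user_embedding
        return candidate_ids[np.argsort(-scores)]

    user = rng.normal(size=DIM)                  # stand-in user embedding
    recommendations = rank(user, generate_candidates(user))[:10]
    print(recommendations)

The point of the split is efficiency: the cheap first stage touches every video, while the expensive second stage only scores the shortlist.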

The Investigating Ad Transparency reading discussed how concerns about the transparency of social media led platforms to offer transparency mechanisms. Examples include Facebook’s “Why am I seeing this?” ad explanations and its Ad Preferences page, which explains the data inferred about the user. The study examines transparency in both ad explanations and data explanations.

Three methods of advertiser targeting were mentioned: traditional Facebook targeting, data-broker targeting, and advertiser personally identifiable information (PII) targeting. Traditional Facebook targeting exploits demographics, interests, behaviors, and cookies that track activity outside Facebook. Data-broker targeting, which accounts for 45% of attributes, captures financial information such as income as well as information from offline sources like criminal records. Advertiser PII targeting uses information the advertiser has already collected to target those consumers on Facebook. Serving an ad involves three processes: data inference, which determines user attributes; audience selection, in which advertisers specify the attributes they want to target; and user-ad matching, which matches users who have those attributes to the advertisers’ ads.
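As a rough illustration of that three-step pipeline, here is a toy sketch. The users, activity-to-attribute rules, and campaigns are all invented; a real platform infers attributes with machine-learned models and supports much richer audience logic.

    # Toy sketch: data inference -> audience selection -> user-ad matching.

    users = {
        "alice": ["liked hiking page", "clicked travel ad"],
        "bob":   ["liked gaming page"],
    }

    def infer_attributes(activity):
        """Data inference: map raw activity to targetable attributes."""
        rules = {
            "liked hiking page": "interest:outdoors",
            "clicked travel ad": "interest:travel",
            "liked gaming page": "interest:gaming",
        }
        return {rules[a] for a in activity if a in rules}

    # Audience selection: each advertiser picks attributes to target.
    campaigns = {
        "TrailCo":  {"interest:outdoors"},
        "FlightCo": {"interest:travel"},
        "GameCo":   {"interest:gaming"},
    }

    # User-ad matching: a user is eligible for a campaign if their
    # inferred attributes overlap the campaign's targeted attributes.
    for name, activity in users.items():
        attrs = infer_attributes(activity)
        matches = [c for c, targeted in campaigns.items() if attrs & targeted]
        print(name, sorted(attrs), "->", matches)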

We then had an activity in which students looked up the interests Facebook had assigned to them, which determine what ads they see on its platforms. The discussion questions were:

  1. Were you surprised by any of your assigned categories?
  2. How did you feel after seeing some of the information/assumptions Facebook has about you?

Some people found the categories Facebook had assigned to them very inaccurate, while for others they were accurate. This may be because some people use Facebook less than others. Some people noted that they don’t use Facebook but do use Instagram, and were surprised how much profile information was found in their Facebook account. Of course, Facebook owns Instagram, so it has many ways to link accounts between the two services even if users do not explicitly link them.

Next we discussed the study assessing Facebook ad explanations: Athanasios Andreou, Giridhari Venkatadri, Oana Goga, Krishna P. Gummadi, Patrick Loiseau, Alan Mislove. Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook’s Explanations. NDSS 2018.

Andreou et al. defined five key properties of ad explanations: correctness (every attribute listed was actually used by the advertiser), personalization (whether the attributes shown are specific to the user), completeness (whether all attributes used are displayed), consistency (how similar explanations are across users), and determinism (whether explanations are the same for ads with the same targeting attributes). The study found that Facebook’s explanations could include potential attributes that were never specified by the user, and that explanations can differ across users. Some attribute types, like demographics, have a higher priority in explanations.
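A small sketch of how three of these properties could be checked mechanically, given the attributes an advertiser actually targeted, the explanation shown, and the user’s real attributes. The data here is invented; consistency and determinism would additionally require comparing explanations across users and across ads.

    def check_explanation(targeted, explained, user_attrs):
        """Check an ad explanation against three of the five properties.
        targeted: attributes the advertiser actually used;
        explained: attributes shown to the user;
        user_attrs: attributes the user actually has."""
        return {
            # correctness: everything shown was really used for targeting
            "correct": explained <= targeted,
            # personalization: everything shown applies to this user
            "personalized": explained <= user_attrs,
            # completeness: everything used for targeting is shown
            "complete": targeted <= explained,
        }

    targeted   = {"age:18-24", "interest:travel", "behavior:traveler"}
    explained  = {"age:18-24"}   # often only one attribute is shown
    user_attrs = {"age:18-24", "interest:travel"}

    print(check_explanation(targeted, explained, user_attrs))
    # {'correct': True, 'personalized': True, 'complete': False}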

Transparency in data explanations is also lacking. Data-broker targeting is not disclosed on Facebook, and advertiser PII targeting is disclosed only by saying “you’re on their customer list”, without explaining what PII the advertiser provided. Data explanations were analyzed along four dimensions: specificity of the explanation, snapshot completeness (does it show all currently inferred attributes?), temporal completeness (does it show all attributes inferred over a period of time?), and correctness (does it show the activity that led to the inference?). Explanations did clearly specify a source for each attribute. However, no data-broker attributes are explained, nor is historical information about inferred attributes. Consequently, data explanations are not complete. We then followed up with the following discussion questions:

  1. How bothered are you that Facebook doesn’t show you the entire ad explanation? (Note: this may have changed since publication in 2018)
  2. Did anything from the study surprise you?
  3. According to Facebook, “all election and issue ads on Facebook and Instagram in the US must be clearly labeled, including a “Paid For By” disclosure from the advertiser at the top of the ad.” Can you think of other steps Facebook and other companies can take or have already taken to increase their ad transparency?

Students found they were not too bothered that Facebook doesn’t show the entire ad explanation, reasoning that withholding it is in Facebook’s business interest: giving out full explanations could help competing companies, so unless there is user backlash or a legal requirement, it may be better for Facebook not to include them. Students did not find the content of the study surprising, and suggested Facebook could increase transparency by providing explanations for specific ads, including attributes such as age range.

AdAnalyst was introduced as a tool that allows users to check which advertisers are targeting them, who receives the same ads, and what Facebook knows about them.

The video by Joe Toscano, Want to Work for Google? You Already Do, was then introduced. The video was made by a former consultant at Google who left due to ethical concerns. Toscano argues that tech companies use user data to generate profit, and that we should have ownership of our own data. Legislation in this direction has been passed: Wyoming passed legislation making digital assets personal property, and California passed the California Consumer Privacy Act. The following discussion questions were posed:

  1. Toscano criticizes Tesla for using large amounts of its consumers’ data. Data from all Tesla vehicles are sent to the cloud and used to create highly data-dense maps that help develop Tesla’s Autopilot AI. Do you think a shift to owning our own data could hinder technological innovation if companies like Tesla no longer have access to large amounts of data?
  2. Towards the end of the video, Toscano says, “imagine living in a world where the terms of service is written in a language we can understand and reasonably be expected to give consent to”. A study conducted by Deloitte found that 91% of people consent to terms and conditions without reading them. Do you think it is the role of the consumer to try to interpret the language used in these conditions? What steps, other than Toscano’s suggestions, can companies take to help consumers understand how their data is being used?

Students felt the shift to ownership of personal data could potentially hinder innovation. Students also felt that, to an extent, it is the role of the consumer to interpret the language used in terms and conditions, but that the language should still be readable.