Week 10 (6/8 April): Privacy and Surveillance

Lead: Team 3
Blog: Team 1

Required Readings and Viewings (for everyone):

Optional Additional Readings:

Response prompts:
Post a response to the readings that does at least one of these options by 10:59pm on Sunday, 4 April (Team 1 and Team 4) or 5:59pm on Monday, 5 April (Team 2 and Team 5):

The main discussion question is:

  • Do you feel as though the information being collected by companies today is too invasive? Can you think of any specific examples in which you feel your privacy rights were violated in the context of your information on the internet?

You can also contribute any of the generic discussion prompts:

  • Respond to something in one of the readings that you found interesting or surprising.
  • Identify something in one of the viewings/readings that you disagree with, and explain why.
  • Respond constructively to something someone else posted.

Also, remember that your Project Progress Report is due on Wednesday, 7 April (4:59pm). It should be an update on your project: what you have done so far and what you plan to do for the remainder of the semester. Send it as an email to me (evans@virginia.edu) with all of your teammates cc’d. It can be plain text in the email, or an attached PDF, containing at least this information:

  1. Title for your project
  2. How your purpose, plan, and hoped outcome have changed from the original proposal. For most projects, this should at least be refining your idea from the proposal; for some, you may have changed direction, and then should explain the reason for this and how things have changed.
  3. What you have done so far
  4. Your plan for completing the project - what you will do over the final weeks of the semester, and where you hope to get by the final deadline.

Discussion

Class Meetings

Slides for Week 10 [PDF]

Blog Summary

Tuesday 6 April

The focus of this week was privacy and surveillance. The leading group opened with this quote from the Fourth Amendment to the United States Constitution:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

This excerpt was presented to the class to generate discussion on how the Bill of Rights applies to privacy in the modern, technological world. Because privacy is valued in the United States, we were also asked to consider whether Americans’ privacy rights are currently being violated by technology.

Groups discussed what constitutes a reasonable amount of privacy, consent given through terms of service, what counts as private property, and the government’s role in regulating technology. One point raised in a small-group discussion was that big companies’ use of users’ data without consent runs counter to our societal values.

Another student pointed out that much of the Constitution is outdated and does not apply to private companies. Privacy, as framed in the Constitution, does not encompass user data because it refers explicitly to physical property. Nor does the Constitution limit what companies can do; it only constrains the government. That made sense in the founders’ era, when no company had amassed the kind of power companies hold today, but in some ways companies have become more powerful than governments.

Because the Constitution cannot protect citizens from big tech companies, new laws and regulations beyond it are needed. In several groups, the discussion therefore shifted to whether one’s data should be implicitly owned by the user. While good points were made on both sides, the consensus was that this is not a simple black-and-white issue but a complex one with wide-ranging consequences for society, whichever choice is made.

After discussing how the Constitution, the official document defining Americans’ rights, relates to technological rights, we moved on to the Association for Computing Machinery (ACM) Code of Ethics (one of our optional readings). The Code guides ethical conduct for students, instructors, and professionals alike, and addresses privacy in detail in section 1.6. The leading team presented the following discussion question: Do you feel that companies adhere to the privacy standard described in section 1.6 of the Code? Why or why not? Examples?

Class members shared that while attention tends to focus on companies that do not adhere to these standards, there are companies that do, and some even use data privacy as a marketing tool. Another discussion question asked in what ways tech companies should be held accountable for privacy violations. Class members suggested more lawsuits and greater attention to privacy violations, as well as technology leaders taking a vested interest in protecting their users’ rights. Another suggestion was that the ACM could issue certification “badges” to companies that demonstrate a strong commitment to users’ data privacy; similar to existing recognition for highly sustainable business practices, this would put social pressure on businesses to prioritize data protection.

In addition to the ACM’s Code of Ethics, we were also presented with the Universal Declaration of Human Rights. The key takeaways were:

  • Technology makes it easy for companies to collect data.
  • Professionals should use personal information only for legitimate purposes, without violating others’ rights.
  • Companies should take the steps necessary to ensure data privacy.
  • Companies should be transparent about data collection and usage.

We ended Tuesday’s class with a discussion of our experiences with surveillance in the United States, starting with the recent history of the NSA. Over the past 20 years, the United States government has spent many millions of dollars collecting data about its citizens, with surprisingly little to show for it: only a handful of times has the program produced results that FBI investigations would not already have achieved. In discussion, classmates admitted that they often change their behavior online because they feel the government is watching. We believed that a surveillance program stifles democracy, since citizens become afraid to voice beliefs outside mainstream politics. We also acknowledged the difficult balance between complete anonymity and holding people accountable for their actions online: people shouldn’t feel they can say anything online, even things they don’t believe, because that lets harmful ideas spread unchecked, but government surveillance shouldn’t stifle ideas either.

Thursday 8 April

Thursday’s class continued the conversation about privacy and surveillance, focusing on facial recognition through case studies involving local police and China’s surveillance system. The week concluded with a brief history of HTTP and an exploration of our own cookie usage.

Our discussion of facial recognition began with Clearview AI, a controversial facial recognition system marketed as a tool to help police solve cases. The leading group emphasized that “your face is not your own” under this system, since it performs what amounts to a reverse Google Image search over billions of photos; Clearview AI builds its photo database by scouring the Internet.
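
To make the “reverse image search” idea concrete, here is a minimal sketch of how such a face lookup might work, assuming photos are reduced to embedding vectors and matched by similarity. The embed_face function and the random database below are hypothetical stand-ins; Clearview’s actual models and pipeline are not public.

    # A toy face-matching lookup: reduce photos to embedding vectors,
    # then find the stored faces most similar to a query face.
    import numpy as np

    def embed_face(image):
        """Hypothetical: map a face image to a unit-length 128-d vector."""
        vec = image.astype(float).ravel()[:128]    # placeholder "features"
        return vec / (np.linalg.norm(vec) + 1e-9)

    def nearest_matches(query, database, k=5):
        """Indices of the k most similar stored faces (cosine similarity)."""
        sims = database @ query                    # rows are unit vectors
        return np.argsort(sims)[::-1][:k]

    rng = np.random.default_rng(0)

    # One row per scraped photo; a real system builds this by crawling
    # the web for images, as Clearview reportedly does.
    database = rng.random((1000, 128))
    database /= np.linalg.norm(database, axis=1, keepdims=True)

    query = embed_face(rng.random((64, 64)))       # stand-in query photo
    print(nearest_matches(query, database))

The sketch also shows why scale is the privacy issue: once faces are vectors, checking one query against billions of scraped photos is just a large nearest-neighbor search.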

Police departments in Norfolk and Virginia Beach used Clearview AI to help solve local cases, in both instances without their superiors’ knowledge. Small-group discussions explored whether we should have a claim to personal data that is publicly online, and whether we should be compensated for its use. One group member noted that compensation for personal data is not so far-fetched, since compensation is already offered for research studies at UVa (e.g., vaccine studies). Others countered that distributing compensation for personal data would be a logistical nightmare, and that small amounts of data are almost worthless on their own. On the logistics, the discussion also raised the point that some people might expect compensation to apply retroactively, which is even more complicated. Finally, it was noted that photos are protected under copyright law; putting your image online does not place it in the public domain.

The leading group also brought up a Virginia state bill, passed unanimously, that bars police from using facial recognition until further laws are passed and the technology has been properly debated. Discussion revolved around whether local, state, or federal police should be allowed to use such technology, and whether its benefits outweigh its harms. There were concerns about “Big Brother,” countered by the argument that a more connected, globalized society may require law enforcement to scale up accordingly. It was generally agreed that the government should be prohibited from using this technology for unfair or unjust purposes, and there was rough consensus that stopping police departments from blindly adopting a technology before exploring it was a good decision. Finally, some discussion centered on the idea that use of a facial recognition system should require a warrant, as is the precedent with cell phone data.

Group members raised the concern that a facial recognition system makes it hard to expose the data of just one individual, since the system relies on mass data collection. Having discussed local uses of surveillance, our conversation turned to China and the panopticon created by its nationwide surveillance system, which uses DNA sequencing, location tracking, gait analysis, and more. Prompted by this week’s podcast, we discussed Kenneth Kidd, a Yale researcher who collaborated on DNA-sequencing research with researchers in China, and whether he should be held responsible for his contribution to the current surveillance system. Although his contribution was not intentional, there is some degree of responsibility in sharing knowledge, or in using DNA samples from China that may have been collected unethically. One point raised was that China took small pieces of research and combined them into one big system, making it difficult for anyone who contributed a small piece to foresee how it would ultimately be used.

We also discussed whether there are other examples of technologies that sound benign but could be used nefariously, or vice versa. An important example was GPS, which was originally developed by the military but has become ubiquitous in everyday life: a technology with threatening origins now used benignly. It is therefore possible that facial recognition will find benign uses in the future, but for now its cons may outweigh its pros.

Our last topic of the week was web services: the basics of what they are, with a particular focus on cookies, including what they are, how they are used, and what types exist. We finished with an activity exploring our own cookie usage by inspecting the Storage tab in our browsers’ developer tools.
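
For anyone who wants to go beyond the Storage tab, here is a small sketch of the same inspection in Python, using the third-party requests library to fetch a page and list the cookies the server sets; https://example.com is a placeholder to substitute with a real site.

    # List the cookies a site sets, analogous to the browser's Storage tab.
    import requests

    response = requests.get("https://example.com")

    # requests collects the server's Set-Cookie headers into a cookie jar
    for cookie in response.cookies:
        print(cookie.name, cookie.value, cookie.domain, cookie.expires)

    # A browser sends stored cookies back on each later request, which is
    # how sites recognize returning visitors:
    followup = requests.get("https://example.com", cookies=response.cookies)

Session cookies carry no expiration and disappear when the browser closes; persistent cookies, with expirations set far in the future, are what enable the kind of long-term tracking the activity surfaced.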