Which entity – the government or the private sector – presents a greater threat to the future of our society?

[This position paper was written in April, 2021 as part of my graduate studies in tech ethics and policy at Duke University. The course explored privacy, technology, and national security. The title is the prompt.]

“You have zero privacy anyway. Get over it.”[1] These were bitter words from Sun Microsystems’ CEO Scott McNealy, back in 1999 before the incredible growth of the Internet, the smartphone, and the billions of connected devices collecting our data today. In the two decades since, leaders of tech giants, including Google’s former CEO Eric Schmidt[2] and Facebook’s CEO Mark Zuckerberg,[3] have echoed his sentiment. It is telling that this admission comes frequently and boldly from the commercial sector. If privacy is dead, we can blame three establishments in the autopsy report: our government, commercial enterprise, and ourselves as consumers. However, while democratic governments are often criticized for failing to balance privacy with public safety and national security, commercial entities operate with no such obligation. The private sector is free to fundamentally reshape society in the name of innovation and profit. Society reaps tremendous benefit from this innovation, but the resulting seismic shift in the economy toward data collection at all costs has normalized humans as products to be surveilled, manipulated, and mined for resources. If left unchecked, the biggest threat to the future of society lies in the vulnerabilities created by this new norm.

Problems in the Surveillance Economy

Several technological advances laid the foundation for this new norm. A combination of increased computing power, widespread use of the Internet for pleasure and commerce, and the ability to connect wireless devices and sensors to a network – the Internet of Things (IoT) – revolutionized how data is collected and used. The potential to monetize vast amounts of new information incentivized the commercial sector to exploit data wherever it could. Virtually all industries gained new ways to improve, optimize, and monitor their production by embedding sensors at every point in their operation. These new technological capabilities gave birth to the surveillance economy.

Consumers could hardly have anticipated the consequences, much less resisted the trend. Sensors are now everywhere. They are in our homes, our cars, and our places of work. We carry them with us as smartphones, fitness devices, and radio-frequency identification (RFID) tags. They are attached to our utilities, in the places we shop, and on the roads we travel. They passively and persistently collect the summation of our lives.

The ubiquity and interconnected nature of these technologies make participation unavoidable and, in many instances, compulsory. Consumers consent to collection on the loosest of terms simply by purchasing a product, interacting with software, or using a service. The degree of consent is defined only by inconsistent and often unclear company data policies or does not exist at all. Once collected, information can then be bought, combined, and sold to partners and third-party data brokers, or simply stolen when databases are breached by malicious actors. The use of this information is limited only by imagination. Consumer agency is an afterthought. There is no real option to opt-out and no control over what happens to information once it is collected.

Society has only recently begun to feel the unintended effects of being treated as a commodity. Algorithms crunch data to understand behavior and enable precision advertising, but often capitalize on perverse human behavior to optimize their goals. Algorithmic bias toward content that elicits strong emotion may drive clicks and maintain attention, but rewarding manipulative content polarizes and misleads consumers. YouTube’s recommendation engine encourages content that pushes viewers toward extremes.[4] Facebook’s content algorithms create filter bubbles that limit what information people see, weakening public discourse. Both outcomes are bad for a free and democratic society.

In advertising, the effect is the same whether products or politics are the focus. Cambridge Analytica successfully manipulated public opinion during a national election by exploiting intricate details about Facebook’s users.[5] Data informs decisions. With data pouring in from all industries, a de facto social credit system has emerged in which data compiled from social media, purchase history, and fitness devices is used to determine hiring decisions, loan approvals, and insurance premiums.

Cell phones are a fundamental part of life. An exposé in The New York Times reveals the extent to which detailed location information is tracked through our phones and the apps we use. This information can be obtained from third-party data collectors by anyone with the means. When Edward Snowden leaked several surveillance techniques used by our government to collect the metadata of millions of U.S. citizens, there was public outrage. The surveillance economy collects far more. Unlike government agencies, which operate under oversight and are limited by law in how and when they may collect information, the commercial sector gathers and uses data free from restraint.

Where once government agencies held a monopoly on surveillance, they now heavily depend on private sector entities to do the heavy lifting. The private company Palantir Technologies excels at analyzing vast amounts of surveillance information and has succeeded through partnership with governments around the world. Clearview AI specializes in facial recognition software that uses images scraped from numerous datasets to empower authorities to track their targets. Consumers drawn to the joy of sharing their lives on social media or storing their photography archives in the cloud are now swept up in industries of which they are entirely unaware.

A Global Concern

The negative effects of the surveillance economy are a matter of global concern that varies in degree around the world. Citizens of democratic societies may quibble over the tradeoff of privacy for products and services trusting that their governments will not overtly abuse the system, but those in more authoritarian regimes already face very tangible danger. Authorities in China and elsewhere use location data, transaction data, and CCTV footage to monitor the spread of Covid-19.[6] China uses similar technology and data in pursuit of “automated authoritarianism.”[7] Through a combination of private sector and state-sponsored technology, camera networks, mandatory smartphone apps, and facial recognition software, Chinese authorities track people with laser precision, spying on foreign visitors, silencing dissidents, and persecuting minority populations. Tech companies doing business in foreign countries may also be compelled by local governments to turn over information they store that is relevant to civil, criminal, or national security concerns.

The troves of data collected and stored by private companies are valuable targets for criminals and enemy nation-states. In 2020, researchers reported just over 1,000 data breaches.[8] Medical and business industries are the most common targets, and society has been conditioned to accept this unlawful collection of their information as an inevitable part of doing business in the surveillance economy. The fallout from data breaches is usually quantified in dollars lost, and criticism centers on company data security practices, when the public should be asking whether their data needed to be collected and stored in the first place. At best, citizens caught up in these breaches may be at increased risk of identity theft or become targets of social engineering and shady marketing. At worst, they can be discriminated against, blackmailed, or face life-threatening harassment. Regardless, the lack of meaningful laws, oversight mechanisms, and accountability measures in the surveillance economy leaves members of society with no option to protect themselves against either outcome.

Similarly, consumers may take for granted that billions of commercial IoT devices are manufactured in one country with software developed in another, transmitting data across borders where standards and security measures are perfunctory if they exist at all. IoT is inherently vulnerable simply by being connected to the Internet and tethered to apps. These devices are difficult to update when security concerns arise, and there are few mechanisms in place to alert consumers when they do. Yet they are welcomed into homes as interactive children’s toys, smart appliances, and fun new gadgets. Even when quality control is high, security is not guaranteed. The login information of thousands of owners of Amazon’s popular Ring home security system was compromised in 2019, and numerous customers have shared horror stories after their systems were hacked.[9] A massive distributed denial-of-service attack that knocked out the Internet for much of the United States’ east coast was launched from compromised DVRs. These risks are inherent to all networked technology, not just IoT. Nonetheless, the explosion of commercially connected devices reveals a market that is well ahead of any ability to manage the new threats it creates.

Boundaries for the Surveillance Economy

Laws and regulations that affect innovation, commerce, and competition should be weighed carefully and applied slowly. After two decades of economic transformation, there is enough evidence to warrant new limitations on the collection, storage, and sharing of data. Thankfully, lawmakers are beginning to respond. The European Union introduced its General Data Protection Regulation giving individuals control over their data and simplifying regulations governing international business. Not long after, state lawmakers in California introduced the California Consumer Privacy Act creating new privacy rights for their citizens including the right to know what information is being collected, to delete that information, and to prevent that information from being sold to third parties. Federal lawmakers in the United States now understand the national security issues created by insecure IoT and recently enacted the Internet of Things Cybersecurity Improvement Act. While the law focuses on IoT applications in the federal government, it has the potential to influence security standards in the private sector. These changes are a win for consumers, but there is room for additional legislation to further constrain the private sector and return to consumers the agency and privacy they have involuntarily forfeited.

For instance, lawmakers could mandate that the collection of specific types of information be opt-in, rather than opt-out, by default. Data collectors have long relied on ignorance and passivity to sustain the flow of information. Such a change would expose just how pervasive this practice is and prominently display the types of information being collected. Consumers could then make an informed decision about whether to participate. Lawmakers could also limit access to certain types of information between industries. A company like HireVue, which collects data of all kinds to help employers screen job candidates, does not need to know a candidate’s medical history or what they like to watch on Netflix. Data brokers also depend on gathering information from as many sources as possible and operate outside the view of the consumers they commoditize. Senators Gary Peters and Martha McSally introduced the Data Broker List Act of 2019, which would prohibit data brokers from acquiring personal information through fraudulent means or using such information for illegal purposes. The bill would also create a publicly accessible national registry allowing individuals to inquire about their information and how it is used.

Tech giants, under increased scrutiny by citizens, the press, and Congress, are also beginning to respond. Their reaction indicates that market pressure can provide a counterbalance as well. Apple’s latest iOS update requires apps to request explicit permission before tracking data. Google has made plans to abandon third-party cookies – a fundamental online tracking technology – and replace them with something it claims will allow online advertising to continue while obscuring users’ identities. The tech industry might find an eager market ready to respond to products and services that are built on principles of “privacy by design.”[10]

Data informs decisions for consumers as well. Since 2010, over 70 companies have issued transparency reports providing much-needed insight into company operations that impact privacy as well as providing details on which governments and third parties request user data.[11] Unfortunately, the number of companies providing these reports fell sharply after 2013. A widely adopted global standard requiring the annual production of these reports would further empower society to push back against abuses of private-sector information.

A Data-Rich World Without the Dystopia

As long as consumers accept the death of their privacy as necessary for their participation in commercial innovation, additional laws or oversight mechanisms targeting government entities will miss the forest for the trees. Certain aspects of the surveillance economy may very well be unavoidable. Smartphones must broadcast location information to function, and advertising is still a necessary fuel for the Internet. But privacy can be resurrected through careful control of the flow of information. The private sector can remain free to innovate with the information it is permitted to collect, and society will continue to reap the benefits without being forced to sacrifice its privacy in the transaction. Only then can concern over government overreach be meaningfully addressed.


[1] Sprenger, P. “Sun on Privacy: ‘Get Over It’.” Wired. 26 Jan. 1999. https://www.wired.com/1999/01/sun-on-privacy-get-over-it/

[2] “Inside the Mind of Google.” CNBC, 2 Dec. 2009. https://www.cnbc.com/video/2009/12/02/inside-the-mind-of-google.html

[3] Johnson, B. “Privacy no longer a social norm, says Facebook Founder.” The Guardian, 11 Jan. 2010. https://www.theguardian.com/technology/2010/jan/11/facebook-privacy

[4] Roose, K. “The Making of a YouTube Radical.” The New York Times. 8 Jun. 2019. https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html

[5] Ghosh, D. Scott, B. “Facebook’s New Controversy Shows How Easily Online Political Ads Can Manipulate You.” Time. 19 Mar. 2018. https://time.com/5197255/facebook-cambridge-analytica-donald-trump-ads-data/

[6] Prasso, S. “Counterterrorism Tools Deployed Against Virus Spur Privacy Fears.” Bloomberg Law. 6 Apr. 2020. https://news.bloomberglaw.com/privacy-and-data-security/counterterrorism-tools-deployed-against-virus-spur-privacy-fears

[7] Buckley, C. Mozur, P. Ramzy, A. “How China Turned a City Into a Prison.” The New York Times. 4 Apr. 2019. https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html

[8] “Annual number of data breaches and exposed records in the United States from 2005 to 2020.” Statista. Jan. 2021. https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/

[9] Vigdor, N. “Somebody’s Watching: Hackers Breach Ring Home Security Cameras.” The New York Times. 15 Dec. 2019. https://www.nytimes.com/2019/12/15/us/Hacked-ring-home-security-cameras.html

[10] Cavoukian, A. “Privacy by Design: The 7 Foundational Principles.” 2009. https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf

[11] “Transparency Reporting Index.” Access Now. Accessed 7 Apr. 2021: https://www.accessnow.org/transparency-reporting-index/