
Privacy: a key to avoiding dystopia


By Lizzie O’Shea



This article is based on a presentation originally given at the 6th LINE x Intertrust Privacy Summit in Paris, France, on October 28, 2019.

Privacy laws and regulations have traditionally been designed through the lens of contract. Contract law has been a foundation stone of modern capitalism, enabling people to do business by creating a regime that will enforce promises. The assumption of contract law is that agents in a marketplace are free and empowered to bargain with one another; the role of the state, therefore, is not to intervene in these private arrangements. Privacy involves regulatory standards that are undoubtedly complex, but the general presumption underpinning them is that the individual is responsible for their own privacy and in a position to negotiate this right with service providers via contract.

The reality is that contracts in digital settings are often lengthy and complex and bear little resemblance to this idealized understanding of contract. The artist Dima Yarovinsky has a beautiful and frightening installation laying out the terms of service for major platforms. Instagram’s terms of service are more than 17,000 words long and would take the average person nearly 90 minutes to read. This is a classic example of “over-legalling,” wherein companies develop legal mechanisms so complex to protect themselves that those mechanisms eventually work against the purpose for which they were intended.

Changing approaches to privacy

Legally and socially, the contractual approach to privacy is increasingly unsustainable. Many people’s understanding of privacy does not match current privacy laws or the contractual terms they may have formally consented to. This may well be resolved by law reforms that junk contractual notions of privacy and install a new regime that is less permissive for businesses in the data industry.

Additionally, a contractual understanding of privacy is becoming less acceptable politically. In the wake of privacy scandals involving Facebook and Cambridge Analytica, or Equifax, people have lost trust in technology companies. This translates to social movements with increased depth and sophistication, as well as an appetite for regulatory reform among lawmakers in many jurisdictions around the world. Perhaps most obviously, we have seen this in relation to Elizabeth Warren’s bid for the Democratic nomination for President of the United States. Her message resonates with voters who think tech companies are not accountable and that data is being misused. She may well win a mandate to act on that legislatively.

A key motivator for this growing political discontent is the imbalance of power between technology companies and users. This is reinforced by the opaque way in which the industry works, the enormous size of many technology companies, and the cavalier control these companies can exert over our most sensitive information. Users are treated as resources to be exploited, rather than people with agency and dignity. As a result, industry dictates its agenda to citizens as users (by setting unfair contract terms) as well as to citizens as voters (by allowing its business model to influence democratic voting processes). Unsurprisingly, the sense of disgust growing around surveillance capitalism only worsens when the industry appears to be manipulating the regulatory processes themselves.

Why is this important? Paris, the location of this conference, is a city where the decadence of the ruling elite, and a grotesque indifference to the suffering of the poor, gave birth to a revolutionary movement that transformed our understanding of citizenship and its associated rights. That movement also built the guillotine. Even if you personally are unconcerned with the prospect of growing inequality and obscene privilege—not to mention the powerlessness of citizens compared to companies that control their data—the experience of the ancien régime suggests that we ignore such social phenomena at our peril.

Privacy rights and human rights

Formal compliance with privacy laws is not going to save companies from the consequences of a data-extractive business model. This approach to privacy and data looks set to be heading in the same direction as fossil-fuel-extractive business models. People are tired of business models that privatize profit and socialize costs. They are increasingly becoming empowered to demand change.

Even if you appreciate that industry is currently too careless with privacy—or worse—understanding what an ethical approach to privacy might look like remains difficult.

One of the problems that arises in these discussions is metonymy: “privacy” is a single word that describes and stands in for many different concepts. So, when a company says that it “values your privacy,” what it often actually means is something transactional and technocratic, ignoring the conceptual and material implications of that approach. Secrecy, security, and anonymity all describe slightly different components of privacy, but regularly end up lumped together. In reality, many companies that talk about privacy only offer one, maybe two, of these guarantees. Such a promise is a fudge. Or as the Center for Digital Democracy puts it: “This is merely a ‘don’t-look-too-closely’ claim designed to head off the scrutiny their practices require.”

Significantly, privacy is often thought of by citizens as synonymous with autonomy or freedom. But privacy is more than simply having confidentiality and space hidden from view—it is about the capacity to explore our faculties with others without judgment. In other words, data generated about you should not be used as an indelible record of who you are; privacy is not just a technical approach to information management, but also, substantively, the capacity to determine our own sense of self.

This is the key way in which the age of big data has transformed our understanding of privacy. Once enough information is known about the basic behaviors of certain segmented audiences, it is possible to draw conclusions about those who fit that demographic, even if they have never shared a thing. Our digital experiences are crafted for us on the basis of our membership in a class, an intersecting set of collective traits, rather than our standing as individuals with agency and dignity. To suggest that a person alone can challenge these practices by taking individual responsibility for their privacy fundamentally misunderstands our social and technological environment.

The right to privacy is therefore a paradox, a combination of something that is both individual and collective. It is the right to be taken seriously as a unique person, not a set of assumptions based on stereotypes about how you might behave. But it is also the right to be able to collaborate, communicate, and explore our own personalities in shared spaces without judgment, the right to be part of a group on our own terms.

As such, how can industry respect privacy in circumstances where it is difficult to define, conceptually sophisticated, and may mean different things in different contexts?

Primarily, it means putting meaningful consent at the center of design. This is not so straightforward, given that individual consent often has implications for the collective.

Beyond this, my argument is that a social license to work with data requires building technologies that prioritize human rights, promote autonomy, and respect agency.

When I say human rights, I do not mean whatever you define ethics to be. Google has provided us with an instructive example in this respect. The company developed its own Artificial Intelligence Principles, and set up its own ethics board in March of 2019, the Advanced Technology External Advisory Council. The council was designed to look at how its declared principles applied to technology under development by the company, such as facial recognition and fairness in machine learning.

A week later, Google’s council was disbanded. One of the largest, most sophisticated technology companies in the world failed in its attempt to design a system for resolving ethical considerations. This is because our ideas of ethics can be varied, contradictory, and culturally specific. Ethics is too amorphous a concept, too difficult to define, let alone enforce.

Path to respecting human rights

It is critical that companies look to human rights compliance as a way to earn a social license. Human rights are a widely accepted set of standards; they have been the subject of analysis and testing for decades and there are recognized ways of resolving conflicts as they arise. For these reasons, human rights are the place to start for those in the data industry interested in earning a social license.

There are a few ways in which this can work in practice. I’m going to give you three, but there are many more.

First, we can start designing technology that serves not just the average user, but also keeps the concerns of the most vulnerable user in mind. This might look like an inverse of the threat modelling used in security testing—rather than imagining a sophisticated adversary, we conjure an at-risk user. So when we design a device that collects locational data in a way that cannot be switched off, we need to think about how that might affect a woman who uses the device while seeking to escape a violent partner, or a journalist trying to communicate with a source who requires anonymity. How might we treat that functionality, and how might we take responsibility for securing the data collected as a result? We need to stop thinking of the user as the everyday person and start thinking of them as a rights holder, who may be vulnerable for any number of reasons. We need to recognize that such a person is still entitled to expect their data to be used with respect for their right to life and freedom of speech.
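To make this concrete, here is a minimal sketch, in Python, of what such inverse threat modelling could look like inside a privacy review. The personas, feature names, and the rule applied here are hypothetical illustrations of the idea, not an established methodology or any particular company’s process.

```python
# Inverse threat modelling sketch: instead of imagining a sophisticated
# adversary, enumerate at-risk users and check each data practice against
# them. All personas and rules below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AtRiskPersona:
    name: str
    harm_if_exposed: str       # what goes wrong if this data leaks or is shared
    sensitive_data: set        # data types that create the risk for this person

PERSONAS = [
    AtRiskPersona("person escaping a violent partner",
                  "location history reveals their home or movements",
                  {"location", "contacts"}),
    AtRiskPersona("journalist protecting a source",
                  "metadata links the journalist to the source",
                  {"contacts", "call_metadata"}),
]

def review_feature(feature: str, data_collected: set, user_can_disable: bool) -> list:
    """Return the concerns the most vulnerable users would raise about a feature."""
    concerns = []
    for p in PERSONAS:
        overlap = data_collected & p.sensitive_data
        if overlap and not user_can_disable:
            concerns.append(
                f"{feature}: always-on collection of {sorted(overlap)} "
                f"endangers a {p.name} ({p.harm_if_exposed})."
            )
    return concerns

if __name__ == "__main__":
    for concern in review_feature("background location tracking",
                                  {"location"}, user_can_disable=False):
        print(concern)
```

The value of an exercise like this lies less in the code than in forcing a design review to name the people who could be harmed and the data that could harm them before a feature ships.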

Second, we can recognize the inherent limitations of the datasets we use. For example, Caroline Criado-Perez, in her award-winning book Invisible Women, has done considerable work showing how the experience of women is often invisible in the datasets we rely on, with very real consequences. When we analyse datasets, we need to think about what assumptions we are making about what that data says, and what might be missing. We need to enshrine principles of non-discrimination in the use of data.
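As a rough illustration of what interrogating a dataset before relying on it might involve, here is a minimal sketch in Python that flags groups whose share of the records falls well below an expected baseline. The field name, expected shares, and tolerance threshold are all hypothetical and would need to be set by people who understand the domain; this is one possible check, not a complete fairness audit.

```python
# Sketch of a representation check: compare how often each group appears in
# the data against a rough expectation, and flag large shortfalls.
from collections import Counter

def representation_report(records, field, expected_share, tolerance=0.5):
    """Flag groups whose share of the records is far below what we expect."""
    counts = Counter(r.get(field, "missing") for r in records)
    total = sum(counts.values())
    warnings = []
    for group, expected in expected_share.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if actual < expected * tolerance:
            warnings.append(
                f"'{group}' makes up {actual:.0%} of records "
                f"but roughly {expected:.0%} of the population."
            )
    return warnings

# Hypothetical example: a dataset in which women are heavily underrepresented.
records = [{"sex": "male"}] * 90 + [{"sex": "female"}] * 10
print(representation_report(records, "sex", {"male": 0.5, "female": 0.5}))
```

A check like this does not fix a skewed dataset, but it makes the gap visible before conclusions are drawn from it.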

Third, find ways to empower users. That involves talking to users, testing products carefully before releasing them, asking questions, and listening to the answers about how users understand their own privacy needs. Rather than “move fast and break things,” we need to take responsibility for mistakes and stop treating users as lab rats on whom we can experiment with new technologies. The rush to get to market is an undeniable influence on the development of technology, but sustainable and responsible business is key. Show humility and consult with openness and integrity. Accountable decision making, a sense that users understand the rules and can have a say in how they are made, is the basis of the right to public participation that underpins democracy.

There are other ways in which respect for human rights can generate a social license for the data industry, but these are some to get you started.

You are your clients. The days in which companies could offer their services to irresponsible, destructive, and oppressive technology projects without consequences are over. That pressure may come from activist groups using targeted consumer boycotts, or from workers in technology companies taking a stand against oppressive technologies. The UN Special Rapporteur on extreme poverty has just issued a report specifically criticizing companies that participate in the digitization of welfare services at the expense of human rights. Some of the most powerful companies in the world are facing the challenge of users holding them responsible.

In the information age, the most sustainable business models, and the best protection from censure, will be those with a social license to operate.

Why are the stakes so high?

Digital technology presents some of the greatest potential to overcome the pressing problems facing humanity. It’s more important than ever that we have the best minds working on problems that are in the public interest, in circumstances where people feel they can trust the institutions that are making decisions about them.

The great systems theorist Stafford Beer liked to remind us that “the purpose of a system is what it does.” If we create an industry of data collection, management, and analysis that solves social problems and empowers users, we will have a purpose we can be proud of.



About Lizzie O’Shea

Lizzie O’Shea is a lawyer, writer, and digital rights advocate with Digital Rights Watch. She has lobbied elected officials on privacy, campaigned for law reform, and recently published Future Histories, which argues that historical thinkers and movements for social change have relevance for debates about technology today.
