Category Archives: Privacy

National Privacy Day Panel: Driving Privacy and Security in IoT

As part of National Data Privacy Day on January 28th, 2016, we are proud to have been selected to participate in an event sponsored by the California State Governor’s Office of Business and Economic Development, CyberTECH and the Ponemon Institute. Called Securing the Internet of Things: National Data Privacy Day 2016, the event was held in the California State Capitol Building and brought together leaders from the California State Government, educational institutions and private industry to discuss how all can work together to better protect privacy and security in the age of IoT (some of our thoughts on the subject can be found here).

 

Intertrust’s own Vivek Palan participated in a panel discussion entitled “Security, Privacy and Trust in IoT Platforms.” Moderated by Davis Hake from Palo Alto Networks, the panel also included Lance Cottrell from Ntrepid, founder of the well-known privacy tool Anonymizer; Peter Day from Bank of the West; and Ford Winslow from centrexIT. Vivek opened by describing the breadth of the issue: “Everything you see now from household products to medical devices will be affected by IoT. The only limit is our imagination… Intertrust believes that for IoT to be successful, there is a very strong need for a common security layer with open standards.”

Mr. Cottrell made a very interesting point about how to define IoT. At heart, IoT is really about computers, but the difference between IoT and other computing devices such as personal computers and smartphones is more psychological than technical. “The user doesn’t think of a device such as a connected car, smart meter or SCADA system as a computer but as a device that does something. The person who built it doesn’t think about it as a computer either,” (Cottrell). This also affects security: a laptop user is expected to be responsible, to a large extent, for their own security. The same expectation does not exist for IoT devices. Mr. Day put another spin on this, saying, “IoT really means a radical loss of control to end users.”

Need to Act Quickly

The panel emphasized the need for quick action to develop trust in IoT. Given the potentially ubiquitous nature of these devices and the intimate connections IoT devices will have to both homes and organizations, Mr. Day suggested that the risk environment for IoT is different from that of other computing environments. With the scope and threats of IoT deployments yet to be determined, he is particularly concerned about unforeseen risks. “The situation is similar to right before 9/11… Policy planners must think freely about the possibilities, free of what happened in the past,” (Day).

With a reference to the recent past, Intertrust’s Mr. Palan put forth one unnerving potential privacy risk around IoT. In June 2014, it came out in the press that Facebook had been manipulating some of their users’ newsfeed posts to see if it could change their emotional state. With consumer IoT devices potentially having access to very sensitive personal data throughout an individual’s life, “imagine the type of subtle manipulation these devices could do,” (Palan).

According to Mr. Cottrell, the dangers are increasing as many IoT manufacturers are putting out products without any clear guidance on who is liable for privacy and security. “IoT is essentially creating cyber security smog. Everyone can produce it but no one has to take responsibility for it,” (Cottrell).

Building Trust

Much of the discussion was about how to establish trust in IoT devices. Mr. Palan, drawing on his own past experience at startups, noted that many companies active in the IoT space are startups: “I can understand how the pressures of releasing a product quickly can sometimes lead to skipping non-visible aspects like security and reliability,” (Palan). According to Mr. Palan, however, this is likely to be just a temporary state of affairs; as time goes on, business pressures will make sustainable user trust a competitive advantage.

The panel as a whole saw a real opportunity for open standards, protocols and industry organizations to play a large role in IoT privacy and security. Mr. Cottrell stated that the industry needs to get away from its stance of relying on end-user education. “When you buy a phone charger, you don’t expect to have to do your own testing to make sure it is safe, you just look for a UL (Underwriters Laboratories) code on it,” (Cottrell). As to how this sort of “UL mark for IoT” security will actually work, “Open standards and protocols will be baked into products as a matter of course and standards bodies will make sure devices comply with security,” (Palan). The idea of introducing clear lines of liability for IoT privacy and security and coming up with indemnification mechanisms was a recurring theme throughout the panel.

Beyond the usual drumbeat of privacy and security hacks, Mr. Winslow suggested that a move from selling IoT devices to selling IoT services could provide an effective economic incentive for IoT security. “Six months ago, I saw a medical device manufacturer move to giving a device away for free and charging a subscription fee, getting 10 to 20 times the revenue,” (Winslow). With additional revenue and an added incentive to keep the service up and running, a subscription model means more resources available for security measures.

—

Photo Caption:

From left to right: Davis Hake (Palo Alto Networks), Vivek Palan (Intertrust), Lance Cottrell (Ntrepid), Peter Day (Bank of the West) and Ford Winslow (centrexIT)

Consumers Agree: Fix IoT Security and Privacy for Market Growth

As we greet the New Year (Happy New Year, everyone!), the tech industry starts things off, as it does every year, with a bang at the annual CES show. This year, with introductions of everything from new AI technology for connected cars to talking sunglasses, the consumer electronics industry is looking for its next big market. And it’s clear that consumer IoT (Internet of Things) is the theme of CES 2016. So it’s not surprising that the consulting firm Accenture picked this time to release an international survey of consumers’ attitudes toward IoT. The Accenture survey shows what many in the industry have been pointing out for a while: for the consumer IoT market to really take off, security and privacy concerns have to be effectively addressed.

Accenture surveyed 28,000 consumers in 28 countries in October and November 2015. They found that consumer intent to purchase IoT products such as smartwatches and fitness monitors in 2016 was around 7 to 13 percent, little changed from 2015. This relatively tepid enthusiasm can be explained by two perceived barriers. The first is cost, with 62 percent of consumers feeling IoT products are still too expensive. The second is the security and privacy risk of these products, cited by 47 percent of consumers. In the expected high-growth markets of China and South Africa, security and privacy risks were cited by 58 percent of those surveyed.

Most likely, consumers have been influenced by the spate of news stories about actual security and privacy flaws found in the IoT products on the market today. Wired Magazine has a good roll-up of some of the more egregious incidents of 2015, including a demonstrated remote takeover of a Jeep Cherokee and security holes found in smart refrigerators and dolls. A poll of U.K.-based security experts found that 75 percent felt IoT device manufacturers were not implementing appropriate security measures.

Consumers are not the only ones concerned; IoT privacy and security concerns have reached the government level as well. In Fall 2014, an organization of the European Commission released an opinion on IoT privacy, followed by the US Federal Trade Commission (FTC) in January 2015. In December 2015, the U.S. Department of Homeland Security (DHS) put out a call for startups in Silicon Valley and elsewhere to help develop IoT security.

Some of the issues with IoT security can be attributed to the fact that many companies now entering the market haven’t had much reason to worry about computing security in the past. This makes it all the more urgent for the IoT industry to create appropriate standards and best practices for security and privacy. There are, in fact, quite a number of standards consortia and industry organizations working on this issue. Of course, we recognize that hastily cobbled-together standards could lead to even more problems down the road. Still, given the threats to today’s consumers and tomorrow’s corporate profits, it seems a wise course for industry participants to commit even more resources in hopes of speeding the process along.

Impediments to Interoperability: Why Can’t We All Just Get Along?

Tomorrow I will be giving an invited talk at the Human Genome Variation 2015 conference in San Francisco. The title of my talk is Impediments to Interoperability: Why Can’t We All Just Get Along? For ethical and scientific reasons — some of which I will briefly outline in my talk — data sharing is imperative in genomics. And yet, very little data sharing actually happens. Why?

Technical standards are an integral part of the problem — systems need to be able to interact with one another at the level of bits and bytes. But making technical standards is the easy part. Organizational issues, often overlooked, can impede interoperability just as effectively as lack of common technical standards. For example, if a system requires that users be recognized as bona fide researchers, which authority is empowered to make this assertion? If there are multiple such authorities, how do they interact with one another? Are assertions made by one authority necessarily respected by the others? Are their assertions semantically equivalent? The assertion that a given person is a legitimate researcher depends on knowing the person’s identity — which authority (or authorities) have verified the identity? To what degree? And so on.

Even organizational issues seem minor in comparison to the many systemic factors and disincentives that frustrate interoperability. For example, academic or commercial competition can (and often does) prevent systems from working together. Privacy risks and fear of liability impede the flow of data, as do national laws and institutional policies. These issues appear to be almost insurmountable barriers to interoperability, but I will argue that organizations like the Global Alliance for Genomics and Health are ideally positioned to address these challenges, and indeed are making progress in understanding the underlying dynamics through a series of high-profile demonstrator projects. If we can clearly understand the factors that impede interoperability, then we can design systems that account for them rather than wishing (in vain) that they did not exist. This reality-based approach is the best way to unlock the potential of data sharing.


The official abstract for my talk.

At this very moment, a hard drive full of data is lying idle in a box under a desk in a nondescript laboratory. Buried among the countless gigabytes, there is a single bit of information that can save a life — a variant of unknown significance observed only once before, halfway around the world. This variant and its associated phenotype hold the key to understanding and treating a devastating disease. Unfortunately, the connections will never be made. A paper has been published, the researchers have moved on, and an anonymous child in a foreign country will finally succumb, without answers. Studies have shown that approximately 96% of variants predicted to be functionally important are rare, with an allele frequency below 0.5%. As a result, it is clear that no single institution will hold enough samples to achieve sufficient statistical power. In this context, collaboration is a necessity, not an option. Never before have we seen such a confluence of enabling technologies: inexpensive next-generation sequencing, virtually limitless storage and computational power, high-bandwidth communications channels, sophisticated machine learning. These technologies, which have revolutionized so many aspects of our lives, are poised to do the same for human health. And yet data sharing is still the exception rather than the norm. Why? The Global Alliance for Genomics and Health (GA4GH) is addressing these questions on multiple fronts, from file formats and API interoperability, to security and privacy, to nuanced regulatory and ethics considerations. In this talk, I will outline some of the challenges that impede our collective progress and describe how the GA4GH is helping to overcome them. As a member of the Security Working Group, I will focus in particular on issues of security, privacy, and trust, which can enable or inhibit interoperability just as effectively as agreement or disagreement over data formats. 
No single individual can ensure that the data on the hard drive underneath the desk will get into the hands of the geneticist that desperately needs it, but by working together we can build systems and institutions that do. I will conclude my talk with a discussion of how everyone in the community can get involved to help realize this important vision.

Squaring the Circle: Access and Genomic Privacy

Here’s a recent talk I gave at a Friday lunch seminar on balancing issues of data access and genomic privacy. Please enjoy, and don’t hesitate to contact us with your questions or comments!



Squaring the Circle: Access and Genomic Privacy

There is an apparent contradiction at the heart of genomics research. On one hand, we as a society have a moral imperative to share data broadly — it is no exaggeration to say that access to data saves lives. On the other hand, genomic data contains a wealth of private information that must be protected for both ethical and legal reasons. We must not be tempted to choose between access and privacy protection; this is a false dichotomy. We must demand both.

In this talk, Knox Carey, Vice President of Technology Initiatives at Intertrust, will argue that access to data need not come at the expense of privacy. He will describe methods being developed at Genecloud for privacy-preserving genomics research and explain how these methods address existing and emerging challenges in genomics research. He will also discuss the role of standardization in this area, focusing in particular on our work with the Global Alliance for Genomics and Health.

The Unknown Unknowns

A recent NPR poll indicates that many Americans would be willing to share health data, including health records, for research purposes. NPR and Truven Health Analytics interviewed roughly 3,000 people to determine whether they’d be willing to share their anonymized health data with researchers. Just over half (53%) said they would. A large percentage, but still a decline from last summer, when a similar poll found that 68% of respondents would be willing to share their information for research.

Why the change? NPR offers some speculation: part of it could be that in the first poll the question came after others on the use of electronic medical records by doctors, employers, insurers and hospitals, and the context might have affected how people responded. It could also reflect heightened sensitivity about data security; major privacy breaches have been a hot topic in American culture, from leaked pictures of celebrities to the extensive Sony hack.

While the 15-point decline is interesting, the fraction of people who would make their private data available is still very high. I think that the majority of people are data altruists — people who believe that making their personal health data available will advance research on diseases like cancer, cardiac disease, and Alzheimer’s. We should applaud their willingness to share.

What concerns me are the “unknown unknowns”. It is unlikely that many of the respondents draw a clear distinction between more routine health data (such as blood pressure) and genomic data. Genomic data is not strictly individual — it contains very sensitive information about relatives as well, including predisposition to disease, physiological conditions, etc. By making a decision to share genomic data, you are also making a decision on behalf of your family members. Furthermore, data released today may yield more and more personal information as the science improves. A decision to share needs to be made in the realization that both uses and abuses of the data will increase over time. Finally, the combination of genomic data from one source with information from elsewhere poses a serious threat to privacy, one that worsens as more and more information is collected.
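To make the last point concrete, here is a toy sketch (in Python, with entirely invented names and data) of a classic linkage attack: an "anonymized" dataset is re-identified by joining it against a public record on shared quasi-identifiers such as ZIP code, birth year, and sex.

```python
# Toy illustration of a linkage (re-identification) attack.
# All names, values, and variants below are invented for illustration.

anonymized_records = [
    {"zip": "94301", "birth_year": 1975, "sex": "F", "variant": "BRCA1 c.68_69delAG"},
    {"zip": "94040", "birth_year": 1982, "sex": "M", "variant": "APOE e4/e4"},
]

public_roll = [
    {"name": "Jane Doe", "zip": "94301", "birth_year": 1975, "sex": "F"},
    {"name": "John Roe", "zip": "94040", "birth_year": 1982, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(records, roll):
    """Re-identify any record whose quasi-identifiers match exactly one person."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in roll
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match recovers the identity
            matches.append((candidates[0]["name"], rec["variant"]))
    return matches

print(link(anonymized_records, public_roll))
# Both "anonymous" records are tied back to named individuals,
# even though no names appeared in the research dataset.
```

The attack only gets stronger as more auxiliary datasets become available, which is exactly why the risk worsens over time.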

Faced with these unknown unknowns, a rational strategy would be to ensure that the use of sensitive genomic data is governed persistently, now and into the future. This is our approach with Genecloud. Allowing access to private information for one purpose today should not automatically mean that you accept all uses of your data going forward, or leave the data exposed for arbitrary data mining. We believe that providing security and privacy assurances will ultimately increase data sharing, to the benefit of all.
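The idea of persistent governance can be sketched in a few lines of Python. This is a hypothetical illustration only (all class and field names are invented; it is not Genecloud's actual design), but it shows the key shift: consent is re-evaluated at every access, for a stated purpose, rather than granted once at the moment of sharing.

```python
# Hypothetical sketch of purpose-based, persistent data governance:
# a gatekeeper re-checks the consent policy on every access request.

from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentPolicy:
    permitted_purposes: set   # e.g. {"cancer_research"}
    expires: date             # consent can lapse over time

@dataclass
class GovernedDataset:
    records: list
    policy: ConsentPolicy

    def access(self, purpose: str, today: date):
        """Evaluate the policy at access time, not at sharing time."""
        if today > self.policy.expires:
            raise PermissionError("consent has expired")
        if purpose not in self.policy.permitted_purposes:
            raise PermissionError(f"purpose {purpose!r} not consented to")
        return self.records

dataset = GovernedDataset(
    records=["variant_call_1", "variant_call_2"],
    policy=ConsentPolicy({"cancer_research"}, expires=date(2030, 1, 1)),
)

print(dataset.access("cancer_research", date(2025, 6, 1)))  # allowed
# dataset.access("ad_targeting", date(2025, 6, 1)) would raise PermissionError
```

Because the policy travels with the data and is consulted on every request, a data subject can later narrow the permitted purposes or let consent lapse, something a one-time release of a raw file can never offer.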

Is Digital Privacy a Myth, or Can We Win the War to Control Our Data over the Open Internet?

The rise of the “information economy” has been fueled by a combination of efficiencies (higher performance at lower cost) in processing power and storage capacity, rabid consumer adoption of sensor-rich devices, and refinements in machine learning. This progress has made data an indispensable component of growth for any modern enterprise. As a result, entities of all kinds – commercial, governmental, academic – are motivated to collect and analyze as much data as possible to better understand us as individuals. Whether we are tracked as customers, constituents or research subjects, more of our consumption, preferences, geographic movements, biometric information, and even our DNA is being recorded and interpreted. The end goal may be to improve our experiences with products and services, allocate scarce resources more efficiently, cure disease, or simply gain an edge on a competitor.

Whether harmless or not, all of these data collection practices are becoming more invisible to us as we are “lulled” into giving up our digital privacy by accepting that such information can no longer be practically controlled if we are to engage as consumers in a connected world.

This raises questions about the wide array of seemingly “free” services offered via the Internet. Of course, none of these is truly costless to us; our data is the currency we pay with. Every time we browse, shop, make a reservation, pay, share, read, like or follow, we submit trackable identifiers that are collected and sold to marketers, insurance agents, app developers, publishers and others who profit from knowledge of our behavior. Disclosures about these data privacy practices may be written out in the lengthy legal agreements presented when we register for a new app or service, but they are wholly impractical for laypeople to read and understand. So by clicking “ACCEPT,” we are effectively signing away our data privacy and our rights to control how our data is collected, resold, and used.

Are any online services truly free? What are the true but hidden data privacy costs? 

To be clear, the “Internet era” did not create the practice of information management. Long before we were blindly accepting Terms of Use, we were sharing personal information for particular purposes: to get a credit card, diagnose an ailment, travel internationally, comply with a census, etc. In those scenarios, we were more comfortable sharing personal information because it was handled by an entity we believed we could trust, like a bank, government agency, physician, or co-worker. And at that time, we could reasonably assume that such information would not jeopardize our digital privacy, would remain with that entity, and would be used only for the explicitly stated purpose.

What makes us uneasy today is the notion that our personal information might be distributed to a broader group than we’d thought or used for a different purpose than we’d intended.  The notion that many disparate sets of data could be amalgamated to build a very accurate model of us is uncomfortable.  The fact that profit-seeking entities are now incentivized to accelerate and refine this process is all the more troubling because we don’t even know them, let alone trust them.

Would this change if these entities were both known and trusted? Is trust over the Internet the cornerstone to achieving digital privacy?

A host of regulatory bodies around the globe have produced rules governing the ways in which personal information may be collected and used, including the US-EU Safe Harbor Privacy Principles, COPPA, HIPAA, and the EU privacy directives; standards and industry bodies such as the IAB, NAI, and CNIL are putting forth frameworks to set limits, and the US NITRD program is researching the challenge as well. Most of these groups advocate for a rigorous regime of transparency, disclosure, accountability, consent (through opt-in), and “fair use,” which is necessary to preserve people’s “Datarights,” that is, the rights of individuals to control access to and use of personal data. But the elephant in the room is that none of this is sufficient until the ownership structure of data is rebalanced in favor of the individual.

At the moment, the balance of power on the Internet is highly skewed towards service providers with massive capital resources, while individuals are left with a difficult choice – use the Internet and surrender some measure of data privacy, or not use these services and become marginalized from society. With the right mechanisms in place, this dilemma can be solved.

What regulatory framework is required to manage personal data online? 

Twenty-five years ago, Intertrust Technologies pioneered Digital Rights Management (DRM) to protect the rights of copyright holders, and it continues to make significant advances in the field of trusted computing. The company’s technologies have been at the core of both first-party and externally developed solutions to fundamental security challenges such as code tamper resistance, content protection, and authentication of IoT (Internet of Things) devices. Today, Intertrust continues its work in these areas while also taking its technology into the field of protecting individual digital privacy.

To learn more about how Intertrust proposes to preserve privacy, please visit our blog.

 

Photo Caption: Die Erste Lücke in der Berliner Mauer, The first gap in the Berlin Wall

Data Privacy Day 2015: Governments Move Quickly to Address IoT Privacy Concerns

Today, January 28, 2015, is Data Privacy Day. As such, it’s apt that the US Federal Trade Commission (FTC) chose the previous day to release a major staff report on privacy and security for the Internet of Things (IoT). It’s worth noting that the FTC isn’t the only major government body taking action on IoT privacy. On September 16, 2014, a key European Union body tasked with protecting data privacy, the Article 29 Data Protection Working Party, also adopted a report on IoT privacy and security.

Such government action on protecting data privacy is welcome and timely. Consumer use and acceptance of IoT devices and services such as fitness monitors, connected cars, thermostats and smart watches is on the rise, and IoT is a promising new market for technology companies around the world. Still, the market is nascent and vulnerable to changes in public sentiment, and we are already seeing potential threats to individual privacy in the IoT market. Digital advertising, an industry whose members have in the past adopted questionable techniques that could infringe privacy, is clearly signaling interest in IoT. Already at the beginning of 2015, a mobile advertising company announced an advertising platform for the Apple Watch. Advertising can benefit individuals and clearly can play a beneficial role in the IoT market. But given the potential for breaches of IoT data to expose very sensitive information, the advertising industry, and the technology industry as a whole, needs to proactively take concrete action to protect data privacy on IoT. A failure to do so could jeopardize this promising market before it fully takes off.

Looking at the recommendations the FTC released in its report, they are basic, common-sense actions applicable to almost any digital business: adopting security by design, minimizing the data collected and retained by organizations, and ensuring that individuals have proper notice and consent around services that collect their data. Yet there is another recommendation the technology industry needs to take very seriously. The FTC only has the power to enforce existing laws, but it can be a powerful influence on legislation. In the report, the FTC calls on the US Congress to pass “strong, flexible, and technology-neutral” data security legislation. Should the technology industry ignore the FTC and an inevitable IoT data privacy scandal become part of the national conversation, this legislation could take a form the technology industry won’t like.

Today is Data Privacy Day in the US and Data Protection Day in the EU

Did you know that today (January 28th) is Data Privacy Day here in the US? In Europe it’s referred to as Data Protection Day. In fact, this day is celebrated in the US, Canada and 27 European countries as a day focused on raising awareness among families, consumers, and businesses of the importance of protecting the privacy of personal information online.

According to StaySafeOnline.org – a great site loaded with resources to help promote and make the most of the day – the Data Privacy Day goals are to:

  • Raise awareness and educate consumers to better understand how their personal information may be collected and the benefits and risks of sharing personal data
  • Empower consumers to express their expectations for the use, protection and management of their personal data
  • Inspire consumers through concrete, simple and actionable tips to more actively manage their online lives
  • Encourage and motivate consumers to consider the privacy implications of their online actions for themselves and others
  • Encourage businesses to be data stewards by being open and honest about how they collect, use and share personal information and clearly communicating any available privacy and security controls

Be sure to visit StaySafeOnline.org to get details on how you can be part of this important day.

  • Download materials such as posters, buttons, logos, and web banners to put your website to work spreading the message of data privacy and protection
  • Get tips for getting involved as a social media leader or champion, or simply find ideas on how to spread the message
  • See privacy tips on how you can protect your personal information and your data
  • Check your privacy settings, and get advice on how to update them at many of the websites the public commonly uses

whiteCryption takes data privacy and protection very seriously. High-end applications, from automotive and banking to government facilities, and high-value applications such as big data analytics engines and oil/gas exploration, require strategic solutions that help prevent any compromise of these secure environments. Our Cryptanium enterprise solutions add a layer of protection that helps avoid the limitations and risks of conventional application security, delivering the next level of obfuscation, self-defense, and tamper-resistance technology against piracy.

Along with many other data privacy and protection organizations and advocates, today and every day we are ALL IN! Be sure to help spread the word.

Data Privacy Day

A Very Special Data Privacy Day Message from your Friends at Genecloud

Ah, Data Privacy Day! The day when we pause to reflect on those (data) we have lost. Nowadays it’s so easy to get caught up in all the tinfoil and anonymizing proxies that we lose sight of the true meaning of Data Privacy Day. Sure, it’s fun to think about encryption algorithms and deidentification, but let us not forget that these are merely the visible trappings of much more important phenomena: choice, freedom, and control. On this Data Privacy Day I would like for us to think more deeply about privacy — not as a set of techniques and countermeasures, but rather as a reaffirmation of our own agency.

If you want to share, share. If you want to stay anonymous, stay anonymous. It is your choice: don’t let anyone else make that decision for you. Giving away access to your private information should be a quid pro quo, but too often it’s all quid and no quo. Even the most paranoid among us will acknowledge that there are absolutely legitimate reasons for sharing your information… sometimes. In healthcare, data sharing can be a beautiful, selfless act in which you intentionally exchange some of your personal privacy for the knowledge that your information is being used to help other people. If you are not careful, however, your altruism will do little more than inflate the valuations of companies whose business strategies consist solely of trading on your personal information. I am sure that their shareholders are grateful for your sacrifice. Let’s remember Henrietta Lacks, and learn from her example.

Science advances when the right researchers have access to the right data. Unfortunately, this has often led to wishful thinking about privacy — perhaps if we just check that the researchers are “legitimate” and remove the subject’s name and address, everything will be fine? In fact, no, things will not be fine. Recent work has shown that no amount of scrubbing can remove identifiable information from genomic data. The sooner we acknowledge this fact, the sooner we can get down to the serious questions: how do we create an environment in which access and privacy can coexist? How can we put patients and research subjects back in control of their own data? Genecloud was created to address these questions.

It’s an ambitious undertaking, and we need your help. If we, as individuals, do not demand control over our information, then we cannot expect anyone to cede it to us. Data holders will continue to take the most expedient course, which so rarely takes your privacy seriously. If we wish to change the status quo, we must make it inexpedient to ignore privacy concerns. So today, as you’re busy configuring your Tor exit node and checking the Bitcoin futures, take a moment to remember who is really in charge of your data. If that isn’t you, then by all means, do something about it!

Digital Security: A Look in the Mirror

As 2014 draws to a close, it is essential to look back on the financial and other implications of digital security and privacy for companies, particularly as IoT implementations are likely to grow. Despite the pessimism, there are rays of hope for 2015: privacy is not dead.

A Look in the Mirror

This past year, the world experienced such a high volume of brutal cyber attacks (which show little sign of decreasing) that they have become seemingly commonplace. These privacy breaches at a wide variety of companies make it more important than ever to take action to protect personal and corporate information online. This must be done to support an open society and to protect users’ flow of digital information the way they intend it. That is what the Internet of trust is all about, for individuals and enterprises alike.

Increasing Costs of Digital Security and Privacy

It is important to understand that digital security and digital privacy are two different things: digital security is necessary to ensure digital privacy. As criminals become more sophisticated and the need for digital trust and privacy increases, a new industry, cyber insurance, has arisen to help companies weather the very real consequences of cyber attacks. The extreme consequences of attacks on corporate data could lead to a jump in cyber insurance premiums. And that is just with the amount of data companies currently hold. What will happen when premiums take into account the growth in data that IoT implementations will bring? At this rate, cyber insurance is likely to become a necessity even for small and mid-size companies. How many can afford it, and what can companies expect regarding policies?

What Companies Can Do

To successfully achieve digital privacy and security, companies are already taking a number of newsworthy steps. Companies can:

  • Be prepared!
  • Incorporate Tripwire technology into Internet of Things (IoT) security.
  • Restructure and reorganize to focus on IoT security.
  • Put policies in place to address fundamental security practices.

Looking Forward to 2015

As more and more people and companies continue to use connected devices, how is it best to go about securing, making trustworthy, and managing access to networks and massive sets of data for a connected society? Where do trusted intermediaries fit in to this approach? This will be the primary focus of 2015.

As 2014 comes to a close, despite the pessimism there are in fact rays of hope in the continuing struggle to protect digital security and privacy. For starters, these concerns are bringing about necessary change: government and industry authorities recognize the value of, and are calling for, more privacy and security in digital and mobile applications. Furthermore, it’s encouraging to see the American technology industry take a principled stand to assure people that privacy is not dead.

Privacy is not dead.