By Kate Alper, CRB Program Manager

When I bought a new computer at the Apple store this fall, the Apple specialist diligently advised, “Do not save your pictures to Dropbox. Your agreement with them allows them to view and use your photos for marketing.” While I was unsure of the validity of this claim, her warning nonetheless raised the specter of an increasingly vexing question for users and all company stakeholders: What responsibility do tech companies have to protect customers’ public and private personal data?

Earlier this year, the European Court of Justice (ECJ) ruled in favor of user privacy, supporting the “right to be forgotten” and requiring Google to remove links to information that is “inadequate, irrelevant or no longer relevant” when a user requests it. In the U.S., where the tension between the right to privacy and the freedom of the press has a long and complex history, lawmakers have been slower to adopt such protections.

Americans are still questioning their own “right to be forgotten.” Should the Internet and the innovations of the last twenty years change our approach to privacy and the attendant questions of freedom of speech? As throughout American history, the burden is on both companies and consumers to help define the future of this issue. U.S. law protects the freedom of the press for social media posts just as it does for printed articles in newspapers and magazines. The 140-character tweets that leak celebrity selfies today are not so different from the scandalous newspaper articles that helped define American views on freedom of the press in the early 20th century.

My Dropbox data is private. But I also have public data available to anyone with a search engine. In a recent Forbes article, Joseph Steinberg argued that information protected by the Fair Credit Reporting Act, which allows data to drop off an individual’s credit report after a set period of time, can still easily be found by credit companies on the Internet. With a single search I can find any news article, or public tweet, that mentions my name. The adage that “yesterday’s news is tomorrow’s fish-and-chip paper” is no longer true.

Likewise, A. J. Liebling’s famous quip that “freedom of the press is guaranteed only to those who own one” may well apply to today’s tech giants. Google, Twitter, Facebook, and Apple have broad access to the data we give them, and some use it to support their business models through targeted advertising revenue.

These companies seem increasingly powerful. Making news this month, Google leased NASA’s Moffett Federal Airfield in California for $1.16 billion to house its private jets and support research in space exploration and robotics. The argument has been made that today’s corporations are more powerful than governments, and as we know from the Supreme Court’s recent Citizens United ruling, corporations hold certain rights as persons. As they enter new markets and file lawsuits in their own interest and in the interests of their users, tech companies have the power to shape the policies of the future. Yet they themselves must navigate the tension between privacy and security, even as they influence how that balance is struck in the U.S. and abroad. When are tech companies overreaching? What are their responsibilities as global citizens to protect their stakeholders?

Our Peterson Series’ panelists this semester have shed light on these questions:

  • Brian Farhi of Nest, a member of our Disruptive Innovation panel and a Haas alum, told us that Nest technologies offer huge potential to improve our lives while protecting user anonymity. The average person thinks about energy for six minutes a year; Nest thermostats are smarter than we are about it, and can use our data to help us make better decisions, saving us money and protecting the environment.
  • In our panel on conflict minerals, Zoe McMahon of Hewlett-Packard described creating an online chat room where consumers can discuss human rights issues. Companies are responding to real-time issues on the ground, just as we are. HP connected its online dialogue with consumers to its work with NGOs in rural communities and its compliance with government policies, in ways that promote transparency. This kind of dialogue, whether about hardware or data, can help foster understanding.
  • Manas Mohapatra of Twitter, on a panel this November, explained how the company sued the Turkish government to protect its right to keep live a tweet that accused a former Turkish official of corruption. Though the dispute led to Twitter being briefly blocked in this emerging market, the company ultimately prevailed, showing how businesses can strike a balance between privacy and security as they move into new markets.

I still save my photos in Dropbox. But that day at the Apple store prompted me to think about perhaps the biggest lesson in all of this: protecting my own privacy requires me to be informed as a consumer. The tech industry cannot do this alone. Tech companies respond to pressure from consumers and supporting third parties. Websites such as Chilling Effects, a project at Harvard’s Berkman Center for Internet and Society, track and publish sensitive data that companies themselves may hesitate to include in their yearly reports. Third parties are vital in supporting honest dialogue about user privacy.

This is the type of dialogue that we hope to foster at the CRB. Join us next semester as we continue our Peterson Series with conversations on human rights and labor issues. Hope to see you there.
