Advancing the Dialogue: AI, Business, and the Future of Human Rights
Artificial intelligence (AI) is rapidly reshaping how we live, interact, and do business. With this radical digital transformation comes a wave of important ethical questions: How do we ensure AI is developed responsibly? How do we protect equity, justice, and human dignity in a world increasingly influenced by black box algorithms?
Earlier this year, the Berkeley Haas Center for Responsible Business (CRB) brought these critical questions to the forefront through a timely and powerful event: “At the Brink: Confronting Business, AI, and Human Rights for a Just Future.” The conversation explored the real and growing risks AI presents and what companies must do now to prevent harm, protect human rights, and build emerging technologies that serve our communities for the better.
Lindsey Andersen with Business for Social Responsibility presents on the current landscape of AI and human rights.
AI is Taking Off… But at What Cost?
As governments and corporations race to invest in an AI market projected by the UN to grow to $4.8 trillion by 2033, the scope of the social and environmental costs remains largely unclear. What’s often missing from this conversation is a deep and thoughtful focus on impacts on human rights — from discrimination and mental health harms to privacy and surveillance.
On February 20th this year, CRB invited Lindsey Andersen, Associate Director of Technology and Human Rights at Business for Social Responsibility (BSR), for the third installment of CRB’s Human Rights Series. Andersen laid a powerful and insightful foundation for understanding AI’s human impact. One of the main takeaways for students was that if an application affects humans in any way, it affects human rights in some way, too. To protect these rights, she outlined actionable strategies companies can use to govern AI responsibly, such as:
- Identifying high-risk use cases
- Conducting human rights risk assessments
- Creating cross-functional AI governance teams
- Embedding ethical AI policies across business units (and hopefully government too!)
These aren’t just best practices — they are critical safeguards for any business seeking to mitigate risk and align innovation with responsible business, justice, and equity.
Jenny Vaughan, Haas professional faculty, explains the basic principles of human rights in a business context.
From Paris: A Shift from Safety to Speed
To build on Andersen’s insights, we can look to the global stage — specifically the recent AI Action Summit in Paris. Formerly called the AI Safety Summit, the gathering was rebranded this year, and the change was more than a new name. It reflected a deeper ideological and policy shift: a prioritization of rapid innovation and global competition over safety, governance, and accountability.
To unpack what that means for the future, I spoke with Genevieve Smith, Founding Director of the Responsible AI Initiative at the UC Berkeley AI Research (BAIR) Lab and faculty at Haas.
“In previous iterations, this summit was actually called the AI Safety Summit — not the AI Action Summit,” Smith shared. “So that kind of tells you something about how the period we’re in now is a little bit different. My reaction to the summit as it relates to governmental action and progress, or lack thereof, is that the predominant notion right now is innovation above responsibility – a trend led by the current US administration.”
The rebrand is not surprising. A noticeable shift began shortly after the Trump Administration took office and rolled back ethical AI commitments, including rescinding President Biden’s executive order on trustworthy AI. Still, Professor Smith found hope in the responsible AI space and emphasized that the summit wasn’t only about governments’ new priorities.
“The summit drew together folks from industry, academia, and civil society as well. There were all sorts of different side events, and those side events had really interesting and exciting conversations. People are bringing up and talking about different tensions that exist, asking how we grapple with AI innovating super rapidly when we have not sorted out the societal impacts and risks.”
Although many global leaders are no longer talking about responsible AI, plenty of people are still researching, discussing, and implementing it. The growing conversation on how to create technologies that are built for people and not just profit presents an incredible opportunity for students and young professionals to make a difference during this new digital era.
Students and Young Professionals: Your Voice + Expertise Matters
One of the most exciting moments in our conversation with Professor Smith was her call to action for students and early career professionals to get more involved, especially here at UC Berkeley where we sit at the heart of technological innovation. Many of us will go on to shape, work for, or partner with the world’s leading technology companies and startups.
To help students engage with responsible AI from the start, Smith shared a practical resource: “We created a Responsible Generative AI Playbook for product managers and business leaders,” she said. “It translates academic research into super usable, real-world guidance — a practical toolkit for students and professionals.”
Critical concerns the playbook covers include privacy, inaccuracy, transparency, bias, and safety.
Whether you’re studying business, computer science, engineering, public policy, law, or any other discipline, your perspective and expertise matter. Bringing a human rights lens to your work — internship, research, or future job — can influence the trajectory of AI for the long term. Professor Smith recommends three key actions to prepare for the evolving workplace:
- Read the AI Playbook.
- Attend AI-related events and courses on campus. (To stay informed about opportunities, sign up for the UCB Responsible AI Newsletter!)
- Get involved in cutting-edge campus research.
These steps offer practical ways not only to stay informed but also to contribute actively to the responsible evolution of AI technology. If you are interested in learning more from Professor Smith, she will be teaching a course called Responsible AI Innovation Management (UGBA 192T) this fall. You can also sign up for our CRB newsletter if you would like updates about future opportunities.
A Just and Equitable Future Starts Now
AI will continue to evolve quickly, and adoption will only grow. The question is no longer if people will use AI — but how. The insights shared by Lindsey Andersen and Genevieve Smith remind us that responsibility isn’t an afterthought to innovation — it should be part of the process.
We each have a role to play in shaping a future where AI positively serves everyone for the long term. Whether you’re in a classroom or a boardroom, your voice and expertise matter now and in the future. The story of AI shouldn’t be written by black box algorithms. It should be shaped by the values, policies, and choices we elevate and support today.
About the author: Missy Martin is a first-year MBA candidate and a Fellow at the Center for Responsible Business. Prior to Haas, she served as Chief Sustainability Officer at the Web3 startup Sapien Network, where she focused on community building through technology, advancing social impact via blockchain, and exploring the potential of decentralized science. Before Sapien, she supported business development at the circular economy platform Loop and led leadership development and mentorship programs at Tesla and the Bay Area Environmentally Aware Consulting Network. She is a passionate community builder who volunteers her time consulting on climate justice initiatives and supporting her local food ecosystem. Missy is a graduate of Harvard’s Executive Education in Sustainability Leadership program and earned her B.S. in Society and Environment from UC Berkeley.