Written by Jeff Hartsough, MBA Candidate Class of 2019 and Ada Maksutaj, MBA Candidate Class of 2020

On March 7th, the Center for Responsible Business and the Human Rights Center, in collaboration with Microsoft, hosted the third annual conference on Business, Technology, and Human Rights. The event gathered technologists and practitioners from industry, nonprofits, and academia, as well as students, around the topic of Artificial Intelligence (AI) for Social Impact.

As student advisory board members at the Center for Responsible Business, we want to be part of the movement to bring business to the forefront of social and environmental impact. Attending this conference and meeting some of the leading practitioners in the field of AI gave us the chance to understand how we can ensure this burgeoning technology helps achieve that goal. Some of our key takeaways were as follows:

AI can serve humanity and promote positive social, environmental, and economic outcomes through a focus on human-centered, inclusive AI design.

This means adopting a more conscientious design of AI algorithms and tools that puts the full diversity of human needs and consequences at the center. Adopting this approach will help alleviate the risks posed by AI, which have historically been the focus of nongovernmental organizations (NGOs) and governments. It will also help identify new opportunities for AI to have a positive impact, which has historically been the focus of technologists and industry.

Bringing diverse voices into the process is especially critical for marginalized groups—where AI could either amplify their marginalization or uplift those underrepresented voices. In the case of refugees and migrants, human rights defenders are leveraging AI to generate insights for and help protect advocates who are working on the ground. AI can provide these advocates with more objective tools to raise concerns or ask for investigations of abuses, or alternatively, to identify opportunities to provide better services where they are needed. At the same time, we need to ensure these tools are not used for surveillance or violence by oppressive regimes.

Companies also need to make sure that AI and automation will help create good, meaningful jobs for workers. Designing training systems with diversity and inclusion in mind is key to achieving this goal. This also begins with sourcing data inputs responsibly, as we are already seeing data collection and processing create a new kind of sweatshop in places like China. Closer to home, companies have a responsibility to invest in marginalized communities to ensure individuals in these communities have the training to take advantage of new opportunities created by AI systems. Focusing on the fundamentals of our systems, not just the technology, is critical in this work. For instance, education and skills training can improve access to new job opportunities created by AI. These efforts will contribute to building a more equitable world that balances automation and human wellbeing.

There is great concern over how AI can reinforce or create bias and discrimination, especially for marginalized groups.

AI systems can do this in three main ways:

  1. Data fed into the system may contain bias (e.g., gender stereotypes appearing in word-vector spaces; see the sketch after this list).
  2. The algorithm itself can develop bias when it is trained on an existing data set, because it will optimize for what’s already there. For example, a resume screener trained on an existing workforce will tend to surface more of the same kinds of candidates and reduce diversity.
  3. AI trained on one data set and in one context may not transfer well to a new context; it may have blind spots or may have been trained in ways that create bias in the new space.
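
To make the first of these concrete, below is a minimal sketch in Python of the kind of check practitioners run on pretrained word embeddings: scoring occupation words by how much closer they sit to “he” than to “she” in vector space. The vectors here are made-up placeholders used only to illustrate the computation; on real embeddings trained on web text, scores like these are what surface the gender stereotypes mentioned above.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def gender_association(word_vec, he_vec, she_vec):
    """Positive = closer to 'he', negative = closer to 'she'.

    A simplified version of the association tests used to surface
    stereotypes in word-vector spaces.
    """
    return cosine(word_vec, he_vec) - cosine(word_vec, she_vec)

# Placeholder 4-dimensional vectors for illustration only; in practice
# these would come from a pretrained embedding model.
vectors = {
    "he":       np.array([0.9, 0.1, 0.2, 0.0]),
    "she":      np.array([0.1, 0.9, 0.2, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.5, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.5, 0.1]),
}

for word in ("engineer", "nurse"):
    score = gender_association(vectors[word], vectors["he"], vectors["she"])
    print(f"{word}: association score = {score:+.3f}")
```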

There is increasing awareness of these issues in industry, and some organizations are creating tools and solutions that help developers eliminate bias and ensure fairness in their AI systems, such as IBM’s AI Fairness 360 toolkit, Google’s What-If Tool, and the Data Science for Social Good program at the University of Chicago. These solutions are helping AI developers clean and modify data, design algorithms, and test the resulting AI systems to identify and eliminate bias, as well as mitigate negative impacts of deploying these systems. Inclusive design and comprehensive checks and balances can help make sure AI is developed with a diverse set of voices.
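
As a rough illustration of what these toolkits automate, the sketch below (plain Python, with hypothetical hiring predictions and group labels) computes the selection rate for two groups and their disparate-impact ratio, one common fairness check. A real audit would rely on a library such as AI Fairness 360 and a much richer set of metrics.

```python
import numpy as np

def selection_rate(predictions, groups, group):
    """Fraction of people in `group` who received a positive prediction."""
    mask = groups == group
    return predictions[mask].mean()

# Hypothetical binary hiring predictions (1 = advance, 0 = reject)
# and a protected attribute with two groups, "A" and "B".
predictions = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")

# Disparate impact: ratio of the lower selection rate to the higher one.
# A common (if crude) rule of thumb flags ratios below 0.8 for review.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
```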

Existing human rights principles can help us frame these challenges going forward.

While these frameworks may need to be adjusted as we learn more about AI, we can begin by applying them more rigorously to these issues throughout the entire AI development and deployment process. These principles have worked across technologies, geographies, cultures, and various challenges for decades, particularly since the Universal Declaration of Human Rights was adopted in 1948. The 2011 UN Guiding Principles on Business and Human Rights provide even more relevant and direct guidance on how to incorporate human rights into these kinds of decisions.

Critically, companies and technologists are starting to move away from the “move fast and break things” mentality and beginning to implement these principles and frameworks. They are doing more beta testing to make sure they are designing for everyone and eliminating bias. Companies and technologists need to make sure AI systems are not judged solely on accuracy or recall, but on the impact the systems have on real humans and on their unintended consequences. What happens when the model goes wrong? Who bears the cost of those mistakes, and can they afford to? Closing the gap between the silos of technology and human rights can help all of us get better at our jobs and improve outcomes for all.
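
One way to move beyond headline accuracy is to break error rates down by group. The sketch below uses invented labels, predictions, and group memberships purely to illustrate the calculation: a model can look strong on overall accuracy while its false negatives, the people wrongly denied a benefit, fall disproportionately on one group.

```python
import numpy as np

# Hypothetical labels (1 = should receive the benefit), model predictions,
# and group membership; values are invented for illustration only.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"])

accuracy = (y_true == y_pred).mean()
print(f"Overall accuracy: {accuracy:.2f}")  # looks fine in aggregate

for g in ("A", "B"):
    deserved = (groups == g) & (y_true == 1)  # people who should have received the benefit
    fnr = (y_pred[deserved] == 0).mean()      # fraction of them wrongly denied
    print(f"Group {g}: false negative rate = {fnr:.2f}")
```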

We are beginning to understand how applying and scaling AI and data centralizes power and increases the need for appropriate protections and oversight. There are no quick fixes for the ethics and responsible design and use of AI. Regulations, such as the General Data Protection Regulation (GDPR) in the European Union, will play a critical role in data governance, data security, discovery, classification, control, and intelligence. But as these regulations emerge, human oversight, inclusive design, diversity, and an impact-first approach must also be top-of-mind in making AI a positive force for social impact and in helping us answer fundamental, normative questions about technology, power, and society.

About the Authors

Jeff Hartsough, MBA Candidate Class of 2019

Prior to Haas, Jeff worked in CPG data analytics, cleantech product management, and public sector management consulting. He is committed to creating technology that ensures a sustainable and equitable future.


Ada Maksutaj, MBA Candidate Class of 2020

Prior to Haas, Ada worked as a strategy consultant with Accenture, partnering with clients including the UN, World Economic Forum, and Pearson Education. Her work included digital strategy, business model innovation, and social intrapreneurship.
