This guide provides good practices for business leaders to advance language for equity and inclusion within artificial intelligence (AI) and machine learning (ML) systems. The practices focus on management strategies and actions to take throughout the product lifecycle.
The guide also includes key understandings on language, power, and AI that informed the practices, along with tools to put certain practices into action.
This guide is for current and future business leaders seeking to learn about responsible innovation practices in the research and development of AI systems using ML. It is particularly relevant for MBA (Master of Business Administration) students who are pursuing roles in which they may need to make business decisions related to AI and ML research and development.
The area of language and AI is particularly important for business leaders. Language runs through AI — in data, data labels, and language-specific applications like natural language processing (NLP). These systems are susceptible to the same harms that occur in human communication, including reflecting and reinforcing harmful biases. Yet, existing management strategies to tackle bias and advance language for equity and inclusion are insufficient. The guide is a launching point to address this gap.
Advancing language that supports equity and inclusion within AI and ML systems can promote positive norms and lead to a more inclusive product experience, while also better reflecting a company’s mission, ethical principles, responsible innovation commitments, and stated product goals. This can enhance user trust and brand reputation while mitigating both reputational and regulatory risk. Responsible AI leadership is a competitive advantage that can drive the business, including by making the company a business of choice for local and national governments (a large customer for AI technology). As investors increasingly seek to incorporate environmental, social, and governance (ESG) considerations into investment decisions, centering equity and inclusion as core drivers for AI products can set companies apart.
Business leaders have a central role to play: they bear responsibility for connecting the mission and values of the company to the products and services it develops.
The guide was developed by looking at real-world business challenges for management and technical teams in leading global technology companies. We reviewed relevant academic literature across linguistics, sociology, computer science, engineering, and management. We received feedback from practitioners at a leading tech firm and prototyped the guide with MBA students.
Tools
Terminology guide: Harmful terms & alternatives
Use this to identify harmful terms in code and find replacement options (see the illustrative scanning sketch after this list)
Individual worksheet
Give this to your team members to individually practice critical thinking skills related to responsible language in AI and ML
Data labeling lesson plan (for data labelers)
Share this lesson with data labelers so they can learn about more responsible labeling of ML training datasets
Data labeling lesson plan (for software engineers)
Share this lesson with software engineers so they can learn about more responsible labeling of ML training datasets
Case study: Creating a responsible AI finance chatbot
Use this to have your team collectively practice critical thinking skills to develop an AI application using responsible language practices
Guide for difficult discussions
Share this with team members to learn strategies for discussing difficult issues in AI/ML research and development
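The terminology guide above is the tool itself; as a rough illustration of how a team might operationalize it, the sketch below scans source files for flagged terms and prints suggested alternatives. The term-to-alternative pairs, file pattern, and function names here are illustrative assumptions for this sketch, not the guide’s actual contents.

```python
# Minimal sketch: scan source files for flagged terms and suggest alternatives.
# The term map below is a small illustrative sample; in practice, populate it
# from the terminology guide (harmful terms & alternatives).
import re
import sys
from pathlib import Path

# Hypothetical term map for illustration only.
TERM_ALTERNATIVES = {
    "whitelist": "allowlist",
    "blacklist": "blocklist",
    "master": "primary",
    "slave": "replica",
}

def scan_file(path: Path) -> list[tuple[int, str, str]]:
    """Return (line number, flagged term, suggested alternative) for one file."""
    findings = []
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for term, alternative in TERM_ALTERNATIVES.items():
            if re.search(rf"\b{term}\b", line, flags=re.IGNORECASE):
                findings.append((lineno, term, alternative))
    return findings

if __name__ == "__main__":
    # Scan the directory given on the command line (default: current directory).
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for source_file in root.rglob("*.py"):  # adjust the pattern to your codebase
        for lineno, term, alternative in scan_file(source_file):
            print(f"{source_file}:{lineno}: consider replacing '{term}' with '{alternative}'")
```

A team could run a script like this in code review or continuous integration so that flagged terms surface early, with the terminology guide serving as the source of truth for which terms to flag and which alternatives to suggest.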
Part of the initial research that informed this work was funded by Google.
Interested in going deeper?
Read our paper, Advancing social justice through linguistic justice: Strategies for building equity fluent NLP technology, presented at the Equity and Access in Algorithms, Mechanisms, and Optimization Conference (EAAMO ’21) and published by the Association for Computing Machinery (ACM).
Read our action plan, NLP Tools to Promote Justice, which outlines actions to advance justice in NLP research and development and was co-authored with participants from our workshop at the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT).