Agentic AI for the Public Good: Expanding Access to Critical Services
Discussions about artificial intelligence ethics often focus on its adverse effects, including bias, job loss, opaque decision-making, and the challenge of aligning technology with human values. When deployed responsibly, however, AI can deliver substantial societal benefits by expanding access to essential services, reducing barriers to education, healthcare, legal aid, and finance in underserved communities.
Throughout society, underserved populations (i.e., low-income and disadvantaged communities, historically underrepresented groups, and people with disabilities) continue to face serious obstacles in gaining access to the services they need most. In healthcare, provider shortages, high costs, and remote locations leave many individuals without affordable or timely care. In education, students in low-income or rural areas often lack access to qualified teachers, individualized attention, and reliable technology. Access to legal help remains restricted by overworked public defenders and the high cost of private attorneys. In finance, individuals without credit histories or formal identification are often excluded from the banking system. These problems, compounded by poverty, language barriers, underinvestment, redlining, and weak infrastructure, create long-term cycles of exclusion that limit opportunity. The result is not only a lack of access but a lack of tools to improve quality of life, standards of living, and economic stability.
AI service agents can help bridge these gaps by offering mobile, affordable, and scalable services. Unlike traditional providers that are restricted geographically, these tools can operate online and reach anyone with an internet connection (notwithstanding persistent gaps in coverage, recent gains from satellite networks, and the availability of offline or low-bandwidth alternatives). Their low operating costs make services more affordable for individuals who might otherwise be unable to access care or support. Because they can be quickly adapted to local needs and languages, they are flexible and accessible. They can work continuously, respond instantly, and be built into mobile devices or community platforms, creating a steady presence where human resources are limited. This combination enables AI agents to deliver essential services to people who have long been underserved. When used responsibly, these systems could help drive social and economic progress in underserved areas.
To be effective, these tools must be deployed with an understanding that technology alone cannot reverse entrenched systemic disparities. Their reach and efficiency offer meaningful support, but they cannot compensate for weak institutions or long-standing inequities in funding and opportunity. Their potential will matter only if they are paired with sustained efforts to strengthen infrastructure, protect equity, and build long-term community capacity.
Legal Barriers
Two significant legal barriers stand in the way of scaling these technologies. The first is licensing. Licensing requirements for professions such as medicine, psychiatry, law, and finance are designed for human practitioners who meet established educational and certification standards. AI agents that perform similar tasks do not fit within this framework, leaving developers uncertain about how to comply with existing rules. So far, regulators have typically required licensed professionals to oversee or verify the system’s work. While this approach provides safety and accountability, it also eliminates much of the cost efficiency that makes these tools valuable in the first place. The continued need for licensed supervision limits the ability of such technologies to serve the low-income or geographically isolated populations that most need affordable access.
The second barrier is liability. When a digital system makes an error or causes harm, the law provides no clear answer about who bears responsibility—the developer who built it, the institution that deployed it, or the user who relied on it. Traditional malpractice laws assume human decision-making and professional discretion, which do not translate easily to autonomous systems. Without updated legal frameworks that define and assign responsibility, innovation will remain constrained.
Early Legal Movement
Early regulatory efforts are beginning to explore how to govern the rise of AI agents. The Utah Office of Artificial Intelligence Policy has taken a leading role in creating frameworks that allow experimentation while maintaining oversight, working with developers to test real-world applications under structured mitigation agreements. At the same time, legal scholars such as Dazza Greenwood are examining how traditional principles of agency and accountability can be applied to artificial agents acting on behalf of humans. His work focuses on defining the legal status, responsibility, and liability of AI systems within existing frameworks of contract and professional conduct. These early initiatives represent important progress toward understanding how law and policy can adapt to emerging technologies and ensure they operate safely, ethically, and in the public interest.
Funding Barriers
Funding remains one of the most significant obstacles for socially oriented startups and nonprofits developing technology to serve underserved communities. Unlike commercial ventures, these organizations often prioritize accessibility and equity over profit, making them less attractive to traditional investors seeking short-term returns. Many rely on grants or philanthropic support, which are competitive, limited in duration, and often insufficient to sustain long-term growth. As a result, promising ideas struggle to move beyond the pilot stage, even when their impact is clear. The lack of stable, mission-aligned funding also discourages experimentation and innovation in areas where technology could deliver the greatest social benefit. Building a viable ecosystem for these organizations requires new financing models—such as blended capital, public-private partnerships, and impact investment—that recognize the social value of expanding access and ensure that technology serving the public good can thrive.
Potential Risks of Deploying Agentic AI
Despite their promise, relying on artificial intelligence to deliver essential services carries significant risks for underserved communities. Many AI systems are built on datasets that mirror existing social and economic biases, which can lead to unequal treatment in critical areas such as healthcare, education, finance, and law. These systems often lack transparency, making it difficult to identify or challenge biased outcomes. When decision-making is automated, discrimination can occur on a large scale, reinforcing the very inequities the technology is intended to address.
Furthermore, even when AI improves access in the short term, it cannot replace the broader structural reforms needed to achieve real equality. Digital tools may be the first step in supporting these communities, but these very communities still operate within institutions shaped by unequal funding, policies, and opportunities. Without investment in physical infrastructure, education, and inclusive governance, these systems risk becoming a patch rather than a fundamental solution. True progress will depend not only on technological innovation, but also on ensuring that the systems surrounding it are just, inclusive, and designed to serve everyone.
Haas Students’ Role in Shaping the Future
Business education has a critical role in shaping how emerging technologies are built and governed, and institutions like Haas can prepare future leaders to approach artificial intelligence with a focus on long-term societal impact. By fostering a mindset that questions assumptions, values ethical decision-making, and recognizes the broader consequences of new technologies, Haas creates an environment where students learn to balance innovation with social responsibility. Through the Center for Responsible Business, students are encouraged to examine how innovation can advance equity and strengthen communities rather than deepen existing divides. Leaders trained in this setting will enter industry, policy, and entrepreneurship equipped to guide AI development in ways that support the public good.
Conclusion
AI agents have the potential to significantly improve access to essential services for underserved communities by lowering costs, increasing efficiency, and extending reach beyond traditional institutions. Yet realizing this promise requires navigating complex barriers. These challenges highlight that while agentic systems represent an important first step toward expanding opportunity, they are not a substitute for the broader structural reforms required to achieve lasting social change. True progress will depend not only on responsible technological development, but also on ensuring that the communities most affected have a meaningful role in shaping how these tools are designed, governed, and deployed.
As these systems evolve, the central question becomes how to balance the market pressures that drive AI development with the need for innovation that serves the public good. What mix of government incentives, regulatory adaptation, and mission-driven leadership will ensure that emerging technologies advance collective well-being rather than short-term profit motives?
Case Studies
- Dentacor, a Utah-based dental services provider, deploys an AI diagnostic agent that assists hygienists in identifying common oral health conditions such as cavities, gum disease, and tooth loss. The system analyzes dental radiographs and generates preliminary assessments that are verified by licensed professionals under the supervision of the Utah Office of Artificial Intelligence Policy (Dentacor and the Utah AI Policy Lab entered into a mitigation agreement that allows Dentacor to use dental hygienists instead of dentists when making a diagnosis). By combining this AI tool with mobile dental units, Dentacor brings preventive and restorative care to shelters, community centers, and transitional housing programs. This model lowers the cost of care, reaching individuals without insurance or access to a traditional dental practice and reducing preventable oral health complications in low-income communities.
- ElizaChat offers continuous mental health support to students in under-resourced schools through an AI conversational agent. The system engages users in guided conversations that teach coping strategies, encourage self-reflection, and monitor emotional well-being. When a student’s responses suggest potential risk, the system automatically notifies school counselors or designated mental health professionals. This design allows schools with limited staff to extend support beyond school hours and provide immediate, low-cost access to psychological assistance. For students in districts without sufficient counseling resources, ElizaChat offers a reliable first point of contact for emotional help, promoting earlier intervention and consistent care.
- Astra Tech operates a microfinance platform within its BOTIM fintech super app, providing small, low-fee loans to workers and families excluded from traditional banking systems. The AI agent automates credit evaluation and identity verification, enabling users—many of whom are migrant laborers and low-income earners—to access funds quickly and securely through their phones. These loans help recipients manage emergencies, support relatives, or start small businesses without the heavy documentation or interest rates common in conventional lending. By lowering barriers to credit, Astra Tech’s agent enables financial participation for millions who would otherwise remain outside the formal economy.
- Khanmigo, created by Khan Academy, serves as an AI tutoring agent that provides students with personalized academic support and assists teachers in the classroom. It interacts with students through adaptive dialogue, helping them reason through math problems, practice writing, and build subject understanding at their own pace. In schools with limited staff or overcrowded classrooms, Khanmigo provides individualized attention that many students would not otherwise receive. For learners in underserved regions, it turns a low-cost digital platform into a scalable tutoring tool, improving access to quality instruction and helping close achievement gaps that persist across socioeconomic lines.
- JusticeText equips public defenders with an AI system that automatically transcribes and organizes video and audio evidence, including police body-camera footage, interrogation recordings, and jail calls. Attorneys can search, annotate, and extract key segments within minutes, rather than spending hours on manual review. This efficiency allows underfunded legal aid offices to prepare stronger defenses for clients who cannot afford private counsel. By making evidence review faster and more accessible, JusticeText helps ensure that low-income defendants receive more thorough and equitable representation in the criminal justice system.
Author’s Note
My decision to study AI policy at Berkeley was shaped by the widening gap between academic discussions of safety and governance and the realities of how rapidly these systems are being deployed. My work with the Utah Office of Artificial Intelligence Policy demonstrated how quickly agentic tools are entering real-world environments and how difficult it is to align innovation with legal, ethical, and community needs. Haas has further emphasized the pivotal role business leaders will play in determining whether these technologies advance equity or reinforce existing divides. This article draws together perspectives I have encountered over the past half decade, reflecting both the risks that persist as these systems evolve and the opportunities they offer to broaden access and strengthen underserved communities when designed and deployed responsibly.
About the Author
Kiyan Mohebbizadeh is a CRB fellow and a JD/MBA candidate at the University of California, Berkeley School of Law and Haas School of Business. He holds an M.S. in Data Science from Columbia University and a B.A. in Business Administration with an emphasis in Finance from the University of California, Irvine. His work and academic interests focus on the intersection of emerging technologies, financial systems, and legal frameworks.