The Haas AI Access Pilot Project is an exciting initiative at UC Berkeley’s Haas School of Business, designed to explore how advanced artificial intelligence (AI) tools can enhance learning, teaching, research, and administrative workflows. The pilot provides selected students, faculty, and staff with access to cutting-edge Large Language Models (LLMs)—including OpenAI’s GPT-4 (via Microsoft Azure), Anthropic’s Claude, and Google’s Gemini—through a secure and centralized platform.
The project has several key goals:
- Foster equitable AI education: Provide all students with equitable access to advanced AI tools, empowering them to confidently develop skills and harness these technologies for learning and career development.
- Enhance productivity and problem-solving: Enable the Haas community to utilize AI for academic research, coursework, administrative tasks, and creative projects.
- Shape the future of AI at Haas: Gather feedback from participants to inform the development of ethical and scalable AI integration strategies across educational programs.
This pilot places a strong emphasis on privacy, security, and responsible use. The platform complies with UC Berkeley’s data-handling policies, ensuring that sensitive information is protected when interacting with AI models such as GPT-4 (via Microsoft Azure). While tools like Claude and Gemini are made available, participants are advised not to input sensitive or confidential information into these models, as they are not yet part of Berkeley’s privacy agreements.
By joining the Haas AI Access Pilot Project, participants are not only gaining valuable real-world experience with advanced AI but also contributing to the development of ethical, innovative, and inclusive AI applications for the Haas community.
Explore the FAQs below to learn more about this platform, its features, and the guidelines for using it responsibly.
Introduction and Access
The Haas AI Access Pilot Project is an initiative designed to explore the integration of advanced artificial intelligence tools into education at Haas. Faculty, students, and staff gain access to AI models, such as OpenAI’s models via Microsoft Azure, Anthropic’s Claude, and Google’s Gemini, through a secure, centralized platform.
The project’s goals include:
- Enhancing learning, teaching, and administrative workflows for the Haas community.
- Gathering feedback to inform future scaling and implementation.
- Providing equitable access to advanced Large Language Models (LLMs) for all students, ensuring they can develop skills for working with cutting-edge AI technologies regardless of individual resources.
Access to the Haas AI Platform is limited to participants selected for the pilot, including:
- Students: Gain access to models from OpenAI (via Microsoft Azure), Claude, and Gemini.
- Faculty: Have full access to OpenAI (via Microsoft Azure), Claude, and Gemini models. Faculty whose classes are not yet part of the pilot may still request access.
- Staff: Restricted to OpenAI models via Microsoft Azure to ensure compliance with UC Berkeley’s privacy agreement, which allows staff to work with up to P3-level data.
Access the platform by visiting https://chat.haas.berkeley.edu/login and logging in with your CalNet ID. No additional credentials are required for participants selected for the pilot. Faculty members not included in the pilot can request access by emailing [email protected].
No, special credentials are not needed. You can use your CalNet ID—the same credentials you use for other University systems.
The platform provides access to the following models based on participant type:
- OpenAI Models via Microsoft Azure (e.g., GPT-4, GPT-4 Turbo): Available for all participants (students, faculty, and staff).
- Claude by Anthropic: Available for students and faculty.
- Gemini by Google: Available for students and faculty.
Note: Staff are restricted to OpenAI Models via Microsoft Azure due to established privacy agreements ensuring compliance with P3-level data handling.
Important Reminder: Sensitive or confidential information should not be entered into the Claude or Gemini models, as they are not yet subject to the same privacy agreements as the OpenAI models via Microsoft Azure.
The Haas AI Platform offers several advantages over the free version of ChatGPT:
- Advanced Models: You gain access to GPT-4 (via Microsoft Azure), Claude, and Gemini, which are more powerful and capable than the free GPT-3.5 model typically available with ChatGPT.
- Privacy Compliance: OpenAI models are hosted on Microsoft Azure, which complies with UC Berkeley’s privacy agreement for handling up to P3-level data securely.
- Additional Features: LibreChat, the interface used in the pilot, offers features such as presets, bookmarks, multi-conversation support, and anonymous chat history sharing.
- No Cost: Participants can use the platform at no cost during the pilot.
- Feedback Opportunity: Faculty, students, and staff can provide feedback through surveys to shape the implementation of AI tools at Haas.
Costs and Privacy
No, there is no cost for participants to use the Haas AI Platform during the pilot. All usage fees are covered by Haas as part of its efforts to explore AI tools and their potential for enhancing teaching, learning, and operations.
Prompts and chat data submitted to the platform are collected and stored for operational purposes but are not actively reviewed or monitored. These records are handled in accordance with UC Berkeley’s Email Service Policy, which ensures proper use and privacy protections.
- Data generated using OpenAI Models on Microsoft Azure complies with the university’s privacy agreement, which allows the secure handling of data classified up to P3 level.
- Claude and Gemini models are not yet part of this privacy agreement, so avoid entering sensitive or confidential information when using these models.
Chat history is stored by the platform and is not actively monitored. However:
- All usage must comply with UC Berkeley’s guidelines for ethical and responsible tool usage.
- The platform respects the same university privacy policies as email or other systems.
To protect your data, always avoid sharing sensitive, personal, or highly confidential information in your interactions. Refer to UC Berkeley’s Email Service Policy for more details.
The pilot collects aggregate usage data to evaluate the platform’s performance, scalability, and educational impact. This includes metrics such as:
- Number of queries
- Query lengths
- System-level costs
Important: No sensitive, personal, or identifiable user information is collected or stored.
Yes, to ensure fair access for all participants, the platform may enforce limitations such as:
- Limits on the number of queries per session or day.
- Character or token limits for prompts or responses (determined by system constraints).
Participants will be notified of specific limits if they become relevant during the pilot.
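As a rough way to stay within prompt limits, you can estimate token counts before submitting long text. The sketch below is illustrative only: the four-characters-per-token heuristic is a common rule of thumb for English text, and the limit value is an assumed placeholder, not an actual platform setting.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: English text averages ~4 characters per token."""
    return max(1, len(text) // 4)

def within_limit(prompt: str, token_limit: int = 4000) -> bool:
    """Check a prompt against a hypothetical per-request token limit."""
    return estimate_tokens(prompt) <= token_limit

prompt = "Summarize the attached meeting notes in five bullet points."
print(estimate_tokens(prompt), within_limit(prompt))
```

If the pilot announces concrete limits, substitute those values; until then, treat any number here as a guess.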
Responsible Use
To use the Haas AI Platform responsibly for your coursework:
- Follow your course’s Generative AI policy: This policy should be outlined in your syllabus or shared by your professor.
- Avoid using the tool to complete assignments unless explicitly permitted. Misuse of AI in coursework may violate academic integrity standards.
- Do not input sensitive or personal information: Keep queries focused on learning and avoid providing private or identifiable data.
- Leverage the platform to enhance learning: Use it for tasks like brainstorming ideas, exploring concepts, practicing questions, or improving your understanding of course material.
When in doubt, consult your professor for guidance on allowed uses.
Follow these precautions to ensure safe and effective use of the platform:
- Do not share sensitive or confidential information: Even though data is protected, avoid entering anything highly sensitive (e.g., passwords, private records).
- Use the Claude and Gemini models with extra care: Since these models are not covered under UC Berkeley’s privacy agreements, refrain from sharing sensitive data or information.
- Respect your course’s policy: Always verify with your professor whether a specific use of generative AI is permitted.
If you encounter accessibility challenges while using the Haas AI Platform:
- Contact Haas Digital at [email protected].
- The team will provide assistance and resources to address your concern and ensure equitable access to the platform.
Features of the Platform
The Haas AI Platform, powered by LibreChat, offers several productivity-enhancing features:
- Presets: Save predefined settings for conversations, including system instructions, tones, and preferred response styles.
- Bookmarks: Save important conversations for easy access and future reference.
- Recently Used Files: Access uploaded files during active sessions through the “Attach Files” tab on the right-hand side menu.
- Chat History: Review, revisit, or build upon previous conversations securely.
- Saved Prompts: Save commonly used queries or instructions for repeated use.
- Anonymous Chat History Sharing: Share chat summaries securely without revealing personal details.
Presets allow you to save commonly used settings for new conversations, such as:
- System Instructions: Predefine roles for the AI, such as “Provide financial analysis” or “Act as a business consultant.”
- Response Style: Adjust the tone, creativity, and length of the AI’s responses based on your needs.
To access or manage presets:
- Go to the “Presets” tab in the platform interface.
- Select, create, or edit presets for recurring tasks.
The bookmarks feature lets you save specific conversations for future reference. This is helpful for keeping track of important ideas, summaries, or tasks.
To bookmark a conversation:
- Select the Bookmark icon at the top of the chat window.
- Access all your bookmarks from the “Bookmarks” tab in the left-hand menu.
Uploaded files remain accessible during an ongoing session and can be located via the “Attach Files” tab on the right-hand side menu.
Reminder: Uploaded files are not stored permanently once your session ends. Be sure to save your outputs or retrieve necessary materials before completing your session.
Yes, chat history is available for viewing and revisiting previous conversations. Chat data is stored to enhance functionality but is not actively monitored.
Your chat history is also managed according to UC Berkeley’s Email Service Policy. For privacy, avoid entering sensitive or confidential information.
Saved prompts let you store frequently used inputs for quick and easy reuse in future sessions. For example:
- If you commonly ask, “Summarize this article in bullet points,” you can save this prompt and use it repeatedly without retyping.
To save a prompt:
- Enter text in the chat input area.
- Click the Save Prompt button to store it for later use.
- Access your saved prompts in the “Saved Prompts” panel.
The fork option allows you to create a new conversation that branches off from specific messages within an existing chat. It’s helpful for exploring alternative responses, starting fresh discussions based on earlier content, or revisiting specific messages without altering the original conversation.
How it works:
“Start Fork Here”:
Use this option to begin a new conversation starting from a chosen point. For example, you can select a message in your chat history and create an entirely new thread continuing from that point.
Target Message Options:
- The fork can begin either from the specific message you have selected or from the latest message if “Start Fork Here” is unchecked.
- The platform creates a separate conversation while keeping the original intact.
Situations where forking is helpful:
- Exploring different paths or ideas while keeping your original conversation intact.
- Revisiting part of a conversation to rephrase or reanalyze content, such as brainstorming or generating new drafts.
- Comparing outcomes when asking the AI alternate questions from the same starting point.
You can access the fork option in the chat toolbar by selecting the fork icon and deciding how to configure the new conversation.
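Conceptually, forking treats a chat as a message tree: the new conversation shares history up to the chosen message and diverges after it. The minimal sketch below illustrates that idea only; it is not LibreChat’s actual implementation, and all names in it are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """A conversation modeled as an ordered list of messages."""
    messages: list[str] = field(default_factory=list)

    def fork(self, at_index: int) -> "Conversation":
        """New conversation sharing history up to (and including) at_index."""
        return Conversation(messages=self.messages[: at_index + 1])

original = Conversation(["Q: Draft a memo", "A: Here is a draft...", "Q: Make it formal"])
branch = original.fork(1)                     # keep history through the first answer
branch.messages.append("Q: Make it shorter")  # diverge without touching the original
print(len(original.messages), len(branch.messages))
```

The key property, mirrored in the platform, is that the branch and the original evolve independently after the fork point.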
Using the Platform for Coursework/Projects
Use of the Haas AI Platform for group projects or non-coursework tasks depends on your course’s Generative AI policy, as set by your instructor.
Here are specific guidelines:
- Group Projects: Instructors may allow the use of the platform for brainstorming, analysis, or creating drafts, but this must align with the course policy.
- Non-Coursework Purposes: The platform may be used for academic research, class-related administrative work, or operational purposes at Haas. Always follow responsible-use guidelines for such cases.
If you’re unsure about the policy, consult your professor or supervisor.
Each course has a unique Generative AI policy, which outlines how you can use AI tools in support of your learning. These policies vary depending on the instructor and syllabus.
Typically, you can find the policy:
- In the course syllabus
- On bCourses (your course’s online hub)
Tip: If you cannot locate the AI policy for your course, ask your professor directly for clarification.
While students and faculty have access to Claude by Anthropic and Gemini by Google, these models:
- Are not included in UC Berkeley’s privacy agreement, meaning they do not comply with the same P3-level security standards as OpenAI Models via Microsoft Azure.
- Should not be used for sensitive or confidential tasks. This includes entering private data or sharing personally identifiable information (PII).
Feedback and Long-Term Plans
Feedback will be collected through surveys distributed to students, faculty, and staff toward the end of the pilot. These surveys will include questions about:
- Overall user experience
- The platform’s benefits and challenges
- Suggestions for improvement across workflows and learning scenarios
The feedback will play a critical role in shaping the future of AI integration into Haas education and operations.
Only faculty members and administrators will have access to the aggregate survey results. This will include:
- Anonymized feedback from all users
- Usage metrics from the pilot
Students will not see detailed survey results but may receive updates on broader conclusions and decisions based on the feedback.
Your feedback is essential for determining the pilot’s success and identifying areas for improvement. Specifically, feedback will help Haas:
- Refine the platform’s features to better address user needs.
- Identify the most effective ways to integrate AI into learning, teaching, and operational workflows.
- Evaluate the feasibility of expanding the platform for broader access post-pilot.
By participating in surveys and providing thoughtful feedback, you directly contribute to Haas’s effort to remain at the forefront of technological innovation in education.
When the pilot concludes, usage metrics and feedback will be analyzed to assess the platform’s effectiveness and determine feasibility for broader implementation. Key steps after the pilot include:
- Sharing findings with the Haas community.
- Determining whether the platform will be scaled or expanded.
- Creating guidelines for long-term use based on the pilot results.
Participants will receive updates on the pilot’s outcomes and next steps once the evaluation is complete.
Access to the Haas AI Platform will end at the close of the pilot project or the semester, depending on the timeline. Information about future availability and scaling plans will be shared as part of the pilot’s closure communications.
The decision to expand AI access will be based on the outcomes of this pilot project. Factors influencing the decision include:
- Participant feedback from surveys
- Usage data and educational impact metrics
- Feasibility considerations, including costs and technical resources
Haas is committed to staying at the forefront of innovation and will share updates with the community about broader access once the pilot concludes.
For Faculty
You were selected because your course explores topics or methodologies related to artificial intelligence or advanced technologies. Your role in the pilot is essential to:
- Test the platform in an academic setting.
- Provide insight into how AI tools can support student learning and faculty workflows.
- Help shape AI integration based on your feedback and experiences during the pilot.
As a participating faculty member, your responsibilities include:
- Adopting a Generative AI policy for your course: This policy must outline how students can use AI tools like the Haas AI Platform.
- Communicating policy details clearly to your students.
- Encouraging responsible use of the platform in alignment with course learning objectives.
- Completing the feedback survey: By participating in the pilot, you agree to complete the survey at the end to share your experiences, including benefits, challenges, and suggestions for improvement.
- Addressing student questions about the use of AI tools in your course.
If you plan to incorporate the platform into your course:
- Clearly define your Generative AI policy: Specify how students are allowed to use AI tools (e.g., for brainstorming, project analysis, or error-checking code).
- Communicate your policy: Include it in your syllabus or course materials (e.g., on bCourses) and discuss expectations with students at the start of the semester.
- Provide resources: Share guidance on how students can responsibly and productively use the AI platform for your course tasks.
For help designing your policy or using the platform, contact Haas Digital at [email protected].
Yes, faculty whose classes are not part of the pilot may still gain access to the Haas AI Platform. To request access, simply email [email protected]. Attending training is encouraged but not mandatory for these faculty members.
This access allows you to explore the platform’s capabilities and consider integrating it into your courses in the future.
For Staff
Staff members have access to OpenAI Models hosted on Microsoft Azure only. This restriction ensures compliance with UC Berkeley’s data-handling policies and privacy agreements, which allow sensitive data up to P3-level classification to be securely processed on Microsoft Azure.
Students and faculty have broader access to additional models, such as Claude by Anthropic and Gemini by Google. Staff access is limited to prioritize privacy and protect sensitive administrative or operational information.
Staff are required to complete the “AI Essentials” training module, which is available through the UC Learning Center and provides foundational knowledge on responsibly using AI tools. Completing this training ensures that staff are fully prepared to use the platform responsibly and efficiently.
Yes, staff can use the Haas AI Platform for operational or administrative tasks directly related to their roles at Haas. This includes projects such as:
- Preparing reports
- Drafting internal communications
- Brainstorming ideas for Haas-related initiatives
However, staff must adhere to these guidelines:
- Do not input confidential, restricted, or P3+ classified data.
- Follow UC Berkeley’s responsible use policies.
If you have specific questions about permitted uses for staff, contact Haas Digital at [email protected].
P3-level data is categorized as “internal data” that requires a moderate level of protection due to potential harm from accidental exposure. Staff can safely use the Haas AI Platform for tasks involving P3 data, but must avoid processing any data classified as P4 (highly sensitive).
Examples of acceptable P3-level data include:
- Summarizing non-public internal documents meant for operational use (e.g., meeting notes, administrative updates).
- Drafting sensitive but non-restricted schedules (e.g., academic schedules, department event logistics).
- Preparing training materials, operational procedures, or reports that do not include restricted or highly confidential content.
Avoid using the platform for these types of P4-level data:
- Personally Identifiable Information (PII): Social Security Numbers (SSNs), driver’s license numbers, passport details.
- Protected Financial Data: Payment card numbers, bank account information.
- Confidential Student Information: FERPA-protected data or student health records.
- Authentication Data: Passwords, private encryption keys, or security codes.
For more detailed information on data classification levels, consult UC Berkeley’s official Data Classification Standard to ensure compliance.
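As a lightweight safeguard, some users run a quick pattern check before pasting text into a chat window. The sketch below is an assumption-laden illustration: the patterns catch only obvious formats (hyphenated SSNs, 16-digit card numbers) and are no substitute for reviewing content against UC Berkeley’s Data Classification Standard.

```python
import re

# Hypothetical patterns for obvious P4-style identifiers; far from exhaustive.
P4_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def flag_p4_data(text: str) -> list[str]:
    """Return the names of any P4-style patterns found in the text."""
    return [name for name, pattern in P4_PATTERNS.items() if pattern.search(text)]

print(flag_p4_data("Employee SSN: 123-45-6789"))
print(flag_p4_data("Agenda for Monday's staff meeting"))
```

A clean result from a check like this does not mean text is safe to share; context (FERPA-protected records, health data, credentials) matters more than surface patterns.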