We tend to treat organizational structures—such as job titles, departments, and reporting lines—like furniture: always there, moved around a bit, but rarely questioned. But what if AI is about to redesign the whole office? And in a world where you have humans and agents working alongside each other, how can leaders build a cohesive culture?

Stanford professor Melissa Valentine anticipated some of these changes in her book, Flash Teams: Leading the Future of AI-Enhanced, On-Demand Work. In this episode of The Culture Kit, Melissa joined organizational culture experts Jenny Chatman and Sameer Srivastava to discuss how AI and online labor markets are enabling leaders to assemble teams, solve problems, and then disband at superhero speeds. They also explore tensions between algorithmic decision-making and human structures, the challenges of deploying AI agents alongside humans, and how to recognize the invisible labor that keeps everything running smoothly.

Melissa is an associate professor of management science & engineering at Stanford University and a senior fellow at the Stanford Institute for Human-Centered AI.

3 Main Takeaways:

  1. Hierarchy isn’t going anywhere, but departments might. While hierarchy will remain essential for accountability and coordination, departments as we know them are likely to blur and collapse as AI puts design, engineering, and product capabilities in everyone’s hands.
  2. Adopt a mindset of “experts everywhere all the time.” Instead of thinking in terms of “expert scarcity,” leaders should recognize how easy it’s becoming to assemble the right talent—human or AI—for any given challenge.
  3. Management is now org design. The core management loop of scoping a problem, assembling resources, and evaluating the outcome is accelerating and becoming more like a design practice. Leaders aren’t just managing people anymore; they’re architecting the structures of work teams.

*The Culture Kit with Jenny & Sameer is a production of the Haas School of Business and is produced by University FM.*

Transcript

[00:00:00] Jennifer Chatman: Hey, Sameer, think about the best team you’ve ever worked on. What made it work?

[00:00:06] Sameer Srivastava: That’s a really good question, Jenny. I think the team I would highlight is one that I worked on over a decade ago. It was one of the first teams I worked on that developed a computational approach to measuring culture.

And I think what made it work is that we had at least three different disciplinary perspectives represented on the team: linguistics, computer science, sociology, and organizational behavior. There was a mix of students and faculty. And what was really key to making that work was just a lot of mutual respect, really understanding that we all had different perspectives, and we could generate something better by learning from each other.

[00:00:42] Jennifer Chatman: Yeah, that’s super interesting. And I can actually think of a lot of my research teams over the years that were really incredible, produced great work together, but I think the one that’s most on my mind right now as Dean at Haas is my management team. And I think the two qualities that I’m just most grateful for are, first, that each person is a deep, deep expert in their field and brings a kind of level of thinking and excellence to the team, you know, that is diverse and important.

And second, everyone is completely committed to doing what’s best for the school. So, you know, this kind of deep expertise with a sort of superordinate orientation and commitment to the school, I think, makes it a killer team. But, you know, this is what we’re going to talk about today. And before we bring in our guest, who knows all about teams and will set us straight, Sameer, we should do our formal thing first.

[00:01:40] Sameer Srivastava: Oh yeah, I forgot about that. Well, I’m Sameer Srivastava, a professor at UC Berkeley’s Haas School of Business and co-founder of the Berkeley Center for Workplace Culture and Innovation.

[00:01:51] Jennifer Chatman: And I’m Jenny Chatman, a professor and dean at the Haas School of Business, and this is the Culture Kit.

[00:01:59] Sameer Srivastava: Today, we’re joined by Melissa Valentine, an associate professor at Stanford in the Department of Management Science and Engineering, and a senior fellow at the Stanford Institute for Human-Centered AI. Melissa studies how AI and algorithms are fundamentally transforming how organizations work, not just the people in them, but the structures themselves. She’s also the author of a new book called Flash Teams, a bestseller. Melissa, welcome to The Culture Kit.

[00:02:27] Melissa Valentine: Ah, thank you. It’s so great to be here. It’s been so fun to have the book out in the world. I’ve been just delighted with the reception. I get all sorts of LinkedIn DMs where people are, kind of, like visioning what they can do with flash teams. So, yeah, it’s been fun, and I love this podcast, so I’m glad to be here.

[00:02:42] Jennifer Chatman: Well, welcome, Melissa. I have to say, your argument that AI is going to make us rethink the organization chart itself, not just automate tasks within it, is one that I find both exciting and a little bit scary and destabilizing. So, I’m curious whether you find most leaders are ready for that conversation.

[00:03:03] Melissa Valentine: You know what’s funny? I was talking to leaders about this before ChatGPT, sort of, took the world by storm, and I think at that time, leaders were interested in it. It felt like a way that they could, you know, bring more innovation into their companies. I do feel like in the last, like, two or three years, all that anybody has wanted to talk about is generative AI. So, we’ve really had to, like, do what everyone had to do and, kind of, pause and be like, “Okay, what’s this new generative AI thing?” And then bring that into the conversation.

Something that’s fun about generative AI, though, is people are now vibe-coding some of the software that we describe in our book. So, I think there’s a way that it is, kind of, scary and destabilizing, but I think more and more people are having hands-on experience designing products that can help them do stuff like this. So, I think that’s been pretty fun.

[00:03:49] Sameer Srivastava: Well, we’re going to come back to AI and talk about it in a lot more detail, but Melissa, you were at Berkeley back in January at our annual Culture Connect Conference and gave a terrific talk in which you made a really provocative statement. And the statement was that the most important invention from an organizational perspective over the last 200 years isn’t AI, but rather the org chart. Why the org chart?

[00:04:15] Melissa Valentine: Yeah, so I love that conference, by the way, you guys do such a good job with that. And I have to put the provocative comment, kind of, in context, because I start all of my talks like that, and I’ll be like the org chart, and then I, like, flash up a picture of, like, Dwight Schrute from The Office, like pointing to this really, like, archaic org chart.

So, I’m, kind of, like, teasing us as a society right now, because I think we’re all very hyped on AI. Me, too. But I do that to sort of, like, tease us and to remind us that we as a society invented organizations because it’s through organizations that we do really complex work together. And I like to talk about how organizations have these nice properties that allow us to do so much complex work, things like specialization and coordination. And a lot of that honestly comes through hierarchy, which I think, like, we’re going to need to talk about, because hierarchy remains very relevant in the age of AI.

I make that joke to remind us of all these nice properties of organizations that we’ve relied on, like all of us rely on them. Specialization, coordination, all the expertise, and accountability that comes through organizations, we rely on it. So, it’s like an invitation for us to rethink organizations, remember how powerful they are, remember their special nice properties that allow us to coordinate so well, and then, kind of, think together about how do we update this organizational form.

[00:05:31] Jennifer Chatman: Well, let me just ask, based on our conversation that Sameer and I just had about ideal teams, how do you critique what we came up with as concepts?

[00:05:42] Melissa Valentine: Oh, I love your descriptions of them. I think my hope and vision for what AI can do for us is that it can, sort of, help us have more experiences like that. You both were describing something where it’s people at their best, people with their deep expertise, like really finding a way to, like, draw out each other’s expertise and become more, kind of, collectively valuable together.

I can think of all sorts of ways that, you know, data and tools could help that. My vision is always like, what if we’re not, like, moving information around? What if we’re, like, no longer copying and pasting updates everywhere, and that we could just, like, meet together, sit together like this, you know, and just, like, think really deep human thoughts, like really deep human expertise with a real kind of, like, conscious connection with one another? That’s what I heard in what you guys were describing.

[00:06:30] Jennifer Chatman: So, you study how concepts like jobs, teams, departments, and managers were actually constructed over time. They, kind of, feel like furniture, but somebody invented them. Which of those do you think AI is most likely to fundamentally disrupt?

[00:06:47] Melissa Valentine: Oh yeah, I totally agree that it’s like wallpaper, like, yeah, they’ve, like, faded into the background, and that’s, kind of, like, what organizations have done for us, too. So, I, yeah, I like that framing. I mean, this is my really active research area right now: I’m trying to watch how organizations are actually changing, especially as people are really actively adopting generative AI.

And in the Flash Teams book, we do draw on a lot of economic theory about why hierarchy remains relevant, and hierarchy is, like, ultimately, like, a lot about accountability and coordination. So, my prediction is honestly that hierarchy sticks around. I do think that we’ll have, sort of, accountability and centralization that can be super useful.

So, hierarchy and managers go together like that. I do think departments are likely to change pretty dramatically, and we’re, sort of, already seeing that, with engineering and design and product, sort of, like, now, like, in everybody’s hands. Like, I keep returning to vibe-coding. That’s just what I’m hearing so much about right now. But it does feel like those departments, kind of, collapse in an interesting way or combine maybe in an interesting way. So, I would say that’s my prediction: the first one to change, with the most transformation, would be departments.

[00:07:54] Sameer Srivastava: Well, let’s then turn to teams, and you’ve already mentioned the title of your book, Flash Teams, and, of course, that term conjures up different images in people’s minds, for example, of really fast-moving superheroes, but that’s not what you mean.

[00:08:08] Melissa Valentine: That’s right.

[00:08:09] Sameer Srivastava: So, tell us what is a flash team, and what problem is it trying to solve that traditional structures can’t solve so well?

[00:08:14] Melissa Valentine: Well, it’s funny that you say superheroes, because we were super stoked to get a blurb from Adam Grant, and he described it as, like, if you’ve ever wanted to convene the Avengers at work, this book will show you how. So, there is sort of, like, a superhero quality to the idea of people coming together and solving problems.

So, but more generally, the way we think about it is basically the internet and online labor markets give us so much access to experts everywhere all the time. There’s so many people in the world who are so deeply expert, and you can now, kind of, access them using all these digital tools just really easily. So, the idea is what if we could combine some of those nice properties I was talking about of organizations, like all of those nice, like, role structures, team structures, hierarchy, what if we could combine those structures with, kind of, the speed and scale of online labor markets, of the crowd, is how we, sort of, think about it.

So, you could imagine recognizing that you need a team and you need an expert, and you could reach out in an online labor market. You can hire people in minutes sometimes. You don’t stop there, like you could actually build a whole team like that. I can actually give you an example from one of our early flash teams users. So, he was a guy who worked at a hospital, and he just saw the same problem, kind of, play out time and again in the ER, which is ambulances would pull up, and it wasn’t until the ambulance got there that they started gathering the doctors, nurses, and equipment that were needed.

So, he had just had this simple idea of, what if we built an app that the ambulance could use to coordinate with the ER? And that’s a really simple app, but when he tried to get it done within the hospital, all of the, sort of, folks who had the expertise were really locked up doing other things. So, he used our flash teams approach to pull together a team of experts that built that app for him in, like, six weeks. And it was really complex, it was really, kind of, exciting to watch it come together. I always have to joke that, like, six weeks, that was, like, pre-gen AI, it’d be even faster now, but it was, like, a very fast way to pull that team together from the crowd, solve the problem, and then they, sort of, disassemble.

[00:10:13] Jennifer Chatman: I mean, what’s, kind of, interesting about that example and others that I’ve heard you talk about is that the leaders are not just managing anymore, they’re actually designing. They’re, sort of, figuring out what to build, who to assemble, how to structure it. So, is that, like, fundamentally different from what we’ve traditionally asked of managers?

[00:10:33] Melissa Valentine: Oh yeah, I really like the way you’re framing that. I think there’s, like, a way that managers, kind of, always have to keep in mind, like, what’s the problem to solve, and who and what are the resources? And they have to, sort of, match them. That’s kind of a matching problem between the needed experts on the problem. And then managers always have to, kind of, evaluate how that’s going. So, it’s scoping, it’s resourcing, it’s evaluating.

And when it’s really stable, management looks different, right? Like you’re managing people over a number of years, then you’re helping them think about their careers, like skill development, you know, different sorts of things. But when you really speed up how quickly this is happening, then you see managers doing all of those things, like, in a really quick iteration. They scope, they resource, they evaluate. They scope, they resource, they evaluate. So, they’re, kind of, constantly doing that, and that, to me, is team design, that’s org design. So, management as org design is, I think, yeah, I think that’s the way to think about it now.

[00:11:29] Sameer Srivastava: Great. So, going back to the talk you gave at the Berkeley conference, someone in the audience raised a question that I think merits some more attention. A lot of the coordination that happens in these fast-moving teams is invisible and often not valued. It’s often described as non-promotable work, and it tends to fall on certain people, often those who have less power in the organization. So, how can organizations better recognize and value that work?

[00:11:56] Melissa Valentine: I think it’s very astute to, kind of, recognize how much work goes into coordination, how much, kind of, invisible work goes into making things work.

And I brought this up at the culture conference, and I just, I really think it’s a useful, relevant phrase to, like, keep in mind: invisible labor. The term was coined in sociology and has been used in a number of different places. I actually learned the most about invisible labor when I was doing a different study at an academic cancer center. And at the cancer center, some of the patients and their families actually were trying to get the employees and, like, the doctors and the staff at the cancer center to recognize how much work they were doing to coordinate their own care.

And this was an organization that was, like, very well-meaning, right? Everyone was trying to do right by the patients, but they just really didn’t understand how much invisible labor the patients were doing to, like, make their cancer care come together.

So, it was through the process of really making the invisible labor visible that the patients were able to get the cancer center to value it. And eventually what this, what the paper’s about is the cancer center eventually took some of the coordination work into the organization and took it on themselves. To sort of generalize that back to flash teams, it’s the idea of recognizing the usefulness of making invisible things visible. Now, not everything needs to become visible. I mean, to our earlier point, like sometimes stuff is furniture, sometimes stuff is chairs, not people, I mean, you know, but, like, sometimes, like, the system is just, like, wallpaper, and you don’t need to pay attention to it.

But I think it’s maybe a more useful leadership skill of the future now that we have all these automated systems to really recognize where is the invisible labor happening? When do you need to make it visible? When do you need to value it more? Sameer, you’re recognizing some of the inequality that can happen, right? Are there particular, you know, groups of people who are expected to be doing invisible labor and not have their labor valued? Maybe, you know, as we’re not moving information around so much anymore, that could be the kind of thing that we could have more, like, conscious conversations about. What labor are we assuming somebody else does and not valuing it enough?

[00:14:04] Jennifer Chatman: So, kind of, relevant to this topic, you have a recent paper with a title that sounds like a children’s book. It’s called The Algorithm and the Org Chart.

[00:14:14] Melissa Valentine: That’s right, a bedtime story.

[00:14:17] Jennifer Chatman: So, what happens when the algorithm says one thing and it conflicts with what the formal structure says you should do, and who wins in that case?

[00:14:28] Melissa Valentine: My answer is always, measure it. But to put a little bit more context on that, so that’s a study that I did several years ago. It was a retail tech company, and they had these very talented data scientists who developed this very powerful optimization algorithm, and they deployed it for their frontline workers. And they were getting, like, really great recommendations and, like, really great adoption.

And then pretty soon, over time, they saw that the optimization was being deployed in these tiny little decision spaces that were just one person’s job. And pretty soon they were like, “Oh, for an optimization algorithm, you should actually be exploring the entire decision space, which right now is being divided up for people.”

So, for an algorithm, you want to give it as big a decision space, for people, you want really little, like manageable domains. That’s what our brains are here for. So, algorithms just have a very different kind of decision capability. It’s not an argument for full automation, it’s saying, like, recognize the ways that algorithms make decisions. So, for an optimization algorithm, you want to give it a big decision space, a big search space, and people need much kind of smaller-scoped decisions. That’s, like, what we do well with.

So, the paper is basically saying we, as a human society, invented organizations based on the fact that humans used to make all the decisions. And then, meanwhile, we invented these cool new technologies called algorithms that make decisions very differently. And right now we’re deploying these powerful decision tools within human decision structures, and it just doesn’t make sense. But humans do need org charts, like we do need roles, we do need teams, we do need hierarchy. That does help us do our work really well.

So, it’s an invitation for leaders to recognize, how do your algorithms make decisions, and have you given them the right infrastructure for that? How do your humans make decisions? Have you given them the right infrastructure for that? And then the kicker, and I do not have the answer, even though I’ve been studying this for a long time, is how do you combine them? No, like, I don’t know. That’s, like, what I feel like that’s the org design challenge of the future, is how do we combine those two decision-making infrastructures, the one for algorithms, the one for people, and they’re very different.

[00:16:39] Sameer Srivastava: Well, one concrete way in which that fusion is happening is through the arrival and deployment of AI agents, and increasingly we’re seeing organizations deploying various agents for different functional purposes and the AI agents working alongside human employees. You have a recent paper that speaks to this, so say a little bit more about how you think this new phenomenon changes the social contract between organizations and workers.

[00:17:07] Melissa Valentine: Yeah, I do think that AI agents are an interesting tool that might be able to bridge those two, sort of, infrastructures that I was describing. So, I’m going to describe what I’m hearing as a near-term technical organizational problem, which, sort of, like, illustrates what I was saying before of just, like, how differently data tools versus humans make decisions.

So, I’m hearing all sorts of stuff in companies right now where they’re trying to deploy agents, and they’re just noticing how differently agents will, like, consume information than humans. Like, a really dumb example would be, like, a human would see, like, a file name that maybe has, like, “Recent, recent, this one, save this one, final version two,” you know what I mean? And then we’d be like, “Oh, that’s the final one.” But an agent maybe wouldn’t recognize, like, “Oh, that’s the most recent one, and based on recency, I should retrieve from there.” It’s going to, like, just search across the whole space and, like, find something maybe from, like, 2019 or something that’s, like, not super relevant.

Anyways, that is, to me, like, a near-term technical organizational problem, is recognizing that agents and humans just really process information so differently, so we can’t just, kind of, do, like, a one-to-one substitution in that case.

[00:18:14] Sameer Srivastava: Yeah, but just bringing this back to the topic of culture, since this is a culture-related podcast, in a world where you have humans and agents working alongside each other, how do you think about building a cohesive culture with both sets of workers involved?

[00:18:29] Melissa Valentine: Yeah, it seems like another kind of leadership challenge or opportunity right now is for leaders to think about what culture means when the employment relationship is changing so much. So, I could just give a simple example. There’s a bunch of different companies whose agentic implementations I’ve been studying, and, like, let’s take one simple example. Maybe you used to have 200 people who were doing something more manually within a system, and this is not happening that often, but if you do get to, like, 99% agentic implementation, then maybe you only need 10 people.

But everyone who trained the system is now redeployed somewhere else, and then the few folks that remain have to do all of the exceptions where the agents can’t do it. So, I think it’s useful for leaders to just recognize that as, like, a fundamentally different workforce. That’s a fundamentally different job design, and I don’t know that we know what it looks like to create a culture around that. I think we, like, sorely need people who are really good at leadership and are really good at culture to, kind of, get into spaces like that and think about what that means.

[00:19:36] Sameer Srivastava: So, one other topic we want to come to is the pace at which all of these changes are happening. One thing we hear from a lot of leaders is that they’re really uncertain about whether they should move as fast as possible on AI adoption or be more measured and find some places to potentially hold back. What’s your perspective on it in light of everything we’ve discussed?

[00:19:54] Melissa Valentine: Yeah, this is a conversation I’m really enjoying having with executive leaders right now. I am thinking of it as three different changes that are happening at the same time, and I think that they’re all happening fast. I think leaders are feeling that urgency, and I think it’s appropriate to, sort of, move quickly on them.

One is frontline transformation. We had a new Harvard Business Review article come out that talked about what it takes to actually get to high impact from frontline transformation, but there’s, like, getting everyone a license and helping them learn how to use it, and I think that’s got to go. I think that needs to be happening right now.

There’s enterprise use cases where, you know, the C-suite has really sponsored and is actually, like, paying for, like, the new infrastructure and the new agentic systems, and there’s so much to be learning there, so that’s, kind of, got to be happening as well.

And then there’s, like, the third where just people are really inventing something brand new. I’ve seen some examples where just people are vibe-coding their org chart in a really different new way. So, I think all of that is happening, and it’s a lot to be handling at the same time, but I do think that is the pace that the tools are asking for right now.

[00:20:57] Jennifer Chatman: Well, Melissa, you’ve given us a lot to think about, so let’s wrap up with what leaders can actually do. If someone is listening to this and they want to start designing a more adaptive, AI-ready organization, what are, say, three things that they could do differently?

[00:21:14] Melissa Valentine: When Michael and I were writing the book, we were trying to think of a phrase that really captured this idea of recognizing how easy it has become to pull together a team to help you solve a problem, and he and I liked the phrase “experts everywhere all the time.” So, it’s like a shift from thinking in terms of expert scarcity to just recognizing there’s so many expert people in the world, and there’s so many ways to find them, collaborate, consult, have really high-value conversations. So, that’s one, shift your mindset. I think that is hard for people right now, because I think we’re all really focused on AI, which is probably where we need to be focused. So, it’s a little bit hard to be thinking, like, “How do I get human collaborators?” I think we’re all, like, “How do I use this new tool?” So, I’ll recognize that shift, but I do think that that is a useful thing, is to recognize how easy it is to, like, pull people together. So, “experts everywhere all the time” is the first one.

And then we think a lot about the moment where you’re resourcing, where you’re fitting the right resource to the problem, and sometimes that resource could be an expert. What’s fun right now is, like, actually matching the right gen AI tool or the right… There’s, like, so many different data tools you can draw on right now. So, I have an idea, I have this, like, entrepreneurial idea, then there’s this whole world of, like, resources. Maybe it’s, like, experts from the crowd, maybe it’s somebody within the organization, maybe it’s a gen AI tool, maybe it’s an agent.

Part of this flash team org design mindset that you’re describing, Jenny, is matching the resources to the problem that you envision. Something that I think is not as fun or talked about, but I think it’s getting talked about more and more, is how all of us need to become better at evaluating what’s being produced compared to our goal. So, even when you just, like, prompt something really simply, you just prompt gen AI and you get the output, you have to evaluate, “Is that good? Is that what I wanted?”

And I think that there’s, sort of, this way that managers or org design people or, like, leaders just have such an opportunity to, like, identify problems, use an AI tool or build a team, and then evaluate how things are going. And I think that, like, last piece is so important for the feedback loop and the learning and making sure that you’re not creating lots of new stuff but not landing the value. So, I think that, like, really evaluating it at the end is really important as well.

[00:23:25] Sameer Srivastava: Great. Well, Melissa, thank you so much. That was super insightful. I think we all benefited a lot from hearing your perspective. And if you haven’t already found the book, please do, Flash Teams. Thank you so much for joining us.

[00:23:37] Melissa Valentine: Thanks. This was really fun. Thanks, you guys.

[00:23:39] Jennifer Chatman: Thanks for listening to The Culture Kit with Jenny & Sameer!

[00:23:42] Sameer Srivastava: The Culture Kit Podcast is a production of the Berkeley Center for Workplace Culture and Innovation at the Haas School of Business and is produced by University FM. If you enjoyed the show, be sure to hit that subscribe button, leave us a review, and share this episode online so others who have workplace culture questions can find us, too!

[00:24:03] Jennifer Chatman: I’m Jenny.

[00:24:04] Sameer Srivastava: And I’m Sameer.

[00:24:05] Jennifer Chatman: We’ll be back soon with more tools to help fix your work culture challenges.
