Definitely, Maybe Agile

One Pizza Teams vs Two Pizza Teams: When Size Actually Matters

Peter Maddison and Dave Sharrock - Season 3, Episode 192

Can AI really shrink your development teams from two pizzas to one? Peter and Dave explore the promise and reality of smaller teams in the age of AI agents. While AI can handle documentation, test automation, and other "hygiene" tasks teams often skip, the real question isn't whether you can reduce team size; it's whether you should. They dig into when one-person teams make sense (startups and greenfield projects), when they don't (complex legacy systems), and why the biggest gains might come from augmenting existing teams rather than downsizing them. Plus: why most AI initiatives fail and how to find the real business problems worth solving.


This week's Takeaways

  1. AI as Capacity Booster, Not Team Replacer: AI agents excel at handling the "hygiene" work that teams often skip: documentation, test automation, release notes. Rather than shrinking teams, this gives existing teams ephemeral capacity to tackle work that improves long-term system quality and maintainability.
  2. Context Determines Team Size: One-person teams work brilliantly for startups and greenfield projects where you can build from scratch. But complex legacy systems in large organizations still need the diverse knowledge and experience that comes with larger teams to navigate technical debt and organizational complexity.
  3. Solve Real Business Problems First: The biggest AI failures happen when teams focus on cool technology instead of actual business needs. Before experimenting with smaller teams or AI agents, identify genuine business problems that need solving; that's where you'll see real returns and organizational support.

Peter [0:04]: Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Hey Dave, how are you doing?

Dave [0:15]: Peter! Good to catch up. We have both taken a bit of time off, so it's great to reconnect. Full of energy, lots of ideas bouncing around.

Peter [0:26]: Of course. I spent the entire vacation basically just reading things. You know, just for the podcast.

Dave [0:35]: Just for the podcast, right. Okay, hit me. What's the topic for today?

Peter [0:40]: Well, one of the articles I was reading was about this idea of "one pizza teams." It's building on that old Jeff Bezos concept from Amazon - two pizza teams, five to eight people. The idea was that this should be the right size so you get optimized communication between individuals while still having enough cross-domain knowledge to effectively deliver something of value. But now, in the age of AI, this has been boiled down to the one pizza team. We can effectively do more with fewer people.

Dave [1:17]: Now, when you say "in the age of AI" - what you're saying is we don't need as many people on the team because many of the things those people were doing can now be automated? AI agents can help us out with writing test automation, configuration, patching, that sort of thing?

Peter [1:40]: Exactly. Writing documentation, updating product requirements, writing release notes. All of these things can now be automated via our friendly neighborhood AI agent.

Dave [1:53]: This is interesting, because I have launched a lot of - let's call them two pizza teams - agile teams. I have helped organizations get those off the ground. And that list of stuff you described that AI agents are going to take over? Those are often secondary responsibilities. They get less attention than they should, even on a two pizza team. Documentation - I mean, I'm not a fan of overdoing documentation, but I also rarely see teams that document enough to create a maintainable, sustainable application. Test automation? Again, I have seen some really excellent teams that are absolutely in the zone on that, but it's not as common as we think.

Peter [2:54]: I would 100% agree with you. What I have definitely seen, especially in larger organizations, is you're going to have a small handful of teams - especially if they're working closer to the edge, closer to the customer - who are going to have a better handle on things like building test automation frameworks that work for their rapidly changing environment. Those people are more the exception than the norm, especially people looking after large legacy systems. They're not going to have that capability. One of the pieces I'm observing is exactly what you're describing - the AI functions are really helping increase the capacity of the team. It's giving them access to additional capacity they wouldn't have had before, to go do these things they were never going to get to anyway.

Dave [3:48]: I'm not sure that's even increasing capacity. If I think about what we want our teams to do so they're building sustainable code, a sustainable feature set over a few years - we need them to have the discipline to do some of this housekeeping, hygiene work. So if we take the view that AI agents are providing the housekeeping, hygiene type of work, then you don't want to reduce the size of the team. It just means what that team is delivering has that hygiene - disciplined documentation, test automation, other bits and pieces - maintained at a higher level. So now I have better quality code that I can work with more quickly. It's usable by customers, maintainable and so on. That doesn't necessarily drive you to reduce the size of the team. It just means what they're delivering is higher-caliber, more sustainable work.

Peter [4:56]: Yeah, when I talk about capacity, I'm talking about ephemeral capacity. So if I had eight people, I still have eight people. It's just as if I had another four people who are now doing all this other stuff that I would have loved to have done a better job of, but never had time for before.

Dave [5:08]: Right. I think those eight people are still busy and the volume of work the team is delivering is about the same, because there's work that wasn't getting done - the technical debt buildup that we're always trying to avoid. We go into those cycles of "okay, let's turn and clean up and come back." Well, that goes away. So over time they're going to gain capacity because they can move quicker. They're not having to deal with emerging technical debt issues that really affect the production environment. They get more time to actually expand and deliver more features, but over time, not on a sprint by sprint basis.

Peter [5:53]: Yeah, so it's increasing their capacity to deliver. They can deliver more.

Dave [6:01]: Eventually, yes. I think that one is a slow burn in some ways, because I have seen so many teams just keep pushing that work down the road like a bulldozer.

Peter [6:14]: But I'm seeing it happening now. I have seen teams where they weren't able to get to the test automation, weren't able to get to that level of detail in documentation, weren't able to get to properly written out user stories because it would just take too long. Instead of one sentence user stories, you end up with more description and properly written ones, which can then become better discussion points between product owners and teams around "okay, so what exactly are we trying to do here? What are we trying to build?"

Dave [6:45]: If that was happening, wouldn't you think that overall, your experience using digital tools - any sort of technology apps - should be getting better and better? Less swearing at the screen when something doesn't behave the way you thought it would, or crashes, or there are outages? Do you really see that happening?

Peter [7:13]: Not yet.

Dave [7:16]: Okay, so we're talking futuristic at the moment, or hypothetical.

Peter [7:20]: I think a lot of those pieces - and as you were saying, part of the reason you do that documentation is not necessarily for you, it's for whoever comes after you to understand what this does. Being able to dynamically maintain and update that type of capability - we didn't really have that before. It was very difficult, very manual.

Dave [7:44]: Some places definitely have it, and it depends on the environment you work in, what sort of industry. But what we're describing - certainly what I think both of us experience when we go work with various industries - is that a cultural thing? If it's a cultural thing about how much documentation, how much of the hygiene work, the disciplined work has to be there versus not, it means if we rush to one pizza teams, we're still going to have that gap.

Peter [8:28]: I think it's, as with everything, very complex. The more complex the organization and the more things that organization is doing across different systems, the harder it is to maintain the discipline across all of that complexity. It becomes a big ball of mud where you're going to have some areas that excel and some that don't.

Dave [8:49]: But this is what AI should really help you out with. If I'm able to apply that routinely across all those different modular development capabilities across the organization - of course it's going to get difficult. You'll have legacy systems being maintained that really aren't suited to that sort of AI-driven hygienic work.

Peter [9:14]: There's this piece - going back to your earlier point that we should see things getting better over time as a consequence of this. There's an underlying assumption that if you have good hygiene, well-defined requirements, you will deliver the right things in the right way. You're going to have better test frameworks, better quality overall. That will lead to better outcomes. Which I hope is true, because otherwise I feel like...well, I don't know.

Dave [9:45]: I think we certainly both believe that is the right way to go forward, but that doesn't mean organizations buy into it. They have pressures. It's a complex environment.

Peter [9:55]: They have different pressures around where they spend time and energy. I'm sure we still see a lot of the common problems around "where should we be focusing? What should we be doing? What should we be building?" These AI tools aren't necessarily doing a good job helping in that space. We still see organizations struggling to figure out "what should I be prioritizing next? What's the next part of this system I should be evolving? Where is my next customer coming from?" That side is still a challenge. Where I'm seeing the biggest adoption of AI capabilities - and this is widely spoken about in the press - is within technology, solving technology problems. Not in the business space. It's not solving business problems.

Dave [10:52]: Yeah, and it doesn't get the attention across the organization if that's the case. What I wanted to turn the conversation around to a little bit - we have taken the idea of a two pizza team shrinking down to one pizza team, and I would describe my perspective as cautious around it because I don't think we're necessarily getting what we think we're getting when we do that. But there's a concept where single pizza teams and even half or quarter pizza teams really make sense, which is the startup world. Again, with the tools that are out there around rapid coding systems, you do not need to grow your development capability as quickly or at anything like the scale you would have a few years ago.

Dave [11:54]: When you have an idea, you get an app to market, you're beginning to gain traction. We certainly see it - we work with companies and help them get their feet off the ground. They're in a situation where it's basically one individual contributor running nearly everything in the company apart from maybe partnerships or sales or some sort of pipeline growth. So much of the "get the idea out into the market, get it tested, validate, build it out, put the first run of customers on it" - all of those things can be done with very small teams. Individuals, maybe two or three people at most.

Peter [12:35]: Yes, it certainly can. There's an awful lot you can do there, especially once you have an audience. This still comes back to one of those pieces - you see somebody market something and get it out to tens of thousands of people, and when you look under the covers, a lot of the time it's because they already had an email list they could do that with. They already had some followers who they knew were going to buy whatever they put out there. So you can start a product very quickly when you already have that, because it gives you a test bed. It's almost like you have to build that out first. But I completely agree that all the tooling is there to help get that much faster than ever before.

Dave [13:25]: Yeah, not just faster but with fewer people, fewer resources on the ground. In many cases, as a product took shape, as an audience was being identified, the next priority would be "you need a team." You could outsource it for a bit, but at some point you were going to need to have a team working on that. Well, that need for a team is now a need for one or two individuals.

Peter [13:52]: Exactly. Much more with far fewer people. One of the interesting pieces is the old adage about the number of communication channels that get created. When we talk about this from a project management perspective, part of the reason for the two pizza team was that smaller groups mean fewer people who need to talk to each other. As you go above eight to nine people, you're drastically increasing the number of communication channels needed to keep everybody on the same page. So if I have a smaller team, can I now get more done with fewer people who need to communicate?

Dave [14:30]: Yeah, that's Metcalfe's law, right? The number of communication interactions. Now there's an assumption there, or at least an open question - if I have an AI agent, is that a line of communication?

Dave [14:43]: If we're looking at an agentic AI system where the agents are feeding into other agents and the communication channel isn't to a human - well, there you're looking down the pipeline. But there's a difference between an automated service that's just doing what it's doing versus an agent that's interacting with other agents or with developers to figure out what's going on. I would argue that is a communication channel. I'm interested - I haven't really seen that figured out, but I think that does count in your communication channels. So just because I'm a two or three person team, if I have a couple of agents, isn't it like having a four or five person team?
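
(A quick aside on the arithmetic behind this exchange: with n participants, the number of pairwise communication channels is n(n-1)/2, which is why going past eight or nine people adds channels so quickly. The sketch below is illustrative only; counting agents as extra participants, as Dave wonders aloud, is an assumption rather than anything the hosts settle on.)

```python
# Pairwise communication channels among n participants: n * (n - 1) / 2.
# Treating AI agents as additional participants is an assumption made here
# purely to illustrate Dave's question, not a settled conclusion.

def channels(n: int) -> int:
    """Number of pairwise communication channels among n participants."""
    return n * (n - 1) // 2

for people, agents in [(8, 0), (9, 0), (12, 0), (3, 0), (3, 2)]:
    print(f"{people} people + {agents} agents -> {channels(people + agents)} channels")

# 8 people + 0 agents -> 28 channels
# 9 people + 0 agents -> 36 channels
# 12 people + 0 agents -> 66 channels
# 3 people + 0 agents -> 3 channels
# 3 people + 2 agents -> 10 channels
```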

Peter [15:32]: I think it depends if you're able to have those agents take care of dependencies between themselves.

Dave [15:40]: We are consultants at heart, so...

Peter [15:43]: Of course it depends and it's complex. I'm being somewhat flippant here too, intentionally. One of the big concepts is how much autonomy do we want to actually allow the agents to have? That's largely going to depend on the sensitivity of the data those agents are dealing with. We may want to enforce that there's a human in the loop to ensure things aren't going wrong. When we want to do that probably depends on the criticality of the system and the interactions being dealt with, and most likely the data being processed. For example, if I'm setting up a marketing chain of things, I probably don't care because most of the data isn't going to be PII or internal data - maybe, depending on what exactly you're doing in marketing. So I can maybe allow that to run with more autonomy than I would otherwise. However, I might be very conscious of "is that going to impact my brand? How others see me?"
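
(For illustration only: one way to make the human-in-the-loop idea Peter describes concrete is a simple policy gate that checks data sensitivity, system criticality, and brand exposure before letting an agent act autonomously. The categories, names, and rules below are hypothetical, not a framework discussed on the show.)

```python
# Hypothetical policy gate: decide whether an agent action may run autonomously
# or needs a human in the loop. The classifications and rules are illustrative
# assumptions, not a real governance framework.
from dataclasses import dataclass

@dataclass
class AgentAction:
    data_classification: str  # e.g. "public", "internal", "pii"
    system_criticality: str   # e.g. "low", "medium", "high"
    brand_facing: bool        # does the output reach customers or the public?

def requires_human_review(action: AgentAction) -> bool:
    """Return True when a human should approve before the agent proceeds."""
    if action.data_classification in {"pii", "internal"}:
        return True
    if action.system_criticality == "high":
        return True
    if action.brand_facing:
        return True
    return False

# A marketing workflow on public, non-critical data can run with more autonomy...
print(requires_human_review(AgentAction("public", "low", brand_facing=False)))  # False
# ...but anything touching PII, critical systems, or the brand gets a human in the loop.
print(requires_human_review(AgentAction("pii", "low", brand_facing=False)))     # True
```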

Dave [16:47]: Maybe we need to see more examples, but from personal experience, any interaction with an agent or with tools that are supposed to help me out - I find the overhead of communication, of at least learning how to communicate in a way that makes sense, is not insubstantial. It's at least like having a new team member and having to figure out how they understand what we want of them and what they need to ask of us.

Peter [17:21]: If we're interacting with them in that fashion, then yes, I would agree. If they're very targeted at a particular workflow and it's effectively a tool as part of the system or function that area is performing, then it becomes a bit different.

Dave [17:42]: I think we have hit quite a few things and it's been an interesting conversation. How do we wrap this one up? What are the key takeaways?

Peter [17:59]: For me, some of the key takeaways would be: yes, you can get a lot more done with fewer people. However, you may still need a larger team to deal with the complexity of underlying systems. I wouldn't immediately go out and say "okay, I'm going to take all my two pizza teams and break them into one pizza teams." You'd certainly struggle to do that at scale, just due to the level of maturity and understanding of underlying systems they'd have to deal with, especially in a large organization. However, there's an awful lot of value for all your existing teams in increasing their capacity to deal with busy work, which should ultimately result in higher quality, more supportable systems over time. I'm certainly seeing some amazing uses of the technology to tackle problems that probably never would have been tackled before. But since we know how productivity is calculated, we're not necessarily increasing productivity by doing them. We're increasing the ephemeral capacity we have in the organization and dealing with problems that are good to deal with.

Peter [19:18]: So it's an interesting conundrum. That would be one of my biggest takeaways. My other takeaway - I would agree with you that in the startup world, it's a very different story. If you're greenfielding something into a new environment - and that would apply in a large organization if they're greenfielding something too - you can do an awful lot more with far fewer people than before.

Dave [19:44]: As you were describing that, what I would add is - let's put it slightly differently. The question "should you be chasing one pizza teams?" is difficult to answer; there are a lot of dependencies. Should you be experimenting with it? Identifying greenfield development where you can put one or two people on it and say "run with it"? Should you be trying it in areas where you do have clarity about support, where you're already seeing benefits of using some AI tools to provide the hygiene and discipline of certain things developers are doing? Go chase those down and chase them hard, because you can guarantee other organizations are already doing that. There's a lot to be said for where you can go in those areas in the technology space. I just find the experimental... You mentioned earlier about business benefit and the business being involved, solving business problems.

Dave [20:49]: I think in many cases we're seeing a lot of press right now about how many AI initiatives have either been considered failures or just haven't delivered on their promise. The general consensus, from what I have been reading and certainly our experience, is if you're not tackling a real business problem, then go find a real business problem to tackle. That's where you're going to see the biggest return, get the most take up, get the most support. Sometimes it's taken more as "it would be nice to try this" rather than "let's go find a business problem we can really use some of these ideas with."

Peter [21:33]: Exactly. I think some of that is going to require rethinking the way certain processes are done, and that's a very hard thing to do.

Dave [21:43]: Awesome. Good to catch up.

Peter [21:49]: Likewise, and I look forward to the next one. Until next time!

Peter [21:52]: You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and David Sharrock focus on the art and science of digital, agile, and DevOps at scale.