Definitely, Maybe Agile

AI Tools for Product Managers: Beyond Just Writing User Stories

Peter Maddison and Dave Sharrock Season 3 Episode 201

Product managers and product owners are drowning in documentation, vision statements, roadmaps, and backlogs. But what if AI could handle the heavy lifting, freeing you up to actually talk to customers?

In this episode, Dave and Peter explore how large language models are changing product management. They go beyond the obvious use cases (like generating user stories) to discuss upstream opportunities: building product strategy, validating market positioning, and testing ideas against competitors.

You'll learn:

  • Why documenting your product strategy matters (and why most PMs skip it)
  • How to prompt AI to be critical, not just complimentary
  • The danger of accepting AI outputs without evaluation
  • Temperature settings, context windows, and other practical techniques
  • What to do with the time you get back (hint: talk to real customers)

Dave and Peter also share a key practice: write down what you expect before you prompt. This simple step helps you critically evaluate AI responses instead of accepting them at face value.

If you're a product manager, product owner, or anyone building digital products, this conversation will help you use AI as a tool for better thinking, not just faster output.

Peter:

Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and Dave Sharrock discuss the complexities of adopting new ways of working at scale. Hello, Dave. How are you today?

Dave:

Peter, good to catch up with you. So I hear this is one of the first snowy weekends in Toronto.

Peter:

Somewhat unexpectedly. It was supposed to last about half a day, and it's now supposed to last all the way through to the next day.

Dave:

Okay, that's quite a lot of snow. I can reassure you that here in Vancouver it is as wet and damp as expected for this time of year.

Peter:

Yes, yes. So you get your precipitation in the non-snow variety. Exactly, yeah.

Dave:

So what's the topic? Oh, product management. We just briefly touched on this, and we've got some ideas around it. Let me put some context around this one, Peter. We're teaching generative AI in our courses. Whenever we run public or private courses, we're bringing large language models into all of the exercises: how people use them, what benefits they get from using Gen AI tools. And one thing that's really apparent in product management and product ownership courses is that there is a lot of documentation expected from a product owner or product manager. Doing the basics, getting the grounding, the foundations laid before they start developing, or, once they pick up a product, making sure they understand it. That's a natural home for Gen AI, large language models, and the like to be used. Discuss.

Peter:

Discuss. Okay. Well, I get it. I mean, fleshing out the ideas, the concepts, or even ideating: how can I prove whether this is a good idea? Where could I go looking for this? The ability to gather large amounts of information and do research certainly becomes an awful lot easier with these models. It becomes a lot easier to get your hands on that information and pull it together. There is a need to go and check the references and validate the information that comes back, but even the simple act of doing that can spark the imagination and maybe make you think of things in a different light.

Dave:

Certainly. And this is one of the things I find: we're always talking about putting a vision statement together, if I just pick one artifact as an example. If you talk with most product owners, either they're brand new in the role and they've looked at the vision statement very briefly, or they've been in the role for ages and their vision statement is hanging around somewhere, but it's very rarely right in front of the team and referenced all the time. So being able to generate that from a lot of different inputs, being able to get it pulled together and validated, becomes so much quicker and easier. And that's just one example, one single artifact. As you carry on, you can build out a number of different things: what the goals are for the quarter, what that delivery roadmap might look like. And of course, everyone's talking about writing user stories and shoving them straight into Jira, or whatever tool you're using. So lots of opportunity there. What do you watch for? Where does it not work, or where does it go wrong?

Peter:

Well, I think it works very, very well in most instances, and I think it's getting better all the time. There is a need to ensure that you're prompting it well and providing it sufficient information, and, especially if it's creating user stories, they need to be reviewed and validated. Are we actually building the things we need to build? There is a view, of course, that eventually, if you've got enough robustness in that and you've prompted it well, you can start to get more of a flow-through: why don't we just have it build something we can actually look at and see if it's the thing we want? Then we can start to build out and test from there, which becomes an interesting way of moving it forward. To do that, of course, you've got to have a lot of the pipe in place to make it possible, so that things will stand up and you can actually access them and try them out. Which requires network access and authentication and identity management and a whole bunch of other things that we sometimes forget about.

Dave:

Well, I actually really like going upstream. I mean, everybody's looking downstream, and that makes a lot of sense. But what I've often seen, having worked with a lot of product owners and product managers, is that they just don't have the time to sit down and document their product strategy and clearly articulate the vision behind what they're trying to do, what the outcomes should be, what the personas are, things like this. And these are all great practices. We know that in order to get product-market fit, in order to build products that aren't just another me-too product but really resonate with customers, that become the sort of profitable digital experiences we all want to get to, you need to do the groundwork. So it's always surprising how many organizations, how many product managers and product owners, just don't have that time. They may have given it some consideration early on, but they've not really updated it. They don't keep it relevant, they don't keep an eye on the market as much as they perhaps should. And large language models certainly give you a leg up there.

Peter:

And I think another way of framing that is that very often, one of the things holding back the teams trying to build out the solution is that they don't have sufficient guidance, or sufficient time with the product owners, or sufficient communication to decide what direction to go in. I've hit this juncture and I need to make a decision, but I've got one product owner spread across five different teams and I can't get access to them because I only get this tiny slice of their time. When organizations underinvest in that product management space, you quite often see that. So I agree with you, and I think it enables the creation of the assets that amplify the strength of that role. I like to talk about there being a richness of design information that's now suddenly available, from both a product design and a product management perspective. So you can start to understand more richly what it is we're actually trying to build here.

Dave:

Just as you're describing that, there's one word I'll mention very briefly, which is the word succinct. The danger, and we've had this conversation before, is that because I can create these artifacts, I'm going to end up with a 20-page document trying to outline the product. Keep it short. Just because you can create a lot doesn't mean you should, because some poor soul somewhere, if it's going to be used, has to read it. And if it's going to be read, it needs to be short.

Peter:

Well, of course, if it's 20 pages long, they're just going to feed it into an LLM to make it succinct, and then they're going to miss all of that beautiful nuance you had the LLM create for you. So, yes, there's definitely the point that it has to be right-sized. And this is actually a whole other piece of the problem: it's great that we're creating all of this, but if you've still got the same number of humans trying to digest and understand all of this material, that becomes another source of cognitive overload that we didn't necessarily have before, when all of the user stories were one word or maybe a sentence.

Dave:

There's a continuum there, and we want somewhere in the middle, right? We don't need essays, and we don't need one-word requirements. I do agree with moving to the left, and there's a whole space even further to the left that we should explore. One of the things I'm finding is almost essential for this sort of prompting, where you get lots of information back, is that we have a tendency, if we're not careful, to accept what comes back as if it's true. There's been talk of hallucinations: this is when the large language models make something up to please you. That's definitely not as prevalent as it has been in the past for many of the models, but it's still there. And one of the tricks, if you like, that we're standardizing in our organization is writing down what you expect first. I don't mean the full essay, but what are the key points you expect to see in, say, a vision statement? Or, if you're looking for success measures, you map out some of the things you want first. What we've found is that if you tell the large language model, if you give it that guidance, it will never let go of it, because it weights your ideas higher in many cases. So not telling it what I'm thinking, but having it written down, at least means I can now critically evaluate what comes back and see whether we're moving in the same direction or whether it's just making something up for some reason, whatever it might be.
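The practice Dave describes, writing expectations down before prompting, can be made concrete as a small review step. The sketch below is illustrative only: the function name and the naive substring matching are assumptions, not anything the hosts use, and in practice a human reading the full response replaces this crude check.

```python
def coverage_report(expected_points, response):
    """Split pre-written key points into those the LLM's response
    mentions and those it missed, so the reviewer starts in
    critical-evaluation mode instead of accepting the output wholesale.

    Matching is naive case-insensitive substring search.
    """
    text = response.lower()
    covered = [p for p in expected_points if p.lower() in text]
    missed = [p for p in expected_points if p.lower() not in text]
    return covered, missed

# Hypothetical vision-statement review: jot down what you expect first.
expected = ["small retailers", "same-day reporting", "no spreadsheet exports"]
response = "Our product gives small retailers same-day reporting on sales."
covered, missed = coverage_report(expected, response)
```

Anything in `missed` is a cue to probe further rather than proof the model is wrong; the point is simply to arrive at the response with your own thinking already written down.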

Peter:

Yeah, the coach in us understands the concept of leading questions. And LLMs are very, very prone to leading questions. They are, yes. It's like: what type of blue button would you like? Oh, all the buttons should be blue.

Dave:

Yeah, and shaking it by the shoulders to try and get it to innovate around that is difficult, right? And that's actually an interesting comment. Sometimes we think we need to give it all of this information, and it's quite interesting how little information we actually need to provide, and how much more important the framing is. In fact, I was reading over the weekend and came across the idea of enabling constraints again, something we're very familiar with; we've talked about enabling constraints many times. In some ways, the constraints we want to put around our large language model as it goes away to do some work for us are these enabling constraints: open-ended things it can explore around, rather than definitive hard edges it can't go past, because that tends to change the way the model behaves.

Peter:

Yep. Indeed. Another interesting thing you can do, depending on what you're trying to get it to do, is playing with temperature and understanding it: higher temperatures are more expansive, so you get wider, more varied guesses, and lower temperatures are the other way around, more focused and predictable.

Dave:

But typically you have to be coding, basically interfacing through an API, to get access to those dials, those parameters, right?

Peter:

Or some web interfaces; it depends what you're interacting with. You can also, in some cases, tell it how to behave. Basically, you can start to control it: how do you want me to respond?

Dave:

Well, and from a product manager perspective, if I'm exploring where my product development should go, then adjusting that temperature, making it higher so that you get much more variability, a much broader range of options coming back, is really quite powerful.
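The temperature dial the hosts are discussing can be illustrated with the softmax formula most language models use when choosing the next token. This is a self-contained sketch, not any vendor's API; the logits are made-up scores for three hypothetical next tokens.

```python
import math

def temperature_probs(logits, temperature):
    """Convert raw scores (logits) to probabilities at a given temperature.

    Dividing logits by the temperature before softmax sharpens the
    distribution when temperature < 1 (focused, repeatable output) and
    flattens it when temperature > 1 (broader, more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # hypothetical scores for three tokens
focused = temperature_probs(logits, 0.2)  # low temperature: top option dominates
varied = temperature_probs(logits, 2.0)   # high temperature: options even out
```

At temperature 0.2 the top-scored token takes almost all the probability mass, which is what you want for tightly scoped outputs like user stories; at 2.0 the three options become much closer in likelihood, which is what you want when exploring product directions.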

Peter:

Yep. And generally the prompts we put into the system as well, the underlying prompts: how are you guiding it? That helps set the scene too. So I guess what we're saying there is: understand how these systems work as a tool.

Dave:

And how you're going to apply them, yeah. It's interesting as we're discussing this, because what we've experienced on the training side is that many other roles don't need to operate these large language models across such a broad functional range, in so many different ways of looking at a problem. If I'm trying to get user stories out, I need that temperature to be low. I don't need the model making things up off to the side; I need it quite tightly focused. But if I'm exploring who the personas for a particular product or service might be, or what we could build to differentiate our product or service from the competitors, we want to go in the other direction and broaden the responses we're going to get.

Peter:

Yep. Another interesting piece is context windows. Somebody I work with at one of the organizations calls her LLM Lucy, like Lucy from 50 First Dates, because it only remembers so much and then forgets.

Dave:

Yeah, yeah, that's true. Yeah.

Peter:

Yeah. As funny as it is, it's something to keep in mind when working with these systems: keep an eye on it. They do have very large context windows in a lot of cases these days, but if you find yourself in an ongoing conversation, it doesn't have to get all the way to the end of the window before it starts going a little squirrely, making up crazy things, going around in circles, and hallucinating. The best thing to do there is just start again, start a new conversation. And there are ways of bringing things across, like asking it to summarize the conversation and carrying that over. Again, all of this is really the detail of how you work with the tools to get the best results. These are all ways they can help somebody in that product role, augmenting it and maybe helping them get things done that they weren't necessarily getting time for.
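Peter's "Lucy" problem, the model forgetting what falls outside the context window, comes down to a token budget. Here is a rough sketch of the simplest management strategy, keeping only the most recent turns that fit. The whitespace word count is a stand-in assumption for real subword tokenizers, and production systems often summarize dropped turns (as Peter mentions) rather than silently discarding them.

```python
def trim_history(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit within a token budget.

    Walks the conversation newest to oldest, accumulating cost, and
    stops once the budget is exceeded; older turns are dropped.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # newest first
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

# With a budget of 6 "tokens", the oldest turn is dropped.
history = ["a b c", "d e", "f g h i"]
trimmed = trim_history(history, max_tokens=6)
```

The design choice to drop from the oldest end matches how the problem shows up in practice: the model stays sharp on recent turns and loses the start of the conversation first, which is exactly why starting a fresh conversation with a summary works.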

Dave:

I'm also interested in the other side of it, which is: what are the things we would never have done because they were too time-consuming or too expensive to go out and do? One thing that really came to mind in the last couple of weeks is, if we come up with a vision statement for a product, or articulate a clear identity for a product we're building, can we use the large language model to go out and look at competitors and come back and tell us whether we're really differentiating? If we're a financial institution, a bank, well, every bank has a mobile app. How do you make your mobile app a differentiator in any way, rather than just a me-too service or product?

Peter:

Yeah, and prompting it to be critical: what's wrong with these ideas? Give me the pros and cons, or various other ways of prompting it to get more out of the response. The prompt you're describing there, I was using just last week for a startup product: okay, what are the other products like this in the marketplace? How does this differ? And to your point, I said: here's the website. What do you think it does?
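The critical prompts Peter describes can be packaged as a small template so the "be critical, not complimentary" instruction is never forgotten. The function and its wording below are one possible phrasing, an illustrative assumption rather than a prompt the hosts use.

```python
def critique_prompt(idea, competitors=None):
    """Build a prompt that asks the model for criticism instead of praise.

    The role line and the explicit 'do not compliment' instruction push
    the model out of its default agreeable register; an optional
    competitor list grounds the differentiation question.
    """
    lines = [
        "Act as a skeptical product critic.",
        "Do not compliment the idea. Explain what is wrong with it,",
        "why it might fail, and who would not buy it.",
        f"Idea: {idea}",
    ]
    if competitors:
        lines.append("Compare it against: " + ", ".join(competitors))
    return "\n".join(lines)

# Hypothetical use: a startup idea checked against made-up competitors.
prompt = critique_prompt(
    "an expense tracker for freelancers",
    competitors=["Competitor A", "Competitor B"],
)
```

Keeping the critical framing in a reusable template means every market-positioning question starts from skepticism by default, instead of relying on remembering to ask for pros and cons each time.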

Dave:

Yeah, exactly. Well, you get some eye-opening responses. Some of them are exactly what you'd think it should be saying, and other times it comes up with some quite different interpretations.

Peter:

Which is valuable in itself. You're saying, wait a minute, you drew that from where? Yes, exactly.

Dave:

Yeah. And building on that whole devil's-advocate idea, the "act as a critic here," there are lots of other personas you can give to the large language model. What's really interesting is giving it the personas you're actually using and seeing what it identifies as the benefits, the strengths, and so on of your product, or even of your competitors' products, to see how those personas might interpret the different products around you. Again, we've got to be careful with this information. We're not talking to real customers, so we're getting a sort of 80-20 perspective on what might be going on. I still think we should be talking to customers as often as we can to validate the ideas we're coming up with.

Peter:

Yep. Well, exactly. And I think that's possibly one of the main things that becomes true: what can you do with that extra time you've got back? Well, we can go talk to the customers. Yes, exactly. The thing we wish we had time for, but we were too busy doing all of these other things, which now we can let the AI take care of. Great. So with that in mind, we like to sum this up with three points, right? Go for it.

Dave:

Well, I think one of the points is about that role of product owner and product manager. If we're using our favorite large language model the same way over and over again, there are many more things we could be doing by exploring exactly how to use those prompts: adjusting things like temperature, changing the role, the personality of the LLM we're interacting with, for different needs. So there's a need to be more aware of how to structure prompts in a way that gets what you're looking for at the end, because you're not just doing the same thing over and over again.

Peter:

Yes, indeed. I think some of the important tooling there is asking it: what's wrong with this idea? The critical piece. A lot of the time, what it feeds back to us is a bunch of "you're wonderful, that's a fantastic idea." Trying to turn that off is difficult, but you can do it if you ask really nicely. Expanding on that: okay, be critical. What's wrong with this idea? Why does this not work? What might I do to differentiate this in a crowded marketplace? What would this person over here want from this product? It's very good at that type of thing.

Dave:

Yeah, and very rapidly doing it as well. It's the sort of thing that would take a lot more time if you were going to do it in a real context. I'm going to close out with the thing I say to anybody when we're talking about prompts, which is: don't leave all the cognition to the machine. Ask yourself what you expect to see coming back, so that you're in a position where you've already thought a little about the problem. You've got some ideas, you've got context, experience, plenty of things to bring to the table. What do you expect to see? Then, when you get the response, you're already in that critical-evaluation mode instead of just accepting what comes back.

Peter:

Yeah, exactly. I think that's a wonderful practice to have, because otherwise you can fall into the trap, because you're very busy, of just saying, oh, that's good enough. And sometimes that's not a good idea.

Dave:

Perfect. A great way of wrapping things up. So again, Peter, thanks for leading the conversation.

Peter:

Yep, awesome. It was wonderful to talk again. Until next time. Thanks. You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and Dave Sharrock focus on the art and science of digital, agile, and DevOps at scale.