Definitely, Maybe Agile

Generative AI Readiness with Justin Trombold

Peter Maddison and Dave Sharrock Season 3 Episode 200

In this episode, Peter Maddison and Dave Sharrock welcome Justin Trombold, President and Founder of Antison Advisors, to discuss the parallels between agile transformation and generative AI adoption in organizations.

Justin shares insights from his work helping companies navigate generative AI readiness, revealing that the biggest challenges aren't technical; they're organizational. From end-user proficiency to cross-functional collaboration, the conversation explores why companies struggle to move beyond "toy apps" to create real business value with AI.

Key topics covered:
• Why organizations need an AI strategy before investing in tools
• The critical importance of end-user proficiency with LLMs
• How cross-functional collaboration enables AI success
• Why annual planning cycles may be holding your AI initiatives back
• The parallels between agile adoption and AI transformation
• Moving from efficiency gains to true value creation

Whether you're leading AI initiatives, managing agile transformations, or wondering why your organization's AI investments aren't paying off, this conversation offers practical frameworks for thinking about organizational readiness in the age of generative AI.

THREE KEY TAKEAWAYS:

1. End-user proficiency is everything. 
2. Define the sandbox before choosing the toys.
3. Innovation in planning matters as much as innovation in products. 

Contact us: feedback@definitelymaybeagile.com

#GenerativeAI #AgileTransformation #OrganizationalChange #AIReadiness #DigitalTransformation #LLM #CrossFunctionalTeams #Innovation

Peter Maddison: 0:04
Welcome to the Definitely Maybe Agile podcast with Peter Maddison and Dave Sharrock, where we discuss the complexities of adopting new ways of working at scale. Hello everybody. So Justin, would you like to go ahead and introduce yourself?

Justin Trombold: 0:20
Yeah, thanks. Thanks for having me, Peter and Dave. And you know, we had a chance to chat before the conversation, so I got to know you guys a little bit, but didn't get a chance to share much about myself. I'll start with what I'm doing right now and then loop it back around. I'm the president and founder of a consulting firm called Antison Advisors. I do a lot of independent consulting work with generative AI, and specifically organizational generative AI readiness, which will be a big part of what we talk about today, I believe: how that links into not the technology so much as the ways of working, and the need to pay perhaps more attention to that than some folks might like in order to generate value in these turbulent, exciting times. Before that, I worked at Deloitte as well as some other consulting firms. And before that, I was an academic teacher and researcher for over a decade. So I came out of the research field and moved into consulting, and for better or worse, through that journey, ended up splitting off on my own. It was in biological sciences, so I guess it has overlap in terms of critical thinking, but it certainly was a brave new world for a while, getting into business and reading balance sheets and trying to understand why certain things mattered or didn't matter in business. But you know, we all learn and adapt. And I think that's a nice segue to what we do now, and I'll be brief with it and we can loop back around. We help organizations, whether it's a small local business that just wants to get their people up to speed on how to use LLMs within their current work, some basic augmentation work, all the way up to larger organizations that are a bit curious, or a bit confounded, as to why they have a lot of toy apps, so to speak. We hear that term a lot when we talk to clients. So we help them understand what the capability gaps are to that value creation.
And it often comes back to principles that align very well with agile methodologies. That was some of the inspiration for what made me start thinking about generative AI readiness: where were principles like agile, and what were the common characteristics you'd see in companies that, let's say, have been consistently accretive for decades? And seemingly it's like, well, are they just good because they have a lot of money? Or do they have a lot of money because they're really good, and they can adapt and thrive in uncertain environments? It's been a fun transition, you know. But as we talked about at the start, agile and changing your ways of working is a very easy thing to say and a very hard thing to do, particularly as you all see a lot working in software. It's difficult even where you take DevOps, or just development of any sort of technology, within an organization where that's their native way of working. But if you take an organization that doesn't work that way at all, you know, they're very hierarchical, and you say, you need to adopt an agile way of working? It's like, I thought we were past that. I thought that phase had come and gone, that it had all been solved at this point, right?

Peter Maddison: 3:40
We all know how to move forward from here. And so I'm curious. I mean, there's a lot there to hook on to, but there was a piece where you were talking in particular about the impact of AI on organizations, and on the ways of working in particular. Because there is obviously a big difference between, hey, I've got this tool and I can use it in a variety of ways, and it's definitely transformative, and: how do you then bring it into your organization in a way that's actually benefiting the organization, so that you actually see the value from your investment?

Justin Trombold: 4:14
Yeah, and that's a great question. I mean, I think a couple of points are perhaps unique. Working in any environment over the last several decades, we're all used to new tools coming or different platforms changing as, let's say, the end consumer of the product in an organization, or even in our personal life. What I've noticed being a bit different, and this started a couple of years ago, talking with leaders at different companies when they were just getting their feet wet with what generative AI is: you have this tool now that is perhaps more sophisticated than what we've seen in the past, depending on who you ask, of course. But there's an end user proficiency, a need for the end user, at least at this stage, to have some fundamental understanding that they didn't quite have before. There are of course a lot of generative AI applications that can streamline things. But particularly if people are working in an LLM-type prompt, there's a level of expertise required, and you layer that on top of the difficulty of developing an application, or even just deploying an LLM that makes sense for your company, and it just makes it very hard to connect that line of sight from whatever we deployed: are they using it correctly, and is it augmenting the business? And so just on its face, without even thinking about whether it's the right solution, whether it's in the right people's hands, whether they're doing the right activities with it, whether they're engaging with each other in the right ways, just the way in which it's deployed creates a fundamental challenge in certain respects.

Dave Sharrock: 5:52
I find it interesting, just as you were describing the journey coming through to the conversations you're now having with customers around generative AI, that a lot of the conversations are: we have a tool, or we have a solution for a particular problem. And the interesting thing is, it's a little bit like agile in the early days, in the sense that it actually impacts a whole range of different areas where you may have challenges, or where there are opportunities you can bring it to the table on. But in order to bring that change into an organization, there has to be a focus, a clear understanding of what you're going to start working on. Because as that migrates through the organization through osmosis and gets into all the nooks and crannies of an organization, there are lots and lots more opportunities that come out of that. But you can't do it accidentally. You can't just release a large language model into an organization and expect change to come back.

Justin Trombold: 6:47
Yeah, and I love what you said there. And you know, of course, data has always been a big thing. I think that was another area we've seen evolve in the last two or three years, where organizations thought generative AI was going to solve the data problem, but in fact it created another data problem. But even putting that aside, having to figure that out: what we started to see a lot of are some common characteristics, and then we dug into it with some research, and we can talk about some client and company examples. A keystone part of that, like you alluded to, Dave, is this. Let's just presume, which isn't always the case, that you have a clear and well-thought-out enterprise strategy, business unit strategy, functional strategy, whatever that might be. Then it's about connecting the dots to a generative AI strategy. And so then you have that foundation set, and it's like, well, now what do we do? We see companies with hundreds of inbound marketing campaigns, right, from different companies saying, use this solution, use that. You see leadership pressure crunching down, whether it's from the board or because someone went to a recent conference. I've got a lot of clients who'll say, you know, their CEO went to X conference, and it's like, why aren't we doing moonshots and this tool and that tool? So there's a lot of pressure squeezing down on those organizations. But what we found, even with larger organizations, is that sometimes it's just the ability to clearly convey to their people what it is the organization or business unit is trying to do, then put the tools in their hands and say, let's come up with a structured way. Let's just, for example, say it's in an LLM. Let's think about the work you're doing. How can you augment it? How can you make it more productive? What are the things you could be doing if you didn't have to do as much of that? Or, the other route, could you do more of the same thing?
And what we've seen work really well is, if you can get your people to start learning how to use just foundational LLMs that way, in a secure and safe way, they can make their own business cases to say, hey, okay, we've gotten past the LLM stage. Now what applications are out there? Who can we engage with that can help us build more bespoke solutions that automate the process more and more? But what people do is jump to that part and say, let's look for the apps, let's look for the solutions. And you end up with this host of apps in the hands of people where, one, it may not even have been a problem they needed to address in the first place, or there wasn't an objective worth addressing; and two, they may not be able to use it well. And the part we didn't talk about is that leaders may not be ready for the output from their people using that tool. And so there's a velocity of information that makes it particularly challenging.

Peter Maddison: 9:43
So the other one you've got there is the operations of all that too. We just built all these little things, now we've got all these little things scattered all over the enterprise, and now somebody's got to run all of those.

Justin Trombold: 9:55
And you know, that connectivity with IT, of course, and running, we'd say, the actual behind-the-scenes work, though it depends on which point of view, right? At least what I've observed is that there's a healthy tension between IT and every other part of the business. And what generative AI seems to do is increase the demand on IT to understand what it is that everybody's doing. And then if you have this constellation of a hundred different use cases going, and they don't integrate on a nexus of a common platform or something like that, how do you manage that type of situation? So part of that, the analogy we like to use: defining the strategy, you'd say, is the four walls of the sandbox, right? But it's also, what toys can you play with in the sandbox? And it doesn't mean that companies have to use one toy or have one core, but it certainly makes things a bit easier if there's visibility into what those toys are. Then the individual playing with one a certain way can make recommendations, or the business unit can make recommendations, of how that toy should be improved, right, to then be able to do what they want to do. So yeah, that brings us to one of the other tenets of what we were looking at, which is cross-functional collaboration. And in the agile world, I think the challenges associated with collaboration aren't new at all, right? What does it take to be agile? You have to have a team that can be agile in the way they're developing something. Now with generative AI, you've exploded that across the business, outside of just, let's say, software development or some other examples. And now everybody has to think about that. How do you, as a team, do that? And how do you get leaders to understand when an idea is progressing from being good, to great, to requiring more investment?
And how do you get them to say, look, we're done with this? You know, this isn't what we're going to pursue. And so there's a shift in the way of working that both leaders and individuals in the business just aren't used to. And when those expectations of how you're going to work aren't set up, we see a lot of disappointment. Let's say you're working in finance and using something that's very effective, and you want to start scaling it to the rest of your group. You make the case to do that, and the leaders are like, nope, sorry. There was no expectation or clear pathway of what it meant. So it's about being able to put those guardrails in place: how do you open the lines of communication, how do you make that business case, how do you say yes and no to things in a way that encourages people to stay engaged with the business and with those solutions, while not investing in everything, because you just can't; you still have scarce resources, at least at this stage. Maybe, depending on who you talk to, that changes in the near future. But those same business realities still exist in this era where it seems like there's an overabundance of opportunity to invest in solutions.

Dave Sharrock: 13:19
Justin, as you were describing that, I found this conversation so interesting just because of the overlap with agile; it's been a pet topic of mine and Peter's for many, many years, and it's very apparent in, for example, the white paper that you produced as well. The lens with which you're looking at generative AI really comes across as that organizational transformation: top-down driven, understood from the top down, and cross-functional; it brings all the agile ways of working and mindset to it. I also wanted to touch on the end user proficiency piece. I think this is one that, in an agile context, often came from the end users, right? Bottom-up was the first way agile would get introduced in many organizations, especially in the early days. And with the generative AI work, again, there's availability for everyone, because we all have access somehow, even if it's in a personal browser rather than a work browser. But at the end of the day, there's such a disparity in how it's used. So that end user proficiency is so much more than having a large language model in your organization, an LLM that is approved for you to use, because that doesn't bring proficiency. It still has people holding off. So there's a training aspect; there's creating gaps and time for people to go and look at these things; and there are forums where people can share what's working and what's not. And it's very different to a tool, because it really is a mindset: how do you go about solving problems in a different way when you've got access to generative AI?

Peter Maddison: 15:05
Yeah. And that creation of that imperative, right? Because there's a transformation within the organization. I mean, agile has for a long time been trying to break that IT-business barrier, and you could argue that when teams are successful, they've been able to do that. There was always the intent; we fully understand that for agile to be successful for an organization, it has to exist across the organization. And very often the first barrier, and Dave and I have talked about this many times, is that as IT adopts more agile practices, the business hasn't. But now I think the advent of AI and LLMs has created more of an imperative for the business to also learn this technology, and perhaps to scale and create more of the conversation across these different areas. Because suddenly we've got a common interest. It's almost a shared desire to figure out, how do we use this technology to improve the organization?

Justin Trombold: 16:04
Well, you know, I love the way that you all said that. And with end user proficiency, from what we see there are two layers, right? It's very different to go in and explore an LLM as a casual user, you know, I want dinner recommendations in this city and I want these things, than to say, well, how do I use an LLM in a constructive sense for my business? So we're talking about, in those cases, more sophisticated prompt engineering; developing, testing, and scaling use cases; bringing in, whether you call it a poor person's retrieval augmented generation or just customized content, material that you feed into the system. If you don't get to a clear way for them to play in that second category, which is in the context of their business beyond just a simple question, end users aren't going to develop a sense of what these things can and can't do. So even if you could deploy the perfect solution, you're going to have an army of people relying on content where they don't really understand where it's coming from, how it was made, or what the limitations are. And so a few interesting things have happened. Several organizations that I've worked with have adopted tiger teams, which is just a different way to say agile in some respects: these small teams are deployed on certain solutions, and they carve out a percentage of their time. They require their people, you know, to spend 20% of your time on your current projects integrating these solutions, whatever they are, into your work. The other thing is giving people the confidence that the problem solving isn't in learning the tool. The problem solving of how to use it is getting to the point where you understand how you can leverage the tool the way you would leverage thinking about any problem. Right. And so, like, I have this issue.
I'm trying to develop some way in which you can do an analysis faster, or whatever that might be. Being able to ask yourself the questions of, well, how could I approach this with a generative-AI-first approach? What's the way I can set up the framework for thinking about it? What information do I need? Where are some of the gaps? Then you start putting the core together, trying to apply it more and more, refining it and improving it, and you start to get a sense of it. Different tasks will start to come up, and you're like, well, I could try doing that with generative AI. I could get a nice starting point, or I can make the starting point and then have it take me further: I set it up from A to B, and it can get me from B to Z, once I set up the structure. We conduct training sessions for people, and the most interesting exercise we do is at the beginning, when we're asking people to generate use cases for their business. A lot of times they can do it, or they can at least identify what the main things they do in their business are, what the main challenges are with those activities, and what they would do if they had more time. So that's common problem solving, things they're ready for. But then you say, okay, well, what's the generative AI layer that sits on top of that? How do you do that same type of activity in the context of generative AI, using generative AI to help you with some of that ideation? Taking people through processes to give them that foundational understanding, actually illustrating how the tool can help them identify use cases, is a good starting point, because they start thinking about it and testing it. Okay, well, what about this question? They gave me this answer; what else would I like to know? And then you get to a point where it's, okay, you've played around a little bit.
Now let's give you the two, three, four things to think about. What are the variables in the LLM, the value creation levers, you can start pulling? To then say, okay, it doesn't look like it's pulling the right content in, so maybe I need to think about what I'm including, what information it's referencing. Or it's giving me outputs I don't want; how do I improve those outputs with my prompts? There's this laundry list of things where once you get it, you get it. We've seen it; people can start exploring. But to get to that point, people have to have the tools, and in most cases, the more senior people in the organization don't have the tools. So who is it that's teaching them how to just get things started? It sounds a little elementary, but you're learning how to do what you already know how to do, which is think critically, and you're learning to do it in a new ecosystem. If an organization, or the people in it, can't do that, in our experience there's no point in having the investment discussion about which tools you should bring in, because your people aren't ready to use them. And back to the agile point: as these ideas for solutions start coming up, and information starts coming up that can lead to decisioning, the leaders aren't going to be ready to say, okay, now I have confidence in this market analysis approach, because some people I really trust tested it, they refined it, then we scaled it, and then we made the investments to make it more robust. Whereas if the information just shows up on your desk, it's like, what is this? I don't know what this is or how I should think about it. And so there's a dual-sided end user proficiency, I think, Peter, as you alluded to: the idea that when you have agile aspects to the organization, that should proliferate across the organization. It does, to the extent that everyone embraces it, right?
And you know, I'll pause and let you all respond here, but one thing we like to say to companies in some meetings is that a committee isn't an incentive, or a committee isn't an operating model, right? So if you set up a committee or a center of excellence that's in charge of generative AI, that doesn't fundamentally solve the gap between what you should be doing with it and where you should be investing with it.
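Justin's "poor person's retrieval augmented generation", feeding curated in-house content into the prompt rather than building a full retrieval pipeline, can be sketched in a few lines. Everything here is a hypothetical illustration, not a tool either guest uses: the document names, the keyword-overlap scoring, and the prompt wording are all invented for the sketch.

```python
# Minimal sketch of a "poor person's RAG": instead of a vector database,
# score a handful of in-house documents by keyword overlap and paste the
# best matches into the prompt as context. All content is hypothetical.

def score(question: str, doc: str) -> int:
    """Count how many words from the question also appear in the document."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def build_prompt(question: str, docs: dict[str, str], top_n: int = 2) -> str:
    """Pick the top_n most relevant docs and embed them in the prompt."""
    ranked = sorted(docs.items(), key=lambda kv: score(question, kv[1]),
                    reverse=True)
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked[:top_n])
    return (
        "Answer using ONLY the context below. Be succinct.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical in-house documents an end user might curate themselves.
docs = {
    "refund_policy.txt": "Refunds are issued within 30 days of purchase.",
    "travel_policy.txt": "Employees book travel through the approved portal.",
    "it_onboarding.txt": "New laptops are imaged by IT before first use.",
}
prompt = build_prompt("How many days do customers have to request a refund?",
                      docs)
print(prompt)
```

The resulting string would be sent to whatever approved LLM the organization has deployed; the point of the sketch is that end users can practice the "what information is this referencing?" lever Justin describes before any bespoke application is built.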

Peter Maddison: 22:51
Yeah, finding the right use cases, or even just enabling people, as you say. And you alluded to it too: there's a piece of enabling the people within your organization to experiment with these tools, and another piece of giving them the knowledge to be able to do that well. I think "workslop" is a common phrase that's kind of being thrown around, which is: hey, great, so I can generate a lot more information a lot faster, but that doesn't necessarily mean the human on the other end receiving that information is capable of actually digesting it faster. And if I'm sending them way more information, that's not necessarily a good thing. So when you've got a poor communicator enabled by something that's very verbose, you can end up with a lot of damage at the other end, where people go, I just give up, I'm drowning in this rubbish that I'm being sent.

Justin Trombold: 23:48
Well, I'll just say quickly, you know, I don't know if I can say this with 100% certainty, and maybe you all have seen something different than I have, but I've never received feedback from a client that said, I want more detail in the deck. Now, sometimes they want more detail later. But in terms of the messaging of "how should I think about this today?", nobody says, oh, there aren't enough bullet points below this succinct summary.

Peter Maddison: 24:14
I mean, I think one of the most important words you can have in your prompts is succinct. Be succinct. Keep it short. Like I don't need 37 pages.

Justin Trombold: 24:24
And sorry, I interrupted your chain of thought there, Peter.

Peter Maddison: 24:28
Oh, it's true. There's this idea that we create all of this information, and we're seeing this problem in every domain, which is part of the problem. You've got to think about a couple of pieces. One is the information being fed into the LLMs: what is the quality of that information? Otherwise you end up with garbage in, garbage out. So think about who's going to be the consumer of this. LLMs respond much better to highly structured information, which brings us to the other point: LLMs are not the only way to solve a problem. Some problems are much, much better solved by other things, other tools we already have in our toolbox, like machine learning. So there are lots of ways of approaching these things.
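Peter's point that LLMs are not the only tool in the toolbox can be made concrete with a small routing sketch: send well-defined, deterministic tasks to ordinary code and reserve the LLM (whose call is deliberately omitted here) for open-ended questions. The heuristic, the function names, and the example tasks are all hypothetical, chosen purely to illustrate the idea.

```python
# Illustrative sketch: route tasks that ordinary code solves exactly
# (here, simple arithmetic) away from the LLM, and only forward truly
# open-ended requests. The routing rule is a toy heuristic, not a
# recommendation of any specific product or framework.
import re

def solve_deterministically(task: str):
    """Handle tasks conventional code already solves exactly; else None."""
    m = re.fullmatch(r"\s*(\d+)\s*([+*])\s*(\d+)\s*", task)
    if m:
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        return a + b if op == "+" else a * b
    return None  # not a task this deterministic tool covers

def route(task: str) -> str:
    """Prefer the exact tool; fall back to the (omitted) LLM call."""
    exact = solve_deterministically(task)
    if exact is not None:
        return f"deterministic: {exact}"
    return "llm: forward to language model"

print(route("12 * 4"))                  # exact arithmetic, no LLM needed
print(route("Summarize our Q3 risks"))  # open-ended, goes to the LLM
```

The design choice mirrors the "know a rabbit when you see it" point that follows: organizations familiar with the discipline recognize when a task is already solved by a cheaper, deterministic tool.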

Justin Trombold: 25:17
And I love what you said there, Peter, because another key observation we see is that some of these organizations have embraced machine learning, embraced AI, you know, a lot of manufacturing organizations. They haven't been using the same tools, but they're familiar with the discipline, and they know a rabbit when they see it, right? And when they see a cat, they know that's not a generative AI solution: we can already do that, and we already have the tool. Whereas somebody new, or a company that's new to generative AI or just to thinking about things that way, will still think it's a rabbit, right? And they'll invest money in something that just doesn't make sense. So these organizations not only can often think about that capital allocation more clearly, you know, what is this a use case for, but in many ways they're just much more accustomed to dealing with the volume of information that comes in from AI or machine learning or any other type of analysis that involves a lot of data, both in the system and in what it shares. So you get a level of preparedness. And there's a point that was mentioned earlier, and this is where I think my research background has been a little bit helpful: I think anybody listening to this can identify with the feeling you get when you have too many unknowns, or just too many things you're not sure about. In a couple of the trainings we've done, and I've heard this a lot in conversations with leaders at different organizations, people will get so concerned about the "so what next," or they're unsure about what this looks like. I heard someone talk about agentic AI, or I heard somebody talk about all these other things. You don't have to know how to develop that to get in and start testing use cases; you don't have to be up to speed on all the recent technology advancements.
You know, someone else can do that critical thinking for you. But what you do need to do is tell somebody what you need, meaning whoever's developing the applications or making them more robust. And if people are too caught up in too many things they don't understand, then they're not going to take those early steps to advance some of the simpler explorations of these tools, and it'll hold people up.

Dave Sharrock: 27:51
I think in the marketplace it's not being helped by the fact that a lot of organizations have jumped on a problem and built AI tools around it, so your first interaction in many cases is with these AI tools, which are really focused on optimization. They're not on that non-deterministic, exploratory side. When you hit large language models and this generative AI side, all of a sudden the real kind of magical value around these things is less about optimization and process tooling, and much more about that non-deterministic exploration: exploding the number of ideas you can explore, and being able to look into things that in the past would just have been cost prohibitive, way too difficult to go in and do that work. Whereas all of a sudden you have at your fingertips something that gives you a proxy for those explorations that would just not have been on the table. So, to that point about research, there's a different mindset when you come in with the understanding that you don't need to know the answer coming out. In fact, as you learn more, you're going to change direction. There's a mindset about being able to see lots of ambiguity in front of you and confidently inch your way forward in that space. And that's somewhat put off if everything's tooling; if we just think we're going to plug something in and there's an AI tool that will make things better, we're losing that cognitive exploration piece. I mean, I'm going in on the end user proficiency because so much of it is just people's awareness: lifting the lid off their head and realizing exactly what they could really do changes things considerably.

Justin Trombold: 29:45
Yeah, and I think one other point worth noting on that general theme: we've talked a lot about proficiency in using the tools, and even the proficiency to act on the data or know where to invest. But there's also an element to consider: how many people really think about what else they could be doing if they didn't have to do something else? So it's about instilling that discipline: what are those broader objectives? How do you carve out the time, and if you carve out the time, what do you then do with that time, right, that you now have? When we think about value creation, you either, as we alluded to before, do more of the same thing, right, which could be an outcome in some cases, or you do higher order, more creative work, like everybody talks about, and for that you often have to be very intentional. That's not an easy transition to go through. What we've seen work a lot is carving out what we sometimes call an impact hour: what is the hour each day, or the hour three days a week, whatever it is, and what are you spending that time doing? It's different for everybody's work, but it's not just brainstorming use cases; it's then thinking about, well, once this comes up, now what? So it's a whole other end user proficiency angle, where it's about thinking about those higher order, more accretive activities. And if organizations can't make that bridge, their value creation lever is very much limited to efficiencies. That could be fine in some cases, but the end result is that your people are still going to be grinding, they're just going to be more productive, or you're going to have headcount reductions. And getting value going that route isn't very palatable for leaders and organizations in a lot of cases.

Peter Maddison: 31:51
Yeah, and we've seen that a lot in organizations, where all of those time savings basically get redirected into things people wished they had time to go do: all of those other tasks we never had time to get to because they were too difficult. The other one I've seen is where the cost-benefit just wasn't there. It was too expensive to undertake this maintenance task or this update that would make the system more resilient and easier to support ongoing, because from an end-user perspective you wouldn't really see any difference. But suddenly this becomes possible. So in theory you might end up with higher quality systems that are easier to maintain over time. That's the idea. But it doesn't mean you're generating value for the organization, which is why you're not necessarily seeing that translation. And some of that, I think, also comes from one of the pieces you were talking about earlier: if the business side of the organization doesn't understand how to use these tools well, then IT is going to use the time savings they get back to improve IT, because that's generally a good use of that time anyway. Overall it should benefit the organization over the longer time frame, but you're not necessarily investing into truly growing the top line of the organization. You're more in the realm of managing risk and managing your operational efficiency.

Justin Trombold: 33:27
And that's where what we work with companies on is having a very intentional documentation and business case approach to doing this. That way, leaders understand why it's important. And even if, let's say, it's taking time away from the IT team, at the very least it allows leaders to make that trade-off decision and say, yes, you'll have to do less of this and more of that. But there's a very clear understanding as to why they're saying, now devote this time on the IT side, and other people in the organization, devote the time to learn how to do this too, and we all understand what those outputs are. Without that clear business case and a view of what's actually changing as a result of this tool being implemented, it gets lost. And then there's also no foundation, once the tool's in place, for saying, okay, we're going to work like this now. So the value creation gets lost. And there's an interesting question that we'll pose to clients sometimes: if generative AI is a tool that's continuously learning, or you can use it in a way that allows your team to continuously learn on some spectrum, why do you still have annual planning cycles? Right. I've heard the same thing in conversations, where people will say, well, we don't need to think about that; our business plan is in place, and all we really need to do is learn how to use the tool. It's like, well, okay, fair enough. You could operate within that constraint if you'd like, but you're going to lose some of that ability to maximize value. And I'd make the case that you won't just lose the ability to maximize value from it; you fundamentally won't be able to find the accretive solutions that work, or you won't be able to identify them, because you'll just have people playing all around the organization.

Dave Sharrock: 35:17
There's this again: you could almost replace generative AI with agile there, and we'd be having the same conversation. As you were describing the annual planning process, one of the key drivers behind the adoption of agile, and it's clearly coming to fruition again with the adoption of generative AI, is the pace of change and being able to innovate your way through whatever happens. And of course, part of that is that your annual plan is built on the assumptions you know today. I mean, forget about 12 months from now; we're seeing organizations that are planning three months ahead and really don't know where their business goes after that. In those situations they're not looking for optimizations, because they cannot afford to. They've got to be looking for those accretive, game-changing levers that really let them steer and navigate their way through these dramatic changes and the sort of volatility they're seeing in their marketplaces.

Peter Maddison: 36:22
I like the way you put that, Dave. And on that note, I think we're at time for our conversation today. So thank you very much, Justin, and thank you, Dave. We like to wrap up for our listeners with a point from each of us. So, Justin, as our guest, if you'd like to go first: what point would you like listeners to take away?

Justin Trombold: 36:46
Yeah, I'll go first, and I'll pick up on something you both shared, because I think I've said my share of things during the discussion. I really like what you just said, Dave, about how we think about innovation in almost too sexy a way, right? It has to be some big breakthrough. And I think people who have been in agile for a while understand that's not the case. But I love what you said, and perhaps I misheard it or I'm putting my own spin on what I wanted to hear, but the idea of having a more innovative planning cycle, as a mindset: it doesn't have to be innovation per se, but it's adopting a mindset that planning can be as innovative as a new solution. I think that's an interesting takeaway, as connective tissue between, let's say, people learning how to use generative AI tools and then implementing the thought processes, the ways of thinking about problems, that correspond to what the tools can enable.

Peter Maddison: 37:53
Makes sense. Dave, what would you like to leave our listeners with?

Dave Sharrock: 37:57
My head is still spinning, really consumed with that whole idea of cross-functional teams and end-user proficiency, and bringing those aspects together. I'm just reflecting back on the early days of the agile revolution that came through a decade or two back. You really need that bottom-up piece coming in. Cross-functionality becomes essential, as you identified and we discussed in the conversation, but so does that awareness and practice of people putting things into practice, exploring and trying out different things. That end-user proficiency becomes a real benefit if you're trying to find solutions emerging from your business.

Peter Maddison: 38:48
So the one I'd like to bring in builds exactly on what you were saying there, Dave: end-user proficiency is necessary to start to break down the barriers and create those channels of communication, so that there's that opportunity to create value across the organization, because it takes everybody in the organization to do that. And this has been the end goal of a lot of agile initiatives for a very long time. I think with Gen AI we're seeing the opportunity to maybe finally achieve some of the things that have been much harder to achieve, especially in some traditional organizations.

Justin Trombold: 39:30
Yeah, at least people are motivated, perhaps, and they see the light at the end of the tunnel. Maybe in those contexts the stigma of agile being just a software development activity gets removed. It'll be interesting to see how that's responded to. And hopefully organizations are willing to take on the hard work when they see that it isn't just about the tech stack and investing dollars. You have to do some things that are going to be hard for your people.

Peter Maddison: 40:00
Yeah, that was always the case. It was also very hard for people to take it on before, but now maybe they have an incentive to do so. So with that, I'd like to say thank you to Justin, and thank you, Dave, for the conversation today. You can contact us at feedback@definitelymaybeagile.com. I look forward to next time. Thanks again, Justin.

Justin Trombold: 40:25
Thank you so much.

Peter Maddison: 40:27
You've been listening to Definitely Maybe Agile, the podcast where your hosts Peter Maddison and Dave Sharrock focus on the art and science of digital, agile, and DevOps at scale.