Definitely, Maybe Agile
AI's Role in Modern SDLC
In this episode of "Definitely Maybe Agile," Peter Maddison and David Sharrock explore the application of generative AI in the Software Development Lifecycle (SDLC). While many organizations focus solely on using AI tools like GitHub Copilot for coding, the hosts discuss a broader vision of how AI can enhance the entire development process, from ideation to maintenance. They delve into innovative concepts like organizational knowledge agents and AI-assisted work prioritization systems.
This week's takeaways:
- SDLC optimization - A key value creation area that continues to evolve through new approaches and technologies.
- AI applications extend beyond developer tools - Moving past coding assistance to improve productivity across multiple roles and touchpoints.
- AI as an organizational assistant - Understanding company knowledge, refining ideas, and improving prioritization and decision-making processes.
Want to join the conversation about AI in SDLC and digital transformation at scale? Share your thoughts and feedback at feedback@definitelymaybeagile.com, and don't forget to subscribe to the podcast to stay updated on future discussions about agile and DevOps practices.
Peter:Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Hello, Dave, how are you doing?
Dave:Peter, great to catch up with you yet again.
Peter:Yes, it's great. It's been a long week and we're recording at the end of the week this time, and I'm quite happy to see you, because it's nice to have a conversation about something different.
Dave:I was going to say, one of the things I enjoy is a friendly face. We've shared a lot of similar experiences, and we can have a bit of a conversation about how we'd like the world to be better.
Peter:Yes, yes, that's our goal in life: how do we improve, how do we help the world improve? And speaking of which, today's topic has come up a couple of times for me this week, in various different forums: this concept of applying gen AI to the SDLC, and how you can use it to improve the SDLC, the software delivery lifecycle, software development lifecycle, however you want to translate it. You could also say: how can I improve DevOps? But then there's this piece of, is that all of it?
Dave:Well, I wanted to say, because when you say that, the immediate thing that jumps to mind is: hey, we've got GitHub Copilot, let's get development to be way better. And there's some number, a 37% improvement or whatever, floating around, depending on what you read. But I think you're talking about a lot more than just how we can help our developers become more productive or better at their role, or how we can help testers become better at their role. It's actually much more interesting than that, right?
Peter:Yeah, because I have a somewhat broader view of what the SDLC is. When we're developing software, it starts from ideation and goes all the way through to maintenance and management once it's in the hands of the consumer, and the feedback loops that are built into the entire end-to-end system. So my definition is somewhat broader.
Dave:Well, I think everybody's definition of software delivery starts with that ideation and analysis and runs all the way through. But what I find interesting is that as soon as you talk about applying gen AI to productivity improvement, everybody takes that SDLC, which is long, narrows it down to code commit through delivery, and says the developers, this small group of people over here, can benefit from it, without understanding what else is in that whole journey that can be improved.
Peter:And one of the pictures I like to draw to show this has fifteen or so boxes going across, with all the different activities that need to occur along the way. The developer is involved all the way through, in every single one of these boxes, but the bit that's helping with the coding lands on just two of them. There are all these other activities that happen, from product definition to understanding architecture to understanding design. It's not just about coding.
Dave:Well, it's interesting, because in the conversations we have, we often start from the outside looking in. Just as an example, in the Agile and DevOps space, we'll start with scrum masters, project managers, or say BAs or product owners, and we're looking at how large language models can accelerate their ability to create the inputs for a lot of that SDLC process. So it feels like we're at least looking at two ends there: the entry points, the way work flows through, and then the technical work. But even there, I feel that's barely scratching the surface of how you can really use some of these emerging practices and toolsets to condense that whole SDLC process in the right way. I mean, we've got to be careful we don't have the throw-the-baby-out-with-the-bathwater problem you often get when you focus purely on efficiency.
Peter:Yeah, because the models can be very self-reinforcing.
Peter:If questions are not asked in the right way, it'll lead you down a path. If you say, hey, tell me how to do this, then it'll tell you how to do this, and you'll end up going in a particular direction it's guiding you in, one that may not have opened your eyes to the other possibilities.
Peter:So you've got to ask the right questions, you've got to be led in the right way, if you're interacting with it like that. And there are a couple of other interesting avenues to this that I've been exploring in some of these conversations. One, and we were chatting about this a little before the call, is this idea of what I'd call agentic behavior: an agent that has learned from all the material in your environment, which you can ask questions of. You can start to think: has anybody else in my organization done something like this? How have they done it? If I want to build a solution that takes these types of files as input, how do we do that here? Being able to ask questions of an agent that can pull from the organizational knowledge helps you determine the best solution.
Dave:Right. Now, what I think is really interesting here, and probably worth exploring right now, is the role you envisage asking those questions of the agent that has been trained on the past experiences of the organization. You can immediately see it from a developer, testing, or, say, solution design perspective. But you can also go to an architecture or enterprise architecture view, or regulatory perspectives. A lot of these different skill sets, each with very specific needs, can go down that route.
Peter:Yeah. Imagine if, from a governance perspective, I want to understand how many people have or haven't done a particular thing. I can just ask the agent to go and pull the information from the right places and give it back to me in the way I need, so I can demonstrate compliance, collect evidence, or do the other things I need from a governance perspective.
Dave:So you're looking at that from an audit perspective, or visibility, as an example. But you could also imagine it from an architecture perspective, putting a design together, or any number of these things.
Peter:Yes. From an architectural perspective, and I've seen some organizations doing this, it's about understanding what design decisions are occurring. Where is the commonality across them? Are design decisions being well documented? Do we have consistency in the way we're doing things, that type of thing? You can start to see ways in which you can use these technologies to pull that together.
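The organizational knowledge agent the hosts describe maps closely onto what is often called a retrieval-augmented generation pattern: index internal documents, retrieve the ones most relevant to a question, and have a model answer grounded only in those. Below is a minimal sketch of that pattern, in which embed() and complete() are toy stand-ins, not any specific product's API; swap in whatever embedding model and LLM your organization has access to.

```python
# Toy sketch of an organizational knowledge agent: retrieve the most
# relevant internal documents, then ask an LLM to answer grounded in them.
from dataclasses import dataclass
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: hashes words into a fixed-size vector.
    # Replace with your organization's actual embedding model.
    vec = [0.0] * 64
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def complete(prompt: str) -> str:
    # Stand-in LLM call; replace with whatever model you have access to.
    return f"[model answer based on {len(prompt)} chars of context]"

@dataclass
class Doc:
    source: str  # e.g. a wiki page, design record, or repo README
    text: str

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def ask_org_agent(question: str, corpus: list[Doc], top_k: int = 3) -> str:
    """Answer a question using only the organization's own documents."""
    qv = embed(question)
    ranked = sorted(corpus, key=lambda d: cosine(qv, embed(d.text)), reverse=True)
    context = "\n\n".join(f"[{d.source}]\n{d.text}" for d in ranked[:top_k])
    prompt = ("Using only the internal documents below, answer the question "
              "and name which teams or documents are relevant.\n\n"
              f"{context}\n\nQuestion: {question}")
    return complete(prompt)

# Example question, in the spirit of the conversation: "Has anyone in my
# organization built a service that ingests these types of files?"
```

The same retrieval step serves the governance and architecture uses mentioned above; only the question and the document corpus change.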
Dave:Every time you're describing this, Peter, what I'm envisaging is you putting your policeman's hat on; it feels like an audit-police type of approach. But when we started chatting about this, I got the impression there's a lot more to it than that. It's not just about proving what you're doing, or what is being done, across the organization, right?
Peter:So this was starting down the path of: if I have an agent I can ask that is able to pull organizational knowledge back, what sorts of things might I want to ask of it? That could be how I might solve a particular problem, or how I might collect certain types of information. There are also, of course, other uses: more generally, how might I solve this? What are the ways other people might have gone about it? Or, what if I want to explore how different pieces come together?
Peter:And that got us into the conversation around how these large language models are built from prior knowledge; they're only going to tell you about things they already know about. But then we also touched on the innovation perspective. Most of what we call innovation, not all, but a lot of it, is really just building on other solutions and iterating from where we were before. So you can see where these tools can help in that space too, to help you through that process.
Dave:Well, I'm kind of smiling here, because if you look at an SDLC process, there's a point at which work has been identified and agreed, and it gets passed into the okay, let's build this out and get the work delivered. And I think the conversation we've had right now is all about the let's-get-this-thing-done, build everything we're being asked to do.
Dave:What I find quite interesting is there's a lot of churn, and certainly turmoil, in many of the situations we work with, where the ideation and, let's think of them as exceptions or executive orders, requests for work, become incredibly disruptive. There's a lot of work done there to prioritize across the organization, and there's a hierarchical kind of pressure that comes in: how can you say no to certain key stakeholders, and so on? And I'd love to explore the idea of using an agent as a filter, something that says: before you bring in your great, well-thought-out, innovative idea, it has to go through this agent to make sure it's relevant to strategic objectives and all the things that we know.
Dave:Sometimes they're not.
Peter:Yeah, I don't think you quite mean filter it out. Or maybe you do. Sometimes the answer for some ideas really should be: this is not the right time for your organization to do this.
Dave:Well, I actually think, if I consider the role of any sort of intake or product management function, a lot of the challenge is getting work that is in line with stated strategic objectives. There, you can filter things out. You can say: this concept is not going to move your strategic objectives forward, therefore it's lower priority than these other things which will.
Peter:And at perhaps even the simplest level: if you want to bring something forward to be prioritized, have it automatically reviewed. Have you captured everything required by a definition of ready? Is it well thought out and well written? Does it have the information necessary for the organization to move forward on it? And if not, here's where the gaps are, here's where you should go to find the information you need, like you need to go get a project code, or whatever else you need to do.
Dave:Well, I mean, it could probably do a lot of that for you, if we know about it.
Peter:Yeah, and we can automate all of those pieces in the back end, please. So let's figure out what those parts are.
Dave:I kind of love this idea of a definition of ready for work requests, especially because I'm working with an organization where there is tremendous pressure in pushing back on certain work requests coming in. There should be a bit of tension there. We've got to recognize that people get to a certain position in their organization because they have a lot to offer. But that pushback shouldn't have to cover the hygiene stuff: have you had the conversations with the right people?
Peter:Well, there's a simple piece there: validating that the person, despite being the smart person who got into the role they have for all the right reasons, has actually thought through the consequences of the thing they're asking for. That can, in turn, help them build on their idea. Maybe you don't actually mean that you want people to run off and build a new solution over the next six months. Maybe what you really mean is: could one person go and explore or experiment with this particular idea, come back, answer this question, or think about how we might go about doing it, before we dedicate masses of capacity to making it happen and creating a lot of churn? And that brings us back, a long stretch if you like, to the Agile Manifesto and one of its core principles: simplicity, the art of maximizing the amount of work not done.
Dave:So many organizations end up doing work that, ultimately, they should probably never have done. That's the biggest lift in productivity, if you're able to catch it, and it comes at the beginning of the SDLC process. It doesn't come in design, it doesn't come in develop and test; you should just never have started.
Peter:It's the triage piece. It's like this is not something we should start on.
Dave:Yeah, and as we're exploring this, part of it is: can we filter things out? But actually it's: how do we have the conversation that strengthens the case for the work, or identifies these things which are really going to provide the leverage that we're looking for?
Peter:Exactly. So the positive side of this is: how do you make this an even stronger argument? And by going through the process of making a stronger argument, you'll probably learn whether or not the idea has legs.
Dave:And as you're describing that, I almost feel like one of the obvious things that could come out of that sort of agent is: what's the smallest, most impactful experiment we can find and go try out, to get some data that proves where we're going? Not only is that a really interesting way of allowing progress to be made, rather than it being a go/no-go gate, but it's also a really difficult place to get to. Designing experiments and testing the viability of ideas is really tough to do, and some form of gen AI agent, I think, could do a lot there to help build something that's robust and testable.
Peter:Yes, I agree. So, with all of that in mind, how do we sum this up in three points for our audience?
Dave:I don't know if I'm going to get to three points. We started this with optimizing the SDLC, and we've spent decades doing this, whether through Agile and DevOps or just through working on efficiency, structure, and organization, everything around how products get built. What I really liked is that optimization comes from bringing new technologies to the table, and while there are some obvious conversations driven on the developer and testing front, your conversation expanded way beyond that. So, two great takeaways. One: SDLC optimization remains, and will always remain, a really key area. It's where value is created in an organization, so it is always being challenged and tested, and there are always new ways to look at it. Two: these AI approaches, whether it's Copilot improving the productivity of individual roles or something broader, can be expanded way beyond where most organizations are talking about them today.
Peter:Yes. And to wrap it up, there were two main pieces we touched on within that. One was this idea of something that understands your organization, so you can ask questions of it to make things easier. The other was something that helps you better refine your ideas, so that your process of prioritization, breaking down work, or even starting work on the right ideas, is much clearer.
Dave:It just sounds like a really interesting place to work. I don't know how that would go, but it would be an interesting one to explore.
Peter:Yeah, excellent, awesome. Well, thank you as always, Dave, always an interesting conversation. And with that, we'll wrap up for today. If anybody wants to reach out, send feedback to feedback@definitelymaybeagile.com, and don't forget to hit subscribe.
Dave:Yeah, thanks again. Talk to you soon.
Peter:Thanks, Dave.
You've been listening to Definitely Maybe Agile, the podcast where your hosts, Peter Maddison and David Sharrock, focus on the art and science of digital, agile, and DevOps at scale.