Definitely, Maybe Agile

How to work with audit in an agile environment

Peter Maddison and Dave Sharrock Season 2 Episode 153

In this episode of Definitely Maybe Agile, hosts Peter Maddison and David Sharrock dive into the challenging world of agile practices in regulated environments. They explore the tension between modern agile methodologies and traditional audit requirements, offering insights on how organizations can bridge this gap. The discussion covers the importance of understanding compliance needs, automating evidence collection, and transforming the audit process to align with agile principles.

This week's takeaways:

  • Organizations must understand what they need to comply with and set up systems and practices that make evidence easily obtainable without disrupting workflow.
  • Implement automation in the delivery system to capture and expose evidence of compliance, making it easier to demonstrate adherence to regulations without slowing down agile processes.
  • Shift the audit focus from document checking and stage gates to validating system behavior. This approach can make audits more meaningful and engage development teams in solving compliance challenges creatively.


Peter:

Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Good to see you again. Let's have a chat. Yes, and we've got an interesting topic, and I think you know more about this topic than you let on. Maybe. Maybe, let's see where it goes. Yeah, so I titled it this way because, as we were talking, it seemed like a good title for it.

Peter:

Like, what happens when audit shows up? I think the underlying premise of this, and this may seem like old hat to people, a lot of people have gone through it, is that most people understand the value of delivering in small increments and using agile practices for delivery, and that significantly changes your SDLC, your software delivery lifecycle. So there's a lot of change as a consequence of that. Regulation is often looking for something else, and your internal governance controls might well be written to the regulation and may not necessarily have been updated in time. And then audit shows up and says, hey, I don't care what you're doing, I don't care if you're using Agile or Thingamabob or whatever methodology. I need to see these documents, show me your evidence. So what happens?

Dave:

Well, what I find really interesting, I think this is such a fascinating topic, but it's not something that many organizations spend much time on. Even heavily regulated ones don't want to spend time on it. It feels like you're taking away from something. But what I think is really important to recognize is that a stage-gate approach is really suited to a compliance kind of audit approach, because now it becomes really obvious: okay, there are stages and there are gates, so I'm going to audit exactly what crosses that gate.

Dave:

It's like a home run. It's very, very straightforward. Great, you're using an SDLC. There are a number of different stages. I need to audit the fact that you're doing the right things as you go through the process. I don't actually have to look at how you do it, because you've got these wonderful gates where everything collapses to a document, a plan, a design, a solution, something.

Dave:

So I'm going to look at that and make sure you've done everything that you need to do. Brilliant, when your SDLC is months and months long. But what about when you're using Agile to deliver, where there's lots of change continuously being accommodated as you learn as you go? So when an audit team comes in to look at an Agile team, what should they look at?

Peter:

Well, this is typically where the problem starts, because they'll turn up and they'll say, okay, show me your requirements, and they're expecting a BRD and they're expecting to have somebody who's going to walk them through and show them exactly what's being built.

Peter:

And some people might say, ah, that doesn't happen anymore.

Peter:

I'm not going back very far in time before I've been dealing with exactly this with clients, where this is still very much a common occurrence, and it's partly driven, I think, by the fact that a lot of the external regulation the compliance standards are written to is very much waterfall-based. In a lot of cases, even when they're primarily written as guidelines, they're written as thou shalt have requirements, thou shalt have your build documentation, thou shalt have your test cases documented, et cetera. And all of these things are true, and there is some sort of logic in some of these pieces, like this idea that risk is lower if we have evidence that all of these things are occurring. It's just that, exactly as you were describing, it doesn't occur in an Agile process as a set of fixed gates, always in this order. We understand now that when we work to deliver these products, we're going back and forth between these different pieces. Not everything that we design is required or is what's going to end up in production, so the documentation isn't directly related to the risk either.

Dave:

So, well, can I? No, you finish your thought. I didn't want to jump in and just derail things, so finish your thought.

Peter:

Oh no, jump in, derail, go ahead.

Dave:

So, here's what I was thinking as you were describing that.

Dave:

What jumps into my mind is the difference between building quality in and testing for quality, and the reason I mention that is I feel that relatively few agile teams really understand how to bring those compliance and regulatory activities into what they're doing.

Dave:

So I'm thinking of one of the organizations we've worked with. They're delivering many times a day, they're pushing things into production, and they are absolutely building regulatory compliance into their process. They can point you to it, and they know exactly how they're making sure the knowledge is in the right place and that people are doing the right steps, because they're shipping working software and they have to be able to do that without testing at the end. But so many teams are doing agile delivery in an iterative and incremental way, yet they're still testing for compliance, let's call it, at a final stage, and so they still kind of have that gated mentality. And, of course, what we're trying to get to is: how do you show that we've been building in that thought process for meeting compliance or regulations as we do the building? How do you demonstrate that?

Peter:

So there are some really key pieces here. First of all, understand what it is you need to show compliance to, and really do understand that, and think about how you're going to reflect it within your standards. So understand, what am I being asked to be compliant to? Then, within the delivery system and the process, it typically comes down to a few key pieces that you're going to need to be able to show. One is, for lack of a better term, the four eyes: at least two people have looked at something for it to move forward, and we have to be able to demonstrate that. That's quite often done via code review. If you can get to the point where you can get compliance to agree that pair programming is occurring, and we can evidence that it is, that's an even better way of doing it, because code review has other problems involved in it, but we'll cover that in some other podcast. Code review is a way of being able to evidence that, because programmatically you can easily pull it out. The next part that we'd typically look for is: is there an automated test harness, or automation in the build process, that is going to test and validate? And typically what you really want is that, at the point of commit into the test environment, the build and test evidence is surfaced alongside the code review, so that I can validate that I've got an artifact that would build and that this is the code that was there to support it, and I can do the review of that in conjunction with the test results. And if I've got evidence that I've got a tested, working component of the system I'm looking to build, how does that then fit into the overarching environment? Now, there are other pieces of documented evidence that I would need around that. Quite often I need to have some evidence that I'm going to be making a technical change into the target environment.
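
As a rough illustration of the commit-point evidence Peter describes, here is a minimal sketch of a CI step that bundles the commit, its review approvals, and the test results into one evidence record that can be archived and shown to audit later. The tooling, file names and the REVIEW_APPROVALS variable are assumptions for illustration, not anything prescribed in the episode.

```python
"""Minimal sketch: capture compliance evidence at commit/build time.

Assumptions (not from the episode): the script runs inside a CI job, the
review-approval count is exposed via a hypothetical REVIEW_APPROVALS
environment variable, and tests run with pytest writing a JUnit XML report.
"""
import json
import os
import subprocess
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


def git(*args: str) -> str:
    """Run a git command and return its trimmed output."""
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout.strip()


def run_tests(report_path: str = "test-report.xml") -> dict:
    """Run the automated test harness and summarize the JUnit report."""
    subprocess.run(["pytest", f"--junitxml={report_path}"], check=False)
    root = ET.parse(report_path).getroot()
    # Newer pytest wraps results in <testsuites>; take totals from the first suite.
    suite = root[0] if root.tag == "testsuites" else root
    return {
        "tests": int(suite.get("tests", 0)),
        "failures": int(suite.get("failures", 0)),
        "errors": int(suite.get("errors", 0)),
        "report": report_path,
    }


def build_evidence_record() -> dict:
    """Assemble what an auditor typically asks for: who changed what, whether a
    second pair of eyes approved it, and whether it passed its tests."""
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "commit": git("rev-parse", "HEAD"),
        "author": git("log", "-1", "--pretty=%an <%ae>"),
        # Hypothetical: populated by the CI system from the pull-request review.
        "review_approvals": int(os.environ.get("REVIEW_APPROVALS", "0")),
        "tests": run_tests(),
    }


if __name__ == "__main__":
    record = build_evidence_record()
    with open("compliance-evidence.json", "w") as fh:
        json.dump(record, fh, indent=2)
    print(json.dumps(record, indent=2))
```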

Peter:

Traditionally, we would look to document all of the details of that technical change up front, fill in all the details that are necessary for it, and then publish that. If we've managed to get to the point where we've got agreement that the change is going to be small enough and incremental enough, we can automate the change process. But we still need to ensure that the change record is actually created, because we need a record of it, both from an operational perspective and to understand what the changes are to the target configuration items that we're looking to manage as IT assets within the target environment. I can see you smiling at me now, because I can talk about this for a long time. So those are some of the key elements of it, but that starts at the commit point. Going back even before that, to requirements...
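
A sketch of what automating that change record might look like, assuming the change-management tool exposes a REST endpoint and that small, incremental deployments qualify as pre-approved standard changes. The endpoint, payload fields and response shape are all hypothetical, used here only to show the shape of the idea.

```python
"""Minimal sketch: raise a standard change record automatically during
deployment instead of filling a form in by hand.

Assumptions (not from the episode): a hypothetical CHANGE_API_URL endpoint,
pre-approved "standard change" templates for small incremental deployments,
and a pipeline that passes in the service name and version being released.
"""
import os

import requests

CHANGE_API_URL = os.environ.get("CHANGE_API_URL", "https://change.example.com/api/changes")


def create_change_record(service: str, version: str, commit: str, evidence_url: str) -> str:
    """Create a pre-approved standard change and return its identifier.

    The payload records which configuration item is being modified and links
    back to the build/test evidence, so there is an operational record of the
    change without pausing the pipeline for manual approval.
    """
    payload = {
        "type": "standard",             # pre-agreed as low-risk and incremental
        "configuration_item": service,  # the IT asset being changed
        "description": f"Automated deployment of {service} {version}",
        "commit": commit,
        "evidence": evidence_url,       # e.g. the archived compliance-evidence.json
    }
    response = requests.post(CHANGE_API_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["id"]


if __name__ == "__main__":
    change_id = create_change_record(
        service="payments-service",
        version="1.42.0",
        commit="abc123",
        evidence_url="https://ci.example.com/builds/1042/compliance-evidence.json",
    )
    print(f"Change record created: {change_id}")
```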

Dave:

I was going to say, I don't know how to pause. Okay, before we go there, and I'll ask for that in just a second, what I wanted to draw out, keeping it simple, is a couple or three things that jump out in my mind. This is what I've seen work really well, the three things that I think you're describing, and you'll tell me what I've missed. The first one is about knowledge, and I always find, and I think this is the starting point and something that we really want to recognize, that so often I have some expert in compliance over here who knows the regulations and can start quoting paragraphs and sections from them. But the development individual, let's talk about information security and data, the person who's responsible for manipulating data, reading it and storing it, playing with it in the system, may not be familiar with those regulations.

Dave:

So the first point I think of, in terms of education, is making sure that those who are directly influencing what's being regulated, as part of their remit, if you like, the expectation is they know the regulations within which they're working. Right? It's a little bit like, if I'm a football player, I know the rules of the game.

Peter:

My one caveat on that would be that you need an organizational mechanism to enable it, and there are a couple of different pieces here that happen within a delivery system. I usually call them in-band and out-of-band, but you can talk about them as in-loop and out-of-loop. So there are sets of regulations and controls that need to occur on every iteration: every time we're going to make a technical change into the target production environment, these things need to happen. There's another set of controls that exist outside of that, which occur on a different cadence, and they can be all sorts of things: updating disaster recovery plans based on changes to the target system, understanding capacity of the system, understanding architecture, and so on. A lot of these types of things you don't need to revisit every single time you make a deployment. Now, because we have these different cadences at different rates...

Peter:

What you find very, very often is that the person who owns the system, the system owner who is responsible for ensuring the system is built in compliance with those controls, can't be expected to know the 500 controls that are sitting in your standards documents.

Peter:

They're simply not going to. So you need a mechanism in the organization to help them with the piece that you mentioned, which is giving them the knowledge to understand what the things are that they need to be aware of and care about. How do you enable them to know that? And in a large, complex environment with so many moving parts, the last thing in the world you want is for that system owner to have to go to 15 different governance bodies and be expected to understand everything they need to do across all of those. Because I can tell you for a fact, they don't and they can't, and it's not out of anything nefarious either. It's just not something that a single individual is going to be able to do. So you need a mechanism to be able to make that easier.

Dave:

Yeah, so let me put it in a slightly different way, which is: within my span of influence or control, whatever it is I work on, I need to understand what my responsibilities are against a given set of regulations. And one of the reasons I mention that is, in the organizations we've worked with where we've worked at pushing this into the team so that the teams can move quicker, the first thing you end up doing is education. Because you've now got to go to people who would normally hand things off to somebody who's going to go and test it over here. Well, that validation is now, as you said, the four-eyes principle, but it's probably on the team or between teams. So now I need to get that knowledge into the team rather than rely on individuals outside of these teams who can do that.

Dave:

The second thing I wanted to pick up is what you talked about in terms of what is actually being built. This is where, for so many teams, we all have Jira tickets, stories, whatever we're using, and we know that this idea of what we're going to build, that we take into a sprint, morphs as we go through the sprint, and we may build it in a subtly different way. Now, if we're not in a regulated, compliance-type environment, that morphing is not a critical thing in most cases, so the definition of what we built doesn't get updated. Maybe, if we're smart, we're capturing it in some technical documentation through the definition of done and so on, but basically we're not being disciplined about making sure that what we committed to build, what we actually built, and what is recorded as having been built all line up as we go through. And I mention that just before you dive in and tell me where I'm wrong.

Dave:

But I mention that because, again, there are these little things like version control of these requirements, not at the entry point, not at the product owner's "this is what we need to build," but at the "hey, this is now complete, we're going to push this live." And we can do really smart things in there. We can version control some of those systems. We can tie them to running automated tests that validate the functionality change that we've built. All of that is a disciplined kind of additional layer of work that needs to be done, but it's also a discipline. When you work in these regulated environments, you get these teams that are super disciplined about it. They know they have to get these things in place so that they have that audit trail.
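
One way to make that discipline cheap is to generate the traceability from the delivery system rather than maintain it by hand. The sketch below assumes a hypothetical story-ID convention referenced in commit messages and test files; the specifics would differ per organization and toolchain, so treat it as illustrative only.

```python
"""Minimal sketch: a traceability matrix linking what was asked for to what
was built and tested.

Assumptions (not from the episode): story identifiers follow a hypothetical
"PROJ-123" pattern, commit messages reference the story they implement, and
automated tests mention the story ID in their name or docstring.
"""
import re
import subprocess
from collections import defaultdict
from pathlib import Path

STORY_ID = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-123 (hypothetical convention)


def stories_in_commits(rev_range: str = "HEAD~50..HEAD") -> dict[str, list[str]]:
    """Map story IDs to the commits that reference them."""
    log = subprocess.run(
        ["git", "log", "--pretty=%h %s", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    found: dict[str, list[str]] = defaultdict(list)
    for line in log:
        sha, _, message = line.partition(" ")
        for story in STORY_ID.findall(message):
            found[story].append(sha)
    return found


def stories_in_tests(test_dir: str = "tests") -> dict[str, list[str]]:
    """Map story IDs to the test files that mention them."""
    found: dict[str, list[str]] = defaultdict(list)
    for path in Path(test_dir).rglob("test_*.py"):
        for story in set(STORY_ID.findall(path.read_text(errors="ignore"))):
            found[story].append(str(path))
    return found


if __name__ == "__main__":
    commits, tests = stories_in_commits(), stories_in_tests()
    for story in sorted(set(commits) | set(tests)):
        print(f"{story}: commits={commits.get(story, [])} tests={tests.get(story, [])}")
```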

Peter:

Yes, yeah, and the interesting part there is that some of the stuff that happens on the left, if you like, the stuff within that requirement, what stories we got and what it is we ended up building, some of that we're regulated to capture, but its actual value from a risk-prevention perspective is somewhat minimal beyond "did we build what we said we would?" What we're really looking for there, from a risk perspective, is: are we just randomly going off and building stuff that has no business purpose, that is not intended to meet any kind of business need? Can we show that, organizationally, what we're doing relates to what we're trying to achieve? And then there's the piece about the thing that we've built, understanding what that thing is and what it does. That's where it gets critical from a risk perspective and a technical perspective.

Peter:

We want to know: what is that thing, what is it connected to, what libraries does it pull in, what are the dependencies? Because we need to know that from a dependency-freshness perspective. The other piece that is sometimes hard to get across to non-technical people is that even if I have a system running in production, if it's connected to a library that has been pulled in, even if it's a static thing, when that library changes and now has a vulnerability in it, I need to know all the systems I have running that have that library in them, so I can go change them. So you need the discipline in place to be able to capture that and bring that information in so you can easily find them. Otherwise it becomes very disruptive to your technology organization to go and try to find them.
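
A minimal sketch of the "which running systems have that library?" question, assuming each service publishes its resolved dependencies to a shared inventory. In practice this usually comes from SBOMs or a dependency-scanning tool; the file layout and names here are purely illustrative.

```python
"""Minimal sketch: answer "which services include a vulnerable library?"

Assumptions (not from the episode): each service writes its resolved
dependencies to one requirements-style file per service in a shared
inventory directory; real setups would query SBOMs or scanning tools.
"""
from pathlib import Path

INVENTORY_DIR = Path("dependency-inventory")  # hypothetical location


def affected_services(library: str, bad_versions: set[str]) -> dict[str, str]:
    """Return {service: pinned_version} for services pulling in a bad version."""
    hits: dict[str, str] = {}
    for manifest in INVENTORY_DIR.glob("*.txt"):
        for line in manifest.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, _, version = line.partition("==")
            if name.strip().lower() == library.lower() and version.strip() in bad_versions:
                hits[manifest.stem] = version.strip()
    return hits


if __name__ == "__main__":
    # Example: a newly announced vulnerability affecting two versions of a library.
    impacted = affected_services("somelibrary", {"1.4.1", "1.4.2"})
    for service, version in sorted(impacted.items()):
        print(f"{service} pins {version}; schedule an upgrade")
```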

Dave:

So, what you're doing, and the way I end up working with teams on this, is a couple of things. One is knowing what you're deploying, what you're pushing live, so where the changes are and what's going on there. That's one aspect, and I think there's an extension to that, which is: what are the dependencies, what libraries are being used or not used, and what has changed there?

Dave:

And that's very much that part of the release process.

Dave:

It's somewhere in there, and the reason I'm mentioning this is that we want to either automate it or make it as cheap and quick as possible, because if you're going to release 10 times a day, I can't be knocking on somebody's office door, metaphorically, and having a quick chat with them.

Dave:

It has to be something which is in the process: either, from a discipline perspective, we're updating the right pieces of information or, ideally, from an automation perspective, we're updating the right things automatically. But then there's a second piece that often gets ignored because it's off somewhere else, which is monitoring and instrumenting to see that the change that we believe is going to do A, B and C isn't somehow breaking the behavior of the system somewhere else. So that whole bit of monitoring, it's not just uptime and performance, it's: are we seeing some unusual kind of behavior in the data, in how data is being used, or whatever it might be? There are a number of things. So how do you monitor and instrument that? Part of it is that somebody somewhere has to be looking at it, or be flagged when they're getting pulled in. And all of this ties back to where we started: what happens when audit shows up.
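
A toy sketch of the kind of behavioral check Dave is pointing at: comparing a metric such as records read per minute against its recent baseline and flagging anything unusual. The metric, the data source and the threshold are assumptions for illustration, standing in for whatever monitoring system is actually in place.

```python
"""Minimal sketch: flag unusual system behavior, not just uptime and performance.

Assumptions (not from the episode): a metric like "records read per minute"
is already collected and can be fetched as a list of recent samples; the
fetch function below is a stand-in for the real monitoring system.
"""
import statistics


def fetch_metric_samples() -> list[float]:
    """Stand-in for querying the monitoring system; returns recent samples."""
    return [120, 118, 125, 119, 122, 121, 117, 123, 119, 410]  # last value is suspicious


def is_anomalous(samples: list[float], z_threshold: float = 3.0) -> bool:
    """Flag the latest sample if it sits far outside the recent baseline."""
    baseline, latest = samples[:-1], samples[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero on flat data
    return abs(latest - mean) / stdev > z_threshold


if __name__ == "__main__":
    samples = fetch_metric_samples()
    if is_anomalous(samples):
        # In a real pipeline this would page someone or open a ticket, not just print.
        print(f"Unusual data-access rate: {samples[-1]} vs baseline ~{statistics.mean(samples[:-1]):.0f}")
```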

Peter:

And this is why it's critical to understand what you're going to be able to do within that delivery process and what you're not. Because if you start to write standards that have absolutes, like every application must pass a hundred percent of its tests before being deployed into production...

Dave:

You're going to get rid of tests, because some of them are failing.

Peter:

Yeah, cause I'm going to go.

Peter:

I'm going to remove the tests that fail, because I know I have to get this into production. Or, if I'm building a mobile app, if I'm deploying it across multiple different devices and I've got lots and lots of tests running across those, it is going to fail on certain types of devices and certain types of screens. I've just got to make a judgment call as to which of these work and which don't, so I can create a heat map of the different pieces, all sorts of things. So a lot of it depends on what type of system am I deploying, what system am I changing, because all of that drives these different pieces. So you've got to be careful, because regulation and policy tend to be broad and sweeping, so try to avoid absolutes that then kill you.
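
One way to write such a standard without the absolute is to express it as a threshold plus a reviewed list of accepted exceptions, for example known device or screen combinations. The sketch below is illustrative only; the test names, allowlist and threshold are made up.

```python
"""Minimal sketch: a release gate written as a threshold with documented
exceptions, rather than an absolute "100% of tests must pass" rule.

Assumptions (not from the episode): test outcomes are available as
(test_id, passed) pairs and known, risk-accepted failures live in a
periodically reviewed allowlist.
"""

# Known, risk-accepted failures, reviewed periodically rather than deleted.
ACCEPTED_FAILURES = {"ui_render[old-tablet-7in]", "ui_render[legacy-browser]"}
MIN_PASS_RATE = 0.98  # policy threshold agreed with the compliance function


def gate(results: list[tuple[str, bool]]) -> bool:
    """Return True if the release may proceed under the agreed policy."""
    considered = [(test, passed) for test, passed in results if test not in ACCEPTED_FAILURES]
    if not considered:
        return False  # nothing left to judge; treat as a blocked release
    pass_rate = sum(1 for _, passed in considered if passed) / len(considered)
    print(f"Pass rate {pass_rate:.1%} against threshold {MIN_PASS_RATE:.0%}")
    return pass_rate >= MIN_PASS_RATE


if __name__ == "__main__":
    outcomes = [
        ("checkout_flow", True),
        ("payment_capture", True),
        ("ui_render[old-tablet-7in]", False),  # known limitation, accepted above
        ("ui_render[new-phone]", True),
    ]
    print("Release allowed" if gate(outcomes) else "Release blocked")
```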

Dave:

Generalities and absolutes, where everything is done this way and everything is done that way and everybody must fill this form in, or whatever it might be.

Peter:

It becomes aspirational and unenforceable. You simply cannot actually do it in a large, complex, heterogeneous environment with so many different moving parts. It may be true for 90 percent, but there are going to be those edge cases that are going to be very, very difficult to do.

Dave:

So we started this conversation with what happens when audit shows up, and one of the things that I've seen happening, and I think it happens less and less nowadays, but it was a bit of a headache, is audit showing up and saying, show me your SDLC and the stage gates, and then just sitting down and going, you're going to fail the audit because you clearly don't have stage gates to do A, B and C.

Dave:

We have a problem, because what we've been describing is behaviors and sort of micro-gates, if you like, built into the process. What ideally we're going to find is that the audit conversation is guided by: this is how we're handling this, now let's go and look at the data that shows it. Let's look at version control for all of the changes to the system, how many changes are being made which aren't locked in, so we can really validate that process happening in real time, measuring that and what changes there are. Now audit isn't a checklist to say, well done, Mr. Maddison, you're doing all the right things, because you've shown me a piece of paper and an agenda for a meeting and some documentation.

Dave:

It's a little messier, because now we're able to start saying, here is what you would expect to see in our systems, so let me show you what's happening. Here are our logs of everything that's been released in the last three weeks, or the last 72 hours, and what's going on there. So now we're getting a bit messier, and we're beginning to look at things in a subtly different way.
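
A sketch of the kind of "pull it from the system" answer Dave describes, assuming for illustration that releases are tagged in git with a release- prefix; a real setup would more likely query the deployment tooling directly.

```python
"""Minimal sketch: list what was released in the last N days straight from
the delivery system, so the audit conversation starts from live data.

Assumption (not from the episode): releases are tagged in git as "release-*".
"""
import subprocess
from datetime import datetime, timedelta, timezone


def recent_releases(days: int = 21) -> list[tuple[str, str]]:
    """Return (tag, ISO timestamp) pairs for release tags created in the window."""
    out = subprocess.run(
        ["git", "for-each-ref", "--sort=-creatordate",
         "--format=%(refname:short) %(creatordate:iso-strict)", "refs/tags/release-*"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    releases = []
    for line in out:
        tag, _, stamp = line.partition(" ")
        if stamp and datetime.fromisoformat(stamp) >= cutoff:
            releases.append((tag, stamp))
    return releases


if __name__ == "__main__":
    for tag, stamp in recent_releases():
        print(f"{stamp}  {tag}")
```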

Peter:

But it's actually more real, and ideally where you want to get to is, instead of "oh no, audit is showing up, and that's a nightmare because it's going to distract from everything," you want the system to be such that audit coming is a good thing, because it can help me identify where I need to improve, what aspects of my system I may have missed, what the things are I should be doing. So building that in, and understanding the value that can come from having that external perspective, can be interesting as well. I think we should wrap this up, because otherwise I can talk about this all night.

Dave:

Yeah, yeah, let's do that.

Peter:

Oh, I have to give you a summary. So, right, okay. What happens when audit shows up? Three things.

Peter:

One: be prepared and understand what it is you need to be compliant to. So understand ahead of time what the expectation is, what you need to have in place, and make sure that you're setting up the systems and practices in how you do delivery so that the evidence is easily obtainable, so it doesn't completely derail everything and grind things to a halt. That's point number one. The second point is to understand that there are different aspects of this, and this is on either side of the fence, whether you're coming at it from the audit perspective or from the system owner perspective. You cannot expect the system owner to know absolutely everything about every possible regulation and still do their job and manage all of this inside their environment. So you need to frame it for them to have the understanding they need, and there are mechanisms we can use to make that easier for them. That's where we talk about things like lean control, which are really good practices that you can put into place to help with this.

Peter:

And the third piece, I'd say, as I was touching on when I started talking about the different things you can automate, and there's a much longer list than that: if you're going to start delivering at speed, look to automate as much as you can into the delivery system itself and expose the evidence out of it. Actually, if I'm allowed a fourth one, it would be the last one you brought up at the end there, which is make audit valuable. But there you go.

Dave:

Maybe I'll touch on making audit valuable, because I'm just going to say one thing here, which is: I can imagine sitting down with a dev team, ops, DevOps teams, agile teams, and having this conversation. It's really interesting, because we're solving a problem in a technical space with a technical way of looking at it.

Dave:

So the only real thing I want to call out is, if we can move audit away from document checking and stage-gate checking and into "show me how the system is validated," I think one of the things you do is you bring the development team to the table. They want to solve it. It's actually great: we're seeing how the engine is running, we're gamifying delivery to make sure we're within these bounds, we're hitting everything that we need to. It's actually really interesting when we start looking at it like that.

Peter:

But it doesn't start that way, no, no. And the last one: agile audit is a thing, and there is agile in audit. One of the key factors there is to take a look at how your audit process is done today and think about how that can be improved, if audit is willing to come to the table and work with you on becoming more incremental in the way it does its delivery and its engagement with you, which can have a really big impact as well. So thank you, Dave.

Dave:

Peter, thanks again, great conversation, it was fun.

Peter:

Well, I hope this was valuable to our listeners. You can send us feedback at feedback@definitelymaybeagile.com, and don't forget to hit subscribe. Thank you, Dave. You've been listening to Definitely Maybe Agile, the podcast where your hosts, Peter Maddison and David Sharrock, focus on the art and science of digital, agile and DevOps at scale.
