Definitely, Maybe Agile
Dashboards
In this episode, Dave and Peter discuss everything about data radiators and how to have metrics drive better conversations.
This week's takeaways:
- Track metrics as vectors, not targets
- Product, Team, System health
- Make metrics positive so your conversations can be too.
We'd love to hear your feedback! If you have questions, would like to propose a topic, or even want to join us for a conversation, contact us here: feedback@definitelymaybeagile.com
Peter: 0:04
Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and David Sharrock discuss the complexities of adopting new ways of working at scale. Hello and welcome, and here we are again for Definitely Maybe Agile, with my good friend, Dave, and we're going to be talking today about dashboards and data radiators and all sorts of fun things like that. So, Dave, why don't you kick us off?
Dave: 0:26
Great. So all I'm thinking is definitely maybe dashboards, or definitely maybe metrics. I don't know where we're going on that one. So this really came up because I'm having a number of conversations right now, and I think this is a very important question: how do you get insight into what's going on in your organization, teams, products, whatever it might be? There's a tendency towards RAG status and snapshots of numbers, whether those numbers are velocities or passing tests or whatever it might be. But how do we really use these radiators in a constructive way with the teams? How do we set them up? Now, I think one of the things that we probably need to touch on there is: why is absolute measurement of capacity or progress not the right way to go in a complex, uncertain, agile type
Peter: 1:25
of environment? Because we can never have an exact understanding of what capacity we... well, potentially not even what capacity we have, let alone what capacity we need. If the environment is constantly changing, then any exact number that we grab is a point in time that is no longer relevant in the next moment that we grab it. So you run into these interesting dynamics when we start to look at trying to grab and radiate metrics in these complex environments. We get a number, and we quite often find that we end up latching onto that number and then drawing all sorts of other information from it, even though it's just a point in time. It's not necessarily indicative of everything that we could possibly know about the system.
Dave: 2:17
There's an exercise that I like to do when we're talking about complexity with leaders: a spot-the-ball competition. You remember the spot-the-ball competition? This is back in the day when newspapers existed, and you'd have on the back page a spot-the-ball competition: a picture of a football match, or a soccer match if you prefer, with the ball taken out of it, and obviously there's a winner if you can pick out where the ball was. And that's, I think, part of what we're talking about. There's a big difference between a snapshot in time in a dynamic world versus a snapshot in time of something which is a lot less dynamic. Part of the fun of the spot-the-ball competition is that the ball could actually be anywhere, because that's the nature of it. You have to see the flow of events to be able to get an idea of where things are.
Peter: 3:08
So what is the purpose of grabbing this information, then, and radiating it, do you think?
Dave: 3:14
Yeah, it's interesting. I've been thinking a lot about this one, and I think the idea of looking at certain numbers to try and measure progress is flawed if we're not understanding what's going on. And it's not the numbers themselves; I think of this as vectors over targets. I want to understand the direction of change. So an example might be looking at the number of defects in your bug backlog or defect backlog. That's a single point in time.
Dave: 3:46
A number, whether it's high, low, or medium, doesn't actually tell us anything constructive about the system unless we've got a target you need to get that number below. How it's changing gives us a much more relevant conversation, if you like: are the number of defects growing? Are we reducing them? Are they stable, or are they going up? And so on. And that's just one number. If you now think of two or three different numbers, that's where we start to look at different things and can start interpreting, or at least asking the right curious questions, if you like, around what might be happening.
Peter: 4:37
I completely agree, and I see this too in the security space. When we start to look at, okay, how many security defects do we have? If you're working in the DevOps space and you start to introduce new tooling over the top of an existing legacy platform, very often the first thing you'll find is, okay, there are 10,000 vulnerabilities here, some ridiculous number that's completely and utterly something you can't deal with. So you have to start looking at this. Well, I've got two things now. One, I need to see the trend over time. I don't want that number to be growing as we make changes; I don't want us to be putting more and more vulnerabilities in. But I also want to make sure that we are reducing that number over time, that we're going down. So I'm more interested in the trend than the actual figure.
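Peter's trend-over-figure point can be sketched in a few lines. This is an illustrative example with made-up weekly scanner counts, not tooling from the episode; the shape of the calculation is what matters:

```python
# Hypothetical weekly open-vulnerability counts from a scanner (illustrative numbers).
weekly_vulns = [10_000, 9_850, 9_900, 9_600, 9_400]

# Week-over-week deltas: the "vector" tells you more than any single figure.
deltas = [b - a for a, b in zip(weekly_vulns, weekly_vulns[1:])]

# A simple trend summary: is the backlog shrinking overall?
net_change = weekly_vulns[-1] - weekly_vulns[0]
direction = "improving" if net_change < 0 else "worsening or flat"

print(deltas)      # [-150, 50, -300, -200]
print(net_change)  # -600
print(direction)   # improving
```

Note that the point-in-time figure (9,400 open vulnerabilities) still looks alarming on its own; only the deltas show that the team is steadily paying the backlog down.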
Dave: 5:26
And, sorry to interrupt, but I just wanted to add something: invariably, when you look at one trend, you need to look at multiple trends. We're using examples of the number of vulnerabilities or the number of defects, but this is a little bit more like when we're driving a car. We're looking at speed, for example, but we're also, and this is back in the day when cars didn't do all of this for you, looking at engine temperature, oil temperature, the fuel gauge. You're looking at a number of different things, not least of which is out of the window, where you're watching how things are moving relative to your position in the traffic. So, in a complex environment, one single dimension is rarely, if ever, sufficient. There are a number of different dimensions, and we have to look at the trends in each of those so that we can make informed decisions about what we need to do next.
Peter: 6:29
Yes, and I've seen this as one of the things we always recommend teams start with when they're starting to measure their performance: look at throughput, because it's the easiest thing to measure. All I need is a point in time, when did you finish the task, and I can then just count. That's easy, so I can count that up. But we've seen this as well, where throughput is increasing over time, so it looks like we're doing well. We're delivering more, we're getting more stuff done, so that's great; that means more things are happening. But then when you start to break it down and look at all of that throughput, how many of the things that we're producing are value-add, say stories, and how many are defects? Throughput might be going up, but are we increasing the number of stories or are we increasing the number of defects?
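The throughput breakdown Peter describes can be illustrated with a short sketch. The completed-item data here is hypothetical; the point is that splitting throughput by work type changes the story the trend tells:

```python
from collections import Counter

# Hypothetical completed work items: (finish_week, item_type).
completed = [
    (1, "story"), (1, "story"), (1, "defect"),
    (2, "story"), (2, "defect"), (2, "defect"),
    (3, "defect"), (3, "defect"), (3, "defect"), (3, "story"),
]

# Raw throughput per week looks like steady improvement...
throughput = Counter(week for week, _ in completed)

# ...but splitting by type shows defects, not value-add stories, drive the rise:
# throughput goes 3, 3, 4 while stories fall 2, 1, 1.
by_type = Counter((week, kind) for week, kind in completed)

for week in sorted(throughput):
    print(f"week {week}: total={throughput[week]} "
          f"stories={by_type[(week, 'story')]} "
          f"defects={by_type[(week, 'defect')]}")
```

So the same raw count supports two opposite conclusions depending on whether you decompose it, which is exactly why a single dimension is rarely enough.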
Dave: 7:28
So, yes. Well, I like that.
Dave: 7:30
So one of the ways we've approached this, or what we see a lot, is to introduce this idea of health dashboards. Instead of focusing on red-amber-green status for everything, we're looking at trends, and in particular at three things: how is the team performing, the health of the team; how is the product performing, the health of the product, and I'm thinking there from a customer's perspective, from the outside looking in, if you like; and then, how is the system that you're building the product on top of performing? So that, again, you're beginning to be able to measure, or at least understand, how the system operates, how the teams are performing and delivering, what the product is delivering, and how sustainable that is based on where things are going. You can answer a wider range of questions by understanding general directions, that vectors-over-targets idea. Certainly, in our experience, it's those three areas: team, product, and system health. What good ways have you found to radiate this back into teams?
Dave: 8:37
The obvious thing is, there are a number of different places to do that, whether it's a specific retrospective or just part of the retrospective conversation. In many cases, of course, you want to radiate upwards and outwards, because people need to know what's going on. There needs to be a recognition that if we, for example, put pressure on a team, with deadlines being set and expectations being set, what you often see is that the health of that team, in terms of continuous growth of capability, improving what the team is doing, satisfaction on the team, and so on, stagnates. It sort of dips because of the pressure being put on there. That's not to say you shouldn't have that sort of pressure coming in, but just recognize that there's a trade-off.
Dave: 9:27
So, yes, we all know that at the end of the financial year, everybody who's working on systems that contribute to the financial year-end has a lot of stress at that particular time. So how do you accommodate that? How do you set the right expectations? That's the radiate-outwards side. I think the other side is to help the team determine what is important to the team's health. Have that conversation where you're not necessarily dictating what they'll be looking at. There's certainly the expert view of, you do need to follow these two or three key areas, but what does the team consider to be a healthy environment and healthy team behaviors, so that they're engaged and involved in working towards changing those? Does that answer your question?
Peter: 10:26
Yeah, it does. I was also thinking in terms of working within a much more virtual world. When I've used radiators in the past, I remember the offices where we had big TV screens that we intentionally put up all around, so we could have very easy-to-see visual radiators. Whereas now we're in a space where I can't necessarily have that. If it's not in my immediate field of view, am I actually able to see it? I've got to go looking for it now, unless I have enough monitors that I can always have something showing me things.
Dave: 10:59
Well, it's interesting because I think that also speaks to the timeliness of it. You want something that's permanently there, that you can just pull in and see what's really going on. More and more, we see it as like the dashboard of a car or the cockpit of an airplane: you have a lot of information in front of you. If you talk to pilots, well, we've all seen the pictures of the cockpits, with buttons and dials and flashing lights everywhere, and how on earth can you possibly fly that and stay in touch with it? Two things come out of that. One is: measure a bit of everything. Get that information out there so that as something comes up, you can get more information and be curious about it. However, what you find with pilots, for example, is that they have five or six key things, and every time something's happening, they check those same five or six things, over and over again. Those core indicators always have to be front and center.
Dave: 12:09
In the cockpit it's: are you flying level, what's your fuel level, what's your airspeed, various things like this that you immediately look at to validate that you're flying where you think you're flying. The height, are you level, have you got the right airspeed, have you got enough fuel? Only then do you start looking around the sides at what's going on. Well, the same is true of your product, the same is true of your systems and the applications you're building on, and the same is true of your teams. There are these core things that we need to know are stable, and then we want the other bits and pieces around the side. I think that's where the conversation with teams or product owner groups and stakeholders comes in, because now we're really looking at: here's the core, we need to see that; but what are the things which are contextually relevant to the conversations you're having today with your colleagues and stakeholders that we can add into that?
Peter: 13:11
Yeah, and I do see teams making the mistake of measuring all of the things, from a trends perspective, to improve the area they're most interested in, but not necessarily considering what other things outside of that I'm impacting with the changes that I make or the way that I'm making them, so that I can have a truer understanding of it. For example, if I'm developing and implementing changes into a product and I'm doing that very quickly, and I'm impacting the end customers, I need a way of understanding and measuring that back, so that I know: what is the trend of what I'm doing? What am I actually doing? What is it costing me as an organization to start to do this?
Peter: 13:57
And sometimes, especially in the DevOps space, it gets wrapped up in, hey, DORA, that's everything we need to know. But that's everything we need to know about improving our engineering and DevOps capabilities. It's not everything we necessarily need to know about how we're delivering value to our customers, and for that we need to track other things. We need to have an understanding of, as you were describing earlier, the trends of all of these things, how they intersect and how they interact with each other, so we understand the real impact of the things we're trying and the changes we're making.
Dave: 14:38
As you were describing that, I was actually thinking of one of the organizations I worked with. It's been some time since I worked with them, but to this day they're still one of the most responsive organizations to what was happening, because they used those data radiators and made them available to everybody. The data radiators stretched from CPU load on the servers, right through to orders per 15 minutes or per second or whatever you might think of, right through to velocity or defects and things like that. So there was this wide range of information. But what was really interesting, especially after any sort of significant release, is that if anything was affected, everybody knew about it. And it was one of the few organizations where, by the time somebody had recognized something was trending in the wrong direction...
Dave: 15:34
...what you'd often find is that the IT teams were already working on the solution, because they were aware of it even ahead of people on the business side, just because they were going to see it and were very conscious of it. They knew what had changed, they were aware of it, they were looking in specific areas. It's one of the few organizations where I've really seen this sort of preemptive behavior. And the delivery teams really owned it: because they were involved in the dashboard and the data radiators going out, they were very consciously aware of it. It would often be on people's desktops. As you walked around, you'd see people looking at this all the time, just keeping an eye on it and making sure things were moving in the right direction.
Peter: 16:19
Yeah, it makes an awful lot of sense. So if you were to sum this up, Dave, how would you sum it up in three points?
Dave: 16:26
Well, I think the bit that really stands out is this vectors-versus-targets idea: trends being more informative in a complex, rapidly changing, dynamic, uncertain environment than a red-amber-green sort of status of where things are. The red-amber-green might tell you that snapshot of where you are now, but the trend is going to tell you: is there something I need to worry about and act on? Do I need to act quickly or slowly, or kind of ride it out and see where things are going? So that's one piece.
Dave: 17:09
I think another part is that idea of the core metrics that you immediately look at, that core health check, coupled with the peripheral pieces that come into it. In most organizations the core is going to be very similar, but the peripheral will be situational, driven by the team, the product, the engineers and so on, depending on context. And I think those are probably the two things that really jumped out. Anything that you would add?
Peter: 17:53
Out of what we were talking about? There's this concept of making sure that you're measuring the positive things, so that the conversation you're having around the data is positive too. You're not trying to measure missed goals or missed targets, because if you are, then there's no way of having a positive conversation about what the outcome might be. Instead, and this is a coaching trick in and of itself, if somebody comes to you and they're like, I can't believe we lost this, and we don't have enough of that, you say, well, what is it you want instead? Turning it around into a positive will, in general, always end up with a better conversation.
Dave: 18:29
I really like that point. I think that's a great way to end: recognizing that the purpose of all of these numbers, vectors, targets, whatever it is, is the conversation. You can set up a great conversation by having really good numbers that highlight the right things. You're not setting that up if you have the negative consequences drilled home to people; that sets it up for a poor conversation, for sure. Awesome.
Peter: 18:57
Well, thank you very much, Dave. It was a pleasure, as always; I loved the conversation. And, as always, if anybody wishes to reach out to us, they can find us at feedback@definitelymaybeagile.com. Excellent, thanks again. Talk to you soon, Peter. Thank you, Dave. You've been listening to Definitely Maybe Agile, the podcast where your hosts, Peter Maddison and David Sharrock, focus on the art and science of digital, agile, and DevOps at scale.