Definitely, Maybe Agile

We need more QA, developers should be developing

Peter Maddison and Dave Sharrock Season 1 Episode 28

This week on the Definitely, Maybe Agile podcast, Peter Maddison and Dave Sharrock explore the role of QA within teams, including some of the major issues that arise when you don't test.

Join us on this week's takeaway:
— Don't separate the QA team
— Testing capability must be integrated 
— Identify the problem first, so you can focus on the right areas.

We love to hear feedback! If you have questions, would like to propose a topic, or even join us for a conversation, contact us here: feedback@definitelymaybeagile.com

Peter  00:04
Welcome to Definitely Maybe Agile, the podcast where Peter Maddison and Dave Sharrock discuss the complexities of adopting new ways of working at scale. Hello and welcome to another exciting episode of Definitely Maybe Agile with your hosts, Peter Maddison and Dave Sharrock. So what are we talking about today, Dave? 


Dave 00:20
Well, I think the intention was to touch on that topic in the corner of every agile team, which is QA: how much QA is the right level, or how many quality assurance folks are the right number to have on a team. And I think having QA on the team is good. 


Peter 00:41
Having a separate QA is actually where we start to see problems. 


Dave 00:46
Right, okay, so you've picked up immediately on my assumption, which is that obviously you have QA on the team. And it's funny you mention that, because it's not something you see all the time. In fact, very often QA is not just not on the team, they're in a different location, a different time zone, even a different company, in many cases trying to inspect for quality after the hard work, and the damaging work, has already been done. 


Peter 01:12
Yeah, and you can't inspect quality into a system. This is classic Deming, and it should be well understood; one of those quotes is along the lines of: you cannot inspect quality into a product. You can't put quality in just by inspecting, or by putting a QA there to check it. That's not going to produce any more quality in the product you're creating. So having that QA team, or making that QA team bigger, isn't going to make your product any higher quality. 


Dave 01:46
I find it interesting, as you're saying that, Peter: the reminder that the journey you and I are on, that many of us are on, has as its roots that focus on total quality management. Quality was where that discussion started, and over the decades we sometimes forget that it was the primary driver for these discussions. So "build quality in," one of those lean disciplines we often refer to, is exactly that: you can't install quality afterwards. So right from the outset, how do we get quality assurance folks on the team, in the development cycle, not sequentially post-development? 


Peter 02:28
Yes, and by being on the team, you can push the tests down, and the quality, into the way the developers are developing, into how you can start to test, check, and validate right up front. You're not waiting for later stages for that to occur; you can start to look at different ways of understanding and improving from the beginning. Which is why, when somebody sent me a note saying, hey, we need more QA because we need our developers to be developing, they need to stop doing testing and other things, my reaction was: whoa, that is just such a wrong idea. This is not the way to approach this. So my immediate response was more around: what are the things likely driving that desire or that behavior? Then I could say, hey, these are the places I would look. 


Dave 03:23
Well, I feel that once you've got QA on the team, the next challenge to overcome is the them-and-us mentality between, let's pretend, development and QA. I don't want to pigeonhole everybody in that particular space, but there is the attitude of: my responsibility is this, 

03:43
Your responsibility is something different, and I'm not going to step into your area of responsibility, and I don't expect you to step into mine. I think a big piece of this is that the strongest, most powerful agile teams we've bumped into have much more of a team approach. There are definitely specializations, no doubt, but if I can see one of my specialists struggling for whatever reason, we're going to step in and say, hey, is there something we can do to take load off you, to make it easier for you? And that goes both ways. As soon as I introduce quality assurance as part of that specialization, does that mean my developers have to test? Well, no. But does it mean my developers are going to run ahead building code that's not being tested? Equally, no. They're not going to do that; they're going to slow down. 


Peter 04:34
They're going to make sure that the quality assurance function is being met, that they're doing everything they can to make that a smooth process, and maybe that involves testing, or some test automation, or whatever it might be. And we do want to automate those tests. We want to get to the point where the team can push things directly into production and get the feedback immediately, to see the response, so they're not waiting with multiple layers between them and the eventual delivery of what they're creating to the end customer. That way you can actually see what's happening and you can learn. Because what we're trying to do is get smaller batches and faster cycle times; we want to be able to see what's happening. Which is, of course, DevOps in a nutshell; that's where a lot of this is summed up. 
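[Editor's note: as an illustration of "pushing the tests down" into development, here is a minimal sketch in Python of a unit test living right next to the code it checks. The `format_price` function and its behavior are hypothetical examples, not from the episode; the point is that these tests run on every commit, so feedback arrives in minutes rather than at a later QA phase.]

```python
# A hypothetical function a developer might ship.
def format_price(cents: int) -> str:
    """Render an integer amount of cents as a dollar string."""
    if cents < 0:
        raise ValueError("price cannot be negative")
    return f"${cents // 100}.{cents % 100:02d}"


# Tests sit beside the code and run in the pipeline on every commit,
# instead of waiting for a separate QA team to inspect the build later.
def test_whole_dollars():
    assert format_price(500) == "$5.00"

def test_cents_are_zero_padded():
    assert format_price(507) == "$5.07"

def test_negative_is_rejected():
    try:
        format_price(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass
```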


Dave 05:25
There's more to it than that, because there's an appreciation that, I guess, two things come out. Number one is that testing, that quality assurance role, is vital. It's not a nice-to-have; it's not "thank goodness you're doing it, because I don't want to do it." It's a vital part of the complete package of getting product out of the door. And even in that understanding, all of the roles required to get things out of the door, and we can roll operations into that as well, all have an equally valuable contribution to the whole. I often think of sports teams, where you've got a goalie, a defense, and an attack. If any one of those becomes "hey, we don't attack, we're in defense; we're not going to step into goal, we're not going to step into attack," well, the team is much less performant as a result. We want everybody to work with one another. 


Peter 06:24
Yes, I was just saying the same thing: we've got to get everybody together. We're all working towards the same goals, and this brings in the concept of T-shaped and E-shaped people, right? That's people who have multiple disciplines. They've got their core strength, the area they go deepest in, but they're able to help their teammates out. And as a group you understand that, as a team, we're working towards these targets, these outcomes; this is what we're looking to produce. So, as you were saying earlier, if we see a teammate struggling, we'll offer to help, we'll step in and help them move forward and overcome whatever obstacles they're running into. So yeah, I think that's definitely the case. 

07:03
There's this piece as well of looking at that entire end-to-end system; as you say, all the different people that are required, all the different areas that need to be covered in order for us to deliver. If we've got multiple other dependent systems, for example, which are themselves dependent on other deliveries in order for our deliveries to move forward, we need to work out how we can architect things to decouple ourselves from those, so that we can, as a team, continue to deliver without needing large QA cycles, massive amounts of after-the-fact testing of the end product, and all of the integration pieces. Now, there's an awful lot of complexity there, and often, especially with legacy systems, that can be easier said than done. But there are ways of starting to look at those overarching architectures and make sure we're doing things in a way that allows us to achieve those rapid cycle times, so we can get that rapid learning. 
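[Editor's note: one common way to decouple a team from a dependent or legacy system, sketched here in Python. The `CustomerDirectory` interface, the fake, and `welcome_message` are invented for illustration; the idea is that the team's code depends on a narrow interface, so tests can run against an in-memory stand-in instead of waiting on the real system.]

```python
from typing import Protocol


class CustomerDirectory(Protocol):
    """The narrow slice of the legacy system that our code actually needs."""
    def email_for(self, customer_id: str) -> str: ...


class FakeDirectory:
    """In-memory stand-in, so the team can test and deliver
    without the legacy system being available."""
    def __init__(self, emails: dict[str, str]) -> None:
        self._emails = emails

    def email_for(self, customer_id: str) -> str:
        return self._emails[customer_id]


def welcome_message(directory: CustomerDirectory, customer_id: str) -> str:
    """Business logic depends only on the interface, not on the legacy client."""
    return f"Welcome! A confirmation was sent to {directory.email_for(customer_id)}."
```

In production, a real adapter implementing `CustomerDirectory` would wrap the legacy system's API; in tests, the fake keeps the cycle time short and removes the need for a big end-of-line integration pass.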


Dave 08:04
I'm reminded of when we started this conversation, talking about the difference; that idea of inspecting, or adding in quality, at the end. I think you mentioned tables, and that you can't just build quality in at the end of a process. And then the discussions that teams start having about looking beyond their component, their part of the product: understanding where coupling is an issue, where the legacy code makes the testing more opaque, more difficult to do. In many cases on legacy systems, the only real way to test is UAT, manual regression testing. Well, that's not because it's fun; it's because your legacy architecture is structured in such a way that you can't inject the right sort of queries and steps at different stages to really validate that the behavior hasn't changed. So how do we talk about that? How do we come together? What have you seen work really well? What behaviors or practices do you expect to see on teams that excel at this? 


Peter 09:11
Of course, it's all in the context of what you're working with, but it starts with having the team own and understand: what would it actually take for us to deliver a piece of value into production today? Even just as a thought exercise, start to understand what that would take. How do we make that happen? How can we deliver something small, even if it's just changing the color of an icon on a website, so that we can show what's possible? Because once we understand how to do that, we can start to think about: well, what would it take if this system over here were involved? How would we do it then? 

09:55
Now, if we start to look at other dependencies or other pieces, or that legacy system we've been trying to avoid like crazy, that we want nothing to do with, how might we still be able to deliver, and understand what the impact of that system is on ours as we continue to deliver? We take both an iterative and an incremental approach: breaking this down, looking at how I deliver small pieces of value rapidly, and how I iterate from my current state to one where I understand what delivery means to me, with appropriate targets that I'm measuring against and continually improving. 


Dave 10:30
I find it interesting, listening to you describe some of that, that you started right at the outset talking about owning the delivery. I think this is a particular piece: as soon as we're able to say, my component, my part of the product, is done, and everything else is somebody else's problem, then we're not owning the complete piece. And again, from the lean disciplines we often draw on: that's the whole idea of optimizing the whole, not just building quality in. How do we optimize the entire piece? Look at the system, not just at my part in that system. 

11:06
And I think the other piece that came out of the conversation you were just driving is the experiment, the exploratory discussion around: what would it mean if we did this? I find that slightly different to the discussion of how we are going to solve this problem, which is the de facto question on an agile team. What you're describing is a little more exploratory: what does it mean if this happens, if we went this direction, what would the consequence be? And sometimes teams just don't make space for that exploratory conversation. 


Peter 11:41
Yes, and there's that piece as well, and this is an interesting way of looking at it: the continual improvement of the things that we do well, because that's how you build a resilient system. Versus focusing on firefighting all the time, stopping at the next problem, constantly just trying to plug holes in the dam. Instead, focus on: how do we do this well, how do we do it repeatedly, and how do we continually look at how we are doing things, so we can improve the actual "how" of the work? So we're focused on the work itself, on how the work is getting done, and we're taking the time to look back and ask: is this still in service? Is this something that's actually still valuable to us? 


Dave 12:26
I really like that idea, and it comes to things like refactoring. Sometimes there's that drive to say, hey, we're finished, we can wrap this piece of the product up with a bow on it and leave it alone. And the discussion that happens from a quality assurance perspective, about whether we're really testing things the best way we can, whether we need to go back in and rethink how we're doing things, is one of the drivers that allows the architecture, the solutions we're choosing, to be both validated and continually grown, improved, and strengthened. Because we're always challenging how things are working from a testing and quality perspective, as much as we are from a functionality and ease-of-use, customer perspective. And I think to do that also requires the ability to take that step back. 


Peter 13:21
This is another area where I think teams often struggle: they get so locked into identifying the particular type of tree, and counting how many leaves it has, that they forget to take a step back and realize they're in a forest. We were talking about this, about understanding where those pinch points are. If you don't look at the end-to-end system and identify where the other problems are, you might find you're fixing the wrong things, focusing on the wrong areas. This is where techniques like value stream mapping come in very handy, to guide that conversation, to create that diagnostic of where time goes across the end-to-end system, and help you identify some of the places you should look. 
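[Editor's note: to make the value-stream-mapping idea concrete, here is a small sketch in Python. The stage names and hours are made-up numbers, not from the episode; the point is that mapping active work time against waiting time per stage exposes where the end-to-end system actually loses time.]

```python
# Hypothetical stage timings for one change moving through the system:
# (stage name, hours of active work, hours spent waiting).
stages = [
    ("develop",        8,   4),
    ("code review",    1,  16),
    ("separate QA",    6,  72),
    ("release window", 1,  48),
]

process_time = sum(work for _, work, _ in stages)   # total hands-on time
wait_time = sum(wait for _, _, wait in stages)      # total queueing time
lead_time = process_time + wait_time                # end-to-end elapsed time
flow_efficiency = process_time / lead_time          # share of time spent working

# The stage with the longest wait is where improvement pays off first.
bottleneck = max(stages, key=lambda s: s[2])[0]

print(f"lead time: {lead_time}h, flow efficiency: {flow_efficiency:.0%}")
print(f"longest wait: {bottleneck}")
# With these numbers: 156h lead time, 10% flow efficiency,
# and "separate QA" is the biggest queue, not the biggest work item.
```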


Dave 14:11
So let me pick up where you were there. I think it's quite interesting how the conversation has circled back around, because we started this discussion with: do we need more QA? How would you summarize the answer to that, given everything we've talked about? 


Peter 14:26
I think we need to have the testing capability built into the teams. If by "more QA" they mean, in this context of "hey, we need more QA so the developers can develop," a separate area that's going to sit between developers and the actual delivery of value to customers, then the answer is no, this is not a good idea. Out of all of our conversation, I think that's one of the clearest things we can possibly say: if you are building a software system, then adding more separate QA into that system is probably one of the worst things you can do. 


Dave 15:18
And in fact, it's probably going to result in worse outcomes than you would expect. 

15:20
Well, because the approach is not addressing that relationship between development and the other roles in QA. What I found interesting as we went through the discussion, in addition to that, is first the recognition that there's a team supporting one another: I need to know a little about what you do, and you need to know a little about what I do. We often refer to that as T-shaped individuals. But what also came out is that having that QA mentality, those roles, on the team drives the conversations about building quality in, which has a really big impact on things like how your system evolves and how you address some of the challenges you may have with legacy, with any interface between your part of the system and other parts. I felt it was quite interesting that that emerged as part of this conversation. 


Peter 16:30
Yeah, I think so too. We're coming up to the end of our time here; how would you sum up this conversation? 


Dave 16:33
One or two points, okay. 

16:35
Number one, I think, is that QA is on the team, full stop. Forget the idea of sequential work, you know, one development sprint, then one testing sprint, or something like that. 


16:46
Bring QA onto the team. Number two, I think, is to understand that once you have QA on a team, it's the soft skills: how do you get multiple specialists in different areas, operations, testing, development, whatever it might be, to operate as a team, as a whole? That's the T-shaped roles, the appreciation that just because I'm done with my part doesn't mean I'm hands-free and can head off at two o'clock on a Friday afternoon; there are other individuals on the team that we've got to help, contribute to, and support. And then I think the light-bulb moment for me was the third piece: by having QA on the team, by having that conversation, it leads to those exploratory conversations, which allows you to start really effectively future-proofing and making your legacy systems much more sustainable in the long run. 


Peter  17:50
Yeah, and that resilience, those sustainable, adaptable systems, is where we want to be and where we need to be. I don't know that I'd add much, other than the other piece we touched on: remembering to take a step back, look at the end-to-end system, and understand what's being delivered in the context of the whole, ensuring you understand those impacts. Because that's what's going to help you know where to improve, understand where your bottlenecks are, and where you should be focusing. So with that in mind, I'd like to wrap up, and thank you as always; I always enjoy these conversations. If anybody would like to reach out, they can reach us at feedback@definitelymaybeagile.com. Thanks again. 


Dave 18:36
Thanks again. Always a pleasure, Peter. 



