I'm not blogging much these days... that's an indicator of how busy life has been lately. Almost all of it is really good stuff, but I'm still swamped.
I wanted to point you to the GROWS Method Workshops that Andy and I have been running. Drop me a line if you'd like to schedule one for your shop!
It's said that the road to hell is paved with good intentions... and that describes far too many "agile" adoptions. I've seen managers take money out of bonus pools for every broken build (in a continuous integration system). Another client tied raises to points and velocity. A vice president of a large organization explained to me that the teams doing agile should be faster than their counterparts, even though they had all the overhead of both agile and waterfall, with no freedom to reduce or change the work focus.
Agile practices are intended to be flexible... they should be fluid... liquid, if you will. Instead they're sometimes frozen and turned into clubs. Like any tool, they've only been as good, or bad, as the person wielding them.
More than a few developers have come to hate "Agile" because of how they've experienced it. Far too many have seen the word agile used to justify abusive practices at the hands of managers and executives who viewed agile as just another tool to extract more work out of their teams. That's not why agile was created and it's not particularly good at it either.
Agile becomes weaponized in several different ways, but two of the more common are one-sided implementations and frozen practices.
One-sided agile is easy to spot. These companies use agility to gain transparency and visibility into the development team's activities, then continue the poor practices (especially micromanagement) that negate any benefit of agility within the organization. Management (and project management) groups use agile as a tool but don't adopt the practices themselves. Agile practices provide a great deal of visibility into the day-to-day lives of the development and testing teams, but that visibility needs to be a two-way street: not only into the technical team's work, but also into the executive vision and motivations. When "agility" means the teams adopt agile practices while management continues to work as before, issuing edicts from behind the curtain, agile practices are being abused. This is a dysfunctional relationship. These agile adoptions flare up briefly and appear to be going well before flaming out. One-sided agile adoptions don't stand the test of time.
Frozen practices are also easy to spot. When iteration velocity is used as a tool for rewarding and punishing teams, instead of a way to measure working software, you know your company is on the wrong path. When managers assign work to a team instead of the team committing to its own work for the iteration, you're operating under a command-and-control mindset. When the product developers (aka product owners) fail to attend demos or provide feedback, but later complain about the finished product and blame the teams, something has gone badly wrong. When the teams are forced to "sprint" over and over, and are pushed for more and more work each time, it's an efficiency tool, not an iterative development model.
You're facing frozen agile clubs any time an agile practice is used to justify punishing or blaming a team rather than empowering it, tightening up feedback loops, and directing the product focus. It's often an excuse to avoid the hard work needed to ensure the team is moving in the right direction. It's easier to not fully engage, then blame development. Again.
Remember the original points of the Agile Manifesto... individuals and interactions over processes and tools. Especially those processes and tools that eliminate human interactions in favor of dictated orders. Use your agile adoption to encourage those interactions. Yes, they are "inefficient" from a time perspective, but those inefficiencies are more than recovered by eliminating many of the common misunderstandings that occur when we replace face-to-face communication with written documents. No matter how many pages you add to a requirements document, you can't reach the level of understanding that comes from an interactive discussion. The document is faster, but the conversation is more effective.
The intention of the GROWS Method Practices is to ensure that practices won't be frozen. Practices begin with steps and advance to freedom. Try using the practice cards to help start those conversations.
If you think your company is in the midst of a weaponized one-sided agile adoption, I'd encourage you to read over the Executive practices. Walk through the Vision and Progress Management sections with your manager. In my experience, many managers are doing their best to manage the work, and they're falling back to what they learned in years past. Give them a cleaner way to manage teams that involves a company-wide agile adoption and a true change of the culture and mindset all the way to the top... and you might find yourself understanding why some people enjoy agility so much.
After Andy's article last week, many people (including quite a few I have an enormous amount of respect for, and many others I haven't met) announced that they much preferred to simply keep doing what they've been doing. One of the leaders in our industry said she preferred to "keep improving how we deliver value at sustainable pace". And you know what? She's right.
That's an idea… it's a great value. But it's only that: an intangible, one that requires someone with a great deal of experience to correctly interpret and apply. Someone who's been working with software for a while, and has invested the effort required to become skilled, understands exactly what she meant. The technical guru with ten years of experience (not one year repeated ten times) can take this advice and put it to use.
Unfortunately, in my experience, that describes very little of our industry. We do have a talented core of skilled individuals in development, testing, and various flavors of management. Sadly, we have a much larger number of people who haven't reached that level, and don't know how to get there. Some just don't care, but others are struggling to improve, and when they get advice like "keep improving", they don't know how. As the old saying goes, they don't know what they don't know.
I frequently get emails from people asking which books they should read, which conferences they should attend. People want to improve, but they're often drowning in information and have no idea which information is good, bad, or ugly. For years we've been telling people to try harder, to read this book or that, attend this conference or that, and it's helped a lot of people. But many more have given up and turned into 9-to-5 wage slaves. They've retired in place and have given up seeing code as anything more than a way to pay the bills. We, as the enlightened priesthood, mock these people and talk about them at our conferences, with our enlightened friends. We're satisfied knowing that we're better, smarter, and destined for greater things.
But what if these co-workers and friends want to be better but don't know how? What if offering them principles and ideas only frustrates them, when what they want… no, NEED, is concrete steps? They're begging for explicit guidance, but instead we offer them values so nebulous as to be useless.
If this sounds like a scathing indictment, then good. It is. It's an indictment not of any individual, but of the industry. Most of us have fallen into the expert's trap. We've worked within a small bubble of smart, experienced people for so long that we've forgotten how to teach beginners. Let's face it... the person who knows who Martin Fowler or Andy Hunt is, and wants to bring them into their shop, is probably already very far along.
I've been lucky in my career. Many of my clients were in bad shape by the time I became involved. I've heard horror stories, time and again, about requirements, morale, quality, and so many other problems. I've worked with many talented people who didn't know anything about effective agile processes. They'd taken a Scrum class or two, read a book or two, but when things didn't work quite right, they didn't know what to do.
I first heard about the Dreyfus Model of Skill Acquisition when Andy Hunt presented at several local user groups, back when he'd started researching material for his Refactoring Your Wetware book. It was an eye-opening view of how experts often lose touch with how people learn, and how beginners become frustrated with how experts teach.
Read my previous blog entry on the Dreyfus model for a description of its stages.
So what's the solution? Those of you at the Proficient or Expert stages must work very hard to relearn how to teach in steps. Precious few will ever rise above the Competent stage until we learn to teach in both steps and values. Start with steps. Show them how to work effectively, but then transition into values.
This isn't a new idea, but, as an industry, we've traditionally shunned it. Experts and Proficients are hobbled by the rules that enable workers at the lower stages. Managers, who don't really understand what we're doing, especially when we're moving in the intuition realm, are desperately trying to manage projects. They, being beginners, love steps. So if we, as experts, provide steps, we're terrified those steps will be written down, canonized, and used against us… because they have been again and again.
GROWS is an effective way to manage this problem. It provides steps for lower stage teams, but explicitly discards those steps as your teams move up the Dreyfus Model. It clearly explains to managers what the Dreyfus Model is and why their teams will require fewer steps as they become increasingly effective.
Some companies will not want their teams to operate at the highest stages. They'll want to get to Competent and stay there, and that's okay. It's not a shop most of us would want to work in, but if they're using GROWS, you can understand their operating model during your interview process instead of after you take the job.
GROWS is much more than a process-oriented application of our interpretation of the Dreyfus Model, but it does harness the model in an effective way. We must, as an industry, find a sane path to provide steps to beginners while not hobbling our experts. We can make it possible to have meaningful conversations about skill stages (especially with management) and relearn how to effectively teach others. There may be other ways to do this, but so far I've found GROWS, and its use of the Dreyfus Model, extremely effective.
The Dreyfus Model has five stages, and understanding them has helped me decipher so many reactions I've seen, both in software and in life.
The Novice

Our first stage, also known as the beginner, is someone just learning a new skill. When you learn a new programming language or a new tool, you need steps. Most successful programming books provide a number of easy-to-follow steps (remember 10 PRINT "Hello world"?). These steps familiarize you with the environment and with what you're supposed to do. If you'd like a refresher in how this works, try teaching an introduction to Java or JUnit to people with no experience. It's amazing how many things we take for granted. If you skip steps, students get frustrated and quickly let you know, or just shut down.
The Advanced Beginner
We become familiar when we've seen enough explicit examples that we can start putting them together. We accumulate code snippets that work. We're not positive why they work, but they do. Unfortunately, this stage is still famous for frustration. When things don't work, we still don't have the skills to debug those problems. We need things to work the way we expect them to, but we're building an experiential skillset of what works.
The Competent

This stage is what I call the recipe stage. We can finally get things working and become cut-and-paste gurus. We've got tons of working solutions at our fingertips. I used to carry around a CD with collections of projects and code snippets. These recipes helped me look very smart and enabled me to quickly "solve" problems and churn out working projects. We've developed enough experience to understand what's likely to go wrong and the debugging skills to solve problems. Most developers hit this stage, achieve a solid level of competence and effectiveness, and stop learning. They're too busy working overtime and solving problems to continue learning. And why bother? I'm getting stuff done, right? This is the trap… good has become the enemy of better and best. This local optimization prevents so many in our industry from moving to the next stage.
The Proficient

Things finally become smooth. We discover patterns, principles, and values. We start to understand that we don't have a daily meeting because we're "doing Agile", but because the team needs to interact and share information with each other. And sometimes there are more effective ways to do that. We realize that test first isn't about creating a test suite, but about helping us think about the problem we're really solving. We start to use techniques like the "5 Whys", and advice like "keep improving how we deliver value at sustainable pace" finally makes sense. Sadly, as we enter this amazing new world, we often start spending time with others who've already arrived here. We begin using terms like "code smells" that our Competent Stage friends don't understand, and we slowly drift into the bubble of Very Smart People. As we're surrounded by these Very Smart People, we begin to assume everyone is also Very Smart, and we begin to forget how to provide steps to beginners, widening the gap between the stages. As we all know, smart people attract smart people, and the incompetent do the same. Birds of a feather do flock, after all. So the shops that have the higher-stage teams don't usually hire the lower-stage applicants, and vice versa.
The Expert

The expert stage is known for intuition. In the development world, we call these people all sorts of colorful names. Wizards. Code ninjas. Rock stars. These are people who sit down, look over your architecture, and tell you it won't scale. They might not effectively verbalize why it won't scale, because they don't always think in steps anymore. In fact, it's often easier for them to redo your work themselves than it would be to explain the flaws in the architecture. Naturally, these 10x-productivity team members are idolized, leading to the ego we often see in these people. You have a problem? Here's a value. Here's a principle. Here's an idea. You don't understand it? You must be dumb. I know I'm not. (Sound familiar?)
We tackle our work in the way that seems right to us. We look ahead at the work on our plate and do our best to get it done. I often say that everyone makes great choices... given their own context and point of view. Unfortunately, that point of view sometimes leads us to a local optimization, where things look efficient until we step back and take a look at the bigger picture. Then we realize our local optimization wasn't nearly as efficient as we thought.
This often takes shape in how we break out our team's work. Sometimes we break everything down into layers (horizontal slicing), while other times we slice the work into smaller, but working, bits of functionality (vertical slicing).
Horizontal feels more efficient because it lets different product area specialists (like SQL or UI gurus or server-side code jockeys) work quickly and knock out a lot of stories. It lets your specialists work alone, and doesn't force them to engage with the other team members. This leads to bursts of productivity, with people in the zone, with their heads down and distractions banished.
The result is a large number of "completed" stories and, of course, correspondingly impressive velocities. There's lots of UX work done. Lots of db work done. Lots of server work done. But there's nothing to demo yet, because the work hasn't been integrated. Nothing's actually working or done, no matter what the story board says.
You'll notice this happening when you have to add in "integration stories". These are stories for integrating various layers of work. This is when you discover that the db work isn't exactly what the mid-tier team was expecting. Or the UI team didn't quite understand what the server people put in place. Integration hell is the technical term... you end up spending excessive amounts of time doing frustrating work. Nerves are frayed and tensions run high. Fingers are pointed as people's expectations are dashed again and again.
The alternative is to work more "slowly" and tackle vertical slices of work. With a vertical slice, we focus on one story, but we make it work end to end. This means the UI gurus work alongside the server team, who are working with the database gurus, to get one story completely working. No one can be "done" until the story is working.
What's the advantage? On the surface, we're working more slowly. We're pulling people out of their specialties, forcing them to talk to non-specialist team members. It feels more cumbersome and it feels like you're making less progress. But those feelings are deceiving. We're doing three key things that provide enormous productivity gains.
- We're focusing on smaller amounts of work. The thin slices that we complete force us to break down the big stories, then complete and potentially deploy a story before the business changes their mind about our priorities and changes our direction.
- We're building a solid team. Forcing conversation removes an entire category of interpersonal and inter-team conflict. Instead of leaving a series of almost-working stories, we're discussing those integration points during the iteration and resolving confusion before the code is written. When people do move in different directions, it'll be over a small amount of work instead of a series of dozens of stories.
- We're removing the integration stories by integrating as we go. No story can be marked as "Done" until it's functional. This is a very different approach and it eliminates the need for integration stories or integration sprints. At any point, the code can be deployed.
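The timing difference between the two slicing orders can be sketched in a few lines of code. This is a toy illustration of my own, not from any tool; the story names and layer names are made up, and "demoable" simply means every layer of a story is finished.

```python
# Hypothetical illustration: three stories, each needing db, server, and UI work.
# A story is demoable only when all of its layer tasks are done.
stories = {
    "login":  {"db", "server", "ui"},
    "search": {"db", "server", "ui"},
    "report": {"db", "server", "ui"},
}

def demoable(done):
    """Stories whose every layer task has been completed."""
    return [s for s, layers in stories.items()
            if all((s, layer) in done for layer in layers)]

# Horizontal slicing: specialists burn through one whole layer at a time.
horizontal = [(s, layer) for layer in ("db", "server", "ui") for s in stories]
# Vertical slicing: the team finishes one story end to end before the next.
vertical = [(s, layer) for s in stories for layer in ("db", "server", "ui")]

def first_demo(order):
    """How many tasks get done before *anything* can be demoed?"""
    done = set()
    for n, task in enumerate(order, start=1):
        done.add(task)
        if demoable(done):
            return n

print("horizontal: first demo after task", first_demo(horizontal), "of", len(horizontal))
print("vertical:   first demo after task", first_demo(vertical), "of", len(vertical))
```

Both orders contain the same nine tasks, but the horizontal order has nothing to show until nearly the end, while the vertical order has a working story almost immediately.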
This is one of the great secrets of many successful agile teams. They say the work can be deployed at any time, but many struggling teams never quite understand how this works. This is how. By focusing on small slices of work, and implementing them with cross-functional teams, the work is added in small, incremental slices. This changes the discussion with the business about when to ship new features. It prevents large slices of almost-done code from piling up in chunks of DB work, server work, or UI updates.
I strongly suggest you try this practice for a few iterations. It will certainly feel different, even awkward, at first. It'll be cumbersome as you learn how to work with team members you've only interacted with in frustration. But, as you build up your team-wide "muscle memory", you'll find it becomes second nature and you'll wonder why you wasted so much time working any other way.
The path of a requirement in a large organization is often foreign to those more familiar with small or medium companies. In smaller arenas, developers and testers help define requirements. Everyone has a clear view of what's being built because they had a hand in defining and refining the ideas. Do you have a question about what this feature should be or what this report should have? Go ask Mike or Sue. It was their idea.
Enterprise scale is different. When the program budget is half a billion dollars and there are a few thousand developers and testers involved, requirements are shifted to a specialized team. Often an entire division is tasked with searching out and documenting requirements. These requirements are compressed and converted into documents that can be shared with teams of developers to implement, and teams of testers to verify. Each team can become specialists and do their own job at peak efficiency. Unfortunately for most enterprises, this model doesn't work well.
It turns out that requirements are difficult to capture in documents. Many companies and teams have tried various types of documents, spreadsheets, and other tools. Many dollars have been spent trying to capture this lightning in a bottle, but so far everyone has been frustrated. The best result anyone has achieved is a sad acceptance that all requirements are bad, and a plan to rebuild most of the features two or three times until the customers are happy. Or until the customers are worn down enough to accept what's been produced.
We can do better, but it requires a different point of view. Let's start with Tony Brill's excellent battleship example.
The game of Battleship was once a staple of American homes. Kids put up a small divider so their opponents couldn't see their boards, then arranged their fleets of ships. They then took turns guessing (or "shooting at") coordinates to locate the "enemy" fleet. Once you got a hit, you could zero in your fire until your opponent cried out "You sank my battleship!" I spent more than a few hours trying to best my brother and friends, and there's an excellent analogy to our software efforts hidden in this game.
There are two ways to play this game. You can play to be efficient or you can play to be effective.
The first, and most efficient, way to play is to not wait on your opponent. Come up with a strategy and "fire" all your shots. Place every peg you have, then find out if you placed them in the right spots. This strategy minimizes the amount of time spent playing the game. It's very efficient, just not very effective.
The second strategy is more effective, but can be seen as wasteful. It's not remotely efficient. Place your shot, and then ask your opponent if you hit the mark. No? Then place your next shot somewhere else. Yes? Then focus all your resources in that area. You'll soon sink any battleship you locate with this strategy.
Why is the second strategy seen as inefficient? It takes more time and is labor intensive. You'll spend less time (and fewer salary dollars!) by simply getting all the work done in a single pass. There's a great deal of comfort for the scheduling manager who can see a gate marked on a calendar, and who knows that requirements will be "done" on that date. History tells us the "completed" requirements aren't going to be very effective, but they're done.
The second strategy isn't efficient, but it sure is effective! Those labor-intensive checkpoints slow us down, but the feedback is invaluable. Strategy two doesn't guarantee a win, but it gives you a fighting chance.
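The two strategies are easy to simulate. The sketch below is my own toy model, not from Tony Brill's original example: one three-peg ship on a 10x10 board, a "batch" player who commits every shot up front, and a "feedback" player who probes around each hit before searching on. The function names and the simple probe heuristic are assumptions for illustration.

```python
import random

GRID, SHIP_LEN = 10, 3

def place_ship(rng):
    """Hide one horizontal three-peg ship on the board."""
    row = rng.randrange(GRID)
    col = rng.randrange(GRID - SHIP_LEN + 1)
    return {(row, col + i) for i in range(SHIP_LEN)}

def batch_strategy(ship, rng):
    """Strategy 1: commit every shot up front, no feedback.
    Returns how many of those pre-planned shots fall before the ship is sunk."""
    shots = rng.sample([(r, c) for r in range(GRID) for c in range(GRID)], GRID * GRID)
    hits = set()
    for n, shot in enumerate(shots, start=1):
        if shot in ship:
            hits.add(shot)
            if hits == ship:
                return n
    return GRID * GRID

def feedback_strategy(ship, rng):
    """Strategy 2: fire one shot, ask if it hit, and adjust.
    After any hit, probe the neighboring cells before searching on."""
    search = [(r, c) for r in range(GRID) for c in range(GRID)]
    rng.shuffle(search)
    unfired = set(search)
    probe, hits, shots = [], set(), 0
    while hits != ship:
        shot = probe.pop() if probe else search.pop()
        if shot not in unfired:          # off the board or already fired
            continue
        unfired.discard(shot)
        shots += 1
        if shot in ship:                 # zero in around the hit
            hits.add(shot)
            r, c = shot
            probe += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return shots

rng = random.Random(42)
games = 300
batch = sum(batch_strategy(place_ship(rng), rng) for _ in range(games)) / games
aimed = sum(feedback_strategy(place_ship(rng), rng) for _ in range(games)) / games
print(f"batch (no feedback): {batch:.1f} shots per sunk ship")
print(f"aimed (feedback):    {aimed:.1f} shots per sunk ship")
```

Run it and the feedback player sinks the ship in a fraction of the shots, despite "wasting" time checking after every single one. That's the whole argument in miniature.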
How does this relate to requirements?
It's much more efficient to batch up the entire division's work and get it all defined in a single pass, but it's not very effective. Your requirements team hasn't gotten any feedback. The technical teams haven't seen anything. Are the requirements written in a format they understand? Is vital context missing? Can they implement them in a way that's acceptable to the customer? Who knows… but we're making great time!
The second strategy is slower, but more effective. Unless you measure all the rework an "efficient" strategy incurs. Then you'll find the "slower" approach both faster and more effective.
The second strategy focuses on a tighter feedback loop with smaller slices of work. Have the requirements team complete a small amount of work, then pass that work over to the technical teams. Do the developers understand it? Can the testers verify it? Find out. First have discussions, but then have them implement the first set of features. Bring the running code back to the requirements team. Was everyone speaking the same language when they talked about the report or the preferences pane? No? Then let's adjust that misunderstanding before moving forward. Let's give the requirements team a chance to get better at writing effective requirements before they spend months writing them!
(And we're ignoring the efficiencies found when the development team is only a few weeks behind the requirements team… that's pretty amazing as well.)
If you want to get really crazy with this idea, you'll include developers and testers on the requirement teams, but that's a topic for another day.
A client recently told me that requirements teams are like quarterbacks who run all over the (American) football field throwing passes. The QB thinks he's throwing great passes. He'll tell you how good the passes are if you ask him. But the person who can really judge the pass is the receiver. The QB might throw a great pass, but let's see if the receiver can catch it. That's the judge of the pass. The best pass in the world is useless if it can't be caught.
The same is true of requirements. The best requirements are those the team can implement and verify. It sounds like slowing down to get that interactive verification is inefficient, but it's not. Taking the time to aim the gun before firing slows down the shot, but ensures the target is hit.
A coaching model that I've found very effective is something I call roadmapping and mentoring. In a traditional coaching engagement, the coach comes alongside the team and works onsite for some period of time. This is a very effective way to work and teach, but it requires a substantial budget commitment and a dedicated block of time from the coach. Lining up client needs with a coach's availability is often challenging, and problems are often discovered after the engagement is complete and the coach is at a different client site. These obstacles are often insurmountable for the small or medium software shop.
Roadmapping puts a coach back in reach for most teams. The coach comes onsite for only a few days, which drastically reduces the cost. I meet with both the teams and the leadership. What pain points triggered this invitation? We try to identify the existing pain, but we also look for pain the team has come to accept as normal. What hidden pain points exist?
There's usually management pain as well as technical pain. Quite often the pain is perceived as two different issues when it's really two sides of the same coin. The initial goal is to identify a focused set of changes that alleviate the pain points.
After the initial visit, the client has a list of changes or improvements. However, like most people, clients are usually better at making “New Year’s-style resolutions” than at following through, so I return every few weeks to check up on my new friends. Sometimes they need a bit of encouragement; other times we switch directions or adopt new goals.
What sorts of challenges do teams have?
Many shops have similar challenges, and while this isn’t a comprehensive list, it does contain the “Top Five” most common issues I’ve encountered.
- Slow delivery/Long product cycles
- Lack of shared product vision
- Quality issues
- Lack of automated builds/tests/deploys
- Expensive manual product verifications
This is just an introductory discussion, but maybe it’ll spark a few ideas that can help move your organization forward.
As teams adopt more responsive software practices, one area is often left behind. We believe that the development team should deliver functionality incrementally. We know about minimum viable products. But, especially in larger companies, we hold onto the requirements until they are "done" or "right". Well-intentioned requirements groups work months getting work lined up for the developers and testers.
Requirements, when done well, are an ever-evolving view of the product and the customer's needs. Trying to get them right in one go is like trying to ship your product in one pass. It's difficult to make anything perfect, especially requirements. The law of diminishing returns (and Little's law) kicks in quickly. In other words, it's just not worth it. You'll spend far more money, and lose development runway, by trying to perfect requirements.
I'd like to suggest we start thinking in terms of minimum viable estimates. This is not the completed estimate that's solid and can be relied upon. It's an evolving level of confidence. Understand and embrace continual elaboration and the Cone of Uncertainty. Initially we have a fuzzy idea, but it gets better as we move forward. Here are a few steps your estimates might take.
- Gut estimate, or maybe rough estimate. This is an off-the-cuff, rough order-of-magnitude estimate. I don't want you to do any research or assemble a team. Tell me ~right now~ how big you think this work is, maybe in quarter-year increments. This requirement looks like one quarter for a team with this expertise. That one is at least four quarters!
- Level 0 estimate. We've taken the requirement and broken it down to features. Here's how long we think it'll take, but we haven't gone too deep.
- Level 1 estimate. Now your teams have broken down the features into stories. We're finally moving toward something with solid numbers behind it.
I usually find that gut estimates are much more accurate than anyone suspects, but we're not trying to be "right". We need to be accurate enough to do rough capacity planning as early as possible and engage development earlier. Are we within an order of magnitude of our capacity? Then proceed. Otherwise, let's start reining in expectations from our customers, managers, and sales teams. These stakeholders rarely get everything they want, but they don't realize that they won't until the proverbial eleventh hour. I'd like to get them that information earlier in the process and help them understand that they really do have to sequence (aka prioritize) the work.
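One way to picture an evolving level of confidence is as a range that narrows with each estimation level. The sketch below is an illustrative assumption of my own: the level names mirror the list above, but the midpoints and spread multipliers are invented, loosely inspired by the Cone of Uncertainty, and are not a standard.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """An estimate as a range, not a number: midpoint plus a
    multiplicative spread that shrinks as we learn more."""
    level: str
    midpoint_weeks: float
    spread: float  # how far off we could plausibly be, in either direction

    def range_weeks(self):
        return (self.midpoint_weeks / self.spread,
                self.midpoint_weeks * self.spread)

# Illustrative numbers only: one requirement estimated three times.
history = [
    Estimate("gut",     13.0, 4.0),   # "about a quarter", give or take a lot
    Estimate("level 0", 10.0, 2.0),   # broken down into features
    Estimate("level 1",  9.0, 1.25),  # broken into stories by the team
]

for e in history:
    low, high = e.range_weeks()
    print(f"{e.level:8} {low:5.1f} to {high:5.1f} weeks")
```

The point isn't the numbers; it's that each level reports a narrower range, so capacity planning can start while the range is still wide, and sharpen instead of waiting for a single "final" figure.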
And now for the weasel clause.... :)
What are these estimates? We all know they ~aren't~ estimates. True estimates can only come from the team doing the work. Just as your general contractor would never schedule your house remodel without talking to the subcontractors who will do the work, you shouldn't try to plan without talking to your teams. These "estimates" are just enough relative sizing to get us started. They're based on our experience with other projects, but they're only a first step.
What's the point?
It's that the requirements process, just like development, can benefit from a bit of transparency and diversity. Teams can begin working with you on smaller slices of work when they understand where you're heading with the work.
Often teams can start working long before you think they can… but since they have no incremental visibility into your process, they can't tell you the requirements are good enough. One feature will be solid enough for a good team to get started. Others won't be. Open the blinds and let your coworkers help with the work. They might surprise you.
And finally, the team creating requirements requires feedback. Are they doing the best job possible? Probably not… no more than your developers and testers are. So let them complete a small slice of work and get the teams that consume the requirements involved. Those teams can say "This is GREAT! I love the way this feature is described and organized. It's perfect." They might also say "I have no idea what you mean in this area… please don't write another 150 requirements this way!"
A tight feedback loop will enable your team to continually improve the process. Working without feedback rarely leads to the best product you are capable of producing and never builds a team. Involving your technical teams earlier helps the work become something you all own, instead of commandments handed down from on high.
Never forget how much of our work is rework due to wrong requirements (by some estimates, over 70%). Having another set of eyes with a different perspective will drastically improve quality.
Start with an imprecise gut feel and rough epics and features. Involve your technical coworkers earlier. Don't be satisfied with a big-bang requirements process. You'll be amazed at the effectiveness of a minimum viable requirement.
There's a great new conference coming to RTP for the first time on May 2nd, with a stellar speaker lineup. See the entire list here.
The conference has four concurrent tracks with six sessions each, not counting Andy's morning keynote. We opted for shorter 45-minute sessions in the hope of packing in more content, so you can attend more great sessions and hear more speakers.
We also worked hard to keep the price low. A single ticket is only $99, and it drops lower if you bring four more of your friends.
Topics range from Tim Wingfield's Executable Requirements: Testing in the Language of the Business to Catherine Louis' Failure patterns: Lessons learned over 10 years transforming global companies. I'll be giving my introduction to agile talk. The material is wide ranging and I'm sure you'll have more than a few talks you'll want to attend.
Also, a huge shout-out to Tom Wessel and all of the Southern Fried Agile team. Without their help and backing, this event wouldn't have happened.
It should be a great first outing for this new conference. We're trying to spread the word, so please pass on the information to your friends and co-workers in North Carolina. I hope to see you at TriAgile!
I'm involved with a new conference coming to RTP in May. Come join us for TriAgile. We've got a great list of speakers and a good mix of topics. I'll be giving my Introduction to Agile talk. Andy Hunt, Cory Foy, and many other great speakers will share the stage.
Take advantage of the early bird pricing! I look forward to seeing you there.