The Dreyfus Model has five stages, and understanding them has helped me decipher so many reactions I've seen, both in software and in life.
The Novice
The first stage, the novice (also known as the beginner), is someone just learning a new skill. When you learn a new programming language or a new tool, you need steps. Most successful programming books provide a number of easy-to-follow steps (remember 10 PRINT "Hello world"?). These steps familiarize you with the environment and with what you're supposed to do. If you'd like a refresher in how this works, try teaching an introduction to Java or JUnit to people with no experience. It's amazing how many things we take for granted. If you skip steps, students get frustrated and quickly let you know, or simply shut down.
The Advanced Beginner
We become familiar when we've seen enough explicit examples that we can start putting them together. We accumulate code snippets that work. We're not positive why they work, but they do. Unfortunately, this stage is still famous for frustration. When things don't work, we still don't have the skills to debug the problems. We need things to work the way we expect them to, but we're building an experiential skill set of what works.
The Competent
This stage is what I call the recipe stage. We can finally get things working and become cut-and-paste gurus. We've got tons of working solutions at our fingertips. I used to carry around a CD with collections of projects and code snippets. These recipes made me look very smart and enabled me to quickly "solve" problems and churn out working projects. We've developed enough experience to understand what's likely to go wrong, and the debugging skills to solve problems. Most developers hit this stage, achieve a solid level of competence and effectiveness, and then stop learning. They're too busy working overtime and solving problems to keep learning. And why bother? I'm getting stuff done, right? This is the trap… good has become the enemy of better and best. This local optimization prevents so many in our industry from moving to the next stage.
The Proficient
Things finally become smooth. We discover patterns, principles, and values. We start to understand that we don't have a daily meeting because we're "doing Agile", but because the team needs to interact and share information, and that sometimes there are more effective ways to do that. We realize that test-first isn't about creating a test suite, but about helping us think through the problem we're really solving. We start to use techniques like the "5 Whys", and advice like "keep improving how we deliver value at a sustainable pace" finally makes sense.

Sadly, as we enter this amazing new world, we often start spending time with others who've already arrived. We begin using terms like "code smells" that our Competent-stage friends don't understand, and we slowly drift into the bubble of Very Smart People. Surrounded by these Very Smart People, we begin to assume everyone is also Very Smart, and we forget how to provide steps to beginners, widening the gap between the stages. As we all know, smart people attract smart people, and the incompetent do the same. Birds of a feather do flock, after all. So the shops with higher-stage teams don't usually hire lower-stage applicants, and vice versa.
The Expert
The expert stage is known for intuition. In the development world, we call these people all sorts of colorful names. Wizards. Code ninjas. Rock stars. These are the people who sit down, look over your architecture, and tell you it won't scale. They might not effectively verbalize why it won't scale, because they don't always think in steps anymore. In fact, it's often easier for them to redo your work themselves than to explain the flaws in the architecture. Naturally, these 10x-productivity team members are idolized, leading to the ego we often see in these people. You have a problem? Here's a value. Here's a principle. Here's an idea. You don't understand it? You must be dumb. I know I'm not. (Sound familiar?)
We tackle our work in the way that seems right to us. We look ahead at the work on our plate and do our best to get it done. I often say that everyone makes great choices... given their own context and point of view. Unfortunately, that point of view sometimes leads us to a local optimization, where things look efficient until we step back and take a look at the bigger picture. Then we realize our local optimization wasn't nearly as efficient as we thought.
This often takes shape in how we break out our team's work. Sometimes we break everything down into layers (horizontal slicing), while other times we slice the work into smaller, but working, bits of functionality (vertical slicing).
Horizontal slicing feels more efficient because it lets different product-area specialists (SQL gurus, UI gurus, server-side code jockeys) work quickly and knock out a lot of stories. It lets your specialists work alone and doesn't force them to engage with the other team members. This leads to bursts of productivity, with people in the zone, heads down and distractions banished.
The result is a large number of "completed" stories and, of course, correspondingly impressive velocities. There's lots of UX work done. Lots of db work done. Lots of server work done. But there's nothing to demo yet, because the work hasn't been integrated. Nothing's actually working or done, except on the story board.
You'll notice this happening when you have to add in "integration stories". These are stories for integrating various layers of work. This is when you discover that the db work isn't exactly what the mid-tier team was expecting. Or the UI team didn't quite understand what the server people put in place. Integration hell is the technical term... you end up spending excessive amounts of time doing frustrating work. Nerves are frayed and tensions run high. Fingers are pointed as people's expectations are dashed again and again.
The alternative is to work more "slowly" and tackle vertical slices of work. With a vertical slice, we focus on one story, but we make it work end to end. This means the UI gurus work alongside the server team, who are working with the database gurus, to get one story completely working. No one can be "done" until the story is working.
What's the advantage? On the surface, we're working more slowly. We're pulling people out of their specialties, forcing them to talk to non-specialist team members. It feels more cumbersome and it feels like you're making less progress. But those feelings are deceiving. We're doing three key things that provide enormous productivity gains.
- We're focusing on smaller amounts of work. The thin slices force us to break down the big stories, then complete and potentially deploy a story before the business changes its mind about our priorities and sends us in a new direction.
- We're building a solid team, forcing conversation and removing an entire category of interpersonal and inter-team conflict. Instead of leaving a series of almost-working stories, we're discussing the integration points during the iteration and resolving confusion before the code is written. When people do move in different directions, it'll be over a small amount of work instead of dozens of stories.
- We're removing the integration stories by integrating as we go. No story can be marked as "Done" until it's functional. This is a very different approach and it eliminates the need for integration stories or integration sprints. At any point, the code can be deployed.
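The difference in demo-able work is easy to see in a toy model. The numbers here are hypothetical assumptions (six stories, three layers each, a deadline after nine finished work items), but the mechanic they illustrate is the one described above: horizontal slicing finishes layers, vertical slicing finishes stories.

```python
# Toy model, hypothetical numbers: 6 stories, each needing db, server,
# and UI work. Each work item takes one unit of effort, and the team
# gets through 9 items before a deadline hits mid-project.
STORIES = 6
LAYERS = ["db", "server", "ui"]
BUDGET = 9  # work items finished before we must ship

def demoable(done):
    """A story is demoable only when every layer of it is finished."""
    return sum(1 for s in range(STORIES) if all((s, l) in done for l in LAYERS))

# Horizontal slicing: finish all db work, then all server work, then UI.
horizontal = [(s, l) for l in LAYERS for s in range(STORIES)][:BUDGET]

# Vertical slicing: finish story 0 end to end, then story 1, and so on.
vertical = [(s, l) for s in range(STORIES) for l in LAYERS][:BUDGET]

print("horizontal demoable stories:", demoable(set(horizontal)))  # → 0
print("vertical demoable stories:", demoable(set(vertical)))      # → 3
```

Same effort spent, but the horizontal team has nothing integrated to show, while the vertical team could deploy three working stories today.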
This is one of the great secrets of many successful agile teams. They say the work can be deployed at any time, and many struggling teams never quite understand how that works. This is how. By focusing on small slices of work and implementing them with cross-functional teams, the work is added in small, incremental slices. This changes the discussion with the business about when to ship new features. It prevents large slices of almost-done code from piling up in chunks of db work, server work, or UI updates.
I strongly suggest you try this practice for a few iterations. It will certainly feel different, even awkward, at first. It'll be cumbersome as you learn how to work with team members you've previously interacted with only in frustration. But as you build up your team-wide "muscle memory", you'll find it becomes second nature, and you'll wonder why you wasted so much time working any other way.
The path of a requirement in a large organization is often foreign to those more familiar with small or medium companies. In smaller arenas, developers and testers help define requirements. Everyone has a clear view of what's being built because they had a hand in defining and refining the ideas. Do you have a question about what this feature should be or what this report should have? Go ask Mike or Sue. It was their idea.
Enterprise scale is different. When the program budget is half a billion dollars and a few thousand developers and testers are involved, requirements work shifts to a specialized team. Often an entire division is tasked with seeking out and documenting requirements. These requirements are compressed and converted into documents that can be shared with teams of developers to implement and teams of testers to verify. Each team can specialize and do its own job at peak efficiency. Unfortunately for most enterprises, this model doesn't work well.
It turns out that requirements are difficult to capture in documents. Many companies and teams have tried various types of documents, spreadsheets, and other tools. Many dollars have been spent trying to capture this lightning in a bottle, but so far everyone has been frustrated. The best result anyone has achieved is a sad acceptance that all requirements are bad, and a plan to rebuild most of the features two or three times until the customers are happy. Or until the customers are worn down enough to accept what's been produced.
We can do better, but it requires a different point of view. Let's start with Tony Brill's excellent battleship example.
The game of Battleship was once a staple of American homes. Kids put up a small divider so their opponents couldn't see their boards, then arranged their fleets of ships. They then took turns guessing (or "shooting at") coordinates to locate the "enemy" fleet. Once you got a hit, you could zero in your fire until your opponent cried out "You sank my battleship!" I spent more than a few hours trying to best my brother and friends, but there's an excellent analogy to our software efforts hidden in this game.
There are two ways to play this game. You can play to be efficient or you can play to be effective.
The first, and most efficient, way to play is not to wait on your opponent. Come up with a strategy and "fire" all your shots. Place every peg you have, then find out whether you placed them in the right spots. This strategy minimizes the time spent playing the game. It's very efficient, just not very effective.
The second strategy is more effective, but can be seen as wasteful. It's not remotely efficient. Place your shot, and then ask your opponent if you hit the mark. No? Then place your next shot somewhere else. Yes? Then focus all your resources in that area. You'll soon sink any battleship you locate with this strategy.
Why is the second strategy seen as inefficient? It takes more time and is labor intensive. You'll spend less time (and fewer salary dollars!) by simply getting all the work done in a single pass. There's a great deal of comfort for the scheduling manager who can see a gate marked on a calendar and who knows the requirements will be "done" on that date. History tells us the "completed" requirements aren't going to be very effective, but they're done.
The second strategy isn't efficient, but it sure is effective! Those labor-intensive checkpoints slow us down, but the feedback is invaluable. Strategy two doesn't guarantee a win, but it gives you a fighting chance.
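The two strategies can be sketched as a small simulation. Everything here is an illustrative assumption (a 10x10 board, the classic fleet, a 50-shot checkerboard salvo for the "efficient" player, a simple hunt-the-neighbors rule for the "effective" one); the point is only that feedback between shots is what actually sinks ships.

```python
import random

SIZE = 10
SHIP_LENGTHS = [5, 4, 3, 3, 2]  # classic Battleship fleet, 17 cells total

def place_fleet(rng):
    """Randomly place non-overlapping ships; return the set of occupied cells."""
    occupied = set()
    for length in SHIP_LENGTHS:
        while True:
            horiz = rng.random() < 0.5
            r = rng.randrange(SIZE if horiz else SIZE - length + 1)
            c = rng.randrange(SIZE - length + 1 if horiz else SIZE)
            cells = {(r + (0 if horiz else i), c + (i if horiz else 0))
                     for i in range(length)}
            if not cells & occupied:
                occupied |= cells
                break
    return occupied

def salvo_strategy(fleet, shots):
    """'Efficient': place every peg up front, no feedback between shots."""
    return len(set(shots) & fleet)

def feedback_strategy(fleet, rng):
    """'Effective': fire, learn hit/miss, then hunt the neighbors of each hit."""
    untried = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    rng.shuffle(untried)
    to_try, tried, hits, shots = [], set(), set(), 0
    while hits != fleet:
        cell = to_try.pop() if to_try else untried.pop()
        if cell in tried:
            continue
        tried.add(cell)
        shots += 1
        if cell in fleet:           # feedback: we're told it's a hit...
            hits.add(cell)
            r, c = cell
            for n in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
                if 0 <= n[0] < SIZE and 0 <= n[1] < SIZE and n not in tried:
                    to_try.append(n)  # ...so focus our resources nearby
    return shots

rng = random.Random(42)
fleet = place_fleet(rng)
# The salvo: 50 shots on a checkerboard, all committed before any feedback.
salvo = [(r, c) for r in range(SIZE) for c in range(SIZE) if (r + c) % 2 == 0]
print("salvo hits (of 17 ship cells):", salvo_strategy(fleet, salvo))
print("feedback shots to sink all 17:", feedback_strategy(fleet, rng))
```

The salvo player finishes fast but never sinks the fleet; the feedback player spends shot after "wasteful" shot and always does.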
How does this relate to requirements?
It's much more efficient to batch up the entire division's work and get it all defined in a single pass, but it's not very effective. Your requirements team hasn't gotten any feedback. The technical teams haven't seen anything. Are the requirements written in a format they understand? Is vital context missing? Can they implement them in a way that's acceptable to the customer? Who knows… but we're making great time!
The second strategy is slower but more effective. Unless, that is, you measure all the rework the "efficient" strategy incurs. Then you'll find the "slower" approach both faster and more effective.
The second strategy focuses on a tighter feedback loop with smaller slices of work. Have the requirements team complete a small amount of work, then pass it over to the technical teams. Do the developers understand it? Can the testers verify it? Find out. First have discussions, then have them implement the first set of features. Bring the running code back to the requirements team. Was everyone speaking the same language when they talked about the report or the preferences pane? No? Then let's clear up that misunderstanding before moving forward. Let's give the requirements team a chance to get better at writing effective requirements before they spend months writing them!
(And we're ignoring the efficiencies found when the development team is only a few weeks behind the requirements team… that's pretty amazing as well.)
If you want to get really crazy with this idea, you'll include developers and testers on the requirement teams, but that's a topic for another day.
A client recently told me that requirements teams are like quarterbacks who run all over the (American) football field throwing passes. The QB thinks he's throwing great passes. He'll tell you how good the passes are if you ask him. But the person who can really judge the pass is the receiver. The QB might throw a great pass, but let's see if the receiver can catch it. That's the judge of the pass. The best pass in the world is useless if it can't be caught.
The same is true of requirements. The best requirements are those the team can implement and verify. It sounds like slowing down to get that interactive verification is inefficient, but it's not. Taking the time to aim the gun before firing slows down the shot, but ensures the target is hit.
A coaching model I've found very effective is something I call roadmapping and mentoring. In a traditional coaching engagement, the coach comes alongside the team and works onsite for some period of time. This is a very effective way to work and teach, but it requires a substantial budget commitment. It also needs a dedicated block of the coach's time. Lining up client needs with the coach's availability is often challenging, and problems are usually discovered after the engagement is complete, when the coach is at a different client site. These hurdles are often insurmountable for the small or medium software shop.
Roadmapping puts a coach back within reach for most teams. The coach comes onsite for only a few days, which drastically reduces the cost. I meet with both the teams and the leadership. What pain points triggered this invitation? We try to identify the existing pain, but we also look for pain the team has come to accept as normal. What hidden pain points exist?
There's usually management pain as well as technical pain. Quite often the pain is perceived as two different issues when it's really two sides of the same coin. The initial goal is to identify a focused set of changes that alleviate the pain points.
After the initial visit, the client has a list of changes or improvements. However, like most people, clients are usually better at making "New Year's"-style resolutions than at following through, so I return every few weeks to check up on my new friends. Sometimes they need a bit of encouragement; other times we switch direction or adopt new goals.
What sorts of challenges do teams have?
Many shops have similar challenges, and while this isn’t a comprehensive list, it does contain the “Top Five” most common issues I’ve encountered.
- Slow delivery/Long product cycles
- Lack of shared product vision
- Quality issues
- Lack of automated builds/tests/deploys
- Expensive manual product verifications
This is just an introductory discussion, but maybe it’ll spark a few ideas that can help move your organization forward.
As teams adopt more responsive software practices, one area is often left behind. We believe the development team should deliver functionality incrementally. We know about minimum viable products. But, especially in larger companies, we hold onto the requirements until they are "done" or "right". Well-intentioned requirements groups work for months getting work lined up for the developers and testers.
First, requirements, when done well, are an ever-evolving view of the product and the customer's needs. Trying to get them right in one go is like trying to ship your product in one pass. It's difficult to make anything perfect, especially requirements. The law of diminishing returns (and Little's Law) kicks in quickly. In other words, it's just not worth it. You'll spend far more money, and lose development runway, trying to perfect requirements.
I'd like to suggest we start thinking in terms of minimum viable estimates. This is not the completed estimate that's solid and can be relied upon. It's an evolving level of confidence. Understand and embrace continual elaboration and the Cone of Uncertainty. Initially we have a fuzzy idea, but it gets better as we move forward. Here are a few steps your estimates might take.
- Gut estimate (or rough estimate). This is an off-the-cuff, rough-order-of-magnitude estimate. I don't want you to do any research or assemble a team. Tell me ~right now~ how big you think this work is, maybe in quarter-year increments. This requirement looks like one quarter for a team with this expertise. That one is at least four quarters!
- Level 0 estimate. We've taken the requirement and broken it down into features. Here's how long we think it'll take, but we haven't gone too deep.
- Level 1 estimate. Now your teams have broken the features down into stories. We're finally moving toward something with solid numbers behind it.
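One way to picture these levels is as a confidence range that narrows as the work is elaborated. The multipliers below are illustrative assumptions loosely inspired by published Cone of Uncertainty ranges, not calibrated data; the shape of the progression is the point.

```python
# Each estimate level narrows the confidence range around a nominal size.
# The (low, high) multipliers are assumptions for illustration only.
LEVELS = {
    "gut":     (0.25, 4.0),   # off-the-cuff, rough order of magnitude
    "level 0": (0.5,  2.0),   # requirement broken down into features
    "level 1": (0.67, 1.5),   # features broken down into stories
}

def estimate_range(nominal_quarters, level):
    """Return the (low, high) range for a nominal size at a given level."""
    low_mult, high_mult = LEVELS[level]
    return (nominal_quarters * low_mult, nominal_quarters * high_mult)

# A requirement we'd guess at "about four quarters":
for level in LEVELS:
    low, high = estimate_range(4.0, level)
    print(f"{level:8} {low:5.1f} to {high:5.1f} quarters")
```

A gut estimate of "four quarters" really means "somewhere between one and sixteen", and that's fine: it's accurate enough for the rough capacity planning discussed below, and each level of elaboration shrinks the range.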
I usually find that gut estimates are much more accurate than anyone suspects, but we're not trying to be "right". We need to be accurate enough to do rough capacity planning as early as possible and to engage development earlier. Are we within an order of magnitude of our capacity? Then proceed. Otherwise, let's start reining in expectations from our customers, managers, and sales teams. These stakeholders rarely get everything they want, but they don't realize it until the proverbial 11th hour. I'd like to get them that information earlier in the process and help them understand that they really do have to sequence (aka prioritize) the work.
And now for the weasel clause.... :)
What are these estimates? We all know they ~aren't~ estimates. True estimates can only come from the team doing the work. Just as your general contractor would never schedule your house remodel without talking to the subcontractors who will do the work, you shouldn't try to plan without talking to your teams. These "estimates" are just enough relative sizing to get us started. They're based on our experience with other projects, but they're only the first step.
What's the point?
It's that the requirements process, just like development, can benefit from a bit of transparency and diversity. Teams can begin working with you on smaller slices of work when they understand where you're heading with the work.
Often teams can start working long before you think they can… but since they have no incremental visibility into your process, they can't tell you when the requirements are good enough. One feature will be solid enough for a good team to get started. Others won't be. Open the blinds and let your coworkers help with the work. They might surprise you.
And finally, the team creating requirements requires feedback. Are they doing the best job possible? Probably not… no more than your developers and testers are. So let them complete a small slice of work and get the teams that consume the requirements involved. Those teams can say "This is GREAT! I love the way this feature is described and organized. It's perfect." They might also say "I have no idea what you mean in this area… please don't write another 150 requirements this way!"
A tight feedback loop will enable your team to continually improve the process. Working without feedback rarely leads to the best product you are capable of producing and never builds a team. Involving your technical teams earlier helps the work become something you all own, instead of commandments handed down from on high.
Never forget that a huge fraction of our work, by some estimates over 70%, is rework caused by wrong requirements. Having another set of eyes with a different perspective will drastically improve quality.
Start with an imprecise gut feel over rough epics and features. Involve your technical coworkers earlier. Don't settle for a big-bang requirements process. You'll be amazed at the effectiveness of a minimum viable requirement.