In the process of writing this article, I interviewed more than a dozen project managers. One question I asked was how to make good decisions. Their answers included weighing options, defining criteria, and seeking out different ways to resolve the situation at hand. But when I asked how many decisions they made a day, and how often they used the techniques they named, they often realized something was wrong. Many admitted (after looking over their shoulders to make sure no one else would hear) that it was impossible to always follow any formalized process for making decisions, given the limited time they had and the number of things they needed to get done.
Instead, they conceded that they often work on intuition, reasonable assumption, and a quick projection of the immediate issue against the larger goals of the project. If they can, they will reapply logic used for previous decisions or make use of experience from previous projects. But as reasonable as this answer sounded every time I heard it, the project manager and I found something disappointing about it. I think we all want to believe that all decisions are made with care and consideration, even though we know it can’t possibly be so. There is limited time and limited brain power, and not all decisions can be made equally well.
Failures in decision making occur most often not because the decision maker was weak-minded or inexperienced, but simply because he invested his energy poorly across all of the different decisions he had to make. There is a meta-process of deciding which decisions to invest time and energy in. It takes experience and the willingness to review mistakes and learn from them to get better at this higher-level decision making. (Different types of training can be done to develop these skills,[44] but I’ve never seen or heard of them as core components of any computer science or project management curriculum.)
It’s the ability to make effective decisions that explains how some people can manage five times as much work (or people) as others: they instinctively divide work into meaningful pieces, find the decisions and actions that have the most leverage, and invest their energy in making those decisions as good as possible. For the decisions they invest less time in, any errors or problems should be easier to recover from than mistakes made on the more important decisions.
It’s curious then that when “decision-making skills” are taught in universities, students typically learn the methods of utility theory or decision tree analysis: processes where choices are assigned numerical values and computations are made against them (cost-benefit analysis is another commonly taught method). Many MBA degree programs include this kind of training.[45] But little coverage is offered for higher-level decisions or other practical considerations of decision making outside of the classroom. Methods like decision tree analysis demand the quantifying of all elements, which works well for exclusively financially based decisions, but is a stretch for design, strategy, or organizational decisions.
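To make that contrast concrete, here is a minimal sketch (in Python) of the kind of arithmetic decision tree analysis prescribes: assign each option a set of outcomes with probabilities and payoffs, then compare expected values. The options, probabilities, and dollar figures below are hypothetical, invented only to show the mechanics.

```python
# A minimal sketch of decision-tree / expected-value arithmetic.
# All options, probabilities, and payoffs are hypothetical examples.

options = {
    "Build the feature in-house": [
        # (probability, payoff in dollars)
        (0.6, 400_000),   # project succeeds on schedule
        (0.4, -150_000),  # project slips and overruns
    ],
    "License a vendor component": [
        (0.9, 250_000),   # integration goes smoothly
        (0.1, -50_000),   # integration problems
    ],
}

def expected_value(outcomes):
    """Weight each payoff by its probability and sum the results."""
    return sum(probability * payoff for probability, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value = ${expected_value(outcomes):,.0f}")
```

The calculation itself is trivial; the hard part is that most design, strategy, and organizational decisions don’t reduce cleanly to probabilities and payoffs, which is exactly why the method is a stretch outside of purely financial choices.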
It’s not surprising then that of the project managers I interviewed, few had formal training in decision making, and of those who did, few used it often. This anecdotal observation fits with what Gary Klein wrote in his book, Sources of Power: How People Make Decisions (MIT Press, 1999): “… be skeptical of courses in formal methods of decision making. They are teaching methods people seldom use.” Klein goes on to explain the many different ways that skilled airline pilots, firefighters, and trauma nurses make decisions, and how rare it is that formalized methods found in textbooks are used to get things done. This doesn’t mean these methods are bad, just that the textbooks rarely provide any evidence about who uses the methods or how successful they are, compared to other techniques.
Much like project managers, Klein observed that these skilled professionals rarely have enough information or time to make those decision methods work. Instead, they have four things: experience, intuition, training, and each other. They make good decisions by maximizing those resources. In some cases, such as with fighter pilots or medical students, training is designed with this in mind. Instead of memorizing idealized procedures or theories during training, an emphasis is placed on developing experience through simulations of common problems and challenges.
In this article, my coverage of decision making focuses on three aspects: understanding what’s at stake, finding and weighing options (if necessary), and using information properly.
Sizing up a decision (what’s at stake)
Everything you do every day is a kind of decision—what time to wake up, what to eat for breakfast, and who to talk to first at work. We don’t often think of these as decisions because the consequences are so small, but we are always making choices. We all have our own natural judgments for which decisions in our lives demand more consideration, and the same kind of logic applies to project management decisions. Some choices, like hiring/firing employees or defining goals, will have ramifications that last for months or years. Because these decisions will have a longer and deeper impact, it makes sense to spend more time considering the choices and thinking through their different tradeoffs. Logically, smaller or less-important decisions deserve less energy.
So, the first part of decision making is to determine the significance of the decision at hand. Much of the time, we do this instinctively—we respond to the issue and use our personal judgment. Am I confident that I can make a good decision on the spot, or do I need more time for this? It often takes only a few moments to sort this out. However, this is precisely where many of us run into trouble. Those instincts might be guided by the right or wrong factors. Without occasionally breaking down a decision to evaluate the pieces that lead to that judgment, we don’t really know what biases and assumptions are driving our thinking (e.g., desiring a promotion, protecting a pet feature, or ignoring decisions that scare us).
With that in mind, here are questions to use in sizing up a decision.
- What problem is at the core of the decision? Decisions often arise in response to new information, and the initial way the issue is raised focuses on the acute and narrow aspects of the problem. So, the first thing is to ask probing questions. For example, the problem might be defined initially as, “We don’t have time to fix all 50 known bugs we’ve found,” but the real issue is probably “We have no criteria for how to triage bugs.” Redefining the decision into a more useful form improves decision quality. Being calm in response to a seemingly urgent issue helps make this happen. Ask questions like: What is the cause of this problem? Is it isolated or will it impact other areas? Whose problem is it? Which goals in the vision does it put at risk? Did we already make this decision in the spec and, if so, do we have good reasons to reconsider now?
- How long will this decision impact the project? How deep will the impact be? A big decision, such as the direction of the vision or the technology to use, will impact the entire project. A small decision, such as what time to have a meeting or what the agenda should be, will impact a small number of people in a limited way. If it’s a long-term decision, and the impact is deep, patience and rigor are required. If it’s a short-term decision with shallow impact, go for speed and clarity, based on a clear sense of the strategic decisions made in the vision. Generally, it’s best to make big decisions early in the project, or early in a given phase, so they can be made with patient thought and consideration, instead of when time is running out.
- If you’re wrong, what’s the impact/cost? What other decisions will be impacted as a result? If the impact is small or negligible, then there isn’t much to lose. However, this doesn’t mean you should start flipping coins. For aspects of projects such as usability or reliability, quality comes from many small decisions being aligned with each other. The phrase “Death by a thousand cuts” comes from this situation, where it’s not one big mistake that gets you: it’s the many tiny ones. So, you must at least consider whether the choice is truly isolated. If it isn’t, it’s best to try and make several choices at once. For example, either follow the same UI design guidelines on all pages, refactor all the code that uses the same API, or cut those features completely. Get as much mileage as possible out of each decision you make.
- What is the window of opportunity? If you wait too long to make the decision, it can be made for you—routes will close and options will go away. In this universe, big decisions don’t necessarily come with greater amounts of time to make them. Sometimes, you have to make tough strategic decisions quickly because of the limited window of opportunity you have. And sometimes, the speed of making a decision is more important than the quality of the decision itself.
- Have we made this kind of decision before? This is the arrogance test. If I put you in an emergency room with a patient squirming on the operating table and asked you to perform heart bypass surgery, how confident would you be? There is no shame in admitting ignorance: it generally takes courage to do so. If you’re working on anything difficult, there will be times when you have no idea how to do something. Don’t hide this (unless you’re choosing speed over quality for the decision in question), or let anyone else hide it. Instead, acknowledge that the team, or you yourself, is inexperienced with this kind of choice and needs outside help or more time. If a leader admits to ignorance, she makes it OK for everyone else to do the same. Suddenly, decision making for the entire team will improve because people are finally being honest.
- Who has the expert perspective? Is this really my decision? Just because someone asks you to decide something doesn’t mean you’re the best person to make the call. You are better at some decisions than others, so be honest with yourself about your decision-making limitations. Never be afraid to pick up the phone and call the people who know more than you do about an issue. At least ask for their consultation and bring them into the discussion. Consider delegating the choice entirely to them: ask whether they think it’s their call to make or yours. If the relationship is good, it might be best to collaborate, although this requires the most time for both parties.
- Whose approval do we need? Whose feedback do we want/need before we decide? The larger the organization, the more overhead costs there are around decisions. A trivial decision can become complex when the politics of stakeholders come into play. A good test of your authority is how often trivial decisions require approvals or the formation of committees. The more processes there are around decisions, the more you must work through influence rather than decree. There are political costs to decisions that have nothing to do with technology, business, or customer considerations, and the impact of a decision includes them.
Finding and weighing options
In Sources of Power: How People Make Decisions, Klein identifies two basic ways people make decisions: singular evaluation and comparative evaluation (see Table 8-1). In singular evaluation, the first option is considered and checked against some kind of criteria (do I want to wear this green shirt today?). If it meets the criteria, it’s chosen and the decision maker moves on to more important things. If it doesn’t meet the criteria, another idea or choice is considered, and the process repeats (how about this yellow shirt?). Examples include finding a bathroom after drinking a liter of soda, or finding something to eat after fasting for three days. The first available restroom or restaurant you find is sufficient, and there’s no need to explore for alternatives.
At the other end of the decision-making spectrum, comparative evaluation requires seeking alternatives before deciding. Considering what city to move your family to is a good example of a common comparative evaluation decision.
| Decision approach | How it works | Example |
| --- | --- | --- |
| Singular evaluation | The first reasonable alternative found is accepted. | You’ve been wounded by zombies and need to find a hospital. |
| Comparative evaluation | Several alternatives are evaluated against each other before deciding. | You have only one extra anti-zombie inoculation and must decide who on the planet to save. |
Singular evaluation makes sense for situations where the difference between a great solution and a decent solution isn’t important. Klein describes these situations as being in the zone of indifference because the decision maker is indifferent to major aspects of the outcome as long as a basic criterion is met. Being able to recognize when all of the alternatives are in the zone of indifference can save a project significant time, enabling you to end debates and discussions early on and to focus energy on the complex decisions worthy of more thought. Good decision makers don’t waste time optimizing things that don’t need to be optimized. As Tyler Durden says, “Let that which does not matter truly slide.”
Comparative evaluation is best for complex situations that involve many variables, have consequences that are difficult to grasp quickly, or require a high quality outcome. New situations or problems that are strategic in nature are prime candidates for comparative evaluation. The more that is at stake in a decision, and the less familiar everyone is with the nature of the options, the more appropriate a comparative evaluation is. With teams, comparative evaluation is the best framework to use if you have to convince others or want their participation in the decision-making process. Comparative evaluation forces you to make relative arguments and develop deeper rationales for action, which is useful for group discussion and communication.
Most of the time, there’s every reason to do quick comparisons. There are many different ways to do comparative evaluation, and some are less elaborate than others. For example, it doesn’t take more than a few minutes to list out a few alternatives for a decision on a whiteboard and to make some quick judgments about their relative value. And even when working alone, I’ve found that making a short list of comparisons is a great way to check my own sanity. If I can’t come up with more than one choice, I clearly don’t understand the problem well enough: there are always alternatives.
Emotions and clarity
Few people talk about them, but there are always emotional and psychological issues involved in decision making. Richard Restak, author of The Secret Life of the Brain (Joseph Henry Press, 2001), wrote, “There is no such thing as a non-emotional moment.” We always have fears, desires, and personal motivations for things, whether we acknowledge them or are even aware of them. Even altruistic motivations, such as wanting the best outcome for the project or for the people involved, have emotional components.
This means that even the most logical business-like person in the room has feelings about what he’s doing, whether he is aware of them or not. Sometimes emotions are useful in making decisions, but other times they slow us down or bias us against things we need to consider. And beyond personal feelings, the act of decision making itself involves pressure and stress, and it can create emotions and feelings that have nothing to do with the matter at hand. By externalizing the decision-making process through writing or talking, you can share emotional burden and increase the odds of finding clarity.
The easy way to comparison
Comparative evaluation can happen only if you’ve clarified the problem or issue to be decided. You also need a sense for desirable outcomes (ship sooner, improve quality, make the VP happy, etc.). Borrow words and phrasing from the vision document, specifications, or requirements lists. Those documents reflect big decisions that have already been made, so use them as leverage. Sometimes a quick conversation with the client, customer, or author of those documents is better than the documents themselves.
If you’re familiar with the specifics of the issue, or can get in a room with someone who is, it takes only a few minutes to come up with a decent list of possible choices. With a quick list, you’ll start to feel better about your alternatives and will have a basis for bringing other people into the discussion. Sometimes, it will be obvious that one choice is dramatically better than the others, and no further analysis is necessary. But often you’ll find the opposite: what appeared to be a no-brainer is more complicated than first thought. By writing down the choices, you get a chance to recognize that other issues were hiding from you.
The simplest way to do this is with a good old pros and cons list. I’m not sure when in life we learn it, but most everyone I’ve ever taught or managed was somehow familiar with making this type of list. What’s strange is that it’s uncommon to see people use these lists in meetings or discussions, perhaps because they’re afraid that by writing down their thought processes, others will think they’re not smart enough to keep it in their heads.
Apparently, the pros/cons list dates back to at least the 15th century, when it was used as a tool to help settle public debates. Then, centuries later, Benjamin Franklin applied the technique to his own decision making, so he is credited with popularizing it in the U.S.
As simple as this kind of list is, there are important considerations for using it effectively:
- Always include a “do nothing” option. Not every decision or problem demands action. Sometimes, the best way to go is to do nothing, let whatever happens happen, and invest energy elsewhere. Sunk costs are rarely worth trying to recover. Always give yourself this option, even if only to force the team to understand exactly what’s at stake in the decision. Depending on your local politics, having “do nothing” on the list can give more relative value to any other decision that you make because it reminds people that there is no universal law that says you must do something about a problem.
- How do you know what you think you know? This should be a question everyone is comfortable asking. It allows people to check assumptions and to question claims that, while convenient, are not based on any kind of data, firsthand knowledge, or research. It’s OK to make big unsupported claims—“I’m 100% positive this function will be reliable”—as long as everyone knows the only thing behind it is the opinion of the person making it (and can then judge it on that merit). As appropriate, seek out data and research to help answer important questions or claims.
- Ask tough questions. Cut to the chase about the impact of decisions. Be direct and honest. Push hard to get to the core of what the options look like. The quicker you get to the heart of the issue and a true understanding of the choices, the sooner you can move on to the next decision. Be critical and skeptical. Ask everyone to put feelings and personal preferences aside: don’t allow good ideas to hide behind the fear of hurting someone’s feelings. Show the list to others on the team, and add in their questions or meaningful comments. Put any questions or possible assumptions in the pros or cons column for a given idea; an unanswered question can still help clarify what a given choice really means.
- Have a dissenting opinion. For important decisions, it’s critical to include unpopular but reasonable choices. Make sure to include opinions or choices you personally don’t like, but for which good arguments can be made. This keeps you honest and gives anyone who sees the pros/cons list a chance to talk you into a better decision than the one you might have arrived at on your own. Don’t be afraid to ask yourself, “What choice would make me look the worst but might still help the project?” or “Are there any good choices that might require that I admit that I’m wrong about something?”
- Consider hybrid choices. Sometimes it’s possible to take an attribute of one choice and add it to another. Like exploratory design, there are always interesting combinations in decision making. However, be warned that this does explode the number of choices, which can slow things down and create more complexity than you need. Watch for the zone of indifference and don’t waste time in it.
- Include any relevant perspectives. Consider if this decision impacts more than just the technology of the project. Are there business concerns that will be impacted? Usability? Localization? If these things are project goals and are impacted by the decision, add them into the mix. Even if it’s a purely technological decision, there are different perspectives involved: performance, reliability, extensibility, and cost.
- Start on paper or a whiteboard. When you’re first coming up with ideas/options, you want the process to be lightweight and fast. It should be easy to cross things out, make hybrids, or write things down rapid-fire (much like early on in the design process). Don’t start by making a fancy Excel spreadsheet, with 15 multicolored columns enabled for pivot tables; you’ll miss the point. For some decisions that are resolved quickly, the whiteboard list is all you’ll ever need. If it turns out you need to show the pros/cons list at an important meeting, worry about making an elaborate spreadsheet or slide deck later.
- Refine until stable. If you keep working at the list, it will eventually settle down into a stable set. The same core questions or opinions will keep coming up, and you won’t hear any major new commentary from the smart people you work with. When all of the logical and reasonable ideas have been vetted, and showing the list to people only surfaces the same set of choices you’ve already heard, it’s probably time to move on and decide.
Discuss and evaluate
Effective decisions can be made only when there is a list of choices and some understanding of how the choices compare to each other. With a list in place, a person can walk through the choices and develop an opinion about which options have the greatest potential. It’s often only through discussion that strong opinions can be developed, and the list of choices acts as a natural discussion facilitator. I always try to put these decision matrixes up on a whiteboard, so when people walk into my office and ask about the status of an issue, I can point them to exactly where I am and show them why I’m leaning in a particular direction. Even if I don’t have a conclusion yet, it’s easy for them to understand why (perhaps buying me more time to make the decision). More so, I can ask them to review it with me, hear out my logic, and offer me their opinions. Instead of trying to explain it all on the fly, the pros/cons list documents all of the considerations and adds credibility to whatever opinion I’ve developed.
On teams that communicate well, it’s natural to discuss critical decisions as a group. Each person in the discussion tries to string together assumptions pulled from the pros/cons list and makes an argument for one particular decision. You’ll hear each person voice her opinion in terms of a story—“If we do this, then X will happen first, but we’ll be able to do Y”—and then someone else will chime in, refining the story or questioning one of the assumptions. The story gets refined, and the pros and cons for choices get adjusted to capture the clearer thinking that the group has arrived at. Over time (which might be minutes or days), everyone involved, especially the decision maker, has a full understanding of what the decision means and what tradeoffs are involved. When the pros and cons list stabilizes, and little new information is being added, it’s time to try and eliminate choices.
Sherlock Holmes, Occam’s Razor, and reflection
The character Sherlock Holmes once said, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” And so it goes with decision making: if you eliminate the worst choices, whatever remains, however bad, must be your best choice. This is admittedly a cynical way to decide things, but sometimes eliminative logic is the only way to gain momentum toward a decision.
If you’ve created a list of possible choices and need to narrow the field, look for choices that do not meet the minimum bar for the project. You might have included them earlier on because they added to the discussion and provided an opportunity to find hybrid choices, or because the requirements were being reconsidered, but now it’s time to cut them loose. Review your documents and requirements lists, check with your customer, and cross off choices that just won’t be good enough.
Another tool to narrow the possibilities is a principle known as Occam’s Razor. William of Occam was a medieval philosopher of the 14th century who’s credited with using the notion of simplicity to drive decisions. He believed that people add complexity to situations unnecessarily. He suggested that the best way to figure things out was to find the simplest explanation and use that first because, most of the time, it was the right explanation (i.e., in modern parlance, “Keep it simple, stupid”).
Occam’s Razor refers to the process of trying to cut away all of the unneeded details that get in the way and return to the heart of the problem. It also implies that the solution with the simplest logic has the greatest odds of being the best. There might be a promising choice that requires risky engineering or new dependencies on unreliable people. Applying Occam’s Razor, that choice’s lack of simplicity would be a reason for taking it off the list of possibilities.
But to apply Occam’s Razor effectively, you need time to reflect. After long hours pounding away at the same issues, you lose perspective. When all the choices start looking the same, it’s time to get away. Go for a walk, get some coffee with a friend, or do anything to clear your mind and think about something else. You need to be able to look at the choices with a clear and fresh mind in order to make an effective decision, and you can’t do that if you’ve been staring at the problem all day.
Reflection is highly underrated as a decision-making tool. To reflect means to step back and allow all of the information you’ve been working with to sink in. Often, real understanding happens only when we relax and allow our brains to process the information we’ve consumed. I find doing something physical like going for a run or walk is the best way to allow my mind to relax. Other times, doing something purely for fun does the trick, like participating in a Nerf fight or playing with my dog. It’s also hard to beat a good night’s sleep (perhaps preceded by a collaborative romp between the sheets) for clearing the mind. But everyone is different, and you have to figure out for yourself the best way to give your mind time to digest everything you’ve been thinking about.
When you do come back to your comparison list, briefly remind yourself what the core issues are. Then, thinking of Occam, look at the alternatives and ask yourself which choice provides the simplest way to solve the problem at hand. The simplest choice might not promise the best possible outcome, but because of its simplicity, it might have the greatest odds of successfully resolving the problem to a satisfactory level.
Information is a flashlight
Most people educated in the Western world are taught to trust numbers. We find it easier to work with numbers and make comparisons with them than with abstract feelings or ideas. Decision and utility theory, mentioned briefly earlier, depends on this notion by claiming that we make better decisions if we can convert our desires and the probabilities of choices into numbers and make calculations based on them. Despite my earlier criticism of these theories, sometimes forcing ourselves to put numerical values on things can help us define our true opinions and make decisions on them.
But decisions aside, we commonly like to see evidence for claims in numeric form. There is a difference in usefulness and believability between someone saying “Our search engine is 12% slower on 3-word queries” and “The system is slow.” Numerical data gives a kind of precision that human language cannot. More so, numerical data is often demanded by people to support claims that they make. The statement “The system is slow” invites the question “How do you know this?” The lack of some kind of study or research into the answer makes the claim difficult to trust, or dependent solely on the opinion and judgment of the person saying it. Sometimes a specific piece of information answers an important question and resolves a decision much faster than would otherwise be possible.
Data does not make decisions
The first misconception about information is that it makes decisions for you. It rarely does. A good piece of information works like a flashlight. It helps illuminate a space and allows someone who is looking carefully to see details and boundaries that were invisible before. If there is currently no data behind a claim, taking the time to get some can accelerate the decision-making process. The fog lifts and things become clear. But returns diminish over time. After the first light has been lit and the basic details have been revealed, no amount of information can change the nature of what’s been seen. If you’re stranded in the middle of the Pacific Ocean, knowing the current water temperature or the subspecies of fish nearby won’t factor much in your survival decisions (but knowing the water currents, trade routes, and constellations might). For most tough decisions, the problem isn’t a lack of data. Tough decisions exist no matter how much information you have. The phenomenon of analysis paralysis, where people analyze obsessively, is symptomatic of the desperate belief that if only there were enough data, the decision would resolve itself. Sadly, this isn’t so. Information helps, but only so much.
It’s easy to misinterpret data
The second misconception about data is that it’s all created equally. It turns out that when working with numbers, it’s very easy to misinterpret information. As Darrell Huff wrote in How to Lie with Statistics (W.W. Norton, 1993), “The secret language of statistics, so appealing in a fact-minded culture, is employed to sensationalize, inflate, confuse, and oversimplify.” Huff categorizes the many simple ways the same data can be manipulated to make opposing arguments, and he offers advice that should be standard training for decision makers everywhere. Most of the tricks involve the omission of important details or the exclusive selection of information that supports a desired claim.
For example, let’s say a popular sports drink has an advertisement that claims “Used by 5 out of 6 superstars.” It sounds impressive, but which superstars are using the product? What exactly separates a star from a superstar? Whoever they are, how were they chosen for the survey? How do they use the drink—to wash their cars? Were they paid first, or were they rejected from the survey if they didn’t already use the drink? Who knows. The advertisement certainly wouldn’t say. If you look carefully at all kinds of data, from medical research to business analysis to technological trends, you’ll find all kinds of startling assumptions and caveats tucked away in the fine print, or not mentioned at all. Many surveys and research reports are funded primarily by people who have much to gain by particular results. Worse, in many cases, it’s magazines and newspaper articles written by people other than those doing the research that are our point of contact to the information, and their objectives and sense of academic scrutiny are often not as high as we’d like them to be.
Research as ammunition
The last thing to watch out for is ammunition pretending to be research. There is a world of difference between trying to understand something and trying to support a specific pet theory. What happens all too often is that someone (let’s call him Skip) has an idea but no data, and he seeks out data that fits his theory. As soon as Skip finds it, he returns to whomever he’s trying to convince and says, “See! This proves I’m right.” Not having any reason to doubt the data, the person yields and Skip gets his way. But sadly, Skip’s supporting evidence proves almost nothing. One pile of research saying Pepsi is better than Coke doesn’t mean there isn’t another pile of research somewhere that proves the opposite. Research, to be of honest use, has to seek out evidence for the claim in question and evidence to dispute the claim (this is a very simple and partial explanation of what is often referred to as the scientific method). Good researchers and scientists do this. Good advertisers, marketers, and people trying to sell things (including ideas) typically don’t.
The best defense against data manipulation and misinterpretation is direct communication between people. Talk to the person who wrote the report instead of just reading it. Avoid second-, third-, and fourth-hand information whenever possible. Talking to the expert directly often reveals details and nuances that are useful but were inappropriate for inclusion in a report or presentation. Instead of depending exclusively on that forwarded bit of email, call the programmer or marketer on the phone and get his opinion on the decision you’re facing. There’s always greater value in people than in information. The person writing the report learned 1,000 things she couldn’t include in it but would now love to share with someone curious enough to ask.
Aside from using people as sources, a culture of questioning is the best way to understand and minimize the risks of information. As we covered earlier in matters of design and decision making, questions lead to alternatives, and they help everyone to consider what might be missing or assumed in the information presented. Questioning also leads to the desire for data from different sources, possibly from people or organizations with different agendas or biases, allowing the decision maker and the group to obtain a clearer picture of the world they’re trying to make decisions in.
Precision is not accuracy
As a last note about information and data, many of us forget the distinction between precision and accuracy. Precision is how specific a measurement is; accuracy is how close to reality a measurement is. Simply because we are offered a precise number (say, a work estimate of 5.273 days) doesn’t mean it has any greater likelihood of being accurate than a fuzzier number (4 or 5 days). We tend to confuse precision and accuracy because we assume if someone has taken the time to figure out such a specific number, the analysis should improve the odds that his estimation is good. The trap is that bogus precision is free. If I take a wild-assed guess (aka WAG) at next year’s revenue ($5.5 million), and another one for next year’s expenses ($2.35 million), I can combine them to produce a convincing-sounding profit projection: $3.15 million. Precise? Yes. Accurate? Who knows. Without asking “How do you know this?” or “How was this data produced?”, it’s impossible to be sure if those decimal places represent accuracy or just precision. Make a habit of breaking other people’s bad habits of misleading uses of precision.
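As an illustration of how cheap bogus precision is, here is a small sketch using the hypothetical revenue and expense guesses above; the ±20% error band is an assumption added purely for illustration. Subtracting two wild guesses produces a profit figure that looks precise to the dollar, while the plausible range behind it spans millions.

```python
# A small sketch of precision vs. accuracy, using the hypothetical
# revenue/expense guesses from the text. The +/- 20% error band is an
# assumption added only for illustration.

revenue_guess = 5_500_000   # wild guess at next year's revenue
expense_guess = 2_350_000   # wild guess at next year's expenses
error = 0.20                # assume each guess could be off by 20% either way

profit = revenue_guess - expense_guess
worst_case = revenue_guess * (1 - error) - expense_guess * (1 + error)
best_case = revenue_guess * (1 + error) - expense_guess * (1 - error)

print(f"Point estimate:  ${profit:,.0f}")                           # $3,150,000 -- looks precise
print(f"Plausible range: ${worst_case:,.0f} to ${best_case:,.0f}")  # roughly $1.6M to $4.7M
```

The point estimate doesn’t become any more accurate by carrying more digits; until someone answers “How do you know this?” about the inputs, the extra precision is just decoration.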
The courage to decide
“All know the way; few actually walk it.”
—Bodhidharma
There is a big difference between knowing the right choice and making the right choice. Often many people can figure out the right decision, but very few will be willing to stand up and put themselves and their reputations behind it. You will always find more people willing to criticize and ridicule you for your decisions than people willing to take on the responsibility and pressure to make the decision themselves. Always keep this in mind. Decision making is a courageous act. The best decisions for projects are often unpopular, will upset or disappoint some important people on the team, and will make you an easy target for blame if things go wrong.
These burdens are common for anyone trying to engage in leadership activity. Decision making is one of the most central things leaders and managers do, and the better the leader, the more courage that’s required in the kinds of decisions that she makes.
Some decisions have no winning choices
One of the ugliest decisions I’ve made as a project manager involved the explorer bar component of Internet Explorer 4.0. The explorer bar was a new part of the user interface that added a vertical strip to the left part of the browser to aid users in navigating through search results, their favorites list, and a history of sites they’d visited. With a few weeks left before our first beta (aka test) release, we developed concerns about a design issue. We’d known about the problem for some time, but with the increasing public pressure of what were called the “browser wars,” we began to fear that this problem could hurt us in the press if we shipped with it.
The issue was this: it was possible, in special cases, to view the explorer bar in the same window as the filesystem explorer, allowing for a user to create a web browser that divided the screen into three ugly vertical strips, leaving a small area for actually viewing web pages. After seeing the press and the industry scrutinize IE 3.0, we feared beta users or journalists might discover this condition, make a screenshot of it, and release it as part of their reviews. Product reviews were critically important, especially for beta releases. There was consensus on the team and pressure from senior management that we had to take action and do something.
I made a pros and cons list quickly, discussed it with my programmers and other project managers, and identified three viable choices. They were all bad. Fixing the problem properly required five days of work, which we didn’t have. We’d have to cut another major feature to do that work in time, and it would be devastating to the quality of the release to do so. There was a hacky solution, requiring two days of work, that eliminated some of the cases that caused this condition, but it was work that would have to be thrown away later (the work was good enough for a beta release, but not good enough for a final release). The last choice was to do nothing and bet that no one would discover this issue. I desperately looked for other alternatives but didn’t find any. Every idea people came to me with led back down to these three choices. I remember sitting in my office one night until very late, just staring at my whiteboard and going around in circles on what I should do.
Every project manager can tell stories of tough choices they had to make. If you have responsibility, they come with the territory. They can involve decisions of budget, hiring, firing, business deals, technology, litigation, negotiation, design, business strategy, you name it. When faced with a tough decision, there is no single right answer. In fact, it’s entirely possible that things may happen to make none of the available choices (or all of them) lead to success. Decision making, no matter how well researched or scrutinized, is another act of prediction. At some level, any tough decision comes down in the end to the project manager’s judgment and courage—and the team’s courage—to follow it.
In this particular situation on IE4, I chose to do nothing. After a sleepless night, I decided I’d rather manage the press issues if and when they occurred (which would consume my time, not the programmers’) instead of investing in insurance against something that hadn’t happened yet. I wasn’t happy about it, but I felt it was the best choice for the project. The team had agreed early on that it was my decision to make, so we moved on.
Good decisions can have bad results
Our hindsight into past events has been unfair to many good decision makers. Simply because things didn’t work out in a particular way doesn’t mean they didn’t make a good choice with the information available. It’s impossible to cover every possibility when dealing with complex, difficult decisions (although some people will try). The more time you spend trying to cover every contingency, a common habit of micromanagers, the less time you’ll have to spend on the probable outcomes. There’s little sense in worrying about getting struck by lightning if you have a heart condition, eat poorly, and consider typing really fast as a form of exercise.
Simply because part of a project fails doesn’t necessarily mean a bad decision was made. It’s common for things to happen beyond the control of the project manager, the team, or the organization. Many things are impossible to predict, or even if predicted, impossible to be accounted for. It’s unfair to hold decision makers accountable for things they couldn’t possibly have known or done anything about. Yet, in many organizations, this is exactly what happens. If a team loses a close game, public opinion tends not to credit the hard work and heroic effort of the players who got the losing team even that far. Blame should be wielded carefully around decision making. Courageous decision makers will tend to fail visibly more often than those who always make safe and cautious choices. If you want courageous decision makers, there needs to be some kind of support for them to make big bets and to help them recover when they fail.
Project managers are definitely responsible for the fate of the project. I’m not suggesting they should be patted on the back for imploding a team. It’s just that care should be taken not to blame a PM for making a good decision that turned out to have a bad outcome. If his logic and thought process were sound before the decision was made, then even in hindsight, his logic and thought process are still just as sound after the decision was made. The state of the world at the moment a decision occurs doesn’t change later on simply because we know more now than we did then. If there was something the PM and the team didn’t know, or couldn’t see, despite their diligence in trying to know and see those things, they shouldn’t be roasted for it. Instead, the team should be thinking about how collectively they might have been able to capture the data and knowledge that they missed and apply that to the next decisions they have to make.
Paying attention and looking back
To improve decision-making skills, two things need to happen. First, you have to make decisions that challenge you and force you to work hard. If you never make decisions that you find difficult, and if you are rarely wrong, it’s time to ask your boss for more responsibility. Second, you have to pay attention to the outcomes of your decisions and evaluate, with the help of others involved, if you could have done anything differently to improve the quality of the outcome. Experience benefits only those who take the time to learn from it.
In training and in real missions, fighter pilots meet in debriefing sessions to review what took place. These sessions are led by senior and experienced staff. The central theme is that the only way to develop and learn about something as complex as being a fighter pilot is to review missions, compare notes with everyone involved about what happened and why, and see if there were any ways to improve the outcome. These discussions often include analysis of strategy and tactics and an exchange of ideas and opinions for alternative ways to deal with the same situation.
The medical community does something similar in what are called M&M or morbidity and mortality sessions (jokingly referred to as D&D, death and doughnuts), though these are typically done only for fatal cases or where something particularly novel or complex was done.
In both cases, it’s up to the leaders of the session to avoid turning the session into a trial or embarrassing people for their mistakes. The goal should be to make them feel comfortable enough with what happened that they are willing to spend time reviewing and re-evaluating what occurred, so they learn something from it, and give others in the organization a chance to benefit from the costs of whatever took place.
Here’s my rough list of questions for reviewing decisions. When I’m called in to help teams evaluate previous work, this is the decision-making framework I start with. This works best as a group activity (because you’ll benefit from different perspectives), but it also functions for reviewing your own thinking.
- Did the decision resolve the core issue? Following up should be part of the decision-making process itself. Even if you make the right call, what matters is how well the team executes the decision. Two hours, one day, or two days after a decision, the decision maker needs to check in and ensure the decision is being carried out. Those first few hours or days are when unforeseen problems arise.
- Was there better logic or information that could have accelerated the decision? Where was time spent in making the decision? Was there any knowledge or advice you could have had that would have accelerated the process of finding or exploring alternatives? What research tools were used? Did anyone go to the library? The bookstore? Search the Web? Call a consultant or expert? Why weren’t these sources used?
- Did the vision, specification, or requirements help? Good project-level decisions should contribute to lower-level decisions. Did this decision reveal a weakness or oversight in the vision? Was the vision/spec/requirement updated after the decision was made to eliminate the oversight?
- Did the decision help the project progress? Sometimes making a bad decision moves the project forward. Decisions catalyze people. By making a quick decision to go east, and changing the perspective, it might become crystal clear that the right direction is actually north. But until the team started moving east, they might never have figured that out. In looking back, clarify why the initial decision was successful: was it because you made the right call or because you made the decision at the right time?
- Were the key people brought into the process and behind the decision? Was there anyone whose support or expertise was needed that wasn’t involved? Did you attempt to contact them and fail, or did you not even try? Was there some way to bring them in more effectively than you did? (You need to get their opinions on this if you want an honest perspective.)
- Did the decision prevent or cause other problems? The immediate issue might have been solved, but were other problems caused? Did morale drop? Was a partner company or team burned by the decision? What negative side effects did the decision have, and could they have been avoided? Were they anticipated, or were they a surprise?
- In hindsight, were the things you were worried about while making the decision the right things? Pressure and paranoia can distort one’s sense for which issues are worthy of attention. In hindsight, you should be able to see the things that were distorted in importance, by you or others, and ask yourself how it happened. Whose opinion or influence contributed to the distortion? Who tried to minimize it but was ignored?
- Did you have sufficient authority to make the right call? Perhaps you had an idea you wanted to run with, but you ditched it for political reasons. Or maybe you spent more time fighting for control over issues that you felt should have been under your authority from the beginning. Consider how power played a role in the decision and how changes in the distribution of power might have changed how things went.
- How can what was learned in making this decision be applied elsewhere in the project? Don’t limit lessons learned to the specifics of the decision. Look at the next wave of decisions coming to the project (next important date or task), and apply the lessons to them. Use the new perspective and look out into the future, rather than only the past. Remember the Burmese saying: “A man fears the tiger that bit him last, instead of the tiger that will bite him next.”
References:
Making Things Happen: Mastering Project Management, by Scott Berkun (O’Reilly Media, 2008).