
Archive of posts with tag 'Tacit Knowledge'

Process vs. Practice

May 2, 2023

In product development, you can orient a team toward process or practice. Process is about repeatability, scalability, efficiency, execution. Practice is about creativity, inventiveness, searching for solutions.

Choosing between them isn’t purely zero-sum (like more practice = worse process), but there’s a natural tension between the two. And as with most ideas, the right approach varies depending on your product, your stage, your team, your timing. In general, you’re looking for a balance.

Divergence and convergence

I heard about this concept on a recent episode of the Circuit Breaker podcast, with Bob Moesta and Greg Engle. A recommended listen.

In the discussion Bob mentions his experiences in Japan, and how the Japanese see process differently than we do here in the US:

A lot of this for me roots back to some of the early work I did in Japan around process. The interesting part is that the Japanese talked about process in terms of the boundaries within which you have control.

The boundaries of the process basically say that you have responsibility and authority to change things inside that process, and that it’s more about continuous improvement, changing, and realizing there’s always a way to make it better, but here are the boundaries. When it got translated over to the US, it got turned into “best practices” — into building a process and defining the steps. “These are ways to do it. I don’t want you to think. Stop thinking and just do the process and it works.” And so what happens is that most people equate making a process to not thinking and “just following the steps”. That’s where I think the big difference is: at some point in time there’s always got to be some deeper thinking inside the process.

Process assumes there’s a linearity to problem-solving — you program the steps in sequence: do step 1, do step 2, do step 3. At a certain stage of maturity, work benefits from this sort of superstructure. Once you’ve nailed the product’s effectiveness (it solves a known problem well), it’s time to swing the other way and start working out a process to bring down costs, speed things up, and optimize.

So what happens when a team over-indexes on process when they should be in creative practice mode?

A key indicator that it’s “practice time” is when you’ve got more unknowns than knowns. When unanswered questions still outnumber answered ones and you try to impose a programmatic process, people get confused and feel trapped. Impose a linear process before it’s time, and your team will grind to a halt and (very slowly) deliver a product that won’t solve user problems.

Too much process (or too early process) means you don’t leave room for the creative thinking required to answer the unanswered.

Legendary engineer and management consultant W. Edwards Deming had a saying about process:

“If you can’t describe what you do as a process, then you don’t know what you’re doing.”

But I love that Moesta calls this out, and I agree: the quip overstates the value of process:

“But that doesn’t mean that if we can describe it as a process, we know what we’re doing. We can have a process and it doesn’t work!”

The best position for the process ↔ practice pendulum is a function of your need at a point in time and the maturity of your particular product (or feature, or function). In general, the earlier you are on a given path, the less you should be concerned with process. You need divergent-thinking creativity to search the problem space; you’re in “solve for unknowns” mode. Later on, in contrast, once you’ve solved for more of the unknowns and have confidence in your chosen direction, it’s beneficial to create repeatability, to shore up your ability to execute reliably. At that point it’s time to converge on executing, not to keep diverging into new territory.

Back to Bob’s point about process meaning “no thinking inside the process”, perhaps we could contrast process and practice by the size of the boundaries inside which we can be divergent and experimental. When we need to converge on scalability and consistency we don’t want to eliminate all thinking, just shrink down the confines of creativity. Even at this point in a mature cycle, the team will still encounter problems that we need them to think creatively to navigate — but the range of options should be limited (e.g. they can’t “start over”). When our problem space is still rife with unanswered questions, we want a pretty expansive space to wander in search of answers. If our problem space is defined by having Hard Edges and a Soft Middle, at different stages of our work we should size that circle according to how much divergence or convergence we need.

All this isn’t to say that during the creative, divergent-thinking period you should have no structure at all to how you conduct the work. Perhaps it’s better to say that at this stage you want to define principles to follow that give you the degrees of freedom you need to explore the solution space.

✦

On Validating Product Ideas

January 19, 2023

Building new things is an expensive, arduous, and long road, so product builders are always hunting for ways to validate new ideas. Chasing ghosts is costly and deadly.

The “data-driven” culture we now live in discourages making bets from the gut. “I have a hunch” isn’t good enough. We need to hold focus groups, do market research, and validate our bets before we make them. We need to come bearing data, learnings, and business cases before the dev team is allowed to get started on new features.

Validating ideas

And there’s nothing wrong with validation! If you can find sources to reliably test and verify assumptions, go for it.

But these days teams are compelled to conduct user testing and research to vet a concept before diving in.

I see this push for data-driven validation as primarily a modern phenomenon, for two reasons:

First, in the old days, development of any new product was an enormously costly affair. You were in it for millions before you had anything even resembling a prototype, let alone a marketable finished product. Today, especially in software, the costs of bringing a new tool to market are dramatically lower than they were 20 or 30 years ago. The toolchains and techniques developed over the past couple decades are incredible. Between the rise of the cloud, AWS, GitHub, and the plethora of open source tools, a couple of people have superpowers to do what used to take teams of dozens and thousands of hours before anything existed beyond a requirements document.

Second, we have tools and the cultural motivation to test early ideas, which weren’t around back in the day. There’s a rich ecosystem of easy-to-use design and prototyping tools, plus a host of user testing tools (like UserTesting, appropriately) to carry your quick-and-dirty prototypes out into the world for feedback. Tools like these have contributed to the push for data-driven-everything. After all, we have the data now, and it must be valuable. Why not use it?

I have no problem with being data-driven. Of course you should leverage all the information you can get to make bets. But data can lie to you, trick you, overwhelm you. We’re inundated with data and user surveys and analytics, but how do we separate signal from noise?

One of my contrarian views is that I’m a big defender of gut-based decision making. Not in an “always trust your gut” kind of way, but rather, I don’t think listening to your intuition means you’re “ignoring the data.” You’re just using data that can’t be articulated. It’s hard to get intuitive, experiential knowledge out of your head and into someone else’s. You should combine your intuition with other, more objective data sources, not attempt to ignore it completely.

I’m fascinated by tacit knowledge (knowledge gained through practice and experience). The concept distinguishes what can be learned only through practice from what can be learned through formal sources — things we can only know through hands-on experience, versus things we can know from reading a book, hearing a lecture, or studying facts. Importantly, tacit knowledge is still knowledge. When you have a hunch or a directional idea about how to proceed on a problem, there’s always an intrinsic basis for why you’re leaning that way. The difference between tacit knowledge and “data-driven” knowledge isn’t that there’s no data in the former case; it’s merely that the data can’t be articulated in a spreadsheet or chart.1

I’ve done research and validation so many ways over the years — tried it all. User research, prototype workshopping, jobs-to-be-done interviews, all of these are tools in the belt that can help refine or inform an idea. But none of them truly validate that yes, this thing will work.

One of the worst traps with idea validation is falling prey to your own desire for your idea to be valid. You’re looking for excuses to put a green checkmark next to an idea, not seeking invalidation, which might actually be the more informative exercise. You find yourself writing down all the reasons that support building the thing, without giving credence to the reasons not to. And in user research, so often there’s little to no skin in the game on the part of your subjects. What’s stopping them from simply telling you what you want to hear?

Paul Graham wrote a post years ago with a phenomenal, dead-simple observation on the notion of polling people on your ideas, a primitive and early form of validation. I reference this all the time in discussions on how to digest feedback during user research:

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

The hardest part is separating the genuine from the imaginary. Because of the skin-in-the-game problem with validation (exacerbated by the fact that the proposal you’re validating is still an abstraction), you’re likely to get a deceptive sense of the value of what you’re proposing. “Would you pay for this?” is a natural follow-up to a user’s theoretical interest, but is usually infeasible at this stage.

It’s very hard to get early critical, honest feedback when people have no reason to invest the time and mental energy to think about it. So the best way to solve for this is to reduce abstraction — give the user a concrete, real, tangible thing to try. The closer they can get to a substantive thing to assess, the less you’re leaving to their imagination as to whether the thing will be useful for them. When an idea is abstract and a person says “This sounds awesome, I would love this”, you can safely assume they’re filling in unknowns with their own interpretation, which may be way off the mark. Getting hands-on with a complete, usable, tangible thing removes many false assumptions. You want their opinion on what it actually will be, not what they’re imagining it might be.

In a post from a couple years ago, Jason Fried hit the mark on this:

There’s really only one real way to get as close to certain as possible. That’s to build the actual thing and make it actually available for anyone to try, use, and buy. Real usage on real things on real days during the course of real work is the only way to validate anything.

The Agile-industrial complex has promoted this idea for many years: moving fast, prototyping, putting things in the market, iterating. But our need for validation — our need for certainty — has overtaken our willingness to take some risk, trust our tacit knowledge, and put early but concrete, minimal-but-complete representations out there to test fitness.

De-risking investments is a critical element of building a successful product. But some attempts to de-risk actively trick us into thinking an idea is better than it is.

So I’ll end with two suggestions: be willing to trust your intuition more often on your ideas, and try your damnedest to smoke test an idea with a complete representation of it, removing as much abstraction as possible.

  1. I’d highly recommend Gerd Gigerenzer’s book Gut Feelings, which goes deep on this topic. For a great précis, check out his conversation from a few years back on the EconTalk podcast.

✦