Coleman McCormick

Archive of posts with tag 'Product'

February 4, 2024 • #

Look back less →

Jason Fried:

A better path is to reflect forward, not backwards. Develop a loose theory while working on what’s next. Appreciate there’s no certainty to be found, and put all your energy into doing better on an upcoming project. But how will you do better next time if you don’t know what went wrong last time? Nothing is guaranteed other than experience. You’ll simply have more time under the curve, and more moments under tension, to perform better moving forward. Internalize as you go, not as you went.

✦
January 27, 2024 • #

Captain Kirk making the case for “muddling through.”

✦
January 23, 2024 • #

What's the unit of impact? →

Ryan Singer calls out the different flavors of “impact”, and the need to be able to articulate the purpose for doing something:

✦

Process vs. Practice

May 2, 2023 • #

In product development, you can orient a team toward process or practice. Process is about repeatability, scalability, efficiency, execution. Practice is about creativity, inventiveness, searching for solutions.

Choosing between them isn’t purely zero-sum (like more practice = worse process), but there’s a natural tension between the two. And as with most ideas, the right approach varies depending on your product, your stage, your team, your timing. In general, you’re looking for a balance.

Divergence and convergence

I heard about this concept on a recent episode of the Circuit Breaker podcast, with Bob Moesta and Greg Engle. A recommended listen.

In the discussion Bob mentions his experiences in Japan, and how the Japanese see process differently than we do here in the US:

A lot of this for me roots back to some of the early work I did in Japan around process. The interesting part is that the difference in the way the Japanese talked about process was around the boundaries within which you have control.

The boundaries of the process basically say that you have responsibility and authority to change things inside that process, and that it was more about continuous improvement, changing, and realizing you know there’s always a way to make it better, but here are the boundaries. When it got translated over to the US, it got turned into “best practices” — to building a process and defining the steps. “These are ways to do it. I don’t want you to think. Stop thinking and just do the process and it works.” And so what happens is that most people equate making a process to not thinking and “just following the steps”. And so that’s where I think the big difference is: at some point in time there’s always got to be some deeper thinking inside the process.

Process assumes there’s a linearity to problem-solving — you program the steps in sequence: do step 1, do step 2, do step 3. At a certain stage of maturity, work benefits from this sort of superstructure. Once you’ve nailed the product’s effectiveness (it solves a known problem well), it’s time to swing the other way and start working out a process to bring down costs, speed things up, and optimize.

So what happens when a team over-indexes on process when they should be in creative practice mode?

A key indicator that it’s “practice time” is when you’ve got more unknowns than knowns. If you try to impose a programmatic process while the unanswered questions still outnumber the answered ones, people get confused and feel trapped. Start imposing a linear process before it’s time, and your team will grind to a halt and (very slowly) deliver a product that won’t solve user problems.

Too much process (or too early process) means you don’t leave room for the creative thinking required to answer the unanswered.

Legendary engineer and management consultant W. Edwards Deming had a saying about process:

“If you can’t describe what you do as a process, then you don’t know what you’re doing.”

But I love Moesta’s pushback here, which I agree with. The quip overstates the value of process:

“But that doesn’t mean that if we can describe it as a process that we know what we’re doing. We can have a process and it doesn’t work!”

The best position for the process ↔ practice pendulum is a function of your need at a point in time, and the maturity of your particular product (or feature, or function). In general the earlier you are on a given path, the less you should be concerned with process. You need the divergent-thinking creativity to search the problem space. You’re in “solve for unknowns” mode. In contrast, later on once you’ve solved for more of the unknowns and have confidence in your chosen direction, it’s beneficial to create repeatability, to shore up your ability to execute reliably. At that point it’s time to converge on executing, not to keep diverging into new territory.

Back to Bob’s point about process meaning “no thinking inside the process”, perhaps we could contrast process and practice by the size of the boundaries inside which we can be divergent and experimental. When we need to converge on scalability and consistency we don’t want to eliminate all thinking, just shrink down the confines of creativity. Even at this point in a mature cycle, the team will still encounter problems that require creative thinking to navigate — but the range of options should be limited (e.g. they can’t “start over”). When our problem space is still rife with unanswered questions, we want a pretty expansive space to wander in search of answers. If our problem space is defined by having Hard Edges and a Soft Middle, at different stages of our work we should size that circle according to how much divergence or convergence we need.

All this isn’t to say that during the creative, divergent-thinking period you should have no structure at all around how you conduct the work. Perhaps it’s better to say that at this stage you want to define principles to follow that give you the degrees of freedom you need to explore the solution space.

✦

Simplicity on the Other Side of Complexity

April 21, 2023 • #

“I would not give a fig for the simplicity on this side of complexity, but I would give my life for the simplicity on the other side of complexity.”

What a great line from Oliver Wendell Holmes.

When you think you’re coming up with “simple” responses to complex problems, make sure you’re not (as Bob Moesta says) creating “simplicity on the wrong side of the complexity.”

What we really want is to work through all the tangled complexity ourselves as we’re picking apart the problem and designing well-fit solutions.

A great (simple) solution to a complex problem gets to be that way because someone’s taken on the burden of detangling the complexity first.

✦

37signals Live Design Review

March 22, 2023 • #

This is an interesting look into how an effective team works through the weeds of a product design review. I love how it shows the warts and complexities of even the seemingly simple flow of sending a batch email in an email client. So many little forking paths and specific details need direct thinking to shape a product that works well.

✦

On Validating Product Ideas

January 19, 2023 • #

Building new things is an expensive, arduous, and long path, so product builders are always hunting for means to validate new ideas. Chasing ghosts is costly and deadly.

The “data-driven” culture we now live in discourages making bets from the gut. “I have a hunch” isn’t good enough. We need to hold focus groups, do market research, and validate our bets before we make them. You need to come bearing data, learnings, and business cases before allowing your dev team to get started on new features.

Validating ideas

And there’s nothing wrong with validation! If you can find sources to reliably test and verify assumptions, go for it.

But these days teams are compelled to conduct user testing and research to vet a concept before diving in.

I see this push for data-driven validation as primarily a modern phenomenon, for two reasons:

First, in the old days, developing any new product was an enormously costly affair. You were in it for millions before you had anything even resembling a prototype, let alone a marketable finished product. Today, especially in software, the costs of bringing a new tool to market are a fraction of what they were 20 or 30 years ago. The toolchains and techniques developed over the past couple decades are incredible. Between the rise of the cloud, AWS, GitHub, and the plethora of open source tools, a couple of people have superpowers to do what used to take teams of dozens and thousands of hours before you had anything beyond a requirements document.

Second, we have tools and the cultural motivation to test early ideas, which weren’t around back in the day. There’s a rich ecosystem of easy-to-use design and prototyping tools, plus a host of user testing tools (like UserTesting, appropriately) to carry your quick-and-dirty prototypes out into the world for feedback. Tools like these have contributed to the push for data-driven-everything. After all, we have the data now, and it must be valuable. Why not use it?

I have no problem with being data-driven. Of course you should leverage all the information you can get to make bets. But data can lie to you, trick you, overwhelm you. We’re inundated with data and user surveys and analytics, but how do we separate signal from noise?

One of my contrarian views is that I’m a big defender of gut-based decision making. Not in an “always trust your gut” kind of way, but rather, I don’t think listening to your intuition means you’re “ignoring the data.” You’re just using data that can’t be articulated. It’s hard to get intuitive, experiential knowledge out of your head and into someone else’s. You should combine your intuitive biases with other, objective data sources, not attempt to ignore your intuition completely.

I’m fascinated by tacit knowledge (knowledge gained through practice and experience). The idea distinguishes what can be learned only through hands-on practice from what can be learned through formal sources — reading a book, hearing a lecture, studying facts. Importantly, tacit knowledge is still knowledge. When you have a hunch or a directional idea about how to proceed on a problem, there’s always an intrinsic basis for why you’re leaning that way. The difference between tacit knowledge and “data-driven” knowledge isn’t that there’s no data in the former case; it’s merely that the data can’t be articulated in a spreadsheet or chart.1

I’ve done research and validation so many ways over the years — tried it all. User research, prototype workshopping, jobs-to-be-done interviews, all of these are tools in the belt that can help refine or inform an idea. But none of them truly validate that yes, this thing will work.

One of the worst traps with idea validation is falling prey to your own desire for your idea to be valid. You’re looking for excuses to put a green checkmark next to an idea, not seeking invalidation, which might actually be the more informative exercise. You find yourself writing down all the reasons that support building the thing, without giving credence to the reasons not to. With user research processes, so many times there’s little to no skin in the game on the part of your subject. What’s stopping them from simply telling you what you want to hear?

Paul Graham wrote a post years ago with a phenomenal, dead-simple observation on the notion of polling people on your ideas. A primitive and early form of validation. I reference this all the time in discussions on how to digest feedback during user research (emphasis mine):

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

The hardest part is separating the genuine from the imaginary. Because of the skin-in-the-game problem with validation (exacerbated by the fact that the proposal you’re validating is still an abstraction), you’re likely to get a deceptive sense of the value of what you’re proposing. “Would you pay for this?” is a natural follow-up to a user’s theoretical interest, but is usually infeasible at this stage.

It’s very hard to get early critical, honest feedback when people have no reason to invest the time and mental energy thinking about it. So the best way to solve for this is to reduce abstraction — give the user a concrete, real, tangible thing to try. The closer they can get to a substantive thing to assess, the less you’re leaving to their imagination as to whether the thing will be useful for them. When an idea is abstract and a person says “This sounds awesome, I would love this”, you can safely assume they’re filling in unknowns with their own interpretation, which may be way off the mark. Getting hands-on with a complete, usable, tangible thing removes many false assumptions. You want their opinion on what it actually will be, not what they’re imagining it might be.

In a post from a couple years ago, Jason Fried hit the mark on this:

There’s really only one real way to get as close to certain as possible. That’s to build the actual thing and make it actually available for anyone to try, use, and buy. Real usage on real things on real days during the course of real work is the only way to validate anything.

The Agile-industrial complex has promoted this idea for many years: moving fast, prototyping, putting things in market, iterating. But our need for validation — our need for certainty — has overtaken our willingness to take some risk, trust in our tacit knowledge, and put early, but concrete and minimal-but-complete representations out there to test fitness.

De-risking investments is a critical element of building a successful product. But some attempts to de-risk actively trick us into thinking an idea is better than it is.

So I’ll end with two suggestions: be willing to trust your intuition more often on your ideas, and try your damnedest to smoke test an idea with a complete representation of it, removing as much abstraction as possible.

  1. I’d highly recommend Gerd Gigerenzer’s book Gut Feelings, which goes deep on this topic. For a great précis, check out his conversation from a few years back on the EconTalk podcast.

✦

Chris Spiek and Ryan Singer on Shape Up

September 28, 2022 • #

Reading Ryan Singer’s Shape Up a few years ago was formative (or re-formative, or something) in my thinking on how product development can and should work. After that it was a rabbit hole on jobs-to-be-done, Bob Moesta’s Demand-Side Sales, demand thinking, and more.

Since writing the book in 2019, he’s developed two new concepts that extend Shape Up a little further: the role of the “technical shaper” and the stage of “framing” that happens prior to shaping.

Framing is a particularly useful device for putting meat on the bones of defining the problem you’re trying to solve. Shaping as the first step doesn’t leave enough space for the team to really articulate the dimensions of the problem it’s attempting to solve. Shaping is looking for the contours of a particular solution, setting the desired appetite for a solution (“we’re okay spending six weeks’ worth of time on this problem”), and laying out the boundaries around the scope for the team to work within.

I know that on our team, the worst project execution happens when the problem is poorly defined before we get started, or when we don’t get down into enough specificity to set proper scopes. I love these additions to the framework.

✦

Notes on Operating Well

June 27, 2022 • #

Sam Gerstenzang wrote an excellent piece a couple weeks ago with operating lessons for growing companies, driven by his learnings from the product team at Stripe. Personally, I’ve got a decade or so of experience as an “operator” at a “startup” (two words I wouldn’t have used to describe my job during most of that time). Since 2011 I’ve led the product team at Fulcrum, a very small team until the last few years, and still only in the medium-size range. So my learnings on what “good operating” looks like are based mostly on this type of experience, not necessarily on how to lead a massive product team at a 10,000-person company. I think a surprising number of the desirable characteristics translate well, though. Many more than BigCo operator types would have you believe.

Drafting tools

Overall, this list Sam put together is especially great for teams that are small, early, experimental, or trying to move fast building and validating products. A few of the tips run pretty contrary to most of what you’ll read in the literature on lean, agile, startuppy teams (unfortunately so, since they shouldn’t be contrarian at all). But I like that many of these still apply to a company like Stripe that’s in the few-thousands headcount range now. There isn’t that stark a difference in desirable characteristics between small and large teams — or at least there doesn’t need to be.

On the desire to enforce consistency:

Consistency is hard to argue against – consistency reinforces brand and creates better ease of use – but the costs are massive. Consistency just feels good to our system centric product/engineering/design brains. But it creates a huge coordination cost and prohibits local experimentation – everything has to be run against a single standard that multiplies in communication complexity as the organization gets larger. I’d try to ask “if we aim for consistency here, what are the costs, and is it worth it?” I think a more successful way to launch new products is to ignore consistency, and add it back in later once the project is successful. It will be painful, but less painful than risking success.

I’d put this in the same category as feeling the need to “set yourself up to scale”. When a team lead argues for doing something in a particular way because it conforms with a specific process or procedure you want to reinforce through the company, it’s an easy case to make. But too often it ignores the trade-offs in coordination overhead it’ll take to achieve, and the value of the consistency ends up suspect anyway. In my experience, coordination cost is a brutal destroyer of momentum in growing companies. Yes, some amount of it is absolutely necessary to keep the ship floating, but you have to avoid saddling yourself with coordination burdens you don’t need (and won’t benefit from anyway). Apple or Airbnb might feel the need to tightly coordinate consistency, but you aren’t them. Don’t encumber yourself with their problems when you don’t have to.

Enforcement of consistency — whether in design, org charts, processes, or output — has a cost, sometimes a high cost. So it’s not always worth the trade-off you have to make in speed or shots-on-goal. With a growing company, the number of times you can iterate and close feedback loops is the coin of the realm. Certain things may be so important to the culture you’re building that the trade-off is worth it, but be judicious about this if you want to maintain velocity.

On focus:

People think focus is about the thing you’re focused on. But it’s actually about putting aside the big shiny, exciting things you could be working on. The foundation of focus is being clear upfront about what matters—but the hard work is saying no along the way to the things that feel like they might matter.

True focus is one of the hardest parts once your team gets some traction, success, and revenue. It’s actually easier in the earlier days when the sense of desperation is more palpable (I used to say we were pretty focused early on because it was the only way to get anything made at all). But once there’s money coming in, users on your product, and a growing customer base, you have time and resources you didn’t have before that you can choose to allocate as you wish. The thing is, the challenge ahead is still enormous, and the things you’ve yet to build are going to get even more intricate, complex, and expensive.

It’s simple to pay lip service to staying focused, and to write OKRs for specific things. But somehow little things start to creep in. Things that seem urgent pop up, especially dangerous if they’re “quick wins”. You only have to let a few good-but-not-that-important ideas float around and create feel-good brainstorming sessions, and before you know it, you’ve burned days on stuff that isn’t the most important thing. There’s power in taking an ax to your backlog or your kanban board of ideas explicitly. Delete all the stuff you aren’t gonna do, or at least ship it off to Storage B and get to work.

On product-market fit and small teams:

Start with a small team, especially when navigating product market fit. Larger teams create communication overhead. But more importantly they force definition = you have to divide up the work and make it clear who is responsible for what. You’re writing out project plans and architecture diagrams before you even should know what you’re building. So start small and keep it loose until you have increased clarity and are bursting at the seams with work.

To restate what I said above, it’s all about feedback loops. How many at-bats can you get? How many experiments can you run? Seeking product-market fit is a messy, failure-ridden process that requires a ton of persistence to navigate. But speed is one of your best friends when it comes to maintaining the determination to unlock the job to be done and find just enough product fit that you get the signals needed to inform the next step. Therefore, small, surgical teams are more likely to successfully run this gauntlet than big ones. All of the coordination cost on top of a big, cross-functional group will drain the team, and greatly reduce the number of plate appearances you get. If you have fewer points of feedback from your user, by definition you’ll be less likely to take a smart second or third step.

The responsibility point is also a sharp one — big teams diffuse the responsibility so thinly that team members feel no ownership over the outcomes. And it might be my cynicism poking through, but on occasion advocates for teams to be bigger, broader, and more cross-functional are really on the hunt for a crutch to avoid ownership. As the aphorism goes, “a dog with two owners dies of hunger.” Small teams have stronger bonds to the problem and greater commitment to finding solutions.

Overall, I think his most important takeaway is the value of trust systems:

Build trust systems. The other way is to create systems that create trust and distributed ownership – generally organized under some sort of “business lead” that multiple functions report to. It’s easier to scale. You’ll move much more quickly. A higher level of ownership will create better job satisfaction. But you won’t have a consistent level of quality by function, and you’ll need to hire larger numbers of good people.

A company is nothing but a network of relationships between people on a mission to create something. If those connections between people aren’t founded in trust and a shared understanding of goals and objectives, the cost of coordination and control skyrockets. Teams low in trust require enormous investment in coordination — endless status update meetings, check-ins, reviews, et al.1 If you can create strong trust-centric operating principles, you can move so much faster than your competition that it’s like having a superpower. The larger teams grow, of course, the more discipline is required to reinforce foundations of trust, but also the more important those systems become. A large low-trust team is dramatically slower than a small one. Coordination costs compound fast: the number of pairwise communication paths alone grows quadratically with headcount.

  1. Don’t take this to mean I’m anti-meeting! I’m very pro-meeting. The problem is that ineffective ones are pervasive.

✦

Hard Edges, Soft Middle

January 2, 2022 • #

Have you had that feeling, several weeks into a project, where you find yourself wandering around, struggling to wrangle the scope back to what you thought it was when you started?

It’s an easy trap to fall into. It’s why I’m always thinking about ways to make targets smaller (or closer, if you’re thinking about real physical targets). The bigger and more ambitious you want to be with an objective, the more confidence you need that the objective is the right one. What often happens is we decide on a project scope — a feature or product prototype we think has legs — but the scope gets bigger than our confidence that we’re right. A few weeks in and there’s hedging, backtracking, redefining. You realize you went down a blind alley that’s hard to double back on.

I heard an interesting perspective on scopes and approaches to building. Think of the “scope” as the definition of what the project is seeking to do, and the approach as the how.

Hard edges, soft middle

In an interview on David Perell’s podcast, Ryan Singer compared having a hard outer boundary for the work with soft requirements on approach, versus rigid, specific micro-steps with no solid fence around them (an unclear or amorphous objective). In his words: “hard walls with a soft middle” or “hard middle with a soft wall”:

I’ve had this mental image that I haven’t been able to shake that’s working for me lately, which is what we’re doing in Shape Up. We have a very hard outer wall for the work. And we have a soft middle. So there’s a hard outer boundary perimeter — it’s very fixed, it’s going to be six weeks and we’re doing this, and this is in the project, and this is out of the project, and this is what the solution is, more or less. Clear hard outer boundaries. But then the middle is totally like “hey, you guys figure it out.” Right now what a lot of companies have is the opposite. They have a hard middle and a soft boundary. So what happens is they commit to this for the first two weeks, we’re going to build this and we’re going to build that, and we’re going to build all these little things. And these become tickets or issues or very specific things that have to get done. And then what happens the next two weeks you say, okay, now we’re going to do this. You’re specifying exactly what should go in the middle, and it just keeps growing outward because there’s no firm boundary on the outside to contain it. So this is the hard wall and the soft middle or the hard middle and the soft wall. I think these represent two very, very different approaches.

This requires trust in the product team to choose approach trade-offs wisely. If you encounter a library in use for the feature that’s heavily out of date, but the version update requires sweeping changes throughout the app, you’ll need to pick your battles. A team with fixations on particular steps (the “hard middle”) might decide too early that an adjacent feature needs rework1. Before pulling up to a higher altitude to look at the entire forest, the team’s already hitched to a particular step.

A hard edge with a soft middle defines what the field of play and the game plan look like, but doesn’t prescribe for the team what plays to run. The opposite model has a team hung up on specific play calls, with no sense for how far there is to run, or even how large the field is in the first place. When you grant the team the freedom to make the tactical choices, everyone knows there’s some freedom, but it isn’t infinite. The team can explore and experiment to a point, but doesn’t have forever to mess around. If you choose to work in the Shape Up-style six-week cycle, decision velocity on your approaches has to be pretty high to hit your targets.

Any creative work benefits from boundaries, from having constraints on what can be done. The writer is constrained by a deadline or word count. The artist is constrained by the canvas and medium. A product team should be constrained by a hard goal line in terms of time or objective, or preferably both.

Some of the best work I’ve ever been a part of happened when we chose particular things we weren’t going to do — when we intentionally blocked specific paths for ourselves for some cost/benefit/time balance. Boundaries let us focus on fewer possibilities and give each one more useful, serious attention. We can strongly consider 10 approaches rather than poorly considering 50 (or, even worse, becoming attached to a specific one before we’ve explored any others).

Premature marriage to specific tactics pins you to the ground at the time when you need some space to explore. Because you’ve locked yourself into a particular approach too early, it may take tons of effort and time to navigate from your starting point to the right end point. You may end up having to do gymnastics to make your particular decided-upon solution fit whatever problem you can find (a solution in search of a problem).

Hard edge, soft middle reminds me of a favorite philosophy from the sage Jeff Bezos, talking about Amazon’s aggressive, experimental, but intentional operating culture:

“Be stubborn on the vision, but flexible on the details.”

  1. “Need” is a dangerous trigger word. Almost always the perceived need is based on a particular understanding of trade-offs that could be misguided. One engineer’s need to pay down some technical debt (while noble, of course) might be the opposite for the CEO, who may be seeing a bigger-picture, existential need for the business. A thing is only “more needed” relative to something else.

✦

Product-led Growth Isn't Incompatible with Sales

September 1, 2021 • #

Product-led growth has been booming in the B2B software universe, becoming the fashionable way to approach go-to-market in SaaS. I’m a believer in the philosophy, as we’ve seen companies grow to immense scales and valuations off of the economic efficiencies of this approach powered by better and better technology. People point to companies like Atlassian, Slack, or Figma as examples that grew enormously through pure self-service, freemium models. You hear a lot of “they got to $NN million in revenue with no salespeople.”

This binary mental model of either product-led or sales-led leads to a false dichotomy, imagining that these are mutually exclusive models — to grow, you can do it through self-service or you can hire a huge sales team, pick one. Even if it’s not described in such stark terms, claims like “they did it without sales” position sales as a sort of necessary evil we once had to contend with against our wills as technology builders.

Product-sales compatibility

But all of the great product-led success stories (including those mentioned above) include sales as a component of the go-to-market approach. Whether they refer to the function being performed by that name or they prefer any number of other modern euphemisms (customer happiness advocate, growth advisor, account manager), at scale customers end up demanding an engagement style most of us would call “sales.”

Product-led, self-service models and sales are not incompatible with one another. In fact, if structured well, they snap together into a synergistic flywheel where each feeds off of the other.

Early-stage customers

Product-led tactics have the most benefit in the early stage of a customer’s lifecycle, when your product is unproven. Free trials and freemium options lower the bar to getting started down to the floor, self-service tools allow early users to learn and deploy a tool in hours on their own timeline, and self-directed purchasing lets the buyer buy rather than be sold to. In 2021, flexibility is table stakes for entry-level software adoption. There are so many options now, the buying process is in the customer’s control.

With the right product design, pricing, and packaging structure, customers can grow on their own with little or no interaction through the early days of their expansion. Small to mid-size users may expand to their maximum size with no direct engagement at all. Wins all around.

For larger customers (the ones all of us are really after in SaaS), this process gets them pretty far along, but at some stage other frictions enter the picture that have nothing to do with your product’s value or the customer’s knowledge of it. Financial, political, and organizational dynamics start to rear their heads, and these sorts of human factors are highly unlikely to get resolved on their own.

The Sales Transition

Once the bureaucratic dynamics are too great, for expansion to continue we need to intervene to help customers navigate their growing usage. As I wrote about in Enterprises Don’t Self-Serve, several categories of friction appear that create growth headwinds:

  • Too many cats need to be herded to get a deal done — corralling the bureaucracy is a whole separate project unrelated to the effectiveness or utility of the product; no individual decision maker
  • The buyer isn’t the user — user can’t purchase product, purchaser has never used product; competing incentives 
  • If you have an advocate, they have a day job — And that job isn’t playing politics with accounting, legal, execs, IT, and others

As you start encountering these, you need to proactively intervene through sales. The role of sales is to connect with and navigate the players in the organization, then negotiate the give and take arrangements that create better deals for both parties: e.g. customer commits to X years, customer gets Y discount. Without a sales-driven approach here, every customer is treated as one-size-fits-all. Not the best deal for the vendor or customer. When you insert sales at the right stage, you increase the prospect of revenue growth, and the customer’s ability to sensibly scale into that growth with proper integration throughout their organization.

In SaaS literature you’ll read about the notion of “champions”: internal advocates for your product within the customer’s organization who are instrumental in growing usage. Champions serve a function in both methodologies — with product-led growth, they’re pivotal for adoption to perpetuate itself without your involvement, and when engaging with sales, we need those champions to be intermediaries between vendor and buyer. They act like fixers or translators, helping to mediate the communication between the sides.

A well-built, product-led product mints these champions through empowerment. We give users all the tools they need — documentation, guides, forums, SDKs — to build and roll out their own solution. After a couple phases of expansion, users evolve from beginners to experts to champions. If we do it right and time sales correctly, champions are a key ingredient in maximizing the relationship for both customer and product-maker. A product-led approach early on creates momentum to keep growing, a back pressure that sales can harness to our advantage.

AppCues publishes their product-led growth flywheel, which describes this cycle succinctly:

Product-led flywheel

As they demonstrate, a user becoming a champion isn’t the end state; champions beget brand-new users through advocacy, word-of-mouth, and promotion within their own networks.

It’s dangerously short-sighted to look down your nose at sales as a dirty word. Sales isn’t just something you resort to when you “can’t do PLG”; it’s a positive-sum addition to your go-to-market when you execute this flywheel properly.

✦

On Effectiveness vs. Efficiency

July 26, 2021 • #

“Efficiency is doing things right; effectiveness is doing the right things.”

— Peter Drucker

People throw around these two words pretty indiscriminately in business, usually not making a distinction between them. They’re treated as interchangeable synonyms for broadly being “good” at something.

We can think about effectiveness and efficiency as two dimensions on a grid, often (but not always) in competition with one another. More focus on one usually means less on the other.

That Drucker quote is a pretty solid one-line distinction. But like many quotes, it’s more concerned with being pithy and memorable than with being helpful.

Effectiveness

“Doing things right” is too amorphous. I’d define the two dimensions like this:

  • Efficiency is concerned with being well-run, applying resources with minimal waste; having an economical approach
  • Effectiveness is a focus on fit, fitting the right solution to the appropriate problem, being specific and surgical in approach

Where would speed fit into this? Many people would think of velocity of work as an aspect of efficiency, but it’s also both a result of and an input to effectiveness. When a team of SEAL operators swoops in to hit a target, we’d say that’s just about the pinnacle of being “effective”, and swiftness is a key factor in driving that effectiveness.

Let’s look at some differences through the lens of product and company-building. What does it mean to orient on one over the other? Which one matters more, and when?


A company is like a machine — you can have an incredibly efficient machine that doesn’t do anything useful, or you can have a machine that does useful things while wasting a huge amount of energy, money, and time.

On one dimension, our team leans toward methods and processes that efficiently deploy resources:

  • Use just the right number of people on a project
  • Create infrastructure that’s low-cost
  • Build supportive environments that get out of peoples’ way
  • Instrument processes to measure resource consumption
  • Spend less on tools along the way

With this sort of focus, a team gets lean, minimizes waste, and creates repeatable systems to build scalable products. Which all sounds great!

On the other dimension, we focus more attention on effectiveness, doing the right things:

  • Spend lots of time listening to customers to map out their problems (demand thinking!)
  • Get constant feedback on whether or not what we’re making helps customers make progress
  • Test small, incremental chunks so we stay close to the problem
  • Make deliberate efforts, taking small steps frequently, not going too far down blind alleys with no feedback

Another great-sounding list of things. So what do we do? Clearly there needs to be a balance.

Depending on preferences, personality types, experiences, and skill sets, different people will tend to orient on one of these dimensions more than the other. People have comfort zones they like to operate in. Each stage of product growth requires a different mix of focuses and preferences, and the wrong match will kill your company.

If you’re still in search of the keys to product-market fit — hunting for the right problem and the fitting solution — you want your team focused on the demand side. What specific pains do customers have? When do they experience those pains? What things within our reach could function as solutions? You want to spend time with customers and rapidly probe small problems with incremental solutions, testing the validity of your work. That’s all that matters. This is Paul Graham’s “do things that don’t scale” stage. Perfecting your machine’s efficiency is wasted effort until you’re solving the right problems.

A quick note on speed, and why I think it’s critical to being effective — if you’re laser-focused on moving carefully and deliberately to solve the right problem at the right altitude, but you can’t move quickly, your feedback loop won’t be tight enough to run through the iteration cycle as many times as you need. Essential to the effectiveness problem is the ability to rapidly drive signal back from a user to validate your direction.

When you find the key that unlocks a particular problem-solution pair, then it’s time to consider how efficiently you can expand it to a wider audience. If your hacked-together, duct-taped solution cracks the code and solves problems for customers, you need to address how economically you can expand it to others. In the early to mid-stage, though, effectiveness is far and away the more important thing to focus on.

The traditional definition of efficiency refers to achieving maximum output with the minimum required effort. When you’re still in search of the right solution, the effort:output ratio barely matters. It only matters insofar as you have the required runway to test enough iterations to get something useful before you run out of money, get beat by others, or the environment changes underneath you. But there’s no benefit to getting 100 miles/gallon if you’re driving the wrong way.

Getting this balance wrong is easy. There’s a pernicious tendency among many engineers, particularly in pre-product-market-fit products: they like to optimize things. You need to forcefully resist spending too much time on optimization, rearchitecting, refactoring, et al. until it’s the right time (i.e. the go-to-market fit stage, or thereabouts). As builders or technologists, most of us bristle at the idea of doing something the quick and dirty way. We have that urge to automate, analyze, and streamline things. That’s not to say there’s zero space for this. If you spend literally zero time on a sustainable foundation, then when your product clicks and it’s time to scale up, you’ll be building on unstable ground (see the extensive literature on technical debt here).

There’s no “correct” approach here. It depends on so many factors. As Thomas Sowell says, “there are no solutions, only trade-offs.” In my first-hand experience, and from sideline observations of other teams, companies are made by favoring effectiveness early and broken by ignoring efficiency later.

✦

The Low-Code IKEA Effect

March 22, 2021 • #

I linked a few days ago to Packy McCormick’s piece Excel Never Dies, which went deep on Microsoft Excel, the springboard for a thousand internet businesses over the last 30 years. “Low-code” techniques in software have become ubiquitous at this point, and Excel was the proto-low-code environment — one of the first that stepped toward empowering regular people to create their own software. In the mid-80s, if you wanted to make your own software tools, you were in C, BASIC, or Pascal. Excel and its siblings (Lotus 1-2-3, VisiCalc) gave users a visual workspace, an abstraction layer lending power without the need to learn languages.

Today in the low-code ecosystem you have hundreds of products for all sorts of use cases leaning on similar building principles — Bubble and Webflow for websites, Make.com and Zapier for integrations, Notion and Coda for team collaboration, even Figma for designs. The strategy goes hand-in-hand with product-led growth: start with simple use cases, be inviting to new users, and gradually empower them to build their own products.

Low-code IKEA effect

Excel has benefited from this model since the 80s: give people some building blocks, a canvas, and some guardrails and let them go build. Start out with simple formulas, create multiple sheets, cross-link data, and eventually learn enough to build your own complete custom programs.

What is it about low-code that makes for such effective software businesses? Sure, there’s the flexibility it affords for unforeseen use cases, and the adaptability to apply a tool to a thousand different jobs. But there’s psychology at play here that makes it particularly compelling for many types of software.

There’s a cognitive phenomenon called the “IKEA effect”, which says:

Consumers are likely to place a disproportionately high value on products they partially created

IKEA is famous for its modular furniture, which customers take home and partially assemble themselves. In a 2011 paper, Michael Norton, Daniel Mochon, and Dan Ariely identified this effect, studying how consumers valued products that they personally took part in creating, from IKEA furniture, to origami figures, to LEGO sets. Other studies of effort justification go way back to the 1950s, so it’s a principle that’s been understood, even if only implicitly, by product creators for decades.

Low-code tools harness this effect, too. Customers are very willing to participate in the creation process if they get something in return. In the case of IKEA it’s more portable, affordable furniture. In low-code software it’s a solution tailored to their personal, specific business need. Paradoxically, the additional effort a customer puts into a product through self-service, assembly, or customization generates a greater perception of value than being handed an assembled, completed product.

SaaS companies should embrace this idea. Letting the customer take over the “last mile” creates wins all around. Mutual benefits accrue to both creator and consumer:

  • Customers have a sense of ownership when they play a role in building their own solution.
  • The result can be personalized. In business environments, companies want to differentiate themselves from competitors. They don’t want commodities that any other player can simply buy and install. Certainly not for their unique business practices.
  • Production costs are reduced. The product creator builds the toolbox (or parts list, instructions, and tool kit) and lets the consumer take it from there. You don’t have to spend time understanding the nuances of hundreds of different use cases. You provide building blocks and let recombination generate thousands of unique solutions.
  • Increased retention! Studies showed that consumers consistently rated products they helped assemble higher in value than already-assembled ones. This valuation bias manifests in retention dynamics for your product: if customers are committed enough to build their own solution, they’re more likely to imbue it with greater value.

The challenge for product creators is to strike a balance — a “just-right” level of customer participation. Too much abstraction in your product, requiring too much building from primitives, and the customer gets confused and is unlikely to have the patience to work through it. Likewise, when you buy an IKEA table, you don’t want to be sanding, painting, or drilling, but snapping, locking, and bolting are fine. Successful completion is a key criterion for getting the positive upside. From the Wikipedia page:

To be sure, “labor leads to love only when labor results in successful completion of tasks; when participants built and then destroyed their creations, or failed to complete them, the IKEA effect dissipated.” The researchers also concluded “that labor increases valuation for both ‘do-it-yourselfers’ and novices.”

Participation in the process creates a feedback loop: the tool adapts to the unique circumstances of the consumer, functions as a built-in reward, and the consumer learns more about their workflow in the process.

Low-code as a software strategy allows for a personalization on-ramp. Its IKEA effect gives customers the power to participate in building their own solution, tailoring it to their specific tastes along the way.

✦

Jobs Clubhouse Does

February 23, 2021 • #

If you’re on the internet and haven’t been living under a rock for the last few months, you’ve heard about the startup Clubhouse and its explosive growth. It launched around the time COVID lockdowns started last year, and has been booming in popularity even with (maybe in part due to?) an invitation gate and waitlist to get access.

The core product idea centers around “drop-in” audio conversations. Anyone can spin up a room accessible to the public, others can drop in and out, and, importantly, there’s a sort of peer-to-peer model of contribution that differentiates it from podcasting, its closest analog.

Clubhouse

I got an invite recently and have been checking out sessions from the first 50 or so folks I follow, really just listening so far. Their user and growth numbers aren’t public, but from a glance at my follow recommendations I see lots of people I follow on Twitter already on Clubhouse.

They recently closed a B round led by Andreessen Horowitz, who also backed the company in its earlier months last year. Any time an investor does successive rounds this quickly, it’s an indicator of magic substance under the hood, of signals that show tremendous upside possibility. In the case of Clubhouse, user growth is obviously a big deal — viral explosion this quickly is always a good early sign — but I’m sure there are other metrics they’re seeing that point to something deeper going on with product-market fit. Perhaps DAUs are climbing proportional to new user growth, average session duration is super long, or retention is extremely high (users returning every day).

On the surface a skeptical user might ask: what’s so different here from podcasts? It’s amazing what explosive growth they’ve had given the similarities to podcasting (audio conversations), and considering its negatives when compared with podcasts. In all of the Clubhouse rooms I’ve been in, most users have telephone-level audio quality, there’s somewhat chaotic overtalk, and “interestingness” is hard to predict. With podcasts you can scroll through the feed and immediately tell whether you’ll find something interesting; when I see an interesting guest name, I know what I’m getting myself into. You can reliably predict that you’ll enjoy the hour or so of listening.

Whenever a new product starts to take off like this, it’s hitting on some aspect of latent user demand, unfulfilled. What if we think about Clubhouse from a Jobs to Be Done perspective? Thinking about it from the demand side, what role does it play in addressing jobs customers have?

Clubhouse’s Differentiators

Clubhouse describes itself as “Drop-in audio chat”, which is a stunningly simple product idea. Like most tech innovations of the internet era, the foundational insight is so simple that it sounds like a joke, a toy. Twitter, Facebook, GitHub, Uber — the list goes on and on — none required invention of core new technology to prosper. Each of them combined existing technical foundations in new and interesting ways to create something new. Describing the insights of these services at inception often prompted responses like: “that’s it?”, “anyone could build that”, or “that’s just a feature X product will add any day now”. In so many cases, though, when the startup hits on product-market fit and executes well, products can create their own markets. In the words of Chris Dixon, “the next big thing will start out looking like a toy”.

Clubhouse rides on a few key features. Think of these like Twitter’s combo of realtime messaging + 140 characters, or Uber’s connection of two sides of a market (drivers and riders) through smartphones and a user’s current location. Clubhouse takes audio chat and combines it with:

  • Drop-in — You browse a list of active conversations, one tap and you drop into the room. Anyone can spin up a room ad-hoc.
  • Live — Everything happening in Clubhouse is live. In fact, recording isn’t allowed at all, so there’s a “you had to be there” FOMO factor that Clubhouse can leverage to drive attention.
  • Spontaneous — Rooms are unpredictable, both when they’ll sprout up and what goes on within conversations. Since anyone can raise their hand and be pulled “on stage”, conversation is unscripted and emergent.
  • Omni-directional — Podcasts are one-way, from producer to listener, though some shows have “listener mail” feedback loops. Clubhouse rooms by definition have a peer-to-peer quality. They truly are conversations, at least as long as the room doesn’t have 8,000 people in it.

None of these is a new invention. Livestreaming has been around for years, radio has done much of this over the air for a century, and people have been hosting panel discussions since the time of Socrates and Plato. What Clubhouse does is mix these together in a mobile app, giving you access to live conversations whenever you have your phone plus connectivity. So, any time.

Through the Lens of Jobs

Jobs to Be Done focuses on what specific needs exist in a customer’s life. The theory talks about “struggling moments”: gaps in demand that product creators should be in search of, looking for how to fit the tools we produce into true customer-side demand. It describes a world where customers “hire” a product to perform a job. Wherever you see products rocketing off like Clubhouse, there’s a clear fit with the market: users are hiring Clubhouse for a job that wasn’t fulfilled before.

Some might make the argument that it’s addressing the same job as podcasts, but I don’t think that’s right exactly. For me it has hardly diminished my podcast listening at all. I think the market for audio is just getting bigger — not a zero-sum taking of attention from podcasts, but an increase in the overall size of the pie. Distributed work and the reduction of in-person interaction and events have amplified this, too (which we’ll get to in a moment, a critical piece of the product’s explosive growth).

Let’s go through a few jobs to be done statements that define the role that Clubhouse plays in its users’ lives. These loosely follow a format for framing jobs to be done, statements that are solution-agnostic, result in progress, and are stable across time (see Brian Rhea’s helpful article on this topic).

I’m doing something else and want to be entertained, informed, etc.

Podcasts certainly fit the bill here much of the time. Clubhouse adds something new and interesting in how lightweight the decision is to jump into a room and listen. With podcasts there’s a spectrum: on one end you have informative shows like deep dives on history or academic subjects (think Hardcore History or EconTalk) that demand attention and that entice you to completionism, and on the other, entertainment-centric ones for sports or movies, where you can lightly tune in and scrub through segments.

The spontaneity of Clubhouse rooms lends well to dropping in and listening in on a chat in progress. Because so many rooms tend to be agendaless, unplanned discussions, you can drop in anytime and leave without feeling like you missed something. Traditional podcasts tend to have an agenda or conversational arc that fits better with completionist listening. Think about when you sit down with Netflix and browse for 10 minutes unable to decide what to watch. The same effect can happen with podcasts, decision fatigue on what to pick. Clubhouse is like putting on a baseball game in the background: just pick a room and listen in with your on-and-off attention.

Ben Thompson called it the “first AirPods social network”. Pop in your headphones and see what your friends or followers are talking about.

I have an idea to express, but don’t want to spend time on writing or learning new tools

Clubhouse does for podcasting what Twitter did for blogs: it massively drops the barrier to entry for participation. Setting up a blog has always required some upfront cost. Podcasting is even worse. Even with the latest and greatest tools, publishing something new has overhead. Twitter lowered this bar, only requiring users to tap out short thoughts to broadcast them to the world. Podcasting is getting better, but is still hardware-heavy to do well.

There’s a cottage industry sprouting up on Clubhouse of “post-game”, locker room-style conversations following events: politics, sports, television, even other Clubhouse shows. This plays well with the live aspect. Immediately following (or hell, even during) sporting events or TV shows, people can hop in a room and gab their analysis in real time.

Clubhouse’s similarities to Twitter for audio are striking. Now broadcasting a conversation doesn’t require expensive equipment, audio editing, CDNs, feed management. Just tap to create a room and notify your followers to join in.

I want to hear from notable people I follow more often

This one has been true for me a few times. With the app’s notifications feature, you can get alerts when people you follow start up a room, then join in on conversations involving your network whenever they pop up. I’ve hopped in when I saw notable folks I follow sitting in rooms, without really looking at the topic. For the interesting people you make sure to listen to, Clubhouse expands your opportunities to hear from them. Follow them on Clubhouse and drop in on rooms they go into. Not only can you hear more often from folks you like, you also get a more unscripted and raw version of their thoughts and ideas with on-the-fly Clubhouse sessions.

I want to have an intellectual conversation with someone else, but I’m stuck at home!

Or maybe not even an intellectual one, just any social interaction with others!

This is where the timing of Clubhouse’s launch in April of last year was so essential to its growth. COVID quarantines put all of us indoors, unable to get out for social gatherings with friends or colleagues. Happy hours and dinners over Zoom aren’t things any of us thought we’d ever be doing, but when the lockdowns hit, we took to them to fill the need for social engagement. Clubhouse fills this void by providing loose, open-ended zones for conversation, just like being at a party. Podcasts, books, and TV are all one-way. Humans need connection, not just consumption.

COVID hurt many businesses, but it sure was a growth hack for Clubhouse.

Future Jobs to Be Done?

Products can serve a job to be done in a zero- or positive-sum way. They can address existing jobs better than the current alternatives, or they can expand the job market to create demand for new unfulfilled ones. I think Clubhouse does a bit of both. From first-hand experience, I’ve popped into some rooms in cases when I otherwise would’ve put on a podcast or audiobook, and several times when I was listening to nothing else and saw a notification of something interesting. Above are just a few of the customer jobs that Clubhouse is filling so far. If you start thinking about adjacent areas they could experiment with, it opens up even more greenfield opportunity. Offering downloads (create a custom podcast feed to listen to later?), monetization for organizers and participants (tipping?), subscription-only rooms (competition with Patreon?). There’s a long list of areas for the product to explore.

Where Does Clubhouse Go Next?

There’s a question in tech that’s brought up any time a hot new entrant comes on the scene. It goes something like:

Can a new product grow its network or user base faster than the existing players can copy the product?

This has to be at the forefront of the Clubhouse founders’ minds as their product is taking off. Twitter’s already launched Spaces, a clone of Clubhouse that shows up in the Fleets feed. That kind of prominent presentation to Twitter’s existing base adds quite the competitive threat, though Twitter isn’t known for its lightning-quick product innovation over the last decade. But maybe they’ve learned their lesson from all their past missed opportunities. What could play out is another round of what happened to Snap with Stories, a concept that’s been copied by just about every product now.

Clubhouse is doing a respectable job managing the technical scalability of the platform as it grows. The growth tactics they’re using — pulling in contacts — while controversial, appear to be helping replicate the webs of user connections. The friction in building new social interest graphs is one of the primary things that’s stifled other social products over the last 10 years. By the time new players achieve some traction, they’re either gobbled up by Twitter or Facebook, or copied by them (aside from a few, like TikTok). Can Clubhouse reach TikTok scale before Twitter can copy it?

There are still unanswered questions on how Clubhouse’s growth plays out over time:

  • How far can it reach into the general public audience outside of its core tech-centric “online” crowd?
  • Like any new network-driven product, when it’s shiny and new, we see a gold rush for followers. What behaviors will live chat incentivize?
  • How will room hosts behave competing for attention? What will be the “clickbait” of live audio chat?
  • What mechanisms can they create for generating social capital on the network? How does one build an initial following and expand reach?

Right now, the easiest way to build a following on Clubhouse is just like every other social network’s default: bring your already-existing network to the platform. It’s a bit early to see how Clubhouse might address this differently, but most of the big time users were folks with large followings on Twitter, YouTube, or elsewhere. It’d be cool to see something like TikTok-esque algorithm-driven recommendations to raise distribution for ideas or topics even outside of the follower graphs of the members of the rooms.

Clubhouse (and this category of live multi-way audio chat) is still in the newborn stages. As it matures and makes its way to wider audiences outside of mostly tech circles, it’ll be interesting to see what other “jobs” are out there unfilled by existing products that it can perform.

✦

Enterprises Don't Self Serve

December 11, 2020 • #

In the wake of Salesforce’s acquisition of Slack, there’s been a flood of analysis on whether it was a sign of Slack’s success or failure to grow as a company. It’s funny that we live in a time when a $27bn acquisition of a 7-year-old company gets interpreted as a failure. I’d consider it validation of your business when a $200bn company like Salesforce makes its largest acquisition ever to buy you. Broadly, it’s a move to make Salesforce more competitive with Microsoft as an operating system for business productivity writ large.

One likely driver of selling now vs. later was the ever-expanding threat from Microsoft’s fantastic execution on Teams over the past few years. Slack saw Microsoft’s distribution and customer relationship advantage, and that they’d have a beast of a challenge peeling away big MS customers. This sort of “incumbent” position in the enterprise is one of the strongest advantages Microsoft has, and they’ve been savvy in playing their cards to feed off of this position.

As a new entrant to the enterprise software space, Slack’s bottom-up product strategy has been one of their key advantages that fed their hypergrowth since 2014. The relentless focus on product quality drove viral adoption within user groups inside of organizations. Classic land-and-expand: get teams to adopt for themselves, and weave your way from that beachhead into the rest of the organization, with an eventual (often reluctant) official blessing from IT departments. The product-led growth (PLG) model (of which Slack was an early success story) allows new entrants to serve users first and foremost, sliding in under the radar of corporate buy-in inside companies: “shadow IT”, as it’s known.

Within large companies, self-service and a product-led approach can get you a long way, as Slack and many others have demonstrated. But at a certain size you hit friction points with growth inside large accounts. Enterprise customers rarely adopt software with zero engagement from product makers. But Slack and other PLG successes have been able to push deeper than previously thought possible with hands-off, sales-free tactics.

Former founder and now-investor David Sacks wrote a great Twitter thread on this topic (also discussed on the All-In Podcast), reacting to Slack’s lateness to implement a sales organization:

“Enterprises don’t self serve”

There’s no question that product-led is the way to go to get validation, traction, and growth, and that it’s still instrumental to building horizontal customer footprint. Sacks’s point is that Slack didn’t handle the enterprise scaling requirements early enough (they’re addressing them now).

Bottom-up is great for top-of-funnel customer acquisition (Sacks says “lead gen”), but starts to falter as a growth driver at some scale. The trick in architecting a hybrid of product-led and sales-led motions is finding where and when in the lifecycle to transition growing customers from one to the other. What the PLG movement has done for SaaS companies is carry customer expansion further into companies than before. The likes of Slack, Atlassian, and Twilio carried themselves to enormous scales on the back of a PLG, self-service strategy.

Why does PLG decelerate?

Why would an enterprise company (or one that’s grown its use of a product to enterprise-scale penetration) not be able to self-serve the larger deployment? Why couldn’t a product company rely on self-service once a company’s usage grows to that point? It seems reasonable that if a customer scaled to a couple hundred users, continued expansion would be an easy justification; if it’s working, why not keep expanding?

There are a few related reasons why relying on customers to serve themselves slows down at scale:

  • In large companies, individuals are no longer able to make decisions — champions for a product (who may already be using it on their team) need to build consensus across a diverse group of stakeholders to justify expanding
  • Too many cats need to be herded to get a deal done — see item 1; often a stunning number of heads need to be convinced, justified to, and won over, and corralling the bureaucracy is a whole separate project unrelated to the effectiveness or utility of the product
  • There’s rarely an individual “buyer” — the user can’t purchase the product, the purchaser has never used the product, and each stakeholder’s incentives work at cross purposes (one is looking to complete a project, one is looking to cut budget, one is looking to impress the press, etc.)
  • If you have a champion, they have a day job — and that job isn’t playing politics with accounting, legal, execs, IT, and others; there’s no time for the customer to play this role for you

I can speak from experience on all dimensions of this. In the early days of a bottom-up product, landing that big logo and watching them grow looks like this — you’re growing seat count and things look to be taking off:
Bottom-up, product-led growth

Watching it happen is magical, especially if you’ve got an early product and/or small team. You’re building product you think is useful, and you’re being validated by watching it weave its way up into a company with a household name.

But you eventually discover that true enterprise-scale adoption looks more like this:

Moving laterally

The customer you thought you were growing wasn’t truly the whole enterprise, but only a department or division1. In many (most?) national- or international-scale companies, bridging to neighboring departments is effectively selling to a whole new customer. Sure, the story of your product’s impact from adjacent teams’ use cases is helpful, but often the barriers between these columns are enormous.

What you need is some fuel to help jump the gaps.

Enter your sales team

The reason for the sales team is primarily to coordinate and communicate with the stakeholders described above on behalf of the buyer.

Sales to transition between departments

There are unicorn enterprise customers out there where you’ll find a champion willing to shoulder the burden of selling your product to themselves — sometimes a particularly aggressive or visionary IT leader or exec — but this is a rarity. You can’t and shouldn’t rely on this existing in most organizations.

On the surface this thought runs counter to a lot of recently popular ideas on product-led growth. But what Sacks is claiming in his thread doesn’t invalidate product-led, bottom-up as a strategy — in fact he says the opposite.

What it does say is that the go-to-market shouldn’t be a binary methodology: either you’re bottom-up / product-led OR top-down / sales-led. For many B2B SaaS companies, the ideal system design is optimizing for product-led evolving into a sales-led approach when a customer reaches a certain stage of the lifecycle.

Product-led to sales-led transition

Even for teams that understand the dynamics of both methods, the hard part is finding the right place in the cycle for the methodology to flip. If one set of tactics is largely owned by the product, design, and marketing teams (PLG), and the other owned by sales and customer success teams (SLG), then without proper experimentation, management, and cultural behavior reinforcement, it’s possible that one of those teams leans too far beyond the transition point.

Sales too early stunts investments on super-efficient organic growth techniques with PLG; too late means customers may have slowed expansion because you weren't there for the assist in keeping the growth moving upward

This continuum is, of course, not fixed for all time or all companies. And those transition points are a lot more fuzzy in reality than in a chart.
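One way to make the handoff point less of a guess is to define it in terms of observable account behavior. Here’s a minimal sketch of what that trigger could look like — purely hypothetical field names and thresholds for illustration, not anything Sacks or Slack prescribes — flagging an account for a sales assist once self-serve usage suggests the next expansion step requires multi-stakeholder selling:

```typescript
// Hypothetical product-qualified-account check: hand an account from the
// self-serve (PLG) motion to a sales-led (SLG) motion once usage suggests
// the next expansion step needs multi-stakeholder selling.
interface Account {
  name: string;
  activeSeats: number;   // seats in weekly active use
  departments: number;   // distinct departments with active users
  seatGrowth90d: number; // seat growth over the last 90 days, e.g. 0.15 = 15%
}

// Example thresholds — real values would come from your own cohort data.
const SEAT_THRESHOLD = 150;
const DEPARTMENT_THRESHOLD = 2;
const STALLED_GROWTH = 0.05;

function readyForSalesAssist(a: Account): boolean {
  const bigFootprint = a.activeSeats >= SEAT_THRESHOLD;
  const crossedDepartments = a.departments >= DEPARTMENT_THRESHOLD;
  const growthStalling = a.seatGrowth90d < STALLED_GROWTH;
  // Large, multi-department accounts whose organic growth is flattening are
  // the ones most likely to need help jumping to the next department.
  return bigFootprint && crossedDepartments && growthStalling;
}

console.log(readyForSalesAssist({
  name: "Example Co",
  activeSeats: 240,
  departments: 3,
  seatGrowth90d: 0.02,
})); // => true
```

The specific rule matters less than having one at all: it gives product, marketing, and sales teams a shared, measurable definition of where the baton gets passed.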

PLG is growing in effectiveness over time, so the optimum transition stage from PLG to SLG is moving rightward for many types of products. A number of factors could cause this phenomenon. There are more and more companies opting for a PLG approach, but I think this is a response to changes in customer behavior more than it’s a modifier of customer behavior (though those effects move both ways). Things like technical comfort, the prevalence of self-service solutions in consumer technology, ease-of-use as a table stakes expectation, a wider competitive market for tools, and the sophistication level of technology expanding tremendously over the last 10 years are all moving parts that contribute to self-service becoming more widespread.

  1. The more hands-off you are in early usage and ramping up, the less you often know about the specifics of the customer. Is it an intern leading a pilot project? Is it a real, funded initiative? Often hard to tell if you’re “auto-scaling” on a PLG strategy. 

✦
✦

Fulcrum Field Day

November 9, 2020 • #

A fun aspect of working on a business product with a product-led growth strategy is that you get to use your own product in your personal life. I’ve used Fulcrum for creating personal tracking databases, collecting video for OpenStreetMap, and even documenting my map collection.

There’s no better way to build an empathetic perspective of your customer’s life than to go and be one as often as you can.

Last week our team did an afternoon field day where the entire company went out on a scavenger hunt of sorts, using Fulcrum to log some basic neighborhood sightings. 42 people scattered across the US collected 1,230 records in about an hour, which is an impressive pace even if the use case was a simple one!

Data across the nation, and my own fieldwork in St. Pete

It’s unfortunate how easy it is to stray away from the realities of what customers deal with day in and day out. Any respectable product person has a deep appreciation for how their product works for customers on the ground, at least academically. What exercises like this help us do is to get out of the realm of academics and try to do a real job. With B2B software, especially the kind built for particular industrial or domain applications, it’s hard to do this frequently since you aren’t your canonical user; you have to contrive your own mock scenarios to tease out the pain points in workflow.

The problem is that manufactured tests can’t be representative of all the messy realities in utilities, construction, engineering, or the myriad other cases we serve.

There’s no silver bullet for this. Acknowledging imperfect data and remaining aware of the gaps in your knowledge is the foundation. Then fitting your solution to the right problem, at the right altitude, is the way to go.

Exercises like ours last week are always energizing, though. Anytime you can rally attention around what your customers go through every day, it’s a worthy cause. The list of observations and feedback is a mile long, and all of it is high-value stuff to investigate.

✦
✦

Weekend Reading: Options Over Roadmaps, Ghost, and Spaced Repetition

September 12, 2020 • #

🛣 Options, Not Roadmaps

An option is something you can do but don’t have to do. All our product ideas are exactly that: options we may exercise in some future cycle—or never.

Without a roadmap, without a stated plan, we can completely change course without paying a penalty. We don’t set any expectations internally or externally that these things are actually going to happen.

I know Basecamp is always the industry outlier with these things, but the thoughts on roadmaps are probably more true for many companies in reality than we’d all like to admit. We tend to look at things in a sort of hybrid way — not a fully baked roadmap with timelines, but a general list of roughly-sorted candidates that gain more and more momentum as we shape them out and prioritize. Every product team has a list of ideas 10x+ longer than anything they can build, so optionality is required to make the right decisions.

🚁 Anduril Ghost 4

Defense tech startup Anduril’s latest hardware, the Ghost UAV system. A pretty impressive and unique modular design for an unmanned platform.

🗂 Guide to Roam’s Spaced Repetition

Roam launched an interesting new feature (Δ) for setting up spaced repetition flows in your graph.

✦

Go-to-Market Fit

August 10, 2020 • #

I recently watched this Mark Roberge session where he had an interesting way of describing the challenge that follows product-market fit. Tons of startup literature is out there talking about p-m fit. And likewise there’s plenty out there about scaling, leadership, and company-building.

Go-to-market fit

One of the most fascinating stages is in between, what he calls “go-to-market fit.” This is where you’ve found some traction and solved a problem, but haven’t figured out how to do it efficiently. Here’s how you think about the key goals in each phase:

  • Product-market fit: customer retention
    • If you can attract users but they don’t stick around, you aren’t yet solving a painful problem (assuming you haven’t let pricing and other things get in the way)
  • Go-to-market fit: scalable unit economics
    • You know you’re there when you can repeatably deliver something valuable scalably and profitably

In each of these cases the real measurement lags your execution, so you need to find a proxy metric that predicts the goal number.

You can find metrics that are predictive signals of retention, but they’ll shift from product to product pretty widely. Things like active sessions, session lengths, sign in frequency, time-in-app, and the like can track with likelihood to stick around, but you’d have to experiment with ways to measure this if you’re in pre-product-market fit territory.

To predict go-to-market fit, you should know what a set of scalable and profitable metrics looks like for your business. If you set down your target unit economics, like the LTV:CAC ratio (Mark uses the industry-common 3:1 as an example), you can work backwards to daily behaviors you can orient your team on to see how sustainable your pricing, packaging, and positioning are. It might take some experimentation given that the acceptable goals vary by company, but what you want to do is pick things you can measure quickly — driving all the way down to leads per day — so you can adapt and change your tactics to zero in on what works. Waiting around for longer “actuals” to come back from accounting on your revenue means you can’t adapt quickly enough, and you can’t afford to sustain an unprofitable model for long enough to find out.
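To make that backwards walk concrete, here’s a rough sketch with invented numbers — the prices, margins, and conversion rates below are hypothetical, not Roberge’s — going from a 3:1 LTV:CAC target down to a daily lead count a team could actually watch:

```typescript
// Hypothetical unit-economics walk-back: from a target LTV:CAC ratio down to
// a daily lead target the team can track in near-real time.
const arpaPerMonth = 500;        // average revenue per account, $/month (assumed)
const grossMargin = 0.80;        // assumed gross margin
const monthlyChurn = 0.02;       // assumed monthly logo churn

// Simple LTV approximation: margin-adjusted ARPA over expected lifetime.
const ltv = (arpaPerMonth * grossMargin) / monthlyChurn;   // $20,000

// A 3:1 target ratio puts a ceiling on what a customer can cost to acquire.
const targetLtvToCac = 3;
const maxCac = ltv / targetLtvToCac;                       // ~$6,667

// Work down to daily behavior: assumed funnel conversion and growth goal.
const leadToCustomerRate = 0.05; // assumed 5% of leads become customers
const newCustomersPerMonth = 30; // assumed growth goal
const leadsPerMonth = newCustomersPerMonth / leadToCustomerRate; // 600
const leadsPerDay = leadsPerMonth / 30;                          // 20

console.log({ ltv, maxCac, leadsPerMonth, leadsPerDay });
```

Leads per day is something you can look at every morning and react to, long before the lagging revenue “actuals” confirm whether the model holds.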

We often think about the product-market fit stage as the fast and loose experimental phase of a startup, but what Mark makes clear here is that experimentation doesn’t stop — it merely shifts from product and customer success to sales and marketing. And the tighter all these areas work together to experiment, the better the results.

✦

Weekend Reading: Quarantine Talks

July 11, 2020 • #

🛠 Attitudes, Aptitudes, and Progress

Joel Mokyr’s talk on the most recent session of The Torch of Progress series.

🧠 How to Be a Neo-Cartesian Cyborg

A recent talk from Maggie Appleton on the “building a second brain” concept.

👋🏼 Take a Tour of HEY

Great example of how to do a product demo. Informal style, clearly prepared but not “scripted,” and deep care and attention to the product.

✦
✦

A Nomenclature for Low-Code Users

July 7, 2020 • #

The low-code “market” isn’t really a market. Rather, I see it as an attribute of a software product, an implementation factor in how a product works. A product providing low-code capability says nothing about its intended value — it could be a product for sending emails, building automation rules, connecting APIs, or designing mobile forms.

What are termed “LCAP” (low-code application platform) software are often better described as “tools to build your own apps, without having to write all the code yourself.”

This post isn’t really about low-code as a marketplace descriptor, but about refining the nomenclature for how we talk about users we have in mind when designing low-code tools. Who are we building them for? Who needs them the most?

As Aron Korenblit wrote a few months back, low-code as a term isn’t really about code, per se, but often things like process modeling, workflows, data flows, data cleanliness, speed of prototyping, and low cost trial and error:

If what we’re trying to communicate is that no-code helps get things done faster, we should elevate that fact in how we name ourselves instead of objecting to code itself.

For many years, all sorts of tools — from Mailchimp or Webflow to Fulcrum or Airtable — have provided layers of capabilities for a continuum of user types, moving from the non-technical through to full developers. The non-technical crowd wants templates and WYSIWYG tools; the devs want an API, JavaScript customization, and full HTML/CSS editing suites. I think a two-type dichotomy isn’t descriptive enough, though. We need a third “semi-technical” user in the middle.

The spectrum of users could look something like this — with an analogous Microsoft Excel user equivalent for each (in parentheses):

Users of low-code software
  • Novice — anything that looks like code is totally opaque to novices. They’re scared off by it and afraid to change anything for fear of breaking something (Can enter data in Excel, and maybe do some sorting, filtering, or data manipulation)
  • Tinkerer — can parse through code examples and pre-existing scripts to roughly understand, uses trial and error and small adjustments to modify or piece together snippets for their own use case; often also can work with data and data tools like database applications and SQL (Can use formulas, pivot tables, lookups, and more with Excel, comfortable slicing and dicing data)
  • Developer — fluent in programming languages; excited about the prospect of writing their own code from scratch, just wants to be pointed to the API docs (Can write VBScripts and macros in Excel, but mostly wants to escape its confines to build their own software)

Of course empowering the Novices is one of the primary goals with low-code approaches, as they’re the least prepared to put together their own solutions. They need turn-key software.

And we can help Developers with low-code, too. If we can bootstrap common patterns and functionality through pre-existing building blocks, they can avoid repetitive work. Much of tool-building involves rebuilding 50-75% of the same parts you built for the last job, so low-code approaches can speed these folks up.

But the largest gap is that middle bunch of Tinkerers. Not only do they stand to gain the most from low-code tools; from my observations, that group is also the fastest-growing category. Every day, as more tech-native people enter the workforce or are compelled to dive into technical tools, people are graduating from Novice to Tinkerer status, realizing that many modern tools are resilient to experimentation and forgiving of user error. The tight feedback loops you can get from low-code affordances provide a cushion to try things, to tweak, modify, and customize gradually until you zero in on something useful. In many cases what a user decides is a “complete” solution is variable — there’s latitude to work with and not an extremely rigid set of hard requirements to be met. By providing building blocks, examples, and snippets, Tinkerers can home in on a solution that works for them.

Those same low-code tactics in user experience also give Novices and Tinkerers the prototyping scaffolds to build partial solutions that can be further refined by a Developer. Sometimes the prototyping stage is plenty to get the job done, but even for more complex endeavors it can greatly reduce cost.

✦

Fulcrum's Report Builder

July 5, 2020 • #

After about 6-8 months of forging, shaping, research, design, and engineering, we’ve launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports generated from inspections.

Fulcrum Report Builder

For years we’ve offered a basic report template along with an ability to customize the reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.

With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters about the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.

Under the hood, we overhauled the generator engine using a library called Puppeteer, a headless Chrome node.js API for doing many things, including converting web pages to documents or screenshots. It’s lightning fast and allows for a live preview of your reports as you’re working on your template customization.
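As a rough sketch of the Puppeteer piece — not our actual report engine, just the basic pattern the library enables (render markup in headless Chrome, then print it to a PDF) — it looks roughly like this:

```typescript
import puppeteer from "puppeteer";

// Minimal HTML-to-PDF pass with Puppeteer: load rendered report markup in
// headless Chrome, then print it to a PDF buffer.
async function renderReportPdf(html: string): Promise<Buffer> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait for the network to go idle so images, fonts, and charts are loaded.
    await page.setContent(html, { waitUntil: "networkidle0" });
    const pdf = await page.pdf({ format: "A4", printBackground: true });
    return Buffer.from(pdf);
  } finally {
    await browser.close();
  }
}
```

Because the same rendered HTML can be shown in a browser pane or printed through Chrome, a live preview of the template falls out of the approach almost for free.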

Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can’t wait to see all of the ways people end up using it.

We’ve got a lot more in store for the future. Stay tuned to see what else we add to it.

✦

Roam I and Roam E

April 24, 2020 • #

A neat concept demo from Dhrumil Shah showing possible enhancements for Roam Research. He calls them “Roam-I” and “Roam-E”:

  • Roam-I — for reusing old knowledge
  • Roam-E — collaboration

Most of this is user interface on top of the core technology that underpins how Roam works, but it’s great to see people so passionate about this that they’ll spend this much time prototyping ideas on products they use.

✦

The Idea Maze

March 8, 2020 • #

I ran across this set of lecture notes from Balaji Srinivasan’s “startup engineering” course.

He proposes this format for thinking about the phases a company moves through — from idea to profits:

  • An idea is not a mockup
  • A mockup is not a prototype
  • A prototype is not a program
  • A program is not a product
  • A product is not a business
  • And a business is not profits
Idea maze

You can map this onto the debate between “idea vs. execution” by calling everything below the idea stage “execution.” In certain circles, especially among normal people not steeped in the universe of tech companies, the idea component is enormously overweighted. If you make software and your friends or acquaintances know it, I’m sure you’re familiar with flavors of “I have this great idea, I just need someone who can code to build it.” They don’t understand that everything following the “just” is about 99.5% of the work to create success (or more)1.

Thinking of these steps as a state machine is a vivid way to describe it. He has them broken out in detail:

The idea state machine

When laid out that way it’s clear why it takes such persistence and wherewithal to see an idea through to being a business.

To understand if you have an idea worth pursuing (or even one good enough to be adapted/modified into a great one), it’s a good exercise to simulate the game in your head, to imagine you’ve already moved through a couple steps of the state machine. What are you encountering? If you think of a roadblock, how would you respond? This sort of “pre-gaming” is what separates the best creators and product minds from everyone else. They take small, minimum-risk steps, look up to absorb new feedback, and adapt accordingly2.

Srinivasan calls this phenomenon the “idea maze”:

One answer is that a good founder doesn’t just have an idea, s/he has a bird’s eye view of the idea maze. Most of the time, end-users only see the solid path through the maze taken by one company. They don’t see the paths not taken by that company, and certainly don’t think much about all the dead companies that fell into various pits before reaching the customer.

A good founder is thus capable of anticipating which turns lead to treasure and which lead to certain death. A bad founder is just running to the entrance of (say) the “movies/music/filesharing/P2P” maze or the “photosharing” maze without any sense for the history of the industry, the players in the maze, the casualties of the past, and the technologies that are likely to move walls and change assumptions.

In other words: a good idea means a bird’s eye view of the idea maze, understanding all the permutations of the idea and the branching of the decision tree, gaming things out to the end of each scenario. Anyone can point out the entrance to the maze, but few can think through all the branches.

I remember Marc Andreessen in an interview talking about questioning founders during pitches: if you can probe deeper and deeper on a particular theme to a founder and they’ve already formulated a thoughtful answer, it means they’ve been navigating the idea maze in their head long before being probed by an investor.

It’s worth thinking about how to incorporate this concept into my thinking on future product growth. I think to some extent this sort of thing comes naturally to certain people; the naturally curious ones are doing a version of this all the time, often unintentionally. But what if you could be intentional about it?

  1. Not to mention the fact that people are typically ignorant to how often their eureka idea has already been tried or has already gained success because it’s obvious enough to have attracted plenty of others. 

  2. See Antifragile, Taleb’s magnum opus. An entire book on the subject of survivability, risk reduction, adaptation, and respect for proceeding with measured caution in “Extremistan” (highly unpredictable environments). 

✦
✦
✦

Balancing Power and Usability

November 18, 2019 • #

This is another one from the archives, written for the Fulcrum blog back in 2016.

Engineering is the art of building things within constraints. If you have no constraints, you aren’t really doing engineering. Whether it’s cost, time, attention, tools, or materials, you’ve always got constraints to work within when building things. Here’s an excerpt describing the challenge facing the engineer:

The crucial and unique task of the engineer is to identify, understand, and interpret the constraints on a design in order to produce a successful result. It is usually not enough to build a technically successful product; it must also meet further requirements.

In the development of Fulcrum, we’re always working within tight boundaries. We try to balance power and flexibility with practicality and usability. Working within constraints produces a better finished product — if (by force) you can’t have everything, you think harder about what your product won’t do to fit within the constraints.

Microsoft Office, exemplifying 'feature creep'

The practice of balancing is also relevant to our customers. Fulcrum is used by hundreds of organizations in the context of their own business rules and processes. Instead of engineering a software product, our users are engineering a solution to their problem using the Fulcrum app builder, custom workflow rules, reporting, and analysis, all customizable to fit the goals of the business. When given a box of tools to build yourself a solution to a problem, the temptation is high to try to make it do and solve everything. But with each increase in power or complexity, usability of your system takes a hit in the form of added burden on your end users to understand the complex system — they’re there to use your tool for a task, finish the job, and go home.

This balance between power and usability is related to my last post on treating causes rather than symptoms of pain. Trying too hard to make a tool solve every potential problem in one step can (and almost always does) lead to overcomplicating the result, to the detriment of everyone.

In our case as a product development and design team, a powerful suite of options without extremely tight attention on implementation runs the risk of becoming so complex that the lion’s share of users can’t even figure it out. GitHub’s Ben Balter recently wrote a great piece on the risks of optimizing your product for edge cases1:

No product is going to satisfy 100% of user needs, although it’s sure tempting to try. If a 20%-er requests a feature that isn’t going to be used by the other 80%, there’s no harm in just making it a non-default option, right?

We have a motto at GitHub, part of the GitHub Zen, that “anything added dilutes everything else”. In reality, there is always a non-zero cost to adding that extra option. Most immediately, it’s the time you spend building feature A, instead of building feature B. A bit beyond that, it’s the cognitive burden you’ve just added to each user’s onboarding experience as they try to grok how to use the thing you’ve added (and if they should). In the long run, it’s much more than maintenance. Complexity begets complexity, meaning each edge case you account for today, creates many more edge cases down the line.

This is relevant to anyone building something to solve a problem, not just software products. Put this in the context of a Fulcrum data collection workflow. The steps might look something like this:

  1. Analyze your requirements to figure out what data is required at what stage in the process.
  2. Build an app in Fulcrum around those needs.
  3. Deploy to field teams.
  4. Collect data.
  5. Run reports or analysis.

What we notice a surprising amount of the time is an enormous investment in step 2, sometimes to the exclusion of much effort on the other stages of the workflow. With each added field on a survey, requirement for data entry, overly-specific validation, you add potential hang-ups for end users responsible for actually collecting data. With each new requirement, usability suffers. People do this for good reason — they’re trying to accommodate those edge cases, the occasions where you do need to collect this one additional piece of info, or validate something against a specific requirement. Do this enough times, however, and your implementation is all about addressing the edge problems, not the core problem.

When you’re building a tool to solve a problem, think about how you may be impacting the core solution when you add knobs and settings for the edge cases. Best-fit solutions require testing your product against the complete ideal life cycle of usage. Start with something simple and gradually add complexity as needed, rather than the reverse.

  1. Ben’s blog is an excellent read if you’re into software and its relationship to government and enterprise. 

✦
✦
✦
✦
✦

Weekend Reading: Intellectual Humility, Scoping, and Gboard

August 31, 2019 • #

🛤 Missing the Light at the End of the Tunnel

Honest postmortems are an insightful way to get the backstory on what happened behind the scenes with a company. In this one, Jason Crawford goes into what went wrong with Fieldbook before they shut it down and were acquired by Flexport a couple years ago:

Now, with a year to digest, I think this is true and was a core mistake. I vastly underestimated the resources it was going to take—in time, effort and money—to build a launchable product in the space.

In the 8 years since we launched the first version of Fulcrum, we’ve had (fortunately) smaller versions of this experience over and over. Each new major overhaul, large feature, or product business model change we’ve undertaken has probably cost us twice the time we initially expected it to. Scoping is a science itself that everyone has to learn.

In Jeff Bezos’s 2018 letter to Amazon shareholders, he discusses the topic of high standards: how to have them and how to get your team to have them. (As a side note, if you don’t read Bezos’s shareholder letters, you’re missing out. Even if you’ve already read all the business and startup advice in the world, you will find new and keen insights there.)

Bezos makes a few interesting points, but I’ll focus on one: To have high standards in practice, you need realistic expectations about the scope of effort required.

As a simple example, he mentions learning to do a handstand. Some people think they should be able to learn a handstand in two weeks; in reality, it takes six months. If you go in thinking it will take two weeks, not only do you not learn it in two weeks, you also don’t learn it in six months—you learn it never, because you get discouraged and quit. Bezos says a similar thing applies to the famous six-page memos that substitute for slide decks at Amazon (the ones that are read silently in meetings). Some people expect they can write a good memo the night before the meeting; in reality, you have to start a week before, in order to allow time for drafting, feedback, and editing.

🏛 Ten Ways to Defuse Political Arrogance

David Blankenhorn calls for a return of intellectual humility in public discourse.

At the personal level, intellectual humility counterbalances narcissism, self-centeredness, pridefulness, and the need to dominate others. Conversely, intellectual humility seems to correlate positively with empathy, responsiveness to reasons, the ability to acknowledge what one owes (including intellectually) to others, and the moral capacity for equal regard of others. Arguably its ultimate fruit is a more accurate understanding of oneself and one’s capacities. Intellectual humility also appears frequently to correlate positively with successful leadership (due especially to the link between intellectual humility and trustworthiness) and with rightly earned self-confidence.

⌨️ The Machine Intelligence Behind Gboard

A fun technical overview of how the Google team is using predictive machine learning models to make typing on mobile devices more efficient.

✦
✦
✦

Weekend Reading: Terrain Mesh, Designing on a Deadline, and Bookshelves

August 17, 2019 • #

🏔 MARTINI: Real-Time RTIN Terrain Mesh

Some cool work from Vladimir Agafonkin on a library for RTIN mesh generation, with an interactive notebook to experiment with it on Observable:

An RTIN mesh consists of only right-angle triangles, which makes it less precise than Delaunay-based TIN meshes, requiring more triangles to approximate the same surface. But RTIN has two significant advantages:

  1. The algorithm generates a hierarchy of all approximations of varying precisions — after running it once, you can quickly retrieve a mesh for any given level of detail.
  2. It’s very fast, making it viable for client-side meshing from raster terrain tiles. Surprisingly, I haven’t found any prior attempts to do it in the browser.
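For context, here’s roughly what using the library looks like, going by the published @mapbox/martini README — the grid size, error thresholds, and terrain array below are placeholders, not anything from my own projects:

```typescript
import Martini from "@mapbox/martini";

// Terrain heights for a (2^k + 1) sized grid, e.g. 257x257, as a flat array.
const gridSize = 257;
const terrain = new Float32Array(gridSize * gridSize); // fill from a raster terrain tile

// Build the RTIN hierarchy once...
const martini = new Martini(gridSize);
const tile = martini.createTile(terrain);

// ...then pull meshes at whatever precision you need, cheaply.
const coarse = tile.getMesh(30); // higher max error -> fewer triangles
const fine = tile.getMesh(5);    // lower max error -> more triangles
console.log(coarse.triangles.length / 3, fine.triangles.length / 3);
```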

👨🏽‍🎨 Design on a Deadline: How Notion Pulled Itself Back from the Brink of Failure

This is an interesting piece on the Figma blog about Notion and their design process in getting the v1 off the ground a few years ago. I’ve been using Notion for a while and can attest to the craftsmanship in design and user experience. All the effort put in and iterated on really shows in how fluid the whole app feels.

📚 Patrick Collison’s Bookshelf

I’m always a sucker for a curated list of reading recommendations. This one’s from Stripe founder Patrick Collison, who seems to share a lot of my interests and curiosities.

✦

Shipping the Right Product

August 14, 2019 • #

This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I’d resurface it here since I’ve been thinking more about continual evolution of our product process. I liked it back when I wrote it; still very relevant and true. It’s good to look back in time to get a sense for my thought process from a couple years ago.

In the software business, a lot of attention gets paid to “shipping” as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it’s best used as a general rule rather than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD — they’re all excellent practices to keep teams focused on results. After all, the longer you’re polishing your work and not putting it in the hands of users, the less you know about how they’ll be using it once you ship it!

These systems followed as gospel (particularly with larger projects or products) can lead to attention on the how rather than the what — thinking about the process as shipping “lines of code” or what text editor you’re using rather than useful results for users. Loops of user feedback are essential to building the right solution for the problem you’re addressing with your product.

Shipping the right product

Thinking more deeply about the desire to both ship something rapidly and ensure it aligns with product goals brings to mind a few questions to reflect on:

  • What are you shipping?
  • Is what you’re shipping actually useful to your user?
  • How does the structure of your team impact your resulting product?

How can a team iterate and ship fast, while also delivering the product they’re promising to customers, that solves the expressed problem?

Defining product goals

In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the “jobs” it does for the consumer helps clarify problems you should be focused on solving.

Most of us that create products have an idea of what we’re trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we’re trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what’s being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won’t guarantee that your end result solves a problem2. Your product could become an immaculately built ship that’s sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management’s excellent interview with its co-creator, Karen Dillon.

Understanding users

On a similar thread as jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.

This quote from the article gets to the heart of why it matters to understand, with empathy, what a user is trying to accomplish — it’s not always about our engineering-minded technical features or bells and whistles:

Jobs are never simply about function — they have powerful social and emotional dimensions.

The only way to unravel what’s driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as “understanding pain”. People “hire” a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look at how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.

On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn’t consult patients and make assumptions about the underlying causes of a patient’s symptoms! They’d be in for some serious legal trouble.

Organize the team to reflect goals

One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:

“Don’t ship the org chart.”

Org chart

The salient point being that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn’t care at all about the dividing lines within your company, only the resulting solutions you deliver.

A corollary to this idea is that over time companies naturally begin to look like their customers. It’s clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.

With our product, we see three primary points to make sure our product fits the set of problems we’re solving for customers:

  • For some, a toolbox — For small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
  • For others, a total solution — For large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer’s management team to administer. Our team of in-house experts consults with the customer for training and on-boarding, and the customer ends up with a full solution and the toolbox.
  • Integrations as the “glue” — Customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But always the final outcome is a unique capability that can’t be had another way without building their own software from scratch.

Though we’re still a small team, we’ve tried to build up the functional areas around these objectives. As we advance the product and grow the team, it’s important to keep this in mind so that we’re still able to match our solution to customer problems.

For more on this topic, Sinofsky’s post on “Functional vs. Unit Organizations” analyzes the pros, cons, and trade offs of different org structures and the impacts on product. A great read.

Continued reflection, onward and upward 📈

In order to stay ahead of the curve and Always Be Shipping (the Right Product), it’s important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence how we can make those improvements. With this sort of continuous reflection on the process, hopefully we’ll keep shipping the Right Product to our users.

  1. Christensen is most well known for his work on disruption theory. 

  2. Not to discount the value of team planning. It’s a crucial component of efficiency. My point is the clean Gantt chart on its own isn’t solving a customer problem! 

  3. Of course this problem is only minor in small companies. It’s of much greater concern to the Amazons and Microsofts of the world. 

✦

On Retention

July 12, 2019 • #

Earlier this year at SaaStr Annual, we spent 3 days with 20,000 people in the SaaS market, hearing about best practices from the best in the business, from all over the world.

If I had to take away a single overarching theme this year (not by any means “new” this time around, but louder and present in more of the sessions), it’s the value of customer success and retention of core, high-value customers. It’s always been one of SaaStr founder Jason Lemkin’s core focus areas in his literature about how to “get to $10M, $50M, $100M” in revenue, and interwoven in many sessions were topics and questions relevant to things in this area — onboarding, “aha moments,” retention, growth, community development, and continued incremental product value increases through enhancement and new features.

Mark Roberge (former CRO of Hubspot) had an interesting talk that covered this topic. In it he focused on the power of retention and how to think about it tactically at different stages in the revenue growth cycle.

If you look at growth (adding new revenue) and retention (keeping and/or growing existing revenue) as two axes on a chart of overall growth, a couple of broad options present themselves to get the curve arrow up and to the right:

Retention vs. growth

If you have awesome retention, you have to figure out adding new business. If you’re adding new customers like crazy but have trouble with customer churn, you have to figure out how to keep them. Roberge summed up his position after years of working with companies:

It’s easier to accelerate growth with world class retention than fix retention while maintaining rapid growth.

The literature across industries is also in agreement on this. There’s an adage in business that it’s “cheaper to keep a customer than to acquire a new one.” But to me there’s more to this notion than the avoidance of the acquisition cost for a new customer, though that’s certainly beneficial. Rather it’s the maximization of the magic SaaS metric: LTV (lifetime value). If a subscription customer never leaves, their revenue keeps growing ad infinitum. This is the sort of efficiency every SaaS company is striving for — to maximize fixed investments over the long term. It’s why investors are valuing SaaS businesses at 10x revenue these days. But you can’t get there without unlocking the right product-market fit to switch on this kind of retention and growth.

So Roberge recommends keying in on this factor. One of the key first steps in establishing a strong position with any customer is to have a clear definition of when they cross a product fit threshold — when they reach the “aha” moment and see the value for themselves. He calls this the “customer success leading indicator”, and explains that all companies should develop a metric or set of metrics that indicates when customers cross this mark. Some examples from around the SaaS universe of how companies are measuring this:

  • Slack — 2000 team messages sent
  • Dropbox — 1 file added to 1 folder on 1 device
  • Hubspot — Using 5 of 20 features within 60 days

Each of these companies has correlated these figures with strong customer fits. When these targets are hit, there’s a high likelihood that a customer will convert, stick around, and even expand. It’s important that the selected indicator be clear and consistent between customers and meet some core criteria:

  • Observable in weeks or months, not quarters or years — need to see rapid feedback on performance.
  • Measurement can be automated — again, need to see this performance on a rolling basis.
  • Ideally correlated to the product core value proposition — don’t pick things that are “measurable” but don’t line up with our expectations of “proper use.” For example, in Fulcrum, whether the customer creates an offline map layer wouldn’t correlate strongly with the core value proposition (in isolation).
  • Repeat purchase, referral, setup, usage, ROI are all common (revenue usually a mistake — it’s a lagging rather than a leading indicator)
  • Okay to combine multiple metrics — derived “aggregate” numbers would work, as long as they aren’t overcomplicated.

The next step is to understand what portion of new customers reach this target (ideally all customers reach it) and when, then measure by cohort group. Putting together cohort analyses allows you to chart the data over time, and make iterative changes to early onboarding, product features, training, and overall customer success strategy to turn the cohorts from “red” to “green”.

Retention cohorts
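A minimal sketch of that cohort cut — the field names and the 14-day activation window here are assumptions for illustration, not our actual reporting — groups signups by month and computes the share of each cohort that hit the success indicator in time:

```typescript
// Group signups by month and compute the share of each cohort that hit the
// "customer success leading indicator" within an activation window.
interface Signup {
  accountId: string;
  signedUpAt: Date;
  activatedAt?: Date; // when the account first hit the success indicator
}

const ACTIVATION_WINDOW_DAYS = 14; // assumed window

function activationByCohort(signups: Signup[]): Map<string, number> {
  const totals = new Map<string, { total: number; activated: number }>();
  for (const s of signups) {
    const cohort = s.signedUpAt.toISOString().slice(0, 7); // "YYYY-MM"
    const entry = totals.get(cohort) ?? { total: 0, activated: 0 };
    entry.total += 1;
    if (s.activatedAt) {
      const days = (s.activatedAt.getTime() - s.signedUpAt.getTime()) / 86_400_000;
      if (days <= ACTIVATION_WINDOW_DAYS) entry.activated += 1;
    }
    totals.set(cohort, entry);
  }
  const rates = new Map<string, number>();
  for (const [cohort, { total, activated }] of totals) {
    rates.set(cohort, activated / total);
  }
  return rates;
}
```

Charted month over month, those rates are what turn the “red” cohorts green as onboarding and product changes take effect.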

We do cohort tracking already, but it’d be hugely beneficial to analyze and articulate this through the filter of a key customer success metric and track it as closely as MRR. I think a hybrid reporting mechanism that tracks MRR, customer success metric achievement, and NPS by cohort would show strong correlation between each. The customer success metric can serve as an early signal of customer “activation” and, therefore, future growth potential.

Customer success leading indicator

I also sat in on a session with Tom Tunguz, VC from RedPoint Ventures, who presented on a survey they had conducted with almost 600 different business SaaS companies across a diverse base of categories. The data demonstrated a number of interesting points, particularly on the topic of retention. Two of the categories touched on were logo retention and net dollar retention (NDR). More than a third of the companies surveyed retain 90+% of their logos year over year. My favorite piece of data showed that larger customers churn less — the higher products go up market, the better the retention gets. This might sound counterintuitive on the surface, but as Tunguz pointed out in his talk, it makes sense when you think about the buying process in large vs. small organizations. Larger customers are more likely to have more rigid, careful buying processes (as anyone doing enterprise sales is well aware) than small ones, which are more likely to buy things “on the fly” and also invest less time and energy in their vendors’ products. The investment poured in by an enterprise customer makes them averse to switching products once on board1:

Enterprise churn is lower

On the subject of NDR, Tunguz reports that the tendency toward expansion scales with company size, as well. In the body of customers surveyed, those that focus on the mid-market and enterprise tiers report higher average NDR than SMB. This aligns with the logic above on logo retention, but there’s also the added factor that enterprises have more room to go higher than those on the SMB end of the continuum. The higher overall headcount in an enterprise leaves a higher ceiling for a vendor to capture:

Enterprise expansion
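To see why NDR north of 100% compounds so powerfully over time, here’s a tiny illustration with invented numbers:

```typescript
// Revenue of a single customer cohort over time at different net dollar
// retention rates (hypothetical figures; NDR applied annually).
function cohortRevenue(startingArr: number, ndr: number, years: number): number[] {
  const series = [startingArr];
  for (let y = 1; y <= years; y++) {
    series.push(series[y - 1] * ndr);
  }
  return series.map((v) => Math.round(v));
}

console.log(cohortRevenue(100_000, 0.85, 3)); // SMB-ish churn: [100000, 85000, 72250, 61413]
console.log(cohortRevenue(100_000, 1.20, 3)); // enterprise expansion: [100000, 120000, 144000, 172800]
```

The same cohort ends up worth nearly 3x more after three years at 120% NDR than at 85%, before adding a single new logo.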

Overall, there are two big takeaways worth bringing home and incorporating:

  1. Create (and subsequently monitor) a universal “customer success indicator” that gives a barometer for measuring the “time to value” for new customers, and segment accordingly by size, industry, and other variables.
  2. Focus on large Enterprise organizations — particularly their use cases, friction points to expansion, and customer success attention.

We’ve made good headway on a lot of these findings with our Enterprise product tier for Fulcrum, along with the sales and marketing processes to get it out there. What’s encouraging about these presentations is that we already see numbers leaning in this direction, aligning with the “best practices” each of these guys presented — strong logo retention and north of 100% NDR. We’ve got some other tactics in the pipeline, as well as product capabilities, that we’re hoping will bring even greater efficiency, along with the requisite additional value to our customers.

  1. Assuming there’s tight product-market fit, and you aren’t selling them shelfware! 

✦
✦
✦

Wireframing with Moqups

May 16, 2019 • #

Wireframing is a critical technique in product development. Most everyone in software does a good bit of it for communicating requirements to development teams and making iterative changes. For me, the process of wireframing is about figuring out what needs to be built as much as how. When we’re discussing new features or enhancements, rather than write specs or BDD stories or something like that, I go straight to a pen and paper or the iPad to sketch out options. You get a sense for how a UI needs to come together, and also for us visual thinkers, the new ideas really start to show up when I see things I can tweak and mold.

We’ve been using Moqups for a while on our product team to do quick visuals of new screens and workflows in our apps. I’ve loved using it so far — its interface is simple and quick to use, it’s got an archive of icons and pre-made blocks to work with, and has enough collaboration features to be useful without being overwhelming.

Moqups wireframe

We’ve spent some time building out “masters” that (like in PowerPoint or Keynote) you can use as baseline starters for screens. It also has a feature called Components where you can build reusable objects — almost like templates for commonly-used UI affordances like menus or form fieldsets.

One of the slickest features is the ability to add interactions between mocks, so you can wire up simulated user flows through a series of steps.

I’ve also used it to do things like architecture diagrams and flowcharts, which it works great for. Check it out if you need a wireframing tool that’s easy to use and all cloud-based.

✦

Entering Product Development: Geodexy

March 27, 2019 • #

I started with the first post in this series back in January, describing my own entrance into product development and management.

When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack everything was completely different. A few years back, Tony wrote up a retrospective post detailing the history of what led us down the path we took, and how Geodexy came to be:

After this experience, I realized there was a niche to carve out for Spatial Networks but I’d need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.

Tony’s post was an excellent look back at the business origin of the product — the “why we decided to do it” piece. What I wanted to cover here is more about the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course, without real market feedback (as in, will someone actually buy what you’ve built, versus saying they would buy it one day), it’s hard to truly gauge the success potential.

Geodexy

Back then, a few of the modern web frameworks in use today, like Rails and its peers, were around, but they were not yet mature. It’s astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and at the time that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation long ago. We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked native integration of richer data sources — primarily photos and GPS data. The former could be done on some devices that had built-in cameras, but the sensors were too low quality on most of them. The latter almost always required an external Bluetooth GPS device to capture the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn’t make for a smooth user experience, nor does it prevent problems. We still needed to come up with a better way to make it happen, for ourselves, and certainly before we went to market touting the workflow advantages to other customers.

Geodexy Windows Mobile

In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and short-circuiting a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which, looking back from 10 years on, sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.

What we never got to with the final version of Geodexy was a modern mobile client for the data collection piece. Those were still the early days of the App Store, and I don’t recall how mature the Android Market (predecessor to Google Play) was back then, but we didn’t have the resources to start off with two mobile clients anyway. We actually had a functioning BlackBerry app first, which tells you how different the mobile platform landscape looked a decade ago2.

Geodexy’s mobile app for iOS was, on the other hand, an excellent window into the potential that iOS unlocked for us as a development platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked — offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, an online form designer, and an iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.

We didn’t get much further along with the Geodexy platform before we refocused our SaaS efforts around a new product concept that would tie all of the technology we’d built to a single, albeit large, market: the property inspection business. That’s what led us to launch allinspections, which I’ll continue the story on later.

In an odd way, it’s pleasing to think back on the challenges (or things we considered challenges) at the time and see how they contrast with today’s. We focused so much attention on things that, in the long run, aren’t terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that, I think, stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help inform decision making today.

  1. As painful as this sounds, we actually had a decent tool built on WM, but its usability was terrible, which, if you recall the time period, was par for the course for mobile applications of all stripes. 

  2. That was a decade ago. Man. 

✦
✦
✦

Weekend Reading: Geocomputation, Customers, and Linear Growth

October 13, 2018 • #

🎛 Geocomputation with R

I’ve had R on my list to dig deeper into for a long time. A while back I set myself up with RStudio and went through some DataCamp courses. This online book looks like excellent material on how to apply R to geostatistics.

☎️ Listening to Customers At Scale

Given where we are with Fulcrum in the product lifecycle, this rang very familiar: the struggles with how to listen to customers effectively, who to listen to, and how to absorb or deflect ideas. Once you get past product-market fit, the tight connection between your customers and your product team becomes impossible to maintain at the same level. Glad to hear we aren’t alone in our struggles here.

📈 Linear Growth Companies

This piece from David Heinemeier Hansson is a good reminder that steady, linear growth is still great performance for a business. Every business puts itself in a different situation, and certainly many have taken on debt or investment positions for which linear growth isn’t good enough. Even so, consistent growth in the positive direction should always be commended.

✦

A Product Origin Story

September 11, 2018 • #

Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We’ve come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I’d step back and recall its origins from a product management perspective.

We created Fulcrum to address a need we had in our own business, and quickly realized it applied to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. We knew creating a data collection platform wasn’t a brand new invention, but we also knew we could bring a novel solution combining our strengths, and that existing tools on the market had fundamental gaps in capabilities we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn’t see elsewhere:

  1. Use a mobile-first design approach — Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
  2. Make disconnected, offline use seamless to a mobile user — They shouldn’t have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
  3. Put location data at the core — Everything geolocated. (requires #1)
  4. Enable business analysis with spatial relationships — Even though we’re geographers, most people don’t see the world through a geo lens, but should. (requires #3)
  5. Make it cloud-centric — In 2011 desktop software was well on the way out, so we wanted a platform we could host in the cloud, with APIs for everything. Building from building-block primitives let us scale horizontally on the infrastructure side.

Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we feel these gaps are worth closing for ourselves, the fusion of these ideas will create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from logistical and information management challenges similar to the ones we did.

Fulcrum wasn’t our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we’d built a couple of applications: one that never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and how to apply what we’d done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we’d learned in the previous years and build something new, with the foundational approach above as our guidebook.

We started building in early spring and launched in September 2011. It had free accounts only, no multi-user support, just a simple iOS client, and no web UI for data management — suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I’d call “idea fusion”: the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can’t become great without doing these things in concert.

I think it’s natural for creators to favor one path over the other — either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum with little guidance from customers on what pains actually look like. The key I’ve learned is to find a pleasant balance between the two. Unless you have razor sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging to do, and it assumes your vision is perfectly clear.

On the other hand, waiting at the beck and call of your users to dictate exactly what to build works well in the early days when you’re looking for traction, but without an opinion about how the world should be, you likely won’t do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they’re uninventive, but because designing tools isn’t their mission or expertise. They’re on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphally) attributed to Henry Ford: “If I’d asked customers what they wanted, they would’ve asked for a faster horse.” In order to invent the car, you have to envision a new product completely unlike the one your customer is even asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.

We’ve tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers with our own vision for the future of work gets much harder. What constitutes a “big idea” gets even bigger, and the pull to treat near-term customer pains grows ever stronger (because, if you’re doing things right, you have more of those customers, holding larger checks).

When I look back to the early ‘10s at the genesis of Fulcrum, it’s amazing to think about how far we’ve carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we’re solving has matured over 7 years, but hasn’t fundamentally changed in its roots.

✦

The Power of the SaaS Business Model

February 1, 2018 • #

We’re about to head to SaaStr Annual again this year, a gathering of companies all focused on the same challenges of how to build and grow SaaS businesses. I’ve had some thoughts on SaaS business models that I wanted to write down as they’ve matured over the years of building a SaaS product.

I wrote a post a while back on subscription models, but in the context of consumer applications. My favorite thing about the subscription structure is how well it aligns incentives for both buyers and sellers. While this alignment applies to app developers and buyers in consumer software, I think the incentives are even more substantial with business applications, and they’re more important long term. The issue of ongoing support and maintenance with a high-investment business application is more pronounced — if Salesforce is down, my sales team’s time is wasted and I’m losing money. Whereas if I can’t get support for my $5/mo personal text editor, the same criticality isn’t there. Support and updates are just one small (and obvious) reason why the ongoing subscription model is better for product makers, and in turn, buyers. But let’s dig in some more. What’s better about the SaaS model?

First: subscription pricing significantly reduces the “getting started” barrier for buyers and sellers. If I go from charging you $1,000 up front for a powerful CAD application to a monthly subscription model for $79/mo, you and I both win. You like it because you’re comfortable paying that first $79 with no touch to get started, just subscribing online; no friction there. I like it because I don’t have to front-load investment in convincing you of the value. This potentially expands my customer count and gets past the initial transaction quickly.

Second: there’s predictability on both the spending and earning sides. If you’re a buyer of fixed-cost products, you have to predict ahead of time what next year’s cost might be for the 2019 version, decide whether or not to include it in your budget, and forecast possible expansion use far in advance. With SaaS you can limit all three of those problems1. As the seller, I get to enjoy the magic of recurring revenue (or in the lingo, MRR – monthly recurring revenue).

Third: pricing is easier2. In an older “box software” model, I would have to figure out the appropriate “lifetime value” my product has on the day I sell it, and balance this with what price the market will bear. Once it’s sold, there’s no room for experimentation to map price to value; the deal’s done. SaaS can be fluid here, giving me space to fit the price to the ongoing delivery of value. Of course I don’t want to be changing pricing every month, but it’s within my control to keep the pricing at an effective and sustainable level. When setting pricing, I can break it down to a smaller unit of time, as in “what value does this have to my customer over a month or quarter?”, without trying to predict how long they’ll be my customer. The total value a customer delivers over their entire relationship is the CLTV (customer lifetime value), and it’s a key metric to track after signing on customers. After a year or so, I have CLTV data I can use to inform pricing. Managing the CLTV versus CAC (customer acquisition cost) relationship is part of the SaaS pathfinding to a repeatable business3.
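To make that CLTV-versus-CAC relationship concrete, here’s a small Python sketch. The simple churn-based lifetime formula is one common approximation, and the inputs (ARPA, margin, churn, CAC) are hypothetical; none of these are real Fulcrum figures.

```python
# Rough sketch of the CLTV vs. CAC relationship using a common approximation:
# lifetime value ≈ monthly gross margin per account / monthly churn rate.
# All inputs are hypothetical, for illustration only.

def cltv(arpa_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Approximate customer lifetime value for a subscription account."""
    return (arpa_monthly * gross_margin) / monthly_churn

def ltv_to_cac(ltv: float, cac: float) -> float:
    """Ratio of lifetime value to acquisition cost; ~3:1 is often cited as healthy."""
    return ltv / cac

ltv = cltv(arpa_monthly=79.0, gross_margin=0.80, monthly_churn=0.02)
print(f"CLTV: ${ltv:,.0f}")                         # $3,160
print(f"LTV:CAC: {ltv_to_cac(ltv, cac=900.0):.1f}")  # 3.5
```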

So what are the downsides? I don’t think there are any true negatives for anyone. For the seller, the major downside is that you have to keep earning the money for your product month over month, year over year. And I’d say buyers would actually call that a major upside! There’s no opportunity to sell a lemon to the customer and take home the reward — if your product doesn’t live up to the promise, you might only collect 1/12th of what you spent to get the customer in the first place (see CAC v LTV!). That’s no good. In SaaS you have to keep delivering and growing value if you want to keep that middle R in “MRR” alive. I call this a “downside” for sellers insofar as it creates a new business challenge to overcome. Selling your product this way actually has huge long-term benefits to your product and company health. It prevents you from taking shortcuts for easy money.

This is the greatest thing about the SaaS model: keeping everyone honest. It allows the best products to float to the top of the market. To compete and grow as a SaaS product, you have to keep up with the competition, track ever-growing customer expectations, release new capabilities, maintain stability, and continually harden your security. Buyers are kept honest by their spend; they have to keep buying if they want the backup of ongoing support, updates, new features, solid security, and more.

One thing I’ve seen with a SaaS business is the perception from buyers that recurring payments will add up to a higher total lifetime cost for a solution. So in the case above, if my CAD software is a subscription at $79/mo per seat, customers will immediately compare it to the old model — “it was a $1,000 one-time fee, now it’s $79 each month. After a year or so I’ll have paid more than the one-time cost. The product is core to my business, so I’ll definitely be using it longer than a year.”
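To put rough numbers on that buyer comparison, here’s a back-of-the-envelope sketch using the hypothetical $1,000 one-time and $79/mo figures from above (and ignoring the “invisible” costs discussed next).

```python
# Back-of-the-envelope comparison of a hypothetical $1,000 one-time license
# versus a $79/mo subscription, ignoring support, upgrade, and IT costs.

import math

one_time = 1_000.0
monthly = 79.0

breakeven_month = math.ceil(one_time / monthly)
print(f"Subscription total passes the one-time price in month {breakeven_month}")   # month 13
print(f"3-year subscription total: ${monthly * 36:,.0f} vs. ${one_time:,.0f} one-time")  # $2,844 vs. $1,000
```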

While this is true when strictly comparing costs, it doesn’t tell the whole story. In the early days testing a new product, it’s hard to see where the invisible costs will be. How much support will I need? Are there going to be bugs that need patching? What if I need to call someone to troubleshoot major issues? What’ll my internal IT costs be to roll out updates? The SaaS advantage is that (in general) there are good answers to these questions; ongoing support and improvements are part of the monthly tab. Another thing you run into, though less and less these days, is the compulsion to build the capability internally. The perception of high lifetime cost compels technical buyers to spend that money on their internal IT department rolling their own software to solve the problem. Not that this solution never makes sense, but most buyers are not software companies at the core. They’ll never build a great solution to their problem and be willing to commit to the maintenance investment to keep pace with what SaaS providers are doing. Over time as the SaaS model spreads, buyers will get more comfortable with the process and better understand where their SaaS spend is going.

These two posts from Ben Thompson give a great rundown of companies switching to SaaS, and why subscription business models are better for incentives.

  1. There’ll always be exceptions here, even in SaaS. But you can at least put most of your customers in a consistent bucket. 

  2. Of course a SaaS product could change their pricing along the way, too. But at least the individual purchasing events are more predictable, on average. And not to imply that pricing is ever objectively easy. 

  3. SaaStr is the best resource for all things unit economics and metrics. A gold mine of prior art for anyone in the SaaS market. 

✦
✦