A better path is to reflect forward, not backwards. Develop a loose theory while working on what's next. Appreciate there's no certainty to be found, and put all your energy into doing better on an upcoming project. But how will you do better next time if you don't know what went wrong last time? Nothing is guaranteed other than experience. You'll simply have more time under the curve, and more moments under tension, to perform better moving forward. Internalize as you go, not as you went.
With product design, constraints are your friend. Great products emerge from teams able to differentiate between control and noise factors: things they can control vs. things they can't. Many teams are tempted to waste time worrying about things outside of their control.
In this example, the control factors are all the things we get to decide: Which quotes to include, whether the block has some header text or not, the language of the header text, the position of the name of each reviewer relative to their quote, whether to use just first name or first and last name, whether to include their title, etc.
The noise factors are all the things that we can't control. That's the width of the container that our block fits into, the maximum height, the colors and typography of the existing site, etc. There can also be noise factors in the implementation. For example, the CSS framework used for styling the elements, or the content management system used to build the page, or the JavaScript framework available, etc.
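As a rough illustration (this sketch is mine, not from the original example; the names and markup are invented), here's how that split might look in code for the reviews block - control factors become explicit options, while noise factors like container width and the host site's typography are deliberately left to the surrounding page:

```ts
// Hypothetical sketch of the reviews block. Control factors are parameters we
// choose; noise factors (container width, host-site fonts and colors) are
// accommodated rather than fought: no fixed widths, no hard-coded typography.
interface Review {
  quote: string;
  firstName: string;
  lastName?: string; // control factor: first name only, or first and last
  title?: string;    // control factor: include the reviewer's title or not
}

interface QuoteBlockOptions {
  heading?: string;                // control factor: header text, or none
  namePosition: "above" | "below"; // control factor: name relative to quote
}

function renderQuoteBlock(reviews: Review[], opts: QuoteBlockOptions): string {
  const renderItem = (r: Review): string => {
    const name = r.lastName ? `${r.firstName} ${r.lastName}` : r.firstName;
    const byline = r.title ? `${name}, ${r.title}` : name;
    const quote = `<blockquote>${r.quote}</blockquote>`;
    const cite = `<cite>${byline}</cite>`;
    return `<figure class="quote-item">${
      opts.namePosition === "above" ? cite + quote : quote + cite
    }</figure>`;
  };
  // Noise factors: the block fills whatever container it lands in and inherits
  // the host page's styles, so it degrades gracefully on sites we don't control.
  return [
    `<section class="quote-block" style="max-width: 100%">`,
    opts.heading ? `<h2>${opts.heading}</h2>` : "",
    ...reviews.map(renderItem),
    `</section>`,
  ].join("\n");
}
```

The point isn't the markup; it's that every decision we own lives in the options, and everything we don't own is treated as a given.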
He links to an episode of the Circuit Breaker podcast that covers this topic. Bob's example of a "noise" factor comes from his work with a dish soap company. If the soap they design doesn't work as an effective solvent in room-temperature water (it requires hot water to function), then it isn't respecting a noise factor that's a simple reality of the environment: some people wash their dishes with cold water. Regardless of what you print on the label, people will do what they'll do as part of their routine, and they may have their reasons. You can attempt to control the noise factor if you want, but your time is better spent incorporating common noisy elements as givens and designing for the best fit you can, using factors you can control (surfactant volumes in the soap, ingredients, etc.).
In product development, you can orient a team toward process or practice. Process is about repeatability, scalability, efficiency, execution. Practice is about creativity, inventiveness, searching for solutions.
Choosing between them isn't purely zero-sum (like more practice = worse process), but there's a natural tension between the two. And as with most ideas, the right approach varies depending on your product, your stage, your team, your timing. In general, you're looking for a balance.
I heard about this concept on a recent episode of the Circuit Breaker podcast, with Bob Moesta and Greg Engle. A recommended listen.
In the discussion Bob mentions his experiences in Japan, and how the Japanese see process differently than we do here in the US:
A lot of this for me roots back to some of the early work I did in Japan around process. The interesting part is that the way the Japanese talked about process was around the boundaries by which you have control.
The boundaries of the process basically say that you have responsibility and authority to change things inside that process, and that it was more about continuous improvement, changing, and realizing there's always a way to make it better, but here are the boundaries. When it got translated over to the US, it got turned into "best practices" - into building a process and defining the steps. "These are ways to do it. I don't want you to think. Stop thinking and just do the process and it works." And so what happens is that most people equate making a process to not thinking and "just following the steps." And so that's where I think there's this big difference: at some point in time there's always got to be some deeper thinking inside the process.
Process assumes there's a linearity to problem-solving - you program the steps in sequence: do step 1, do step 2, do step 3. At a certain stage of maturity, work benefits from this sort of superstructure. Once you've nailed the product's effectiveness (it solves a known problem well), it's time to swing the other way and start working out a process to bring down costs, speed things up, and optimize.
So what happens when a team over-indexes on process when they should be in creative practice mode?
A key indicator that it's "practice time" is when you've got more unknowns than knowns. When there are still more unanswered questions than answered ones and you try to impose a programmatic process, people get confused and feel trapped. If you start to impose a linear process before it's time, your team will grind to a halt and (very slowly) deliver a product that won't solve user problems.
Too much process (or too-early process) means you don't leave room for the creative thinking required to answer the unanswered.
Legendary engineer and management consultant W. Edwards Deming had a saying about process:
"If you can't describe what you do as a process, then you don't know what you're doing."
But I love that Moesta calls this out, and I agree - the quip overstates the value of process:
"But that doesn't mean that if we can describe it as a process, we know what we're doing. We can have a process and it doesn't work!"
The best position for the process-practice pendulum is a function of your need at a point in time, and the maturity of your particular product (or feature, or function). In general, the earlier you are on a given path, the less you should be concerned with process. You need the divergent-thinking creativity to search the problem space. You're in "solve for unknowns" mode. In contrast, later on, once you've solved for more of the unknowns and have confidence in your chosen direction, it's beneficial to create repeatability, to shore up your ability to execute reliably. At that point it's time to converge on executing, not to keep diverging into new territory.
Back to Bob's point about process meaning "no thinking inside the process": perhaps we could contrast process and practice by the size of the boundaries inside which we can be divergent and experimental. When we need to converge on scalability and consistency, we don't want to eliminate all thinking, just shrink down the confines of creativity. Even at this point in a mature cycle, the team will still encounter problems that we need them to think creatively to navigate - but the range of options should be limited (e.g. they can't "start over"). When our problem space is still rife with unanswered questions, we want a pretty expansive space to wander in search of answers. If our problem space is defined by having Hard Edges and a Soft Middle, at different stages of our work we should size that circle according to how much divergence or convergence we need.
All this isn't to say that during the creative, divergent-thinking period you should have an unbounded lack of structure to how you conduct the work. Perhaps it's better to say that at this stage you want to define principles to follow that give you the degrees of freedom you need to explore the solution space.
"I would not give a fig for the simplicity on this side of complexity, but I would give my life for the simplicity on the other side of complexity."
What a great line from Oliver Wendell Holmes.
When you think you're coming up with "simple" responses to complex problems, make sure you're not (as Bob Moesta says) creating "simplicity on the wrong side of the complexity."
What we really want is to work through all the tangled complexity ourselves as we're picking apart the problem and designing well-fit solutions.
A great (simple) solution to a complex problem can be that way because someone's taken on the burden of detangling the complexity first.
This is an interesting look into how an effective team works through the weeds of a product design review. I love how it shows the warts and complexities of even a seemingly simple flow: sending a batch email in an email client. So many little forking paths and specific details need direct thinking to shape a product that works well.
Building new things is an expensive, arduous, and long path, so product builders are always hunting for means to validate new ideas. Chasing ghosts is costly and deadly.
The "data-driven" culture we now live in discourages making bets from the gut. "I have a hunch" isn't good enough. We need to hold focus groups, do market research, and validate our bets before we make them. You need to come bearing data, learnings, and business cases before allowing your dev team to get started on new features.
And there's nothing wrong with validation! If you can find sources to reliably test and verify assumptions, go for it.
But these days teams are compelled to conduct user testing and research to vet a concept before diving in.
I see this push for data-driven validation as primarily a modern phenomenon, for two reasons:
First, in the old days all development of any new product was an enormously costly affair. You were in it for millions before you had anything even resembling a prototype, let alone a marketable finished product. Today, especially in software, the costs of bringing a new tool to market are dramatically slashed from 20 or 30 years ago. The toolchains and techniques developed over the past couple decades are incredible. Between the rise of the cloud, AWS, GitHub, and the plethora of open source tools, a couple of people have superpowers to do what used to take teams of dozens, and thousands of hours before you even had anything beyond a requirements document.
Second, we have tools and the cultural motivation to test early ideas, which weren't around back in the day. There's a rich ecosystem of easy-to-use design and prototyping tools, plus a host of user testing tools (like UserTesting, appropriately) to carry your quick-and-dirty prototypes out into the world for feedback. Tools like these have contributed to the push for data-driven-everything. After all, we have the data now, and it must be valuable. Why not use it?
I have no problem with being data-driven. Of course you should leverage all the information you can get to make bets. But data can lie to you, trick you, overwhelm you. We're inundated with data and user surveys and analytics, but how do we separate signal from noise?
One of my contrarian views is that I'm a big defender of gut-based decision making. Not in an "always trust your gut" kind of way, but rather, I don't think listening to your intuition means you're "ignoring the data." You're just using data that can't be articulated. It's hard to get intuitive, experiential knowledge out of your head and into someone else's. You should combine your intuitive biases with other objective data sources, not attempt to ignore them completely.
I'm fascinated by tacit knowledge (knowledge gained through practice and experience). It differentiates what can be learned only through practice from what can be read or learned through formal sources - things we can only know through hands-on experience, versus things we can know from reading a book, hearing a lecture, or studying facts. Importantly, tacit knowledge is still knowledge. When you have a hunch or a directional idea about how to proceed on a problem, there's always an intrinsic basis for why you're leaning that way. The difference between tacit knowledge and "data-driven" knowledge isn't that there's no data in the former case; it's merely that the data can't be articulated in a spreadsheet or chart.1
I've done research and validation so many ways over the years - tried it all. User research, prototype workshopping, jobs-to-be-done interviews: all of these are tools in the belt that can help refine or inform an idea. But none of them truly validates that yes, this thing will work.
One of the worst traps with idea validation is falling prey to your own desire for your idea to be valid. You're looking for excuses to put a green checkmark next to an idea, not seeking invalidation, which might actually be the more informative exercise. You find yourself writing down all the reasons that support building the thing, without giving credence to the reasons not to. With user research processes, so many times there's little to no skin in the game on the part of your subject. What's stopping them from simply telling you what you want to hear?
Paul Graham wrote a post years ago with a phenomenal, dead-simple observation on the notion of polling people on your ideas. A primitive and early form of validation. I reference this all the time in discussions on how to digest feedback during user research (emphasis mine):
For example, a social network for pet owners. It doesn't sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.
The danger of an idea like this is that when you run it by your friends with pets, they don't say "I would never use this." They say "Yeah, maybe I could see using something like that." Even when the startup launches, it will sound plausible to a lot of people. They don't want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.
The hardest part is separating the genuine from the imaginary. Because of the skin-in-the-game problem with validation (exacerbated by the fact that the proposal you're validating is still an abstraction), you're likely to get a deceitful sense of the value of what you're proposing. "Would you pay for this?" is a natural follow-up to a user's theoretical interest, but it's usually infeasible at this stage.
It's very hard to get critical, honest feedback early, when people have no reason to invest the time and mental energy to think about it. So the best way to solve for this is to reduce abstraction - give the user a concrete, real, tangible thing to try. The closer they can get to a substantive thing to assess, the less you're leaving to their imagination as to whether the thing will be useful for them. When an idea is abstract and a person says "This sounds awesome, I would love this", you can safely assume they're filling in unknowns with their own interpretation, which may be way off the mark. Tangibility - getting hands-on with a complete, usable thing to try - removes many false assumptions. You want their opinion on what it actually will be, not what they're imagining it might be.
There's really only one way to get as close to certain as possible: build the actual thing and make it available for anyone to try, use, and buy. Real usage of real things on real days during the course of real work is the only way to validate anything.
The Agile-industrial complex has promoted this idea for many years: moving fast, prototyping, putting things in market, iterating. But our need for validation - our need for certainty - has overtaken our willingness to take some risk, trust our tacit knowledge, and put early but concrete, minimal-but-complete representations out there to test fitness.
De-risking investments is a critical element of building a successful product. But some attempts to de-risk actively trick us into thinking an idea is better than it is.
So I'll end with two suggestions: be willing to trust your intuition more often on your ideas, and try your damnedest to smoke test an idea with a complete representation of it, removing as much abstraction as possible.
I'd highly recommend Gerd Gigerenzer's book Gut Feelings, which goes deep on this topic. For a great précis, check out his conversation from a few years back on the EconTalk podcast.
Reading Ryan Singer's Shape Up a few years ago was formative (or re-formative, or something) in my thinking on how product development can and should work. After that it was a rabbit hole on jobs-to-be-done, Bob Moesta's Demand-Side Sales, demand thinking, and more.
Since he wrote the book in 2019, he has talked about two new concepts that extend Shape Up a little further: the role of the "technical shaper" and the stage of "framing" that happens prior to shaping.
Framing is a particularly useful device for putting meat on the bone of defining the problem you're trying to solve. Shaping as the first step doesn't leave enough space for the team to really articulate the dimensions of the problem it's attempting to solve. Shaping is looking for the contours of a particular solution, setting the desired appetite for a solution ("we're okay spending six weeks' worth of time on this problem"), and laying out the boundaries around the scope for the team to work within.
I know that on our team, the worst project execution happens when the problem is poorly defined before we get started, or when we don't get down to enough specificity to set proper scopes. I love these additions to the framework.
Sam Gerstenzang wrote an excellent piece a couple weeks ago with operating lessons for growing companies, driven by his learnings from the product team at Stripe. Personally, I've got a decade or so of experience as an "operator" at a "startup" (two words I wouldn't have used to describe my job during most of that time). Since 2011 I've led the product team at Fulcrum, a very small team until the last few years, and still only in the medium-size range. So my learnings on what "good operating" looks like are based mostly on this type of experience, not necessarily on how to lead a massive product team at a 10,000-person company. I think a surprising number of the desirable characteristics translate well, though - many more than BigCo operator types would have you believe.
Overall, this list Sam put together is especially great for teams that are small, early, experimental, or trying to move fast building and validating products. There are a few tips in here that are pretty contrarian (unfortunately so, since they shouldn't be contrarian at all) relative to most of what you'll read in the literature on lean, agile, startup-y teams. But I like that many of these still apply to a company like Stripe that's in the few-thousands headcount range now. There isn't that stark a difference in desirable characteristics between small and large teams - or at least there doesn't need to be.
On the desire to enforce consistency:
Consistency is hard to argue against - consistency reinforces brand and creates better ease of use - but the costs are massive. Consistency just feels good to our system-centric product/engineering/design brains. But it creates a huge coordination cost and prohibits local experimentation - everything has to be run against a single standard that multiplies in communication complexity as the organization gets larger. I'd try to ask "if we aim for consistency here, what are the costs, and is it worth it?" I think a more successful way to launch new products is to ignore consistency, and add it back in later once the project is successful. It will be painful, but less painful than risking success.
I'd put this in the same category as feeling the need to "set yourself up to scale." When a team lead is arguing to do something in a particular way that conforms with a specific process or procedure you want to reinforce through the company, it's an easy thing to argue for. But too often it ignores the trade-offs in coordination overhead it'll take to achieve, and the value of the consistency ends up suspect anyway. In my experience, coordination cost is a brutal destroyer of momentum in growing companies. Yes, some amount of it is absolutely necessary to keep the ship floating, but you have to avoid saddling yourself with coordination burdens you don't need (and won't benefit from anyway). Apple or Airbnb might feel the need to tightly coordinate consistency, but you aren't them. Don't encumber yourself with their problems when you don't have to.
Enforcement of consistency - whether in design, org charts, processes, or output - has a cost, sometimes a high one. So it's not always worth the trade-off you have to make in speed or shots-on-goal. In a growing company, the number of times you can iterate and close feedback loops is the coin of the realm. Certain things may be so important to the culture you're building that the trade-off is worth it, but be judicious about this if you want to maintain velocity.
On focus:
People think focus is about the thing you're focused on. But it's actually about putting aside the big, shiny, exciting things you could be working on. The foundation of focus is being clear upfront about what matters - but the hard work is saying no along the way to the things that feel like they might matter.
True focus is one of the hardest parts once your team gets some traction, success, and revenue. It's actually easier in the earlier days, when the sense of desperation is more palpable (I used to say we were pretty focused early on because it was the only way to get anything made at all). But once there's some money coming in, users are using your product, and customers are growing, you have time and resources you didn't have before, which you can choose to allocate as you wish. The thing is, the challenge ahead is still enormous, and the things you've yet to build are going to get even more intricate, complex, and expensive.
It's simple to pay lip service to staying focused and to write OKRs for specific things. But somehow little things start to creep in. Things that seem urgent pop up - especially dangerous if they're "quick wins." You only have to let a few good-but-not-that-important ideas float around and create feel-good brainstorming sessions, and before you know it, you've burned days on stuff that isn't the most important thing. There's power in explicitly taking an ax to your backlog or your kanban board of ideas. Delete all the stuff you aren't gonna do, or at least ship it off to Storage B and get to work.
On product-market fit and small teams:
Start with a small team, especially when navigating product-market fit. Larger teams create communication overhead. But more importantly they force definition - you have to divide up the work and make it clear who is responsible for what. You're writing out project plans and architecture diagrams before you even should know what you're building. So start small and keep it loose until you have increased clarity and are bursting at the seams with work.
To restate what I said above, it's all about feedback loops. How many at-bats can you get? How many experiments can you run? Seeking product-market fit is a messy, failure-ridden process that requires a ton of persistence to navigate. But speed is one of your best friends when it comes to maintaining the determination to unlock the job to be done and find just enough product fit that you get the signals needed to inform the next step. Therefore, small, surgical teams are more likely to successfully run this gauntlet than big ones. All of the coordination cost on top of a big, cross-functional group will drain the team and greatly reduce the number of plate appearances you get. If you have fewer points of feedback from your users, by definition you'll be less likely to take a smart second or third step.
The responsibility point is also a sharp one - big teams diffuse responsibility so thinly that team members feel no ownership over the outcomes. And it might be my cynicism poking through, but on occasion advocates for making teams bigger, broader, and more cross-functional are really on the hunt for a crutch to avoid ownership. As the aphorism goes, "a dog with two owners dies of hunger." Small teams have stronger bonds to the problem and greater commitment to finding solutions.
Overall, I think his most important takeaway is the value of trust systems:
Build trust systems. The other way is to create systems that create trust and distributed ownership - generally organized under some sort of "business lead" that multiple functions report to. It's easier to scale. You'll move much more quickly. A higher level of ownership will create better job satisfaction. But you won't have a consistent level of quality by function, and you'll need to hire larger numbers of good people.
A company is nothing but a network of relationships between people on a mission to create something. If those connections between people aren't founded in trust and a shared understanding of goals and objectives, the cost of coordination and control skyrockets. Teams low in trust require enormous investment in coordination - endless status update meetings, check-ins, reviews, et al.1 If you can create strong trust-centric operating principles, you can move so much faster than your competition that it's like having a superpower. The larger teams grow, of course, the more discipline is required to reinforce foundations of trust, but also the more important those systems become. A large low-trust team is dramatically slower than a small one; coordination costs compound exponentially.
Don't take this to mean I'm anti-meeting! I'm very pro-meeting. The problem is that ineffective ones are pervasive.
With Shape Up, "shaping" new product work takes investment to really understand and communicate an idea. And shaping precedes the "pitch", the step where the team reviews a shaped concept and buys into working on it:
According to the book, shapers prepare pitches, pitches go to the betting table, and then the business decides at the last moment which pitches to "bet on." Teams who follow this to the letter encounter a problem: how do you justify spending the time to shape something when you don't know if the business will value it?
This easily leads to a situation where shaping is … underdone. Pitches become sales pitches to convince the business to spend time on - to bet on - this or that problem or feature request. Those pitches focus on making the case to work on something, and they lack the rigor that goes into shaping a viable technical solution.
The pitch shouldn't be a sales pitch. At that stage, the business shouldn't need convincing anymore that a problem is worth solving; the pitch is to showcase a proposed solution to the problem. So Ryan recently added a stage he calls "framing" to describe the work prior to shaping, where we can communicate the value to the business and the nature of the problem to be solved:
To solve this, I've coached teams to introduce a new step before Shaping that we call Framing. Framing is all about the problem and the business value. It's the work we do to challenge a problem, to narrow it down, and to find out if the business has interest and urgency to solve it.
The framing session is where a feature request or complaint gets evaluated to judge what it really means, who's really affected, and whether now is the time to try and shape a solution.
Have you had that feeling of being several weeks into a project and finding yourself wandering around, struggling to wrangle the scope back to what you thought it was when you started?
It's an easy trap to fall into. It's why I'm always thinking about ways to make targets smaller (or closer, if you're thinking about real physical targets). The bigger and more ambitious you want to be with an objective, the more confidence you need that the objective is the right one. What often happens is that we decide on a project scope - a feature or product prototype we think has legs - but the scope gets bigger than our confidence that we're right. A few weeks in, there's hedging, backtracking, redefining. You realize you went down a blind alley that's hard to double back on.
I heard an interesting perspective on scopes and approaches to building. Think of the "scope" as the definition of what the project is seeking to do, and the approach as the how.
In an interview on David Perell's podcast, Ryan Singer compared having a hard outer boundary for the work with soft requirements on approach, versus rigid and specific micro-steps with no solid fence around them and an unclear or amorphous objective. In his words, "hard walls with a soft middle" or "hard middle with a soft wall":
I've had this mental image that I haven't been able to shake, that's working for me lately, which is what we're doing in Shape Up. We have a very hard outer wall for the work. And we have a soft middle. So there's a hard outer boundary perimeter - it's very fixed, it's going to be six weeks and we're doing this, and this is in the project, and this is out of the project, and this is what the solution is, more or less. Clear, hard outer boundaries. But then the middle is totally like "hey, you guys figure it out." Right now what a lot of companies have is the opposite. They have a hard middle and a soft boundary. So what happens is they commit: for the first two weeks, we're going to build this and we're going to build that, and all these little things. And these become tickets or issues or very specific things that have to get done. And then what happens the next two weeks? You say, okay, now we're going to do this. You're specifying exactly what should go in the middle, and it just keeps growing outward because there's no firm boundary on the outside to contain it. So this is the hard wall and the soft middle, or the hard middle and the soft wall. I think those represent two very, very different approaches.
This requires trust in the product team to choose approach trade-offs wisely. If you encounter a library in use for the feature that's heavily out of date, but the version update requires sweeping changes throughout the app, you'll need to pick your battles. A team with fixations on particular steps (the "hard middle") might decide too early that an adjacent feature needs rework.1 Before pulling up to a higher altitude to look at the entire forest, the team's already hitched to a particular step.
Setting a hard edge with a soft middle establishes the field of play and the game plan, but doesn't prescribe which plays the team should run. The opposite model has a team hung up on specific play calls, with no sense of how far there is to run, or even how large the field is in the first place. When you grant the team the freedom to make the tactical choices, everyone knows there's some freedom, but it isn't infinite. The team can explore and experiment to a point, but doesn't have forever to mess around. If you choose to work in the Shape Up-style six-week cycle, decision velocity on your approaches has to be pretty high to hit your targets.
Any creative work benefits from boundaries, from having constraints on what can be done. The writer is constrained by a deadline or word count. The artist is constrained by the canvas and medium. A product team should be constrained by a hard goal line in terms of time or objective, or preferably both.
Some of the best work I've ever been a part of happened when we chose particular things we weren't going to do - when we intentionally blocked specific paths for ourselves for some cost/benefit/time balance. Boundaries allow us to focus on fewer possibilities and give more useful, serious attention to fewer options. We can strongly consider 10 approaches rather than poorly considering 50 (or, even worse, becoming attached to a specific one before we've explored any others).
Premature marriage to specific tactics pins you to the ground at exactly the time when you need some space to explore. Because you've locked yourself into a particular approach too early, it may take tons of effort and time to navigate from your starting point to the right end point. You may end up having to do gymnastics to make your particular decided-upon solution fit a problem you can find (a solution in search of a problem).
Hard edge, soft middle reminds me of a favorite philosophy from the sage Jeff Bezos, talking about Amazon's aggressive, experimental, but intentional operating culture:
"Be stubborn on the vision, but flexible on the details."
"Need" is a dangerous trigger word. Almost always, the perceived need is based on a particular understanding of trade-offs that could be misguided. One engineer's need to recover some technical debt (noble, of course) might be the opposite for the CEO, who might be seeing a bigger-picture, existential need for the business. A thing is only "more needed" relative to something else.
Product-led growth has been booming in the B2B software universe, becoming the fashionable way to approach go-to-market in SaaS. I'm a believer in the philosophy, as we've seen companies grow to immense scales and valuations off of the economic efficiencies of this approach, powered by better and better technology. People point to companies like Atlassian, Slack, or Figma as examples that grew enormously through pure self-service, freemium models. You hear a lot of "they got to $NN million in revenue with no salespeople."
This binary mental model of either product-led or sales-led is a false dichotomy, imagining that the two are mutually exclusive - to grow, you can do it through self-service or you can hire a huge sales team, pick one. Even if it's not described in such stark terms, claims like "they did it without sales" position sales as a sort of necessary evil we once had to contend with against our wills as technology builders.
But all of the great product-led success stories (including those mentioned above) include sales as a component of the go-to-market approach. Whether they call the function by that name or prefer any number of other modern euphemisms (customer happiness advocate, growth advisor, account manager), at scale customers end up demanding an engagement style most of us would call "sales."
Product-led, self-service models and sales are not incompatible with one another. In fact, if structured well, they snap together into a synergistic flywheel where each feeds off of the other.
Early-stage customers
Product-led tactics have the most benefit in the early stage of a customer's lifecycle, when your product is unproven. Free trials and freemium options lower the bar to getting started down to the floor, self-service tools allow early users to learn and deploy a tool in hours on their own timeline, and self-directed purchasing lets the buyer buy rather than be sold to. In 2021, flexibility is table stakes for entry-level software adoption. There are so many options now that the buying process is in the customer's control.
With the right product design, pricing, and packaging structure, customers can grow on their own with little or no interaction through the early days of their expansion. Small to mid-size users may expand to their maximum size with no direct engagement. Wins all around.
For larger customers (the ones all of us are really after in SaaS), this process gets them pretty far along, but at some stage other frictions enter the picture that have nothing to do with your product's value or the customer's knowledge of it. Financial, political, and organizational dynamics start to rear their heads, and these sorts of human factors are highly unlikely to get resolved on their own.
The Sales Transition
Once the bureaucratic dynamics are too great, for expansion to continue we need to intervene to help customers navigate their growing usage. As I wrote about in Enterprises Don't Self-Serve, several categories of friction appear that create growth headwinds:
Too many cats need to be herded to get a deal done - corralling the bureaucracy is a whole separate project unrelated to the effectiveness or utility of the product; no individual decision maker
The buyer isn't the user - the user can't purchase the product, the purchaser has never used the product; competing incentives
If you have an advocate, they have a day job - and that job isn't playing politics with accounting, legal, execs, IT, and others
As you start encountering these frictions, you need to proactively intervene through sales. The role of sales is to connect with and navigate the players in the organization, then negotiate the give-and-take arrangements that create better deals for both parties: e.g. the customer commits to X years, the customer gets Y discount. Without a sales-driven approach here, every customer is treated as one-size-fits-all - not the best deal for the vendor or the customer. When you insert sales at the right stage, you increase the prospect of revenue growth, and the customer's ability to sensibly scale into that growth with proper integration throughout their organization.
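To make the give-and-take shape concrete, here's a toy sketch (the tiers and numbers are invented for illustration, not from any real pricing model): longer commitments from the customer trade against better unit pricing from the vendor.

```ts
// Toy illustration only: the discount schedule below is hypothetical.
interface DealTerms {
  commitYears: number; // what the customer gives
  discountPct: number; // what the customer gets
}

const discountSchedule: DealTerms[] = [
  { commitYears: 1, discountPct: 0 },
  { commitYears: 2, discountPct: 10 },
  { commitYears: 3, discountPct: 18 },
];

// Quote an annual price given a list price and the customer's commitment.
function quoteAnnualPrice(listPrice: number, commitYears: number): number {
  const eligible = discountSchedule.filter((t) => commitYears >= t.commitYears);
  const best = eligible.reduce(
    (a, b) => (b.discountPct > a.discountPct ? b : a),
    { commitYears: 1, discountPct: 0 }
  );
  return listPrice * (1 - best.discountPct / 100);
}

console.log(quoteAnnualPrice(50_000, 3)); // 41000: a 3-year commit earns 18% off
```

The specific numbers matter far less than the fact that the trade is explicit - that negotiation is exactly what sales is there to run.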
In the SaaS literature you'll read about the notion of "champions": internal advocates for your product inside your customer's organization who are instrumental in growing usage. Champions serve a function in both methodologies - with product-led, they're pivotal for adoption to perpetuate itself without your involvement, and when engaging with sales, we need those champions to be intermediaries between vendor and buyer. They act like fixers or translators, helping to mediate the communication between the sides.
A well-built, product-led product mints these champions through empowerment. We give users all the tools they need - documentation, guides, forums, SDKs - to build and roll out their own solution. After a couple phases of expansion, users evolve from beginners to experts to champions. If we're doing it right and we time sales correctly, champions are a key ingredient in maximizing the relationship for both customer and product-maker. A product-led approach early on creates inertia to keep growing - a back pressure that sales can harness to our advantage.
As they demonstrate, a user becoming a champion isn't the end state; champions beget brand-new users through advocacy, word of mouth, and promotion within their own networks.
It's dangerously short-sighted to look down your nose at sales as a dirty word. Sales isn't just something you resort to when you "can't do PLG"; it's a positive-sum addition to your go-to-market when you execute this flywheel properly.
Ryan Singer shows his work through the process of shaping a new feature for a product he's building. This is a great, detailed example of a process following the Shape Up philosophy.
Looking over the virtual shoulder of someone's process as they think through a problem from abstract to concrete is incredibly helpful at sparking ideas to help unstick my own projects. I like the nonlinearity in his procedure here. It doesn't go unidirectionally from coarse to fine; along the way he's using lower- or higher-fidelity tools to work through questions based on the need that arises at the time:
The chain didn't become more concrete over time. It didn't start in words and end in mock-ups. There were sketches, mockups, and code spikes along the way, but they appeared in the middle of the process to resolve specific questions. When the level of abstraction dipped down to resolve a doubt, it climbed again afterward, and in the end the pitch had the latitude it should so the development team can still determine the final form.
"Efficiency is doing things right; effectiveness is doing the right things."
- Peter Drucker
People throw these two words around pretty indiscriminately in business, usually without making a distinction between them. They're treated as interchangeable synonyms for broadly being "good" at something.
We can think about effectiveness and efficiency as two dimensions on a grid, often (but not always) in competition with one another. More focus on one means less on the other.
That Drucker quote is a pretty solid one-line distinction. But like many quotes, it's more concerned with being pithy and memorable than with being helpful.
"Doing things right" is too amorphous. I'd define the two dimensions like this:
Efficiency is concerned with being well-run, applying resources with minimal waste; having an economical approach
Effectiveness is a focus on fit, fitting the right solution to the appropriate problem, being specific and surgical in approach
Where does speed fit into this? Many people would think of velocity of work as an aspect of efficiency, but it's also both a result of and an input to effectiveness. When a team of SEAL operators swoops in to hit a target, we'd say that's just about the pinnacle of being "effective", and swiftness is a key factor driving that effectiveness.
Let's look at some differences through the lens of product and company-building. What does it mean to orient on one over the other? Which one matters more, and when?
A company is like a machine - you can have an incredibly efficient machine that doesn't do anything useful, or you can have a machine that does useful things while wasting a huge amount of energy, money, and time.
With one option, our team leans toward methods and processes that efficiently deploy resources:
Use just the right number of people on a project
Create infrastructure that's low-cost
Build supportive environments that get out of people's way
Instrument processes to measure resource consumption
Spend less on tools along the way
With this sort of focus, a team gets lean, minimizes waste, and creates repeatable systems to build scalable products. Which all sounds great!
On the other dimension, we focus more attention on effectiveness - doing the right things:
Spend lots of time listening to customers to map out their problems (demand thinking!)
Test small, incremental chunks so we stay close to the problem
Make deliberate efforts, taking small steps frequently, not going too far down blind alleys with no feedback
Another great-sounding list of things. So what do we do? Clearly there needs to be a balance.
Depending on preferences, personality types, experiences, and skill sets, different people will tend to orient on one of these dimensions more than the other. People have comfort zones they like to operate in. Each stage of product growth requires a different mix of focuses and preferences, and the wrong match will kill your company.
If you're still in search of the keys to product-market fit - hunting for the right problem and the fitting solution - you want your team focused on the demand side. What specific pains do customers have? When do they experience those pains? What things are in our range that can function as solutions? You want to spend time with customers and rapidly probe small problems with incremental solutions, testing the validity of your work. That's all that matters. This is Paul Graham's "do things that don't scale" stage. Perfecting your machine's efficiency is wasted effort until you're solving the right problems.
A quick note on speed, and why I think it's critical to being effective: if you're laser-focused on moving carefully and deliberately to solve the right problem at the right altitude, but you aren't able to move quickly enough, you won't have a tight enough feedback loop to run through the iteration cycle enough times over a given period. Essential to the effectiveness problem is the ability to rapidly drive signal back from a user to validate your direction.
When you find the key that unlocks a particular problem-solution pair, then it's time to consider how efficiently you can expand it to a wider audience. If your hacked-together, duct-taped solution cracks the code and solves problems for customers, you need to address the efficiency with which you can economically expand to others. In the early-to-mid stage, effectiveness is far and away the more important thing to focus on.
The traditional definition of efficiency refers to achieving maximum output with the minimum required effort. When you're still in search of the right solution, the effort-to-output ratio barely matters. It only matters insofar as you have the required runway to test enough iterations to get something useful before you run out of money, get beat by others, or the environment changes underneath you. But there's no benefit to getting 100 miles per gallon if you're driving the wrong way.
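A back-of-envelope way to see this (the numbers here are invented, purely for illustration): what actually bounds you pre-fit is how many feedback loops you can close before the runway ends, not how cheaply you run each one.

```ts
// Back-of-envelope sketch with invented figures: pre-product-market fit,
// effectiveness is mostly a function of how many iteration cycles you can run.
const runwayMonths = 18;    // e.g. cash on hand divided by monthly burn
const cycleLengthWeeks = 2; // one build -> ship -> learn loop

const weeksOfRunway = runwayMonths * 4.33; // rough weeks per month
const cyclesAvailable = Math.floor(weeksOfRunway / cycleLengthWeeks);

console.log(cyclesAvailable); // ~38 shots at finding the right problem-solution pair

// Halving cycle time doubles your shots. Shaving 10% off the cost of each
// cycle while iterating on the wrong problem changes nothing that matters.
```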
Getting this balance wrong is easy. There's a pernicious tendency among many engineers, particularly on pre-PM-fit products: they like to optimize things. You need to forcefully resist spending too much time on optimization, rearchitecting, refactoring, et al. until it's the right time (i.e. the go-to-market-fit stage, or thereabouts). As builders or technologists, most of us bristle at the idea of doing something the quick and dirty way. We have that urge to automate, analyze, and streamline things. That's not to say there's zero space for this: if you spend literally zero time on a sustainable foundation, then when your product clicks and it's time to scale up, you'll be building on unstable ground (see the extensive literature on technical debt).
There's no "correct" approach here. It depends on so many factors. As Thomas Sowell says, "there are no solutions, only trade-offs." In my first-hand experience, and from sideline observations of other teams, companies are made by favoring effectiveness early and broken by ignoring efficiency later.
I linked a few days ago to Packy McCormick's piece Excel Never Dies, which went deep on Microsoft Excel, the springboard for a thousand internet businesses over the last 30 years. "Low-code" techniques in software have become ubiquitous at this point, and Excel was the proto-low-code environment - one of the first that stepped toward empowering regular people to create their own software. In the mid-80s, if you wanted to make your own software tools, you were in C, BASIC, or Pascal. Excel and its siblings (Lotus 1-2-3, VisiCalc) gave users a visual workspace, an abstraction layer lending power without the need to learn languages.
Today in the low-code ecosystem you have hundreds of products for all sorts of use cases leaning on similar building principles - Bubble and Webflow for websites, Make.com and Zapier for integrations, Notion and Coda for team collaboration, even Figma for design. The strategy goes hand in hand with product-led growth: start with simple use cases, be inviting to new users, and gradually empower them to build their own products.
Excel has benefited from this model since the 80s: give people some building blocks, a canvas, and some guardrails and let them go build. Start out with simple formulas, create multiple sheets, cross-link data, and eventually learn enough to build your own complete custom programs.
What is it about low-code that makes for such effective software businesses? Sure, there's the flexibility it affords for unforeseen use cases, and the adaptability to apply a tool to a thousand different jobs. But there's also psychology at play that makes it particularly compelling for many types of software.
There's a cognitive phenomenon called the "IKEA effect", which says:
Consumers are likely to place a disproportionately high value on products they partially created
IKEA is famous for its modular furniture, which customers take home and partially assemble themselves. In a 2011 paper, Michael Norton, Daniel Mochon, and Dan Ariely identified this effect, studying how consumers valued products they personally took part in creating, from IKEA furniture to origami figures to LEGO sets. Other studies of effort justification go back as far as the 1950s, so it's a principle that's been understood, even if only implicitly, by product creators for decades.
Low-code tools harness this effect, too. Customers are very willing to participate in the creation process if they get something in return. In the case of IKEA, it's more portable, affordable furniture. In low-code software, it's a solution tailored to their personal, specific business need. Paradoxically, the additional effort a customer puts into a product through self-service, assembly, or customization generates a greater perception of value than being handed an assembled, completed product.
SaaS companies should embrace this idea. Letting the customer take over the "last mile" creates wins all around. Benefits accrue to both creator and consumer:
Customers have a sense of ownership when they play a role in building their own solution.
The result can be personalized. In business environments, companies want to differentiate themselves from competitors. They don't want commodities that any other player can simply buy and install - certainly not for their unique business practices.
Production costs are reduced. The product creator builds the toolbox (or the parts list, instructions, and tool kit) and lets the consumer take it from there. The creator doesn't have to spend time understanding the nuances of hundreds of different use cases: provide the building blocks and let recombination generate thousands of unique solutions.
Increased retention! Studies showed that consumers consistently rated products they helped assemble higher in value than already-assembled ones. This valuation bias shows up in retention dynamics for your product: if customers are committed enough to build their own solution, they're more likely to imbue it with greater value.
The challenge for product creators is to strike a balance - a "just-right" level of customer participation. Too much abstraction in your product, requiring too much building up from primitives, and the customer is confused and unlikely to have the patience to work through it. Likewise, when you buy an IKEA table, you don't want to be sanding, painting, or drilling, but snapping, locking, and bolting are fine. Success is a key criterion for getting the positive upside. From the Wikipedia page:
To be sure, "labor leads to love only when labor results in successful completion of tasks; when participants built and then destroyed their creations, or failed to complete them, the IKEA effect dissipated." The researchers also concluded "that labor increases valuation for both 'do-it-yourselfers' and novices."
Participation in the process creates a feedback loop: the tool adapts to the unique circumstances of the consumer, functions as a built-in reward, and the consumer learns more about their workflow in the process.
Low-code as a software strategy allows for a personalization on-ramp. Its IKEA effect gives customers the power to participate in building their own solution, tailoring it to their specific tastes along the way.
If you're on the internet and haven't been living under a rock for the last few months, you've heard about the startup Clubhouse and its explosive growth. It launched around the time COVID lockdowns started last year, and has been booming in popularity even with (maybe in part due to?) an invitation gate and waitlist to get access.
The core product idea centers on "drop-in" audio conversations. Anyone can spin up a room accessible to the public, others can drop in and out, and, importantly, there's a sort of peer-to-peer model of contributing that differentiates it from podcasting, its closest analog.
I got an invite recently and have been checking out sessions from the first 50 or so folks I follow, really just listening so far. Their user and growth numbers aren't public, but from a glance at my follow recommendations I see lots of people I follow on Twitter already on Clubhouse.
They recently closed a B round led by Andreessen Horowitz, who also backed the company in its earlier months last year. Any time an investor leads successive rounds this quickly, it's an indicator of magic substance under the hood - signals that show tremendous upside possibility. In the case of Clubhouse, user growth is obviously a big deal - viral explosion this quickly is always a good early sign - but I'm sure there are other metrics they're seeing that point to something deeper going on with product-market fit. Perhaps DAUs are climbing in proportion to new user growth, average session duration is super long, or retention is extremely high (users returning every day).
On the surface, a skeptical user might ask: what's so different here from podcasts? It's amazing what explosive growth they've had given the similarities to podcasting (audio conversations), and considering its negatives when compared with podcasts. In all of the Clubhouse rooms I've been in, most users have telephone-level audio quality, there's somewhat chaotic overtalk, and "interestingness" is hard to predict. With podcasts you can scroll through the feed and immediately tell whether you'll find something interesting; when I see an interesting guest name, I know what I'm getting myself into. You can reliably predict that you'll enjoy the hour or so of listening.
Whenever a new product starts to take off like this, it's hitting on some aspect of latent, unfulfilled user demand. What if we think about Clubhouse from a Jobs to Be Done perspective? Thinking about it from the demand side, what role does it play in addressing the jobs customers have?
Clubhouse's Differentiators
Clubhouse describes itself as "drop-in audio chat", which is a stunningly simple product idea. Like most tech innovations of the internet era, the foundational insight is so simple that it sounds like a joke, a toy. Twitter, Facebook, GitHub, Uber - the list goes on and on - none required the invention of core new technology to prosper. Each of them combined existing technical foundations in new and interesting ways to create something new. Describing the insights of these services at inception often prompted responses like "that's it?", "anyone could build that", or "that's just a feature X product will add any day now". In so many cases, though, when the startup hits on product-market fit and executes well, products can create their own markets. In the words of Chris Dixon, "the next big thing will start out looking like a toy".
Clubhouse rides on a few key features. Think of these like Twitter's combo of realtime messaging + 140 characters, or Uber's connection of two sides of a market (drivers and riders) through smartphones and a user's current location. Clubhouse takes audio chat and combines:
Drop-in - You browse a list of active conversations; one tap and you drop into the room. Anyone can spin up a room ad hoc.
Live - Everything happening in Clubhouse is live. In fact, recording isn't allowed at all, so there's a "you had to be there" FOMO factor that Clubhouse can leverage to drive attention.
Spontaneous - Rooms are unpredictable, both in when they'll sprout up and in what goes on within conversations. Since anyone can raise their hand and be pulled "on stage", conversation is unscripted and emergent.
Omni-directional - Podcasts are one-way, from producer to listener, though some shows have "listener mail" feedback loops. Clubhouse rooms by definition have a peer-to-peer quality. They truly are conversations, at least as long as the room doesn't have 8,000 people in it.
None of these is a new invention. Livestreaming has been around for years, radio has done much of this over the air for a century, and people have been hosting panel discussions since the time of Socrates and Plato. What Clubhouse does is mix these together in a mobile app, giving you access to live conversations whenever you have your phone plus connectivity. So, any time.
Through the Lens of Jobs
Jobs to Be Done focuses on the specific needs that exist in a customer's life. The theory talks about "struggling moments": gaps in demand that product creators should be in search of, looking for how to fit the tools we produce into true customer-side demand. It describes a world where customers "hire" a product to perform a job. Wherever you see a product rocketing off like Clubhouse, there's a clear fit with the market: users are hiring Clubhouse for a job that wasn't fulfilled before.
Some might argue that it's addressing the same job as podcasts, but I don't think that's exactly right. For me it has hardly diminished my podcast listening at all. I think the market for audio is just getting bigger - not a zero-sum taking of attention from podcasts, but an increase in the overall size of the pie. Distributed work and the reduction of in-person interaction and events have amplified this, too (which we'll get to in a moment - a critical piece of the product's explosive growth).
Let's go through a few jobs-to-be-done statements that define the role Clubhouse plays in its users' lives. These loosely follow a format for framing jobs to be done: statements that are solution-agnostic, result in progress, and are stable across time (see Brian Rhea's helpful article on this topic).
Iâm doing something else and want to be entertained, informed, etc.
Podcasts certainly fit the bill here much of the time. Clubhouse adds something new and interesting in how lightweight the decision is to jump into a room and listen. With podcasts thereâs a spectrum: on one end you have informative shows like deep dives on history or academic subjects (think Hardcore History or EconTalk) that demand attention and that entice you to completionism, and on the other, entertainment-centric ones for sports or movies, where you can lightly tune in and scrub through segments.
The spontaneity of Clubhouse rooms lends well to dropping in and listening in on a chat in progress. Because so many rooms tend to be agendaless, unplanned discussions, you can drop in anytime and leave without feeling like you missed something. Traditional podcasts tend to have an agenda or conversational arc that fits better with completionist listening. Think about when you sit down with Netflix and browse for 10 minutes unable to decide what to watch. The same effect can happen with podcasts, decision fatigue on what to pick. Clubhouse is like putting on a baseball game in the background: just pick a room and listen in with your on-and-off attention.
Ben Thompson called it the âfirst Airpods social networkâ. Pop in your headphones and see what your friends or followers are talking about.
I have an idea to express, but don't want to spend time on writing or learning new tools
Clubhouse does for podcasting what Twitter did for blogs: it massively drops the barrier to entry for participation. Setting up a blog has always required some up-front cost. Podcasting is even worse. Even with the latest and greatest tools, publishing something new has overhead. Twitter lowered this bar, only requiring users to tap out short thoughts to broadcast them to the world. Podcasting is getting better, but is still hardware-heavy to do well.
There's a cottage industry sprouting up on Clubhouse of "post-game", locker room-style conversations following events of all kinds: politics, sports, television, even other Clubhouse shows. This plays well with the live aspect. Immediately following (or hell, even during) sporting events or TV shows, people can hop in a room and gab their analysis in real time.
Clubhouse's similarities to Twitter, as a Twitter for audio, are striking. Now broadcasting a conversation doesn't require expensive equipment, audio editing, CDNs, or feed management. Just tap to create a room and notify your followers to join in.
I want to hear from notable people I follow more often
This one has been true for me a few times. With the app's notifications feature, you can get alerts when people you follow start up a room, then join in on conversations involving your network whenever they pop up. I've hopped in when I saw notable folks I follow sitting in rooms, without really looking at the topic. For the interesting people whose thinking you make a point to follow, Clubhouse expands those opportunities: follow them on Clubhouse and drop in on rooms they join. Not only can you hear from folks you like more often, you also get a more unscripted, raw version of their thoughts and ideas in on-the-fly Clubhouse sessions.
I want to have an intellectual conversation with someone else, but I'm stuck at home!
Or maybe not even an intellectual one, just any social interaction with others!
This is where the timing of Clubhouse's launch in April of last year was so essential to its growth. COVID quarantines put all of us indoors, unable to get out for social gatherings with friends or colleagues. Happy hours and dinners over Zoom aren't things any of us thought we'd ever be doing, but when the lockdowns hit, we took to them to fill the need for social engagement. Clubhouse fills this void, providing loose, open-ended zones for conversation, much like being at a party. Podcasts, books, and TV are all one-way. Humans need connection, not just consumption.
COVID hurt many businesses, but it sure was a growth hack for Clubhouse.
Future Jobs to Be Done?
Products can serve a job to be done in a zero- or positive-sum way. They can address existing jobs better than the current alternatives, or they can expand the job market to create demand for new, unfulfilled ones. I think Clubhouse does a bit of both. From first-hand experience, I've popped into some rooms in cases when I otherwise would've put on a podcast or audiobook, and several times when I was listening to nothing else and saw a notification of something interesting.
Above are just a few of the customer jobs that Clubhouse is filling so far. If you start thinking about adjacent areas they could experiment with, it opens up even more greenfield opportunity. Offering downloads (create a custom podcast feed to listen to later?), monetization for organizers and participants (tipping?), subscription-only rooms (competition with Patreon?). There's a long list of areas for the product to explore.
Where Does Clubhouse Go Next?
There's a question in tech that's brought up any time a hot new entrant comes on the scene. It goes something like:
Can a new product grow its network or user base faster than the existing players can copy the product?
This has to be at the forefront of the Clubhouse founders' minds as their product takes off. Twitter's already launched Spaces, a clone of Clubhouse that shows up in the Fleets feed. That kind of prominent placement in front of Twitter's existing base adds quite the competitive threat, though Twitter isn't known for its lightning-quick product innovation over the last decade. But maybe they've learned their lesson from all their past missed opportunities. What could play out is another round of what happened to Snap with Stories, a concept that's now been copied by just about every product.
Clubhouse is doing a respectable job managing the technical scalability of the platform as it grows. The growth tactics they're using with pulling in contacts, while controversial, appear to be helping replicate the webs of user connections. The friction in building new social interest graphs is one of the primary things that's stifled other social products over the last 10 years. By the time new players achieve some traction, they're either gobbled up by Twitter or Facebook, or copied by them (aside from a few, like TikTok). Can Clubhouse reach TikTok scale before Twitter can copy it?
There are still unanswered questions on how Clubhouse's growth plays out over time:
How far can it reach into the general public audience outside of its core tech-centric "online" crowd?
Like any new network-driven product, when it's shiny and new, we see a gold rush for followers. What behaviors will live chat incentivize?
How will room hosts behave competing for attention? What will be the "clickbait" of live audio chat?
What mechanisms can they create for generating social capital on the network? How does one build an initial following and expand reach?
Right now, the easiest way to build a following on Clubhouse is the same as every other social network's default: bring your already-existing network to the platform. It's a bit early to see how Clubhouse might address this differently, but most of the big-time users so far have been folks with large followings on Twitter, YouTube, or elsewhere. It'd be cool to see something like TikTok-esque, algorithm-driven recommendations to raise distribution for ideas or topics beyond the follower graphs of the members of the rooms.
Clubhouse (and this category of live, multi-way audio chat) is still in the newborn stage. As it matures and makes its way to wider audiences outside of mostly tech circles, it'll be interesting to see what other "jobs" are out there, unfilled by existing products, that it can perform.
In the wake of Salesforce's acquisition of Slack, there's been a flood of analysis on whether it was a sign of Slack's success or its failure to grow as a company. It's funny that we live in a time when a $27bn acquisition of a 7-year-old company gets interpreted as a failure. I'd consider it validation of your business when a $200bn company like Salesforce makes its largest acquisition ever to buy you. Broadly, it's a move to make Salesforce more competitive with Microsoft as an operating system for business productivity writ large.
One likely driver of selling now vs. later was the ever-expanding threat from Microsoft's fantastic execution on Teams over the past few years. Slack saw Microsoft's distribution and customer-relationship advantage, and that they'd have a beast of a challenge peeling away big MS customers. This sort of "incumbent" position in the enterprise is one of the strongest advantages Microsoft has, and they've been savvy in playing their cards to feed off of this position.
As a new entrant to the enterprise software space, Slack's bottom-up product strategy has been one of the key advantages that fed their hypergrowth since 2014. The relentless focus on product quality drove viral adoption within user groups inside organizations. Classic land-and-expand: get teams to adopt for themselves, and weave your way from that beachhead into the rest of the organization, with an eventual (often reluctant) official blessing from IT departments. The product-led growth (PLG) model (of which Slack was an early success story) allows new entrants to serve users first and foremost, sliding in under the radar of corporate buy-in inside companies: "shadow IT", as it's known.
Within large companies, self-service and a product-led approach can get you a long way, as Slack and many others have demonstrated. But at a certain size you hit friction points with growth inside large accounts. Enterprise customers rarely adopt software with zero engagement from product makers. But Slack and other PLG successes have been able to push deeper than previously thought possible with hands-off, sales-free tactics.
Former founder and now-investor David Sacks wrote a great Twitter thread on this topic (also discussed on the All-In Podcast), reacting to Slack's lateness in building a sales organization:
1/ Ok since you asked, here are my reactions to the Slack deal and @levie comments to the effect that "the idea that workers would someday choose all their own tools was always a fantasy... Best product doesn't always win, you also need the biggest sales force." My thoughts: https://t.co/ZYdaf9XvaF
There's no question that product-led is the way to go to get validation, traction, and growth, and that it's still instrumental to building a horizontal customer footprint. Sacks's point is that Slack didn't address the requirements of enterprise-scale selling early enough (they're doing so now).
Bottom-up is great for top-of-funnel customer acquisition (Sacks says "lead gen"), but starts to falter as a growth driver at some scale. The trick in architecting a hybrid of product-led and sales-led motions is finding where in the customer lifecycle to transition growing customers from one to the other. What the PLG movement has done for SaaS companies is carry customer expansion further into companies than before. The likes of Slack, Atlassian, and Twilio carried themselves to enormous scale on the back of a PLG, self-service strategy.
Why does PLG decelerate?
Why would an enterprise company (or one that's grown its use of a product to enterprise-scale penetration) not be able to self-serve the larger deployment? Why couldn't a product company rely on self-service once a customer's usage grows to that point? It seems reasonable that if a customer has scaled to a couple hundred users, continued expansion would be an easy justification: if it's working, why not keep expanding?
There are a few related reasons why relying on customers to serve themselves slows down at scale:
In large companies, individuals are no longer able to make decisions – champions for a product (who may already be using it on their team) need to build consensus across a diverse group of stakeholders to justify expanding
Too many cats need to be herded to get a deal done – see item 1; often a stunning number of heads need to be convinced, justified to, and won over, and corralling the bureaucracy is a whole separate project unrelated to the effectiveness or utility of the product
There's rarely a single "buyer" – the user can't purchase the product, and the purchaser has never used it; each stakeholder's incentives work at cross purposes (one is looking to complete a project, one is looking to cut budget, one is looking to impress the press, etc.)
If you have a champion, they have a day job – and that job isn't playing politics with accounting, legal, execs, IT, and others; there's no time for the customer to play this role for you
I can speak from experience on all dimensions of this. In the early days of a bottom-up product, landing that big logo and watching them grow looks like this – you're growing seat count and things look to be taking off:
Watching it happen is magical, especially if you've got an early product and/or small team. You're building product you think is useful, and you're being validated by watching it weave its way up into a company with a household name.
But you eventually discover that true enterprise-scale adoption looks more like this:
The customer you thought you were growing wasn't truly the whole enterprise, but only a department or division1. In many (most?) national- or international-scale companies, bridging to neighboring departments is effectively selling to a whole new customer. Sure, the story of your product's impact from adjacent teams' use cases is helpful, but often the barriers between these columns are enormous.
What you need is some fuel to help jump the gaps.
Enter your sales team
The sales team exists primarily to coordinate and communicate with the stakeholders described above on behalf of the buyer.
There are unicorn enterprise customers out there where you'll find a champion willing to shoulder the burden of selling your product internally for you – sometimes a particularly aggressive or visionary IT leader or exec – but this is a rarity. You can't and shouldn't rely on this existing in most organizations.
On the surface this thought runs counter to a lot of recently popular ideas on product-led growth. But what Sacks is claiming in his thread doesn't invalidate product-led, bottom-up as a strategy – in fact he says the opposite.
What it does say is that go-to-market shouldn't be a binary choice of methodology: either you're bottom-up / product-led OR top-down / sales-led. For many B2B SaaS companies, the ideal system design optimizes for product-led growth evolving into a sales-led approach when a customer reaches a certain stage of the lifecycle.
Even for teams that understand the dynamics of both methods, the hard part is finding the right place in the cycle for the methodology to flip. If one set of tactics is largely owned by the product, design, and marketing teams (PLG), and the other owned by sales and customer success teams (SLG), then without proper experimentation, management, and cultural reinforcement, it's possible for one of those teams to lean too far beyond the transition point.
Sales brought in too early stunts investment in the super-efficient organic growth techniques of PLG; too late, and customers may have slowed their expansion because you weren't there for the assist in keeping growth moving upward.
This continuum is, of course, not fixed for all time or all companies. And those transition points are a lot fuzzier in reality than on a chart.
PLG is growing in effectiveness over time, so the optimum transition stage from PLG to SLG is moving rightward for many types of products. A number of factors could cause this phenomenon. More and more companies are opting for a PLG approach, but I think this is a response to changes in customer behavior more than it's a modifier of customer behavior (though those effects move both ways). Things like technical comfort, the prevalence of self-service solutions in consumer technology, ease of use as a table-stakes expectation, a wider competitive market for tools, and the sophistication of technology expanding tremendously over the last 10 years all contribute to self-service becoming more widespread.
The more hands-off you are during early usage and ramp-up, the less you often know about the specifics of the customer. Is it an intern leading a pilot project? Is it a real, funded initiative? It's often hard to tell if you're "auto-scaling" on a PLG strategy. ↩
This week Stripe launched two major new products in their ever-expanding mission to build the economic and financial backbone of the internet.
Ben Thompson was one of two outlets (along with the Wall Street Journal) to get embargoed early access to the launch of Stripe Treasury, their latest major product category. This interview with Stripe co-founder John Collison dives into the background on the product launches, Stripe's strategy, and where these fit into the wider Stripe mission.
They're extending their Capital product, which originally launched in 2019 to give Stripe customers access to capital for running their businesses, to their customers' customers – as Thompson described it: "building a platform of platforms."
But Treasury is the big deal. It provides what they call "banking-as-a-service": developers can now embed full financial services into their products, using Stripe's pass-through platform APIs to generate bank accounts and perform other types of financial transactions. The key component here is not only that they're making it instantaneous to set up financial infrastructure through their banking partner network, but also that they're extending that toolkit to the customers of customers, allowing people to build financial products on top of the Stripe platform.
John mentions in the interview that they've been describing this intent for years, calling the company a "payments and treasury network." I guess we shouldn't be surprised that they meant what they said, even though it sounded absurdly ambitious at the time. Don't underestimate Stripe.
There's no better way to build an empathetic perspective of your customer's life than to go and be one as often as you can.
Last week our team did an afternoon field day where the entire company went out on a scavenger hunt of sorts, using Fulcrum to log some basic neighborhood sightings. 42 people scattered across the US collected 1,230 records in about an hour, which is an impressive pace even if the use case was a simple one!
Data across the nation, and my own fieldwork in St. Pete
It's unfortunate how easy it is to stray from the realities of what customers deal with day in and day out. Any respectable product person has a deep appreciation for how their product works for customers on the ground, at least academically. What exercises like this help us do is get out of the realm of the academic and try to do a real job. With B2B software, especially the kind built for particular industrial or domain applications, it's hard to do this frequently since you aren't your canonical user; you have to contrive mock scenarios to tease out the pain points in the workflow.
The problem is that manufactured tests can't be representative of all the messy realities in utilities, construction, engineering, or the myriad other cases we serve.
There's no silver bullet for this. Acknowledging imperfect data and remaining aware of the gaps in your knowledge is the foundation. Then fitting your solution to the right problem, at the right altitude, is the way to go.
Exercises like ours last week are always energizing, though. Anytime you can rally attention around what your customers go through every day, it's a worthy cause. The list of observations and feedback is a mile long, and it's all high-value stuff to investigate.
I just finished publishing my summary and takeaways from Marty Cagan's Inspired, his collection of ideas on building product teams. A lot of solid fundamentals there for startups; more meat on the bone in this one than in most business books of its ilk.
I'm gradually working back through book highlights and building out literature notes, which I'd also one day like to get published in full somehow. I'm thinking about how I can do that while preserving some of the interlinking in my Roam graph, and publishing some of those evergreen notes as well.
An option is something you can do but don't have to do. All our product ideas are exactly that: options we may exercise in some future cycle – or never.
Without a roadmap, without a stated plan, we can completely change course without paying a penalty. We don't set any expectations internally or externally that these things are actually going to happen.
I know Basecamp is always the industry outlier with these things, but the thoughts on roadmaps are probably more true for many companies in reality than we'd all like to admit. We tend to look at things in a sort of hybrid way – not a fully baked roadmap with timelines, but a general list of roughly-sorted candidates that gain more and more momentum as we shape them out and prioritize. Every product team has a list of ideas 10x+ longer than anything they can build, so optionality is required to make the right decisions.
I recently watched this Mark Roberge session where he had an interesting way of describing the challenge that follows product-market fit. Tons of startup literature is out there talking about p-m fit. And likewise there's plenty out there about scaling, leadership, and company-building.
One of the most fascinating stages is in between, what he calls "go-to-market fit." This is where you've found some traction and solved a problem, but haven't figured out how to do it efficiently. Here's how to think about the key goal in each phase:
Product-market fit: customer retention
If you can attract users but they don't stick around, you aren't yet solving a painful problem (assuming you haven't let pricing and other things get in the way)
Go-to-market fit: scalable unit economics
You know you're there when you can repeatably deliver something valuable, scalably and profitably
In each of these cases the real measurement lags your execution, so you need to find a proxy metric that predicts the goal number.
You can find metrics that are predictive signals of retention, but they'll shift pretty widely from product to product. Things like active sessions, session lengths, sign-in frequency, time in app, and the like can track with likelihood to stick around, but you'd have to experiment with ways to measure this if you're in pre-product-market fit territory.
To predict go-to-market fit, you should know what a set of scalable and profitable metrics looks like for your business. If you set down your target unit economics, like the LTV:CAC ratio (Mark uses the industry-common 3:1 as an example), you can work backwards to daily behaviors you can orient your team around, to see how sustainable your pricing, packaging, and positioning are. It might take some experimentation, since acceptable targets vary by company, but you want to pick things you can measure quickly – like driving all the way down to leads per day – so you can adapt and change your tactics to zero in on what works. Waiting around for longer "actuals" to come back from accounting means you can't change tactics quickly enough, and you can't sustain an unprofitable model long enough to figure it out.
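To make the "work backwards" step concrete, here's a rough back-of-the-envelope sketch. The numbers (price, churn, margin, conversion rate, spend) are purely illustrative assumptions, not figures from Roberge's talk; the point is only that a lagging goal like a 3:1 LTV:CAC ratio can be translated into a fast-moving proxy like leads per day.

// Illustrative only: translate a 3:1 LTV:CAC target into a "leads per day" proxy.
// Every input below is an assumption for the sake of the example.
const monthlyPrice = 500;        // $ per customer per month (assumed)
const grossMargin = 0.8;         // assumed
const monthlyChurn = 0.02;       // assumed, i.e. ~50-month average customer lifetime
const ltv = (monthlyPrice * grossMargin) / monthlyChurn;   // $20,000

const targetLtvToCac = 3;                   // the common 3:1 benchmark
const maxCac = ltv / targetLtvToCac;        // ~$6,667 we can afford to spend per new customer

const monthlySalesMarketingSpend = 100_000; // assumed budget
const leadToCustomerRate = 0.05;            // assumed conversion rate
const customersNeededPerMonth = monthlySalesMarketingSpend / maxCac;    // ~15
const leadsPerDay = customersNeededPerMonth / leadToCustomerRate / 30;  // ~10 leads/day

console.log({ ltv, maxCac, customersNeededPerMonth, leadsPerDay });

If the team can watch leads per day (and the conversion rate behind it) every week, they can adjust pricing, packaging, or channels long before the accounting actuals confirm whether the model is profitable.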
We often think of the product-market fit stage as the fast and loose experimental phase of a startup, but what Mark makes clear here is that experimentation doesn't stop – it merely shifts from product and customer success to sales and marketing. Though the tighter all these areas work together to experiment, the better the results.
Basecamp's Ryan Singer has been doing this solo podcast on a lot of his favored topics, centered around product design. But he also branches into adjacent, related areas of systems, research, user experience, and more.
I like the solo format as a different approach from your standard conversation or interview shows. I've listened to a couple of these, and the best way to describe the content is somewhere between a Twitter thread and a blog post series. You get the good parts of the Twitter medium – the sort of unstructured, "thinking out loud" quality – with more space and freedom to wander between topics.
The low-code "market" isn't really a market. Rather, I see it as an attribute of a software product, an implementation factor in how a product works. A product providing low-code capability says nothing about its intended value – it could be a product for sending emails, building automation rules, connecting APIs, or designing mobile forms.
What gets termed "LCAP" (low-code application platform) software is often better described as "tools to build your own apps, without having to write all the code yourself."
This post isn't really about low-code as a marketplace descriptor, though, but about refining the nomenclature for the users we have in mind when designing low-code tools. Who are we building them for? Who needs them the most?
As Aron Korenblit wrote a few months back, low-code as a term isn't really about code per se, but often about things like process modeling, workflows, data flows, data cleanliness, speed of prototyping, and low-cost trial and error:
If what we're trying to communicate is that no-code helps get things done faster, we should elevate that fact in how we name ourselves instead of objecting to code itself.
For many years, all sorts of tools, from Mailchimp or Webflow to Fulcrum or Airtable, have provided layers of capability for a continuum of user types, from the non-technical through to full developers. The non-tech space wants templates and WYSIWYG tools; the devs want an API, JavaScript customization, and full HTML/CSS editing suites. I think a two-type dichotomy isn't descriptive enough, though. We need a third, "semi-technical" user in the middle.
The spectrum of users could look something like this, with a Microsoft Excel equivalent for each in parentheses:
Novice – anything that looks like code is totally opaque to novices. They're scared off by it and afraid to change anything for fear of breaking something. (Can enter data in Excel, and maybe do some sorting, filtering, or data manipulation.)
Tinkerer – can parse through code examples and pre-existing scripts to roughly understand them, and uses trial and error and small adjustments to modify or piece together snippets for their own use case; often can also work with data and data tools like database applications and SQL. (Can use formulas, pivot tables, lookups, and more in Excel; comfortable slicing and dicing data.)
Developer – fluent in programming languages; excited about the prospect of writing their own code from scratch, and just wants to be pointed to the API docs. (Can write VB scripts and macros in Excel, but mostly wants to escape its confines to build their own software.)
Of course, empowering the Novices is one of the primary goals of low-code approaches, as they're the least prepared to put together their own solutions. They need turn-key software.
And we can help Developers with low-code, too. If we can bootstrap common patterns and functionality through pre-existing building blocks, they can avoid repetitive work. Much of tool-building involves rebuilding 50-75% of the same parts you built for the last job, so low-code approaches can speed these folks up.
But the largest gap is that middle bunch of Tinkerers. Not only do they stand to gain the most from low-code tools; from my observations, that group is also the fastest-growing category. Every day, as more tech-native people enter the workforce or are compelled to dive into technical tools, people are graduating from Novice to Tinkerer status, realizing that many modern tools are resilient to experimentation and forgiving of user error. The tight feedback loops you get from low-code affordances provide a cushion to try things – to tweak, modify, and customize gradually until you zero in on something useful. In many cases, what a user considers a "complete" solution is variable – there's latitude to work with, not an extremely rigid set of hard requirements to be met. By providing building blocks, examples, and snippets, Tinkerers can home in on a solution that works for them.
Those same low-code tactics in user experience also give Novices and Tinkerers the prototyping scaffolds to build partial solutions that can be further refined by a Developer. Sometimes the prototyping stage is plenty to get the job done, but even for more complex endeavors it can greatly reduce cost.
After about 6-8 months of forging, shaping, research, design, and engineering, we've launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports generated from inspections.
For years we've offered a basic report template along with the ability to customize reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.
With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters of the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.
Under the hood, we overhauled the generator engine using a library called Puppeteer, a node.js API for headless Chrome that can do many things, including converting web pages to documents or screenshots. It's lightning fast and allows for a live preview of your reports as you work on your template customization.
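As an illustration of the kind of pipeline Puppeteer enables (a hand-wavy sketch, not our actual generator code – the function name, URL, and options here are assumptions for the example), rendering an HTML report template to a PDF looks roughly like this:

import puppeteer from "puppeteer";

// Minimal sketch: load a rendered report template in headless Chrome and print it to PDF.
async function renderReport(templateUrl: string, outputPath: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait until the template has finished loading its data and assets.
    await page.goto(templateUrl, { waitUntil: "networkidle0" });
    await page.pdf({ path: outputPath, format: "a4", printBackground: true });
  } finally {
    await browser.close();
  }
}

// e.g. renderReport("https://example.com/reports/preview/123", "report-123.pdf");

The nice property of this approach is that the same HTML, CSS, and JavaScript used for the live in-browser preview is what gets printed, so what you see while editing a template is what ends up in the PDF.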
Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can't wait to see all of the ways people end up using it.
We've got a lot more in store for the future. Stay tuned to see what else we add to it.
A neat concept demo from Dhrumil Shah showing possible enhancements for Roam Research. He calls them "Roam-I" and "Roam-E":
Roam-I – for reusing old knowledge
Roam-E – collaboration
Most of this is user interface on top of the core technology that underpins how Roam works, but it's great to see people so passionate about this that they'll spend this much time prototyping ideas on products they use.
He proposes this format for thinking about the phases a company moves through – from idea to profits:
An idea is not a mockup
A mockup is not a prototype
A prototype is not a program
A program is not a product
A product is not a business
And a business is not profits
You can map this onto the debate between "idea vs. execution" by calling everything below the idea "execution." In certain circles, especially among normal people not steeped in the universe of tech companies, the idea component is enormously overweighted. If you make software and your friends or acquaintances know it, I'm sure you're familiar with flavors of "I have this great idea, I just need someone who can code to build it." They don't understand that everything following the "just" is about 99.5% of the work to create success (or more)1.
Thinking of these steps as a state machine is a vivid way to describe it. He has them broken out in detail:
When laid out that way, it's clear why it takes such persistence and wherewithal to see an idea through to being a business.
To understand whether you have an idea worth pursuing (or even one good enough to be adapted or modified into a great one), it's a good exercise to simulate the game in your head, to imagine you've already moved through a couple steps of the state machine. What are you encountering? If you hit a roadblock, how would you respond? This sort of "pre-gaming" is what separates the best creators and product minds from everyone else. They take small, minimum-risk steps, look up to absorb new feedback, and adapt accordingly2.
Srinivasan calls this phenomenon the "idea maze":
One answer is that a good founder doesn't just have an idea, s/he has a bird's eye view of the idea maze. Most of the time, end-users only see the solid path through the maze taken by one company. They don't see the paths not taken by that company, and certainly don't think much about all the dead companies that fell into various pits before reaching the customer.
A good founder is thus capable of anticipating which turns lead to treasure and which lead to certain death. A bad founder is just running to the entrance of (say) the "movies/music/filesharing/P2P" maze or the "photosharing" maze without any sense for the history of the industry, the players in the maze, the casualties of the past, and the technologies that are likely to move walls and change assumptions.
In other words: a good idea means a bird's eye view of the idea maze, understanding all the permutations of the idea and the branching of the decision tree, gaming things out to the end of each scenario. Anyone can point out the entrance to the maze, but few can think through all the branches.
I remember Marc Andreessen in an interview talking about questioning founders during pitches: if you can probe deeper and deeper on a particular theme and the founder has already formulated a thoughtful answer, it means they've been navigating the idea maze in their head long before being probed by an investor.
It's worth thinking about how to incorporate this concept into my thinking on future product growth. I think to some extent this sort of thing comes naturally to certain people; the naturally curious ones are doing a version of this all the time, often unintentionally. But what if you could be intentional about it?
Not to mention the fact that people are typically ignorant of how often their eureka idea has already been tried, or has already gained success because it's obvious enough to have attracted plenty of others. ↩
See Antifragile, Taleb's magnum opus. An entire book on the subject of survivability, risk reduction, adaptation, and respect for proceeding with measured caution in "Extremistan" (highly unpredictable environments). ↩
I liked this newsletter post from Aron Korenblit on the no-code movement. The name overstates the problem as being with "code" in workflows, when the problem is really deeper than that:
How we feel about code – and why we want to avoid it – has in fact nothing to do with code itself. Code is a language that helps us tell inanimate objects what to do so we don't have to do it. Our real frustration lies with what usually comes with software projects: overblown budgets, long delays and expensive maintenance costs. That's what we're saying "no" to, not code.
These issues have nothing to do with code per se but are due to scope creep, poor planning and underestimating difficulty of a task. Issues which, let me tell you, are also very very present in projects using visual development tools (doesn't have the same ring to it, that's for sure…).
If what we're trying to communicate is that no-code helps get things done faster, we should elevate that fact in how we name ourselves instead of objecting to code itself.
We notice the same with customers of Fulcrum. We've been in the "no-code/low-code" market since 2011, so we've been seeing this for years. Most of our users are tech-literate, but a long way from being developers.
The common approach of all no-code platforms is to build visual user interfaces for dictating to the machine what you want it to do (that's all programming is, after all – a set of instructions guiding the computer on how you want to capture and process inputs into outputs). But even with drag-and-drop, configuration-based interfaces and pretty technical users, there are frequently hurdles to understanding and usage.
One of the biggest challenges is converting the squishy, fungible processes that humans deal with easily into specific, articulated actions for the machine to take. I've always said that developing a useful software product is an exercise in first automating a workflow, then building in all the exception-handling required to do it reliably in the hundreds of scenarios a user could put it through.
So it's not only about the code, but also about creating visual representations of abstract processes rapidly, then making it easy to iterate and experiment with them.
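To make the "visual representation as instructions" point concrete, here's a toy sketch. The rule shape, field names, and actions are entirely hypothetical (not Fulcrum's actual configuration format): the idea is simply that a builder UI emits a small piece of declarative data, and the platform's interpreter turns it into behavior so the user never writes the conditional themselves.

// A drag-and-drop builder might emit a rule like this as plain data...
type Rule = {
  field: string;
  operator: "equals" | "greater_than" | "is_empty";
  value?: string | number;
  action: "flag_for_review" | "require_photo";
};

type FormRecord = Record<string, string | number | undefined>;

// ...and the platform interprets it against each collected record.
function ruleMatches(rule: Rule, record: FormRecord): boolean {
  const v = record[rule.field];
  if (rule.operator === "equals") return v === rule.value;
  if (rule.operator === "greater_than") {
    return typeof v === "number" && typeof rule.value === "number" && v > rule.value;
  }
  return v === undefined || v === ""; // "is_empty"
}

// "Flag any inspection scoring above 80 for review," expressed as data rather than code.
const rule: Rule = { field: "score", operator: "greater_than", value: 80, action: "flag_for_review" };
console.log(ruleMatches(rule, { score: 95 })); // true -> run the configured action

The hard part described above lives exactly here: deciding which operators, fields, and actions to expose so that a messy human process can be expressed in a handful of these blocks, and so that tweaking them is cheap enough to invite experimentation.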
It's common practice now in software development to do "continuous integration" – a constant reintegration of newly-written code with the master application, continuously running regression and unit tests, which makes integrating new features a series of small efforts rather than one massive one at the end of a sprint cycle:
Many product teams have taken this principle to the next step and have learned that integration problems are time consuming, and that by integrating early and often (rather than in a "phase" before testing), they can significantly speed up their overall throughput by minimizing the time that they are working in isolation.
Similarly, instead of testing everything in a phase at the end of a release cycle (even a 2-week release cycle) and finding all the problems at once, it is much better to run automated regression test suites continuously to find newly introduced issues as soon as possible (which significantly reduces the possible sources of the issue and hence the time to correct).
Marty Cagan makes the case here for a similar process for shaping new product opportunities: "continuous discovery":
I have long argued for exactly the same principle in terms of how we come up with product backlog items. Rather than a "Product Discovery Phase" where we come up with several weeks of validated product backlog items and deliver them to engineering, I encourage teams to do continuous product discovery – where we are constantly identifying, validating and describing new product backlog items. Some discovery work takes a few hours and other things can take longer, but it is an ongoing process of ideation, validation and description.
I think many companies do this already, or at least some individuals do (including myself) without recognizing it as such. I like the idea of making it part of the discipline of how teams work all the time – having a particular part of your product team (owners, PMs, designers) spend almost all their time working with the raw material inputs from your own backlog ideas, customer feedback, conversations with sales, marketing leading indicators, market trends, and more, constantly molding and forming these elements into new product while others are building.
This falls in line with the Basecamp technique of maintaining no backlog. It makes sense to me: if you're constantly sculpting and shaping new solutions from all your input channels, you'll be much more likely to create the right product, and to have a tight grip on what to build when the time comes for engineering to dig in.
This is another one from the archives, written for the Fulcrum blog back in 2016.
Engineering is the art of building things within constraints. If you have no constraints, you aren't really doing engineering. Whether it's cost, time, attention, tools, or materials, you've always got constraints to work within when building things. Here's an excerpt describing the challenge facing the engineer:
The crucial and unique task of the engineer is to identify, understand, and interpret the constraints on a design in order to produce a successful result. It is usually not enough to build a technically successful product; it must also meet further requirements.
In the development of Fulcrum, we're always working within tight boundaries. We try to balance power and flexibility with practicality and usability. Working within constraints produces a better finished product – if (by force) you can't have everything, you think harder about what your product won't do to fit within the constraints.
Microsoft Office, exemplifying 'feature creep'
The practice of balancing is also relevant to our customers. Fulcrum is used by hundreds of organizations in the context of their own business rules and processes. Instead of engineering a software product, our users are engineering a solution to their problem using the Fulcrum app builder, custom workflow rules, reporting, and analysis, all customizable to fit the goals of the business. When given a box of tools to build yourself a solution to a problem, the temptation is high to try to make it do and solve everything. But with each increase in power or complexity, the usability of your system takes a hit in the form of added burden on your end users to understand the complex system – they're there to use your tool for a task, finish the job, and go home.
This balance between power and usability is related to my last post on treating causes rather than symptoms of pain. Trying too hard to make a tool solve every potential problem in one step can (and almost always does) lead to overcomplicating the result, to the detriment of everyone.
In our case as a product development and design team, a powerful suite of options without extremely tight attention to implementation runs the risk of becoming so complex that the lion's share of users can't even figure it out. GitHub's Ben Balter recently wrote a great piece on the risks of optimizing your product for edge cases1:
No product is going to satisfy 100% of user needs, although it's sure tempting to try. If a 20%-er requests a feature that isn't going to be used by the other 80%, there's no harm in just making it a non-default option, right?
We have a motto at GitHub, part of the GitHub Zen, that "anything added dilutes everything else". In reality, there is always a non-zero cost to adding that extra option. Most immediately, it's the time you spend building feature A, instead of building feature B. A bit beyond that, it's the cognitive burden you've just added to each user's onboarding experience as they try to grok how to use the thing you've added (and if they should). In the long run, it's much more than maintenance. Complexity begets complexity, meaning each edge case you account for today, creates many more edge cases down the line.
This is relevant to anyone building something to solve a problem, not just software products. Put this in the context of a Fulcrum data collection workflow. The steps might look something like this:
Analyze your requirements to figure out what data is required at what stage in the process.
Build an app in Fulcrum around those needs.
Deploy to field teams.
Collect data.
Run reports or analysis.
What we notice a surprising amount of the time is an enormous investment in step 2, sometimes to the exclusion of much effort on the other stages of the workflow. With each added survey field, data entry requirement, or overly-specific validation, you add potential hang-ups for the end users responsible for actually collecting data. With each new requirement, usability suffers. People do this for good reason – they're trying to accommodate the edge cases, the occasions where you do need to collect that one additional piece of info, or validate something against a specific requirement. Do this enough times, however, and your implementation is all about addressing the edge problems, not the core problem.
When you're building a tool to solve a problem, think about how you may be impacting the core solution when you add knobs and settings for the edge cases. Best-fit solutions require testing your product against the complete, ideal life cycle of usage. Start with something simple and gradually add complexity as needed, rather than the reverse.
Ben's blog is an excellent read if you're into software and its relationship to government and enterprise. ↩
As we've started to adopt a process similar to Basecamp's, we've been revisiting how we think about "backlogs" – the list of ideas and various requests we could work on in the product roadmap.
I liked this piece from Rich Ziade on the downsides of backlogs.
The term "backlog" makes me anxious. It implies being behind. It also implies that what you've got today is an incomplete thing. You need to get through that backlog. It also implies, dangerously, that this is the true unrealized ideal for a product.
After working on a (now) successful product for almost 10 years, this one is familiar:
When any piece of software makes it out into the world, inevitably feedback follows. The bigger the impact of the software, the louder and more varied the feedback. It's actually a sign of success.
An interesting article from Kevin Kwok on Superhuman's (the email client) user acquisition cycle. An ultimate example of the product-led growth model. Good observations here on what they're doing differently in having a product-centric strategy to drive the social referral loop. I liked this piece about their hands-on, tailored onboarding process, which is almost totally unheard of in the consumer apps space:
Human Connection. Every onboarding at Superhuman is done by someone on their team. As importantly, because new users are already high intent, it feels far less transactional and sales oriented. And more aligned and focused on setting up and learning the product. Knowing someone at the company I suspect lowers churn. When the CEO onboards you personally, it becomes harder and more personal to unsubscribe.
I also think that associating a human face with the product (and one that isn't trying to get you to buy more) also changes the dynamic. It makes the product feel more personal. Beyond churn, I think this translates into things like how you feel when emailing them with feedback or complaints, makes you more likely to read their emails, etc.
Bryan wrote this up about the latest major release of Fulcrum, which added Views to the Editor tool. This is a cool feature that allows users doing QA and data analysis to save sets of columns and filters, akin to how views work in databases like PostgreSQL. We have some plans next to let users share or publish Views, and also to expose them via our Query API, with the underlying data functioning just like a database view does.
This'll be a foundational feature for a lot of upcoming neat stuff.
What definition do we mean when we talk about product? Like being a "product company" or working in "product management"?
A few things spring to my mind – creating something for use by a customer that can be packaged and sold to many customers in a generalized form, done repeatably and sustainably in a process.
Marty Cagan has a cogent definition here that I really liked:
When I was being coached on the tech lead role, my engineering manager needed me to understand that when creating products for the real world, engineering was not enough. He drew on the whiteboard a very simple but important equation:
Product = Customer x Business x Technology
He went on to explain that a successful tech product has to solve for the customer, has to solve for our business, and has to solve for the technology.
This equation maps to our four big risks in tech products: addressing usability risk is part of solving for the customer; addressing feasibility risk is part of solving for the technology; and addressing business viability risk is part of solving for the business. And value risk is a function of all three.
In my decade of experience doing this, I've come to discover that there's a "special sauce" to the combination of talents that makes a person thrive when working on a product. If core members of your product development team don't have an understanding of (or at least a respect for) their adjacent areas, they won't make it:
It's absolutely critical that your company's leaders in product management, user experience design, and engineering all have a deep understanding of this fundamental equation of product, and they need to actively coach their product managers, designers and engineers on this as well.
Honest postmortems are insightful for getting the inside backstory on what happened behind the scenes at a company. In this one, Jason Crawford goes into what went wrong with Fieldbook before they shut it down and were acquired by Flexport a couple years ago:
Now, with a year to digest, I think this is true and was a core mistake. I vastly underestimated the resources it was going to take – in time, effort and money – to build a launchable product in the space.
In the 8 years since we launched the first version of Fulcrum, we've had (fortunately) smaller versions of this experience over and over. Each new major overhaul, large feature, or business model change we've undertaken has probably cost us twice the time we initially expected it to. Scoping is a science in itself that everyone has to learn.
In Jeff Bezos's 2018 letter to Amazon shareholders, he discusses the topic of high standards: how to have them and how to get your team to have them. (As a side note, if you don't read Bezos's shareholder letters, you're missing out. Even if you've already read all the business and startup advice in the world, you will find new and keen insights there.)
Bezos makes a few interesting points, but I'll focus on one: to have high standards in practice, you need realistic expectations about the scope of effort required.
As a simple example, he mentions learning to do a handstand. Some people think they should be able to learn a handstand in two weeks; in reality, it takes six months. If you go in thinking it will take two weeks, not only do you not learn it in two weeks, you also don't learn it in six months – you never learn it at all, because you get discouraged and quit. Bezos says a similar thing applies to the famous six-page memos that substitute for slide decks at Amazon (the ones that are read silently in meetings). Some people expect they can write a good memo the night before the meeting; in reality, you have to start a week before, to allow time for drafting, feedback, and editing.
David Blankenhorn calls for a return of intellectual humility in public discourse.
At the personal level, intellectual humility counterbalances narcissism, self-centeredness, pridefulness, and the need to dominate others. Conversely, intellectual humility seems to correlate positively with empathy, responsiveness to reasons, the ability to acknowledge what one owes (including intellectually) to others, and the moral capacity for equal regard of others. Arguably its ultimate fruit is a more accurate understanding of oneself and one's capacities. Intellectual humility also appears frequently to correlate positively with successful leadership (due especially to the link between intellectual humility and trustworthiness) and with rightly earned self-confidence.
This is a cool little background post from Ryan Singer explaining the origins and his process behind Shape Up, a web book about product development.
One of the things he did when getting started (with no solid idea of how to approach it) was commit to giving a workshop on how Basecamp worked. This created a deadlined forcing function to compel him to come up with the initial content and framework:
I didn't know how to write a book. But I did know how to give a workshop. So I put a call out on Twitter. The workshop was a prototyping device. It was going to do three things:
Force me to come up with a full day's worth of content that could eventually become a book.
Get instant feedback from an audience to learn what's interesting, what resonates and what doesn't.
Put something out into the world that people could buy, so I could interview them afterward using the jobs-to-be-done approach.
I priced the workshop at $1,000 a seat and made people fill out a lengthy application before letting them buy a ticket. The application asked them to tell stories about problems they experienced with product development. Using their answers, I could screen out anyone who was merely curious or a fan but wasn't really struggling. This created the best possible audience for getting feedback: hungry and motivated with skin in the game.
Evolution has a mysterious and amazing way of driving relentlessly toward simplicity and specialization:
Evolution figured out its version of simplification. It (if you can imagine it talking) says, "Get all that useless crap out of the way. Just give me the few things I need and make them really effective."
The question, then, is why complexity sells in the modern world.
Morgan Housel compares this phenomenon to the way market motivations in the world of business are almost always the reverse: consumers generally want a more complex product, service, or deliverable (or at least producers of goods convince themselves this is true). I love this quote he pulls from Edsger Dijkstra:
Simplicity is the hallmark of truth – we should know better, but complexity continues to have a morbid attraction. When you give for an academic audience a lecture that is crystal clear from alpha to omega, your audience feels cheated and leaves the lecture hall commenting to each other: "That was rather trivial, wasn't it?" The sore truth is that complexity sells better.
And as a reader of many books, this one strikes home in particular:
Length is often the only thing that can signal effort and thoughtfulness.
The U.S. constitution is 7,591 words. A typical business management book covering a single topic is perhaps 250 pages, or something like 65,000 words.
The funny thing is the average reader does not come close to finishing most books they buy. Even among bestsellers, average readers quit after a few dozen pages. Length, then, has to serve a purpose other than providing more material. My theory is that length indicates the author has spent more time thinking about a topic than you have, which can be the only data point signaling they might have insight you don't. It doesn't mean their thinking is right. And you may get enough of their thinking after two chapters. But the purpose of chapters 3-16 is often to show the author has done so much work that chapters 1 and 2 might have some insight. Same for research reports and white papers.
I can understand the motivations here: the perception of thoroughness signaled by high word counts, and the perception of that perception on the part of authors and publishers. But as an experienced reader, I've learned I appreciate the reverse. It's rare anymore to find the brief, memorable, and impactful books – the ones that imprint themselves on your mind in a hundred pages. Books like Man's Search for Meaning, Invisible Cities, or Meditations. Certain works can pack into a few chapters what it takes many others a thousand pages to convey. A market where we get better at recognizing the value of simplicity in products would be a welcome one.
An RTIN mesh consists of only right-angle triangles, which makes it less precise than Delaunay-based TIN meshes, requiring more triangles to approximate the same surface. But RTIN has two significant advantages:
The algorithm generates a hierarchy of approximations at varying precisions – after running it once, you can quickly retrieve a mesh for any given level of detail.
It's very fast, making it viable for client-side meshing from raster terrain tiles. Surprisingly, I haven't found any prior attempts to do it in the browser.
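To give a feel for the splitting scheme, here's a simplified, top-down sketch of the general RTIN idea (not the library's actual implementation, which precomputes the full error hierarchy up front so any detail level can be pulled without re-sampling). A right triangle is kept when the terrain height at the midpoint of its hypotenuse is already well approximated, and split into two smaller right triangles otherwise.

type Point = [number, number];

// Recursively refine one right triangle of a square heightmap.
// a-b is the hypotenuse; the right angle is at c.
function refine(
  height: (x: number, y: number) => number, // sample from the raster terrain
  a: Point, b: Point, c: Point,
  maxError: number,
  emit: (tri: [Point, Point, Point]) => void
): void {
  const mx = (a[0] + b[0]) / 2;
  const my = (a[1] + b[1]) / 2;
  // Error = difference between the real height at the hypotenuse midpoint
  // and the height this triangle would interpolate there.
  const error = Math.abs(height(mx, my) - (height(a[0], a[1]) + height(b[0], b[1])) / 2);
  const smallest = Math.abs(a[0] - b[0]) <= 1 && Math.abs(a[1] - b[1]) <= 1;
  if (error <= maxError || smallest) {
    emit([a, b, c]);
    return;
  }
  const m: Point = [mx, my];
  // Split along the hypotenuse midpoint into two smaller right triangles.
  refine(height, a, c, m, maxError, emit);
  refine(height, c, b, m, maxError, emit);
}

// Usage: mesh a (2^k + 1)-sized tile, e.g. 257x257, to within 5 units of error.
// const tris: [Point, Point, Point][] = [];
// refine(h, [0, 0], [256, 256], [0, 256], 5, t => tris.push(t));
// refine(h, [256, 256], [0, 0], [256, 0], 5, t => tris.push(t));

The precomputed-hierarchy trick described above goes a step further: compute every midpoint's error once, bottom-up, and then any maxError threshold can be answered instantly from that table.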
This is an interesting piece on the Figma blog about Notion and their design process in getting the v1 off the ground a few years ago. I've been using Notion for a while and can attest to the craftsmanship in design and user experience. All the effort put in and iterated on really shows in how fluid the whole app feels.
I'm always a sucker for a curated list of reading recommendations. This one's from Stripe founder Patrick Collison, who seems to share a lot of my interests and curiosities.
This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I'd resurface it here since I've been thinking more about the continual evolution of our product process. I liked it back when I wrote it; it's still very relevant and true. It's good to look back in time to get a sense of my thought process from a couple years ago.
In the software business, a lot of attention gets paid to "shipping" as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it's best used as a general rule rather than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD – they're all excellent practices for keeping teams focused on results. After all, the longer you're polishing your work and not putting it in the hands of users, the less you know about how they'll be using it once you ship it!
These systems, followed as gospel (particularly on larger projects or products), can lead to attention on the how rather than the what – thinking about the process as shipping "lines of code", or about what text editor you're using, rather than about useful results for users. Loops of user feedback are essential to building the right solution for the problem you're addressing with your product.
Thinking more deeply about the tension between shipping something rapidly and ensuring it aligns with product goals brings to mind a few questions to reflect on:
What are you shipping?
Is what you're shipping actually useful to your user?
How does the structure of your team impact your resulting product?
How can a team iterate and ship fast, while also delivering the product they're promising to customers, that solves the expressed problem?
Defining product goals
In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the "jobs" it does for the consumer helps clarify problems you should be focused on solving.
Most of us who create products have an idea of what we're trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we're trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what's being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won't guarantee that your end result solves a problem2. Your product could become an immaculately built ship that's sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management's excellent interview with its co-creator, Karen Dillon.
Understanding users
On a similar thread to jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.
This quote from the article gets to the heart of why it matters to understand, with empathy, what a user is trying to accomplish – it's not always about our engineering-minded technical features or bells and whistles:
Jobs are never simply about function – they have powerful social and emotional dimensions.
The only way to unravel what's driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as "understanding pain". People "hire" a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look at how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.
On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn't consult patients and make assumptions about the underlying causes of a patient's symptoms! They'd be in for some serious legal trouble.
Organize the team to reflect goals
One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:
"Don't ship the org chart."
The salient point being that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn't care at all about the dividing lines within your company, only the resulting solutions you deliver.
A corollary to this idea is that over time companies naturally begin to look like their customers. It's clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.
With our product, we see three primary points to make sure it fits the set of problems we're solving for customers:
For some, a toolbox – For small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
For others, a total solution – For large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer's management team to administer. Our team of in-house experts consults with the customer for training and on-boarding, and the customer ends up with a full solution and the toolbox.
Integrations as the "glue" – Customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But the final outcome is always a unique capability that can't be had another way without building their own software from scratch.
Though we're still a small team, we've tried to build up the functional areas around these objectives. As we advance the product and grow the team, it's important to keep this in mind so that we're still able to match our solution to customer problems.
For more on this topic, Sinofsky's post on "Functional vs. Unit Organizations" analyzes the pros, cons, and trade-offs of different org structures and the impacts on product. A great read.
Continued reflection, onward and upward
In order to stay ahead of the curve and Always Be Shipping (the Right Product), it's important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence about how to make those improvements. With this sort of continuous reflection on the process, hopefully we'll keep shipping the Right Product to our users.
Not to discount the value of team planning. It's a crucial component of efficiency. My point is the clean Gantt chart on its own isn't solving a customer problem! ↩
Of course this problem is only minor in small companies. It's of much greater concern to the Amazons and Microsofts of the world. ↩
Earlier this year at SaaStr Annual, we spent 3 days with 20,000 people in the SaaS market, hearing about best practices from the best in the business, from all over the world.
If I had to take away a single overarching theme this year (not by any means "new" this time around, but louder and present in more of the sessions), it's the value of customer success and retention of core, high-value customers. It's always been one of SaaStr founder Jason Lemkin's core focus areas in his literature about how to "get to $10M, $50M, $100M" in revenue, and interwoven in many sessions were topics and questions relevant to this area – onboarding, "aha moments," retention, growth, community development, and continued incremental product value increases through enhancement and new features.
Mark Roberge (former CRO of Hubspot) had an interesting talk that covered this topic. In it he focused on the power of retention and how to think about it tactically at different stages in the revenue growth cycle.
If you look at growth (adding new revenue) and retention (keeping and/or growing existing revenue) as two axes on a chart of overall growth, a couple of broad options present themselves to get the curve arrow up and to the right:
If you have awesome retention, you have to figure out adding new business. If you're adding new customers like crazy but have trouble with customer churn, you have to figure out how to keep them. Roberge summed up his position after years of working with companies:
It's easier to accelerate growth with world class retention than fix retention while maintaining rapid growth.
The literature across industries is also in agreement on this. There's an adage in business that it's "cheaper to keep a customer than to acquire a new one." But to me there's more to this notion than the avoidance of the acquisition cost for a new customer, though that's certainly beneficial. Rather, it's the maximization of the magic SaaS metric: LTV (lifetime value). If a subscription customer never leaves, their revenue keeps growing ad infinitum. This is the sort of efficiency every SaaS company is striving for – to maximize fixed investments over the long term. It's why investors are valuing SaaS businesses at 10x revenue these days. But you can't get there without unlocking the right product-market fit to switch on this kind of retention and growth.
So Roberge recommends keying in on this factor. One of the key first steps in establishing a strong position with any customer is to have a clear definition of when they cross a product fit threshold – when they reach the "aha" moment and see the value for themselves. He calls this the "customer success leading indicator", and explains that all companies should develop a metric or set of metrics that indicates when customers cross this mark. Some examples from around the SaaS universe of how companies are measuring this:
Slack – 2000 team messages sent
Dropbox – 1 file added to 1 folder on 1 device
Hubspot – Using 5 of 20 features within 60 days
Each of these companies has correlated these figures with strong customer fits. When these targets are hit, there's a high likelihood that a customer will convert, stick around, and even expand. It's important that the selected indicator be clear and consistent between customers and meet some core criteria:
Observable in weeks or months, not quarters or years – you need to see rapid feedback on performance.
Measurement can be automated – again, you need to see this performance on a rolling basis.
Ideally correlated to the product's core value proposition – don't pick things that are "measurable" but don't line up with our expectations of "proper use." For example, in Fulcrum, whether the customer creates an offline map layer wouldn't correlate strongly with the core value proposition (in isolation).
Repeat purchase, referral, setup, usage, and ROI are all common (revenue is usually a mistake – it's a lagging rather than a leading indicator).
Okay to combine multiple metrics – derived "aggregate" numbers would work, as long as they aren't overcomplicated.
The next step is to understand what portion of new customers reach this target (ideally all customers reach it) and when, then measure by cohort group. Putting together cohort analyses allows you to chart the data over time, and make iterative changes to early onboarding, product features, training, and overall customer success strategy to turn the cohorts from "red" to "green".
We do cohort tracking already, but it'd be hugely beneficial to analyze and articulate this through the filter of a key customer success metric, and to track it as closely as MRR. I think a hybrid reporting mechanism that tracks MRR, customer success metric achievement, and NPS by cohort would show strong correlation between each. The customer success metric can serve as an early signal of customer "activation" and, therefore, future growth potential.
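To make the cohort idea concrete, here's a minimal sketch (hypothetical data shapes and field names, not our actual reporting code) that groups customers by signup month and computes what share of each cohort has crossed a chosen success indicator:

```ts
// Hypothetical sketch: per-cohort share of customers who reached the success indicator.
interface Customer {
  id: string;
  signedUpAt: Date;
  reachedIndicatorAt?: Date; // e.g. the date they hit "5 of 20 features in 60 days"
}

// Bucket a date into a "YYYY-MM" signup cohort key.
function cohortKey(d: Date): string {
  return `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, '0')}`;
}

function cohortActivation(customers: Customer[]): Map<string, number> {
  const totals = new Map<string, { total: number; activated: number }>();
  for (const c of customers) {
    const key = cohortKey(c.signedUpAt);
    const row = totals.get(key) ?? { total: 0, activated: 0 };
    row.total += 1;
    if (c.reachedIndicatorAt) row.activated += 1;
    totals.set(key, row);
  }
  // Share of each signup cohort that has crossed the indicator so far.
  const shares = new Map<string, number>();
  for (const [key, { total, activated }] of totals) {
    shares.set(key, activated / total);
  }
  return shares;
}
```

Charting those shares month over month is what turns the "red to green" idea into something you can actually watch move as onboarding and product changes land.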
I also sat in on a session with Tom Tunguz, VC from RedPoint Ventures, who presented on a survey they had conducted with almost 600 different business SaaS companies across a diverse base of categories. The data demonstrated a number of interesting points, particularly on the topic of retention. Two of the categories touched on were logo retention and net dollar retention (NDR). More than a third of the companies surveyed retain 90+% of their logos year over year. My favorite piece of data showed that larger customers churn less – the higher products go up market, the better the retention gets. This might sound counterintuitive on the surface, but as Tunguz pointed out in his talk, it makes sense when you think about the buying process in large vs. small organizations. Larger customers are more likely to have more rigid, careful buying processes (as anyone doing enterprise sales is well aware) than small ones, which are more likely to buy things "on the fly" and also invest less time and energy in their vendors' products. The investment poured in by an enterprise customer makes them averse to switching products once on board1.
On the subject of NDR, Tunguz reports that the tendency toward expansion scales with company size, as well. In the body of customers surveyed, those that focus on the mid-market and enterprise tiers report higher average NDR than SMB. This aligns with the logic above on logo retention, but there's also the added factor that enterprises have more room to go higher than those on the SMB end of the continuum. The higher overall headcount in an enterprise leaves a higher ceiling for a vendor to capture.
Overall, there are two big takeaways worth bringing home and incorporating:
Create (and subsequently monitor) a universal "customer success indicator" that gives a barometer for measuring the "time to value" for new customers, and segment accordingly by size, industry, and other variables.
Focus on large Enterprise organizations – particularly their use cases, friction points to expansion, and customer success attention.
We've made good headway on a lot of these findings with our Enterprise product tier for Fulcrum, along with the sales and marketing processes to get it out there. What's encouraging about these presentations is that we already see numbers leaning in this direction, aligning with the "best practices" each of these guys presented – strong logo retention and north of 100% NDR. We've got some other tactics in the pipeline, as well as product capabilities, that we're hoping will bring even greater efficiency, along with the requisite additional value to our customers.
Assuming there's tight product-market fit, and you aren't selling them shelfware! ↩
Ryan Singer and the Basecamp team just released their new ebook on product development, called Shape Up, made available for free online. Some of our team here and I have already dug into it and are finding some interesting ideas to experiment with in our own product development cycles.
On shaping and wireframing:
When design leaders go straight to wireframes or high-fidelity mockups, they define too much detail too early. This leaves designers no room for creativity.
Appetites:
Whether we're champing at the bit or reluctant to dive in, it helps to explicitly define how much of our time and attention the subject deserves. Is this something worth a quick fix if we can manage? Is it a big idea worth an entire cycle? Would we redesign what we already have to accommodate it? Will we only consider it if we can implement it as a minor tweak?
Breadboarding:
Deciding to include an indicator light and a rotary knob is very different from debating the chassis material, whether the knob should go to the left of the light or the right, how sharp the corners should be, and so on.
Similarly, we can sketch and discuss the key components and connections of an interface idea without specifying a particular visual design. To do that, we can use a simple shorthand. There are three basic things we'll draw:
Places: These are things you can navigate to, like screens, dialogs, or menus that pop up.
Affordances: These are things the user can act on, like buttons and fields. We consider interface copy to be an affordance, too. Reading it is an act that gives the user information for subsequent actions.
Connection lines: These show how the affordances take the user from place to place.
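That shorthand maps naturally onto a tiny data model. As a purely illustrative sketch (my own types, not anything from the book), a breadboard could be captured as places, affordances, and connection lines like this:

```ts
// Illustrative types only – one way to capture a breadboard as data.
interface Affordance {
  name: string; // e.g. "Save button", "Email field", or a line of interface copy
}

interface Place {
  name: string; // e.g. a screen, dialog, or pop-up menu
  affordances: Affordance[];
}

interface Connection {
  from: { place: string; affordance: string }; // acting on this affordance...
  to: string;                                  // ...takes the user to this place
}

interface Breadboard {
  places: Place[];
  connections: Connection[];
}

// Example: a button on one screen that opens a dialog.
const example: Breadboard = {
  places: [
    { name: 'Invoice screen', affordances: [{ name: 'Setup Autopay button' }] },
    { name: 'Setup Autopay dialog', affordances: [{ name: 'Card number field' }] },
  ],
  connections: [
    { from: { place: 'Invoice screen', affordance: 'Setup Autopay button' }, to: 'Setup Autopay dialog' },
  ],
};
```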
I'm planning to go through it completely this weekend. There are a couple of ideas here to try out right out of the gate.
Basecamp's Ryan Singer articulates well the struggles with adopting and truly getting value out of an agile workflow. The core problem most teams face isn't that they're bad at estimating time to completion; it's that they don't even know what exactly they're trying to complete. Knowing the broad outline of the objective – like "refactor user management interface" or "build dashboarding system" – is one thing, and the team could all largely agree on the target. But it's another thing entirely to break down each step along the way into a discrete element with a clear beginning, end, and lane markers on either side.
In macro terms the team may know where the goal is, but along the way there are plenty of unknowns encountered on the path that throw wrenches into progress.
The idea in this post of the "hill chart" for visualizing the product development process is great. To me it represents reality much more accurately than simply perceiving work on a feature as "points" to be applied.
Typically, once you're over this "hill of uncertainty", estimation becomes realistic and useful. Getting into the "making it happen" state is where teams want to be:
Teams that track "velocity" and "story points" treat development as if it's linear labor. Software development is not like moving a pile of stones.
If work was like that, you could count the stones, count the time to move one, do the math and be done.
Work that requires problem solving is more like a hill. There's an uphill phase where you figure out what you're doing. Then when you get to the top you can see down the other side and what it'll take to finish.
Even if you take a large feature with a healthy dose of uncertainty, applying a "story point" quantity of 15 doesn't really help the rest of the team grasp the stage in the cycle – you end up with a binary status: it's either done or it's not done yet.
When I think about the potential of this as it pertains to our team's process, the soundest way to apply it is to think about everything left of the crest in the hill as happening in a "planning" stage, where we're thinking through the implementation details, pros/cons, and risk/reward ahead of resource-heavy development. Creating specs, what-ifs, and wireframes as much as possible lowers the cost of reaching the top of the "hill of uncertainty". Of course there's no way to reach 100% certainty before writing code, but it avoids falling into the trap of jumping the gun before we've resolved the most obvious uncertainties.
Wireframing is a critical technique in product development. Most everyone in software does a good bit of it for communicating requirements to development teams and making iterative changes. For me, the process of wireframing is about figuring out what needs to be built as much as how. When we're discussing new features or enhancements, rather than write specs or BDD stories or something like that, I go straight to a pen and paper or the iPad to sketch out options. You get a sense for how a UI needs to come together, and for visual thinkers like me, new ideas really start to show up when I see things I can tweak and mold.
We've been using Moqups for a while on our product team to do quick visuals of new screens and workflows in our apps. I've loved using it so far – its interface is simple and quick to use, it's got an archive of icons and pre-made blocks to work with, and it has enough collaboration features to be useful without being overwhelming.
We've spent some time building out "masters" that (like in PowerPoint or Keynote) you can use as baseline starters for screens. It also has a feature called Components where you can build reusable objects – almost like templates for commonly-used UI affordances like menus or form fieldsets.
One of the slickest features is the ability to add interactions between mocks, so you can wire up simulated user flows through a series of steps.
I've also used it to do things like architecture diagrams and flowcharts, which it works great for. Check it out if you need a wireframing tool that's easy to use and all cloud-based.
I started with the first post in this series back in January, describing my own entrance into product development and management.
When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack, everything was completely different. A few years back, Tony wrote up a retrospective post detailing out the history of what led us down the path we took, and how Geodexy came to be:
After this experience, I realized there was a niche to carve out for Spatial Networks but I'd need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.
Tony's post was an excellent look back at the business origin of the product – the "why we decided to do it" piece. What I wanted to cover here was more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course without real market feedback (as in, will someone actually buy what you've built, versus saying they would buy it one day), it's hard to truly gauge the success potential.
Back then, some of the modern web frameworks in use today, like Rails and its peers, were around, but there were very few and they weren't yet mature. It's astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and back then that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation a long time ago. We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked the native integration of richer data sources – primarily photos and GPS data. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn't make for the smoothest user experience or prevent problems. We still needed to come up with a better way to make it happen, for ourselves, and absolutely before we went to market touting the workflow advantages to other customers.
In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and short-circuiting a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which looking back from 10 years on sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.
What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. That was still the early days of the App Store, and I don't recall how mature the Android Market (predecessor to Google Play) was back then, but we didn't have the resources to start off with 2 mobile clients anyway. We actually had a functioning Blackberry app first, which tells you how different the mobile platform landscape looked a decade ago2.
Geodexy's mobile app for iOS was, on the other hand, an excellent window into the potential that iOS development unlocked for us as a platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked – offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.
We didn't get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that'd tie all of the technology stack we'd built to a single, albeit large, market: the property inspection business. That's what led us to launch allinspections, which I'll continue the story on later.
In an odd way, it's pleasing to think back on the challenges (or things we considered challenges) at the time and think about how they contrast with today. We focused so much attention on things that, in the long run, aren't terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that, I think, stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.
As painful as this sounds, we actually had a decent tool built on WM. But the usability of it was terrible, which, if you can recall the time period, was par for the course for mobile applications of all stripes. ↩
Since I'm big on self-improvement and always looking for ways to get better at my professional craft, I was intrigued by this post from Marty Cagan from several years back. A post with the same title as mine got me interested. Over the last few years I've occasionally read other companies' job descriptions for that role to understand what they look for in skillsets and orientation.
It certainly matches broadly with my perception of the proper focus areas. I do think we're particularly strong in what he terms "product culture":
A strong product culture means that the team understands the importance of continuous and rapid testing and learning. They understand that they need to make mistakes in order to learn, but they need to make them quickly and mitigate the risks. They understand the need for continuous innovation. They know that great products are the result of true collaboration. They respect and value their designers and engineers. They understand the power of a motivated product team.
The makeup of our team is such that we don't have to harp on these things much at all. It's sort of the default mode for how our crew approaches developing product, not just code or software. Our team is comfortable with the core product development approaches of assessing pros/cons, opportunity cost, future technical debt implications, long-term support strategy, and creating a balance of small, medium, and large muscle movements to create the best end result1.
Versus focusing too heavily on only minor enhancements to existing capability, or only working on Giant New Features. ↩
This was a long time in the making. We've launched our latest big feature in Fulcrum: photo annotations.
This feature was an interesting thing to take on. Rather than doing it the quick and dirty way, we did it right and built a customized framework we could use across platforms. Because the primary interfaces for annotating are iOS and Android, the library is built in JavaScript and cross-compiled to each native mobile environment, which allows us to lean on a single centralized codebase to support both of our mobile platforms. We even have plans to build annotation support into our web-based Editor eventually, using the same core.
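To illustrate the shape of that architecture (purely hypothetical names and structure, not our actual annotation library), the value of a single shared core is that each native shell only has to marshal images and touch input in, and serialized annotations out:

```ts
// Hypothetical sketch of a platform-agnostic annotation core that
// iOS, Android, and web shells could all wrap.
export interface Annotation {
  kind: 'arrow' | 'text' | 'freehand';
  points: Array<{ x: number; y: number }>; // normalized 0..1 image coordinates
  color: string;
  label?: string;
}

export class AnnotationCore {
  private annotations: Annotation[] = [];

  add(annotation: Annotation): void {
    this.annotations.push(annotation);
  }

  undo(): void {
    this.annotations.pop();
  }

  // Serialize so any platform can persist or render the same data.
  toJSON(): string {
    return JSON.stringify(this.annotations);
  }
}
```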
This was an exciting effort to watch come together. From architecting how it'd all work, to building the core, to winnowing down the list of edge cases and quirks, to finally shipping the shiny new release.
Our entire engineering team – from the core web dev team to mobile – should be commended for the collaborative effort that brought this together. There's nothing like the feeling of shipping new features that are accretive and valuable to our platform.
I've had R on my list for a long time to dig deeper with. A while back I set myself up with RStudio and went through some DataCamp stuff. This online book seems like excellent material on how to apply R to geostatistics.
Given where we are with Fulcrum in the product lifecycle, this rang very familiar: the struggles with how to listen to customers effectively, who to listen to, and how to absorb or deflect ideas. Once you get past product-market fit, the same tight connection between your customers and product team becomes impossible. Glad to hear we aren't alone in our struggles here.
This piece from David Heinemeier Hansson is a good reminder that steady, linear growth is still great performance for a business. Every business puts itself in a different situation, and certainly many are in debt or investment positions that linear growth isn't good enough for. Even so, consistent growth in the positive direction should always be commended.
Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We've come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I'd step back and recall its origins from a product management perspective.
We created Fulcrum to address a need we had in our business, and quickly realized its application to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew it wasn't a brand new invention to create a data collection platform, we knew we could bring a novel solution combining our strengths, and that other existing tools on the market had fundamental holes in areas we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn't see elsewhere:
Use a mobile-first design approach – Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
Make disconnected, offline use seamless to a mobile user – They shouldn't have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
Put location data at the core – Everything geolocated. (requires #1)
Enable business analysis with spatial relationships – Even though we're geographers, most people don't see the world through a geo lens, but should. (requires #3)
Make it cloud-centric – In 2011 desktop software was well on the way out, so we wanted a platform we could cloud host, with APIs for everything. Creating from building block primitives let us horizontally scale on the infrastructure.
Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we feel these gaps are worth closing for ourselves, the fusion of these ideas will create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a similar body of logistical and information management challenges to the ones we faced.
Fulcrum wasn't our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we'd built a couple of applications: one never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we'd done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we'd learned the previous years and build something new, with the foundational approach above as our guidebook.
We started building in early spring, and launched in September 2011. It was free accounts only, didn't have multi-user support, and there was only a simple iOS client and no web UI for data management – suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I'd call "idea fusion": the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can't become great without doing these things in concert.
I think it's natural for creators to favor one path over the other – either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum with little guidance from customers on what pains actually look like. The key I've learned is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging to do, and it assumes your vision is perfectly clear.
On the other hand, waiting at the beck and call of your user to dictate exactly what to build works well in the early days when you're looking for traction, but without an opinion about how the world should be, you likely won't do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they're uninventive, but because designing tools isn't their mission or expertise. They're on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphal) attributed to Henry Ford: "If I'd asked customers what they wanted, they would've asked for a faster horse." In order to invent the car, you have to envision a new product completely unlike the one your customer is even asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.
We've tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers and our own vision for the future of work gets much harder. What constitutes a "big idea" gets even bigger, and the compulsion to treat near term customer pains becomes ever more attractive (because, if you're doing things right, you have more of them, holding larger checks).
When I look back to the early '10s at the genesis of Fulcrum, it's amazing to think about how far we've carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we're solving has matured over 7 years, but hasn't fundamentally changed in its roots.
We're about to head to SaaStr Annual again this year, an annual gathering of companies all focused on the same challenges of how to build and grow SaaS businesses. I've had some thoughts on SaaS business models that I wanted to write down as they've matured over the years of building a SaaS product.
I wrote a post a while back on subscription models, but in the context of consumer applications. My favorite thing about the subscription structure is how well it aligns incentives for both buyers and sellers. While this alignment applies to app developers and buyers in consumer software, I think the incentives are even more substantial with business applications, and they're more important long term. The issue of ongoing support and maintenance with a high-investment business application is more pronounced – if Salesforce is down, my sales team's time is wasted and I'm losing money. Whereas if I can't get support for my personal text editor application that's $5/mo, the same criticality isn't there. Support and updates are just a small (and obvious) reason why the ongoing subscription model is better for product makers, and in turn, buyers. But let's dig in some more. What's better about the SaaS model?
First: subscription pricing significantly reduces the "getting started" barrier for buyers and sellers. If I go from charging you $1,000 up front for a powerful CAD application to a monthly subscription model for $79/mo, you and I both win. You like it because you're comfortable paying that first $79 with no touch to get started, just subscribing online; no friction there. I like it because I don't have to front-load investment in convincing you of the value. This potentially expands my customer count and gets past the initial transaction quickly.
Second: there's predictability on the spending and earning side. If you're a buyer of fixed cost products, you have to predict ahead of time what next year's cost might be for the 2019 version, decide whether or not you need to include it in your budget, and have to forecast possible expansion use far in advance. With SaaS you can limit all three of those problems1. As the seller, I get to enjoy the magic of recurring revenue (or in the lingo, MRR – monthly recurring revenue).
Third: pricing is easier2. In an older "box software" model, I would have to figure out the appropriate "lifetime value" my product has on the day I sell it, and balance this with what price the market will bear. Once it's sold, there's no space for experimentation to map price to value; the deal's done. SaaS can be fluid here, giving me space to fit the ongoing delivery of value to the price. Of course I don't want to be changing pricing every month, but it's within my control to keep the pricing at an effective and sustainable level. When setting pricing, I can break it down to a smaller unit of time, as in "what value does this have to my customer over a month or quarter?", without trying to predict how long they'll be my customer. That lifetime figure is CLTV (customer lifetime value), and it's a key metric to track after signing on customers. After a year or so, I have CLTV data I can use to inform pricing. Managing the relationship between CLTV and CAC (customer acquisition cost) is part of the SaaS pathfinding to a repeatable business3.
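As a back-of-the-envelope illustration (hypothetical numbers, not any real pricing), here's how the $79/mo example translates into lifetime value, CAC payback, and the CLTV-to-CAC ratio people watch:

```ts
// Hypothetical unit-economics math for the $79/mo example above.
const monthlyPrice = 79;   // $ per month per customer
const grossMargin = 0.8;   // share of revenue kept after serving the customer
const monthlyChurn = 0.02; // 2% of customers cancel each month (assumed)
const cac = 600;           // assumed cost to acquire one customer

// Average customer lifetime in months is roughly 1 / churn rate.
const lifetimeMonths = 1 / monthlyChurn;                  // 50 months
const cltv = monthlyPrice * grossMargin * lifetimeMonths; // $3,160
const paybackMonths = cac / (monthlyPrice * grossMargin); // ~9.5 months to recover CAC
const ratio = cltv / cac;                                 // ~5.3x

console.log({ lifetimeMonths, cltv, paybackMonths, ratio });
```

The point of the exercise isn't the specific numbers; it's that with a subscription, each of these inputs is observable and adjustable over time rather than baked into a one-time price.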
So what are the downsides? I don't think there are any true negatives for anyone. For the seller, the major downside is that you have to keep earning the money for your product month over month, year over year. And I'd say buyers would actually call that a major upside! There's no opportunity to sell a lemon to the customer and take home the reward – if your product doesn't live up to the promise, you might only collect 1/12th of what you spent to get the customer in the first place (see CAC v LTV!). That's no good. In SaaS you have to keep delivering and growing value if you want to keep that middle R in "MRR" alive. I call this a "downside" for sellers insofar as it creates a new business challenge to overcome. Selling your product this way actually has huge long-term benefits to your product and company health. It prevents you from taking shortcuts for easy money.
This is the greatest thing about the SaaS model: keeping everyone honest. It allows the best products to float to the top of the market. To compete and grow as a SaaS product, you have to keep up with the competition, track ever-growing customer expectations, release new capabilities, maintain stability, and continually harden your security. Buyers are kept honest by their spend; they have to keep buying if they want the backup of ongoing support, updates, new features, solid security, and more.
One thing I've seen with a SaaS business is the perception from buyers that the recurring costs will incur a higher total lifetime cost for a solution. So in the case above, if my CAD software is a subscription seat for $79/mo per license, customers will immediately compare it to the old model – "it was a $1,000 one-time fee, now it's $79 each month. After a year I'll be paying more than the one-time cost. The product is core to my business, so I'll definitely be using it longer than a year."
While this is true when strictly comparing costs, it doesn't tell the whole story. In the early days testing a new product, it's hard to see where the invisible costs will be. How much support will I need? Are there going to be bugs that need patching? What if I need to call someone to troubleshoot major issues? What'll my internal IT costs be to roll out updates? The SaaS advantage is that (in general) there are good answers to these questions; ongoing support and improvements are part of the monthly tab. Another thing you run into, though less and less these days, is the compulsion to build the capability internally. The perception of high lifetime cost compels technical buyers to spend that money on their internal IT department rolling their own software to solve the problem. Not that this solution never makes sense, but most buyers are not software companies at the core. They'll never build a great solution to their problem and be willing to commit to the maintenance investment to keep pace with what SaaS providers are doing. Over time as the SaaS model spreads, buyers will get more comfortable with the process and better understand where their SaaS spend is going.
These two posts from Ben Thompson give a great rundown of companies switching to SaaS, and why subscription business models are better for incentives.
There'll always be exceptions here, even in SaaS. But you can at least put most of your customers in a consistent bucket. ↩
Of course a SaaS product could change their pricing along the way, too. But at least the individual purchasing events are more predictable, on average. And not to imply that pricing is ever objectively easy. ↩
SaaStr is the best resource for all things unit economics and metrics. A gold mine of prior art for anyone in the SaaS market. ↩