Coleman McCormick

Archive of posts with tag 'Software'

Personal Software

January 17, 2025 • #

A common problem I encounter with computers is the everyday minor friction in workflow: the repetitive but only occasional task, or the tedious multi-step process.

Perfect example: the other day I wanted to batch resize and compress a bunch of images. It’s something I’ve had to do before, but not an everyday problem.

When you have a problem software can solve, it has to be painful enough to warrant the effort and overhead required to build something. Given my level of knowledge, I could thrash my way through writing a shell script to do this resizing operation (probably). But it’d take me a couple hours of Googling and trying and retrying to eventually get something that works — all for an operation that might take 7 minutes to just do manually. So rather than automate, I just deal with it.

Personal software

This means dozens of daily nags go on nagging — they don’t nag enough to warrant the cost of solving. And they aren’t painful enough to search for and buy software to fix. So I go on muddling through with hacks, workarounds, and unanswered wishes.

But yesterday, with a few prompts in Cursor, in 15 minutes I made (or the AI made) a shell script to handle images that I can reuse next time. I didn’t even look at the code it wrote. I just typed 3 bullets describing what I wanted into a readme file, and out came the code. An annoying process goes away, with no need to search around for existing tools. Even if a solution did exist, it’d probably be part of a bundle of other features I don’t need; I’d be paying for the Cadillac when I only need a taxi.
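
The whole tool amounts to something like this. A minimal sketch of the idea in Python with the Pillow imaging library (mine was a Cursor-generated shell script, so the specifics here are illustrative, not the actual output):

from pathlib import Path

from PIL import Image  # Pillow: pip install Pillow

MAX_EDGE = 1600  # longest edge in pixels after resizing
QUALITY = 80     # JPEG quality for recompression

def resize_and_compress(src: Path, out_dir: Path) -> None:
    """Shrink one image to fit within MAX_EDGE and recompress it."""
    with Image.open(src) as im:
        im.thumbnail((MAX_EDGE, MAX_EDGE))  # keeps aspect ratio, only shrinks
        out_dir.mkdir(exist_ok=True)
        im.save(out_dir / src.name, "JPEG", quality=QUALITY, optimize=True)

if __name__ == "__main__":
    for path in sorted(Path(".").glob("*.jpg")):
        resize_and_compress(path, Path("resized"))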

We’re moving into a new phase where personal software like this might often be the simplest path to a solution. In a world where we’re used to going to Google or GitHub, it’s now even faster to make your own. It’s cracked open new possibilities for people formerly incapable of creating their own tools.

Software used to be costly enough that those “hey this might be cool” ideas were quickly set aside when the cost/benefit wasn’t there. There’s potential for this new paradigm of digital goods production to radically alter the landscape of what gets built.

✦
✦
✦

Chris Spiek and Ryan Singer on Shape Up

September 28, 2022 • #

Reading Ryan Singer’s Shape Up a few years ago was formative (or re-formative, or something) in my thinking on how product development can and should work. After that it was a rabbit hole on jobs-to-be-done, Bob Moesta’s Demand-Side Sales, demand thinking, and more.

Since writing the book in 2019, he’s developed 2 new concepts that extend Shape Up a little further: the role of the “technical shaper” and the stage of “framing” that happens prior to shaping.

Framing is a particularly useful device for putting meat on the bone of defining the problem you’re trying to solve. Starting with shaping doesn’t leave enough space for the team to really articulate the dimensions of the problem it’s attempting to solve. Shaping is looking for the contours of a particular solution, setting the desired appetite for a solution (“we’re okay spending 6 weeks’ worth of time on this problem”), and laying out the boundaries around the scope for the team to work within.

I know on our team, the worst project execution happens when the problem is poorly defined before we get started, or when we don’t get down into enough specificity to set proper scopes. I love these additions to the framework.

✦

The Simplest Thing

September 20, 2022 • #

When working through problems, the most impressive creators to me aren’t those who divine an entire solution in their head for an hour, then slam out a perfect result (spoiler: this doesn’t exist outside of the occasional savant). I love to watch people who are great at avoiding the temptation to overcomplicate. People who can break problems down into components. People who can simplify complex problems by isolating parts, and blocking and tackling.

I enjoyed this, from an interview with Ward Cunningham (programmer and inventor of the wiki):

It was a question: “Given what we’re trying to do now, what is the simplest thing that could possibly work?” In other words, let’s focus on the goal. The goal right now is to make this routine do this thing. Let’s not worry about what somebody reading the code tomorrow is going to think. Let’s not worry about whether it’s efficient. Let’s not even worry about whether it will work. Let’s just write the simplest thing that could possibly work.

Once we had written it, we could look at it. And we’d say, “Oh yeah, now we know what’s going on,” because the mere act of writing it organized our thoughts. Maybe it worked. Maybe it didn’t. Maybe we had to code some more. But we had been blocked from making progress, and now we weren’t. We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well. Maybe we had been trying to impress our friends with our knowledge of computer science, whatever. But we decided to try whatever is most simple: to write an if statement, return a constant, use a linear search. We would just write it and see it work. We knew that once it worked, we’d be in a better position to think of what we really wanted.

The most impressive software engineers I’ve worked with have a knack for this type of chewing through work. The simplest thing usually isn’t the cleanest, fewest lines of code, fewest moving parts, or the most well-tested. Simple means “does a basic function”, something you can categorically check and verify, something a collaborator can easily understand.
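
In code, the spirit of it looks something like this (my illustration, not Cunningham’s): a lookup that could eventually become an index, a cache, or a binary search, but today is a linear scan that works and reads at a glance:

def find_user(users: list[dict], email: str) -> dict | None:
    # The simplest thing: a linear search. Not fast, not clever,
    # but obviously correct, easy to verify, and it unblocks progress today.
    for user in users:
        if user["email"] == email:
            return user
    return None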

Sometimes you just need to do the Simplest Thing first, before you can find the Correct Thing.

✦
✦
✦

Software and Entropy

June 28, 2021 • #

Marc Andreessen was recently interviewed by Noah Smith in his newsletter. It’s a great post-pandemic update to Marc’s views on technology (spoiler: he’s still as optimistic as ever), coming a year after his “Time to Build” essay.

Entropy and Software

When asked about the future of technology, he responds to the common criticism that tech often gives us progress in the virtual world, but not the physical one:

Software is a lever on the real world.

Someone writes code, and all of a sudden riders and drivers coordinate a completely new kind of real-world transportation system, and we call it Lyft. Someone writes code, and all of a sudden homeowners and guests coordinate a completely new kind of real-world real estate system, and we call it AirBNB. Someone writes code, etc., and we have cars that drive themselves, and planes that fly themselves, and wristwatches that tell us if we’re healthy or ill.

Software is our modern alchemy. Isaac Newton spent much of his life trying and failing to transmute a base element — lead — into a valuable material — gold. Software is alchemy that turns bytes into actions by and on atoms. It’s the closest thing we have to magic.

Innovations in the virtual realm don’t directly change things in the physical, but they shift motivations and create new incentives for coordination.

Entropy is a state of disorder, the tendency of systems to move toward evenly distributed chaotic randomness. When an engineer creates an engine or a chip designer fabs a new chip architecture, they’re pushing against the forces of entropy to maintain (and improve!) order.

What’s amazing about software is that we can, through pure information (bits), resist entropy and incentivize order. It’s truly an incredible feat. As Andreessen said, it’s like alchemy.

In the introduction to How Innovation Works, Matt Ridley describes innovation as the “infinite improbability drive” (a term coined by Douglas Adams in Hitchhiker’s Guide), a mechanism through which order is created from disorder:

Innovations come in many forms, but one thing they all have in common, and which they share with biological innovations created by evolution, is that they are enhanced forms of improbability. That is to say, innovations, be they iPhones, ideas or eider ducklings, are all unlikely, improbable combinations of atoms and digital bits of information.

It is astronomically improbable that the atoms in an iPhone would be neatly arranged by chance into millions of transistors and liquid crystals, or the atoms in an eider duckling would be arranged to form blood vessels and downy feathers, or the firings of neurons in my brain would be arranged in such a pattern that they can and sometimes do represent the concept of ‘the second law of thermodynamics’.

Innovation, like evolution, is a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance – and that happen to be useful.

With code, we rearrange information in novel ways that move things in the real world. A new software program drives change in human behavior, a new savings in production cost, or connections between individuals that spur even further innovation. Ridley again, on thermodynamics:

In this universe it is compulsory, under the second law of Thermodynamics, that entropy cannot be reversed, locally, unless there is a source of energy – which is necessarily supplied by making something else even less ordered somewhere else, so the entropy of the whole system increases.

Software is our method for converting mental energy into information, the lever to apply against entropy.

✦

Weekend Reading: Collaborative Enterprise, Algorithms, and Fifth-Gen Management

October 3, 2020 • #

💼 Collaborative Enterprise

Elad Gil describes the trend of continuing consumerization of enterprise software.

🤖 Seeing Like an Algorithm

Part 2 in Eugene Wei’s series on TikTok. See part 1.

🏫 Fifth Generation Management

Venkatesh Rao’s Breaking Smart podcast is always a must-listen.

✦

Weekend Reading: Software Builders, Scarcity, and Open Source Communities

September 19, 2020 • #

👨‍💻 We Need More Software Builders, Not Just Users

On the announcement of Airtable’s latest round and $2.5b valuation (!), founder Howie Liu put out a great piece on the company’s newest changes in pursuit of their vision.

No matter how much technology has shaped our lives, the skills it takes to build software are still only available to a tiny fraction of people. When most of us face a problem that software can answer, we have to work around someone else’s idea of what the solution is. Imagine if more people had the tools to be software builders, not just software users.

📭 Scarcity as an API

Artificial scarcity has been an effective tactic for certain categories of physical products for years. In the world of atoms, scarcity can at least be somewhat believable if demand outstrips supply — perhaps a supply chain was underfilled to meet the demand of lines around the block. Only recently are we seeing digital goods distributed in similar ways, where the scarcity is truly 100% forced: a line of code makes it so. Apps like Superhuman and Clubhouse generate a large part of their prestige status from this approach.

Dopamine Labs, later called Boundless, provided a behavioral change API. The promise was that almost any app could benefit from gamification and the introduction of variable reward schedules. The goal of the company was to make apps more addictive and hook users. Like Dopamine Labs, a Scarcity API would likely be net-evil. But it could be a big business. What if a new software company could programmatically “drop” new features only to users with sufficient engagement, or online at the time of an event? What if unique styles could be purchased only in a specific window of time?
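
As a thought experiment, the core gating logic could be as small as this. A hypothetical sketch in Python, with every name invented for illustration:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class User:
    sessions_last_30d: int  # engagement proxy
    online_now: bool

@dataclass
class Drop:
    feature: str
    min_sessions: int   # engagement threshold to qualify
    starts: datetime    # the scarcity window
    ends: datetime

def is_unlocked(user: User, drop: Drop, now: datetime) -> bool:
    # Scarcity as a line of code: only engaged users, online during the window.
    in_window = drop.starts <= now <= drop.ends
    return in_window and user.online_now and user.sessions_last_30d >= drop.min_sessions

Wrap that behind an HTTP endpoint and you more or less have the Scarcity API.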

🥇 The Glory of Achievement

Antonio Garcia-Martinez’s review of Nadia Eghbal’s book on open source, Working in Public.

It’s really about how a virtualized, digital world decoupled from the physical constraints of manufacturing and meatspace politics manages to both pay for and govern itself despite no official legal frameworks nor institutional enforcement mechanisms. Every class of open source project in Eghbal’s typology can be extended to just about every digital community you interact with, across the Internet.

✦

Weekend Reading: Children of Men, Google Earth at 15, and Slate Star Codex is Gone

June 27, 2020 • #

📽 How Children of Men Became a Dystopian Masterpiece

I didn’t realize until reading this piece that this movie was a commercial flop. $70m gross on a $76m budget. I remember seeing this several times in theaters, and many times after. This retrospective (from 2016) brought the film back to mind and makes me want to rewatch.

🌍 15 Years of Google Earth and the Lessons That Went Unlearned

Brian Timoney:

Google Earth led us to vastly overestimate the average user’s willingness to figure out our map interfaces. The user experience was so novel and absorbing that people invested time into learning the interface: semi-complex navigation, toggling layers on and off, managing their own content, etc. Unfortunately, our stuff isn’t so novel and absorbing and we’ve learned the hard way that even those forced to use our interfaces for work seem very uninterested in even the most basic interactions.

It’s great to see Brian blogging again!

📄 Doxxing Scott Alexander is Profoundly Illiberal

What’s happening between the New York Times and psychiatrist-rationalist-blogger Scott Alexander is incredibly disappointing to see. In writing a story that includes him, they want to use his real name (which they found out somehow; Scott Alexander is a pseudonym). That seems completely unnecessary, and absurd to the point of disbelief — given the Times’ behavior and policies of late, there should be little benefit of the doubt given here. As a result, Scott has deleted his blog, one of the treasures of the internet.

✦

Annotating the Web with Memex

June 5, 2020 • #

I linked a few weeks ago to a new tool called Memex, a browser extension that touts itself as bookmarking for “power users of the web.” Its primary differentiator is how they approach privacy.

I’m a couple of weeks into using it, and it brings an interesting new approach to the world of bookmarking tools like Pinboard and Raindrop, both of which I’ve used a lot. Raindrop has been my tool of choice lately, but it’s heavy for what I really want: a simple, fast way to toss things into a box, with tags to keep them organized.

Memex

On the privacy topic from Wednesday’s post, Memex is approaching their product with similar principles. It’s a browser extension only, with no “cloud” element to it like most other services have. All of your data is stored client-side, and the only attachments to the cloud have to be opted into, like syncing backups to Google Drive. It also has an open source core, for maximum transparency on how it works. Reading the vision document gives you a sense of where they’re headed:

The long-term mission of WorldBrain.io is to enable people to overcome information overload and the influence of misinformation through collaborative online-research.

We can’t research and understand all the topics we are exposed to well enough to not fall for misinformation. But we all are experts in some of those topics and could help each other understand them better — if we were able to share our existing knowledge more effectively with each other.

Decentralized knowledge management and web annotation is a movement I can get behind. I’m reminded of what Fermat’s Library is doing with academic papers — creating a meta layer of knowledge connection on top of research source material. Passages highlighted in Memex could be referenced from other pages to denote connection points or similarities, building a user-generated knowledge graph on top of any internet content.

With Memex you never have to leave the browser. It overlays a small right-hand sidebar on hover with commands for bookmarking, adding tags, or displaying annotations. And they’re following through on their promise for power users with keyboard shortcuts. It also offers the option of indexing your browser history, which could be useful if you use DuckDuckGo but still want to archive your history for yourself. I don’t care much one way or another about this particularly, but it’s cool to have the option.

From mobile they have something interesting going. There’s a “Memex Go” app that works well for quickly bookmarking things from the Share Sheet on iOS. Syncing is a paid feature that works through a pairing process with end-to-end encryption to move data between mobile and desktop, synced over wifi. I haven’t tried this yet but I’m looking forward to checking it out. Seems like occasional syncing is all you’d need to move data between desktop and mobile, so this model could work fine.

I don’t think Memex has any integrations yet with other tools, but ones that come to mind that I’d love to see at some point are with two of my favorites: Readwise and Roam. From a technical standpoint I’m not sure how one would integrate a client-side database like what Memex has with a server-side one, but perhaps there could be a “push” capability to sync data up from Memex on-demand to integration points. With Memex’s highlights, perhaps I could decide if and when I want to send my highlights up to Readwise, rather than Readwise doing the pulling. In the case of Roam, even simple tools to drag highlights or bookmarks over as blocks in Roam pages would be a cool addition.

✦
✦
✦

Hardy Boys and Microkids

March 17, 2020 • #

Physicians hang diplomas in their waiting rooms. Some fishermen mount their biggest catch. Downstairs in Westborough, it was pictures of computers.

Over the course of a few decades beginning in the mid-40s, computing moved from room-sized mainframes with teletype interfaces to connected panes of glass in our pockets. At breakneck speed, we went from the computer being a massively expensive, extremely specialized tool to a ubiquitous part of daily life.

Data General Massachusetts Office

During the 1950s — the days of Claude Shannon, John von Neumann, and MIT’s Lincoln Lab — a “computer” was a batch processing system. Models like the EDVAC were really just massive calculators. It would be another decade before the computer would be thought of as an “interactive” tool, and even longer before that idea became mainstream.

The 60s saw the rise of IBM and its mainframe systems. Moving from paper tape time clocks to tabulating machines, IBM pushed their massive resources into the mainframe computer market. The S/360 dominated the computer industry until the 70s (and further on with the S/370), when the minicomputer emerged as an interim phase between mainframes and what many computer makers were pursuing: a personal, low-cost computer.

The emergence of the minicomputer should be considered the beginning of the personal computer revolution. Before that, computers were only touched by trained operators — they were too complex and expensive for students, secretaries, or hobbyists to use directly. Minis promised something different, a machine that a programmer could use interactively. In 1965, DEC shipped the first successful mini, the PDP-8. From then on, computer upstarts were sprouting up all over the country getting into the computer business.

The DEC PDP-8

One of those companies was Data General, a firm founded in 1968 and the subject of Tracy Kidder’s book, The Soul of a New Machine. A group of disaffected DEC engineers, impatient with the company’s strategy, left to form Data General and attack the minicomputer market. Founder Edson de Castro, formerly the lead engineer on the PDP-8, thought there was opportunity that DEC was too slow to capitalize on with their minis. So DG designed and brought to market their first offering, the Nova. It was an affordable, 16-bit machine designed for general computing applications, and it made DG massively successful in the growing competitive landscape. The Nova and its successor sold like gangbusters into the mid-70s, when DEC brought the legendary VAX “supermini” to market.

DEC’s announcement of the VAX and Data General’s flagging performance in the middle of that decade provide the backdrop for the book. Kidder’s narrative takes you inside the company as it battles for a foothold in the mini market not only against DEC and the rest of the computer industry, but also with itself.

The VAX was set to be the first 32-bit minicomputer, an enormous upgrade from the prior generation of 16-bit machines. In 1976, Data General spun up a project codenamed “Fountainhead,” their big bet to develop a VAX killer, which would be headquartered in a newly-built facility in North Carolina. But back at their New England headquarters, engineer Tom West was already at work leading the Eclipse team in building a successor. So the company ended up with two competing efforts to create a next-generation 32-bit machine.

Data General's Eclipse S230

The book is the story of West’s team as they toil with limited company resources against the clock to get to market with the “Eagle” (as it was then codenamed) before the competition, and before Fountainhead could ship. As the most important new product for the company, Fountainhead had drawn away many of the best engineers who wanted to be working on the company’s flagship product. But the engineers who had stayed behind weren’t content to iterate on old products; they wanted to build something new:

Some of the engineers who had chosen New England over FHP fell under West’s command, more or less. And the leader of the FHP project suggested that those staying behind make a small machine that would solve the 32-bit, logical-address problem and would at the same time exhibit a trait called “software compatibility.”

Some of those who stayed behind felt determined to build something elegant. They designed a computer equipped with something called a mode bit. They planned to build, in essence, two different machines in one box. One would be a regular old 16-bit Eclipse, but flip the switch, so to speak, and the machine would turn into its alter ego, into a hot rod—a fast, good-looking 32-bit computer. West felt that the designers were out to “kill North Carolina,” and there wasn’t much question but that he was right, at least in some cases. Those who worked on the design called this new machine EGO. The individual initials respectively stood one step back in the alphabet from the initials FHP, just as in the movie 2001 the name of the computer that goes berserk—HAL—plays against the initials IBM. The name, EGO, also meant what it said.

What followed was a team engaged in long hours, nights and weekends, and hard iteration on a new product, racing to get to market before their larger, much better funded compatriots down south. As West described it to his team, it was all about getting their hands dirty and working with what they had at hand — the definition of the scrappy upstart:

West told his group that from now on they would not be engaged in anything like research and development but in work that was 1 percent R and 99 percent D.

The pace and intensity of technology companies became culturally iconic during the 1990s with the tech and internet boom in that decade. The garage startup living in a house together, working around the clock to build their products, became a signature of the Silicon Valley lifestyle. But the seeds of those trends were planted back in the 70s and 80s, and on display with the Westborough team and the Eagle (which eventually went to market as the Eclipse MV/8000¹). Kidder spent time with the team on-site as they were working on the Eagle project, providing an insider’s perspective of life in the trenches with the “Hardy Boys” (who made hardware) and “Microkids” (who wrote software). He observes the team’s engineers as they horse-trade for resources. This was a great anecdote, a testament to the autonomy the young engineers had to get the job done however they could manage:

A Microkid wants the hardware to perform a certain function. A Hardy Boy tells him, “No way—I already did my design for microcode to do that.” They make a deal: “I’ll encode this for you, if you’ll do this other function in hardware.” “All right.”

If you’ve ever seen the TV series Halt and Catch Fire, this book seems like a direct inspiration for the Cardiff Electric team in that show trying to break into the PC business. The Eagle team could represent any of the scrappy startups from the 2000s.

It’s a surprisingly approachable read given its heavy focus on engineers and the technical nature of their work in designing hardware and software. The book won the Pulitzer in 1982, and has become a standard on the shelves of both managers and engineers. The Soul of a New Machine sparked a deeper interest for me in the history of computers, which has led to a wave of new reads I’m just getting started on.

  1. In those days, you could always count on business products to have sufficiently boring names. 

✦
✦

Weekend Reading: Software Dependencies, Conversational AI, and the iPad at 10

February 8, 2020 • #

🛠 Dependency Drift: A Metric for Software Aging

We’ve been doing some thinking on our team about how to systematically address (and repay) technical debt. With the web of interconnected dependencies and micro packages that exists now through tools like npm and yarn, no single person can track all the versions and relationships between modules. This post proposes a “Dependency Drift” metric to quantify how far out of date a codebase is on the latest updates to its dependencies:

  • Create a numeric metric that incorporates the volume of dependencies and the recency of each of them.
  • Devise a simple high-level A-F grading system from that number to communicate how current a project is with its dependencies. We’ll call this a drift score.
  • Regularly recalculate and publish for open source projects.
  • Publish a command line tool to use in any continuous integration pipeline. In CI, policies can be set to fail CI if drift is too high. Your drift can be tracked and reported to help motivate the team and inform stakeholders.
  • Use badges in source control README files to show drift, right alongside the project’s Continuous Integration status.
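
A toy version of the scoring idea might look like the sketch below. The lag weighting and grade cutoffs are invented for illustration; they’re not from the proposal:

def semver_lag(current: str, latest: str) -> int:
    """Weighted distance between two x.y.z version strings."""
    cur = [int(n) for n in current.split(".")]
    new = [int(n) for n in latest.split(".")]
    return (max(new[0] - cur[0], 0) * 100   # major releases behind
            + max(new[1] - cur[1], 0) * 10  # minor releases behind
            + max(new[2] - cur[2], 0))      # patch releases behind

def drift_grade(deps: dict[str, tuple[str, str]]) -> tuple[int, str]:
    """Sum the lag across (current, latest) pairs and map it to a grade."""
    total = sum(semver_lag(c, l) for c, l in deps.values())
    for cutoff, grade in [(10, "A"), (50, "B"), (150, "C"), (400, "D")]:
        if total <= cutoff:
            return total, grade
    return total, "F"

print(drift_grade({"react": ("17.0.2", "18.2.0"), "lodash": ("4.17.20", "4.17.21")}))
# (121, "C")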

💬 Towards a Conversational Agent that Can Chat About Anything

A technical write-up on a Google chatbot called “Meena,” which they propose has a much more realistic back-and-forth response technique:

Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
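
For reference, perplexity has a compact definition: the exponentiated average negative log-likelihood of each next token, so lower values mean the model is less “surprised” by what actually comes next:

PPL = exp( −(1/N) × Σᵢ log p(tokenᵢ | token₁ … tokenᵢ₋₁) )

A model picking uniformly among k equally likely next words has perplexity k, which is why it’s often read as an effective branching factor.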

Read more in their paper, “Towards a Human-like Open-Domain Chatbot”.

📱 The iPad Awkwardly Turns 10

John Gruber uses the iPad’s recent 10th birthday to reflect on its missed opportunities and how much better a product it could have been:

Ten years later, though, I don’t think the iPad has come close to living up to its potential. By the time the Mac turned 10, it had redefined multiple industries. In 1984 almost no graphic designers or illustrators were using computers for work. By 1994 almost all graphic designers and illustrators were using computers for work. The Mac was a revolution. The iPhone was a revolution. The iPad has been a spectacular success, and to tens of millions it is a beloved part of their daily lives, but it has, to date, fallen short of revolutionary.

I would agree with most of his criticisms, especially on the multitasking UI and the general impenetrability of the gesturing interfaces. As a very “pro iPad” user, I would love to see a movement toward the device coming into its own as a distinctly different platform than macOS and desktop computers. It has amazing promise even outside of creativity (music, art) and consumption. With the right focus on business model support, business productivity applications could be so much better.

✦
✦

Weekend Reading: Figma Multiplayer, Rice vs. Wheat, and Tuft Cells

November 23, 2019 • #

🕹 How Figma’s Multiplayer Technology Works

An interesting technical breakdown on how Figma built their multiplayer tech (the collaboration capability where you can see other users’ mouse cursors and highlights in the same document, in real time).

🌾 Large-Scale Psychological Differences Within China Explained by Rice Versus Wheat Agriculture

A fascinating paper. This research suggests the possibility that group-conforming versus individualistic cultures may have roots in diet and agricultural practices. From the abstract:

Cross-cultural psychologists have mostly contrasted East Asia with the West. However, this study shows that there are major psychological differences within China. We propose that a history of farming rice makes cultures more interdependent, whereas farming wheat makes cultures more independent, and these agricultural legacies continue to affect people in the modern world. We tested 1162 Han Chinese participants in six sites and found that rice-growing southern China is more interdependent and holistic-thinking than the wheat-growing north. To control for confounds like climate, we tested people from neighboring counties along the rice-wheat border and found differences that were just as large. We also find that modernization and pathogen prevalence theories do not fit the data.

An interesting thread to follow, but worthy of skepticism given the challenge of aggregating enough concrete data to prove anything definitively. There’s some intuitively sensible argument here as to the fundamental differences between subsistence practices in wheat versus rice farming:

The two biggest differences between farming rice and wheat are irrigation and labor. Because rice paddies need standing water, people in rice regions build elaborate irrigation systems that require farmers to cooperate. In irrigation networks, one family’s water use can affect their neighbors, so rice farmers have to coordinate their water use. Irrigation networks also require many hours each year to build, dredge, and drain—a burden that often falls on villages, not isolated individuals.

🦠 Cells That ‘Taste’ Danger Set Off Immune Responses

I’ve talked before about my astonishment with the immune system’s complexity and power. This piece talks about tuft cells and how they use their chemosensory powers to identify parasites and alert the immune system to respond:

Howitt’s findings were significant because they pointed to a possible role for tuft cells in the body’s defenses — one that would fill a conspicuous hole in immunologists’ understanding. Scientists understood quite a bit about how the immune system detects bacteria and viruses in tissues. But they knew far less about how the body recognizes invasive worms, parasitic protozoa and allergens, all of which trigger so-called type 2 immune responses. Howitt and Garrett’s work suggested that tuft cells might act as sentinels, using their abundant chemosensory receptors to sniff out the presence of these intruders. If something seems wrong, the tuft cells could send signals to the immune system and other tissues to help coordinate a response.

Given the massive depth of knowledge about biological processes, anatomy, and medical research, it’s incredible how much we still don’t know about how organisms work. Evolution, selection, and time can create some truly complex systems.

✦
✦

The Magic of Recurring Revenue

September 17, 2019 • #

Any business that makes money from the same customer more than once can be said to have “recurring revenue.” But the term in the SaaS universe has a more specific flavor to it, thanks to the unique nature of the business model, value delivery, and the commitments between vendor and consumer. You may think “so what” when you hear that SaaS revenue is special or somehow better than other ways of making money; after all, the money’s still green, right? But there are a number of benefits that come with the “as-a-service” relationship between vendor and customer. Software companies fit uniquely well with a subscription-based model because of the fixed, up-front heavy nature of the investments required to build software platforms. In a traditional business performing services or building physical products, new customers come with higher marginal costs — the cost incurred to add each new dollar of revenue. With hosted software there are certainly marginal costs (scaling servers with growth, providing support, etc.), but the gross margins are typically much higher. And if done efficiently, that margin can stay very high even at scale.

Recurring revenue

Let’s review some advantages of SaaS, each of them a mutual advantage to both the vendor and customer¹:

Simpler adoption

Because the customer is buying a product that already exists (not a bespoke one), there’s no need to wait for complex customizations right out of the gate to realize initial value. In order to maximize customer growth and expansion velocity, developers are motivated to create smoother implementation experiences harnessing tools like Intercom, with in-app walkthroughs, on-boarding, guides, and help content to get the customer off the ground. A traditional “old world” software company is less motivated to make on-boarding such a smooth experience, since often they’re already required to do on-premise implementations and trainings for users. There are certainly enterprise SaaS products that start to move into this arena (i.e. non-self-service), but typically that’s due to the specifics of certain business workflows being reliant on custom integrations or customer data imports (think ERP systems). Also, because a customer can start small with “pilot” sized engagements at no additional cost to the vendor, they can ramp up more comfortably.

Low initial costs

Related to adoption, the customer can control their costs transparently as they scale, to see impact before they expand team-wide. Once initial ROI is visible, subsequent expansion is less painful and much easier to justify — after all, you have results to prove that the product is useful before going all-in. The ability to hedge risk in scaling up by monitoring value returned is one that was hard to achieve in the days before service-based products.

Reduced time to benefit

Since the customer can lower the requirements for an initial rollout, they can see benefit quickly. Rather than having to take a salesperson’s word for it that you’ll see an ROI in 6 months, a 30-day pilot can deliver answers much more quickly. Don’t take the vendor at their word; use it for yourself and prove the value. Imagine what it would take to realize any benefit from a completely custom-built piece of software? (Hint: A long time, or maybe never if you don’t ship it. This should cross a customer’s mind when they want to build instead of buy.)

Economies of scale

The SaaS vendor is responsible for hosting, improving, and developing the core systems to the benefit of many at once. The revenue benefits of individual improvements or features are realized across the entire customer base. With good execution, the economy of scale can make the new feature or capability cheaper for each customer, while generating more aggregate revenue for the vendor — everyone wins. Compare this with scaling boxed software or even self-hosted, on-site software, where human hours are required for each customer to deliberately receive new things of value. With product maturity, not all new developments provide equal value to every customer, which is where product packaging and positioning become critical to align costs and outcomes.

Continuous (versus staggered) upgrade

Any engineer knows that small, frequent updates beat out large, infrequent ones when it comes to efficiency. The overhead involved with testing and shipping each release is minimized, then spread over hundreds of small deployments. With tools like continuous integration, automated testing, and rolling deployment, developers can seamlessly (and with low risk) roll out tiny incremental changes all the time, silently. Every SaaS product of note works this way today, and often only the largest customer-facing features are even announced at all to customers. With many small releases over few large ones, the surface area for potential problems is reduced enormously, making a catastrophic problem far less likely. Also, customers don’t have to arbitrarily wait for the ArcMap 10.1 update or the annual release to receive a minor enhancement or bug fix.

Alignment of incentives

This, to me, is one of the most important factors. Two parties that share incentives make for the most advantageous economic relationships. Both vendor and customer have incentives that benefit both sides baked into the business model, and importantly, these incentives are realized early and often:

  • Customer Incentive: Since the customer has a defined problem for which they seek a solution (in the form of software), they’re incentivized to pay proportionally for received value, security, attention, support, and utility. With a subscription pricing model, customers are happy to pay for a subscription that continues to deliver value to them month after month.
  • Vendor Incentive: For a vendor, the real money is made not from the first deal with the customer, but from a continued relationship over a (hopefully) long period of time. Known as lifetime value (LTV), the goal is to maximize that number with all customers — a good product-market fit and customer success strategy leads to long LTV and therefore very large revenue numbers. To realize that LTV, however, it’s incumbent upon the vendor to stay focused on delivering the above — value, security, support, et al.
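
To put rough numbers on LTV (invented for illustration): a common back-of-envelope formula is LTV ≈ (monthly revenue per account × gross margin) ÷ monthly churn rate. A $100/month subscription at 80% gross margin with 2% monthly churn implies ($100 × 0.80) ÷ 0.02 = $4,000 in expected lifetime gross profit, and halving churn to 1% doubles that to $8,000. This is why retention and customer success dominate the economics.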

With these incentives in lock-step, everyone wins. After achieving product-market fit and a repeatable solution for customers, you turn attention toward continued engagement in customer success, incremental value-add through enhancements and new features, and a long-term customer relationship based on mutual exchange of value. The best customers not only drive high revenues to the top line, but also become better companies as a result of using your software. We’ve had this happen with Fulcrum customers, and for a product developer, it’s the thing that gets you out of bed in the morning; it’s why we do what we do, not just to make money, but to transform something from good to great.

Alignment in vendor-customer goals used to be harder to achieve in the pre-SaaS era. A vendor needed only to be “good enough” to secure the single-point initial purchase, and could largely shirk their end of the bargain in successive months².

Subscription models for physical products

Subscription businesses are no longer limited to software. We now see companies operating in the physical realm moving into subscription models — Lyft Pass for transit, Blue Apron for food delivery, or even Apple’s movement in this direction with its Upgrade Program for iPhones³. Once the economics make this possible (up-front capital costs are higher for physical products than for software), the subscription model often turns into a better deal for both sides.

The market is moving toward services for everything, which is a good thing for the industry all around. Okta’s 2019 Businesses at Work report shows that its customers commonly use over 100 apps with Okta within their first year. In fact, all of the trends they report show strong motion up and to the right. Given what I said previously about incentive alignment, I’m a believer that these trends are great for the software economy as a whole, with all parties benefiting from a healthier marketplace.

  1. I wrote a post on this topic a while back, but thought I’d revisit these advantages in more specific detail. 

  2. Of course over time this would catch up to you, but you could get away with it far longer than you can in SaaS. 

  3. Ben Thompson recently wrote about the prospects of Apple moving further in this direction — offering a subscription to the full “Apple Suite”. 

✦
✦

Elevate for Strava

August 29, 2019 • #

Jason turned me onto this Chrome extension for Strava data analysis called Elevate. It’s a pretty amazing tool that adds deep analytics on top of the already-rich data Strava provides natively as part of their Summit plan.

Elevate fitness curve

In addition to having its own metrics like this fitness/freshness curve, it overlays additional metrics into the individual activity pages on the Strava website. My favorite ones are this (which Strava has its own simpler version of) and the year-over-year comparison graph, which lets you see your progression in total mileage over time:

Elevate YoY comparison

I love to see the consistency this year reflected visually like this. I feel like I’m doing well staying on course for hitting my goals, and this cements it. I was surprised to see how well I was doing in 2017 before the health issues struck. My long term goal is to be able to exceed that trend in 2020 after making progress on the fitness front this year.

✦
✦

Wireframing with Moqups

May 16, 2019 • #

Wireframing is a critical technique in product development. Most everyone in software does a good bit of it for communicating requirements to development teams and making iterative changes. For me, the process of wireframing is about figuring out what needs to be built as much as how. When we’re discussing new features or enhancements, rather than write specs or BDD stories or something like that, I go straight to pen and paper or the iPad to sketch out options. You get a sense for how a UI needs to come together, and for visual thinkers like me, the new ideas really start to show up when I see things I can tweak and mold.

We’ve been using Moqups for a while on our product team to do quick visuals of new screens and workflows in our apps. I’ve loved using it so far — its interface is simple and quick to use, it’s got an archive of icons and pre-made blocks to work with, and has enough collaboration features to be useful without being overwhelming.

Moqups wireframe

We’ve spent some time building out “masters” that (like in PowerPoint or Keynote) you can use as baseline starters for screens. It also has a feature called Components where you can build reusable objects — almost like templates for commonly-used UI affordances like menus or form fieldsets.

One of the slickest features is the ability to add interactions between mocks, so you can wire up simulated user flows through a series of steps.

I’ve also used it to do things like architecture diagrams and flowcharts, which it works great for. Check it out if you need a wireframing tool that’s easy to use and all cloud-based.

✦

Weekend Reading: Product Market Fit, Stripe's 5th Hub, and Downlink

May 11, 2019 • #

🦸🏽‍♂️ How Superhuman Built an Engine to Find Product/Market Fit

As pointed out in this piece from Rahul Vohra, founder of Superhuman, most indicators around product-market fit are lagging indicators. With his company he was looking for leading indicators so they could more accurately predict adoption and retention after launch. His approach is simple: polling your early users with a single question — “How would you feel if you could no longer use Superhuman?”

Too many of the example methods in the product development literature orient around asking for user feedback in a positive direction — things like “how much do you like the product?” or “would you recommend it to a friend?” Coming at it from the counterpoint of “what if you couldn’t use it?” reverses this. It makes users think about their own experience with the product, versus a disembodied imaginary user that might use it. It brought to mind a piece of the Paul Graham essay “Startup Ideas”, on what happens when you go with the wrong measures of product-market fit:

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

🛤 Stripe’s Fifth Engineering Hub is Remote

Remote work is creeping up in adoption as companies become more culturally okay with the model, and as enabling technology makes it more effective. In the tech scene it’s common for companies to hire remote, to a point (as Benedict Evans joked: “we’re hiring to build a communications platform that makes distance irrelevant. Must be willing to relocate to San Francisco.”). It’s important for the movement that large and influential companies like Stripe take this on as a core component of their operation. Companies like Zapier and Buffer are famously “100% remote” — a new concept that, if executed well, gives companies an advantage: competing in markets they might never be able to reach otherwise.

🛰 Downlink

A neat Mac app that puts real-time satellite imagery on your desktop background. Every 20 minutes you can have the latest picture of the Earth.

✦
✦
✦
✦

Pinboard

March 14, 2019 • #

I was a big del.icio.us user back in the day, pre- and post-Yahoo. For anyone unfamiliar, it was one of the first tools (before Twitter) for sharing web links and making bookmarks social.

I signed up for Pinboard around the time it launched. Creator Maciej Cegłowski had an interesting concept for making his service paid, a tactic that could allow it to generate enough revenue to be self-sustaining and avoid the stagnation that del.icio.us suffered at the hands of Yahoo after its acquisition in 2005.

When it launched it cost around $3 to join, a one-time fee to get in the door that could fund development and hosting, but most importantly deter the spam that plagued del.icio.us over time. His idea was to increase the signup fee by a fraction of a cent with each new user, which functioned as a clever way to increase revenue, but also to incentivize those on the fence to get in early.

I stopped using any bookmarking tools for a while, in favor of using Instapaper to save articles and read them later. But a couple of things pushed me back to Pinboard recently. First, there are all the items I want to save and remember that aren’t articles, just links. Instapaper could certainly save the URL, but that’s not really that service’s intent. Second is the fact that I don’t even tend to use the in-app reading mode in Instapaper anyway; most of the time I just click through and read articles on their source websites.

Since I’m keeping track of and documenting more of the interesting things I run across here on this site, Pinboard helps to keep and organize them. Pinboard’s description as an “anti-social bookmarking” tool is an apt one, for me. I have all of my bookmarks set to private anyway. I’m not that interested in using it as a sharing mechanism — got plenty of those already between this blog, Twitter, and others.

For mobile I bought an app called Pinner that works well to add pins right from the iOS share sheet, and also browse bookmarks. I’m liking this setup so far and finding it useful for archiving stuff and using as a read-later tool for the flood of things I get through RSS and Twitter.

✦

OpenDroneMap

October 24, 2018 • #

Since I got the Mavic last year, I haven’t had many opportunities to do mapping with it. I’ve put together a few experimental flights to play with DroneDeploy and our Fulcrum extension, but outside of that I’ve mostly done photography and video stuff.

OpenDroneMap came on the scene a couple of years ago as a toolkit for processing drone imagery, and I’ve been following it loosely through the Twittersphere since. Most of my image processing has been done with DroneDeploy, since we’d been working with them on some integration between our platforms, but I was curious to take a look once I saw the progress on ODM. What specifically caught my attention was WebODM, a web-based interface to the ODM processing backend — intriguing because it’d reduce friction in generating mosaics and point clouds, with sensible defaults and a clean, simple map interface to browse the resulting datasets.

OpenDroneMap aerial

The WebODM setup process was remarkably smooth, using Docker to stand up the stack automatically. The only prerequisites are git, Python, and pip, all of which I already had. With only these three commands, I had the whole stack set up and ready to process:

git clone https://github.com/OpenDroneMap/WebODM --config core.autocrlf=input --depth 1
cd WebODM
./webodm.sh start

Pretty slick for such a complex web of dependencies under the hood, and a great web interface in front of it all.

Using a set of 94 images from a test flight over a job site in Manatee County, I experimented first with the defaults to see what it’d output on its own. I did have a bit of overlap in the images, maybe 40% or so (overlap is what you need to generate quality 3D). I had to up the RAM available to Docker and reboot everything to get it to process properly, I think because my image set was pushing 100 files.

ODM processing results

That project with the default settings took about 30 minutes. It generates the mosaicked orthophoto (TIF, PNG, and even MBTiles), surface model, and point cloud. Here’s a short clip of what the results look like:

This is why open source is so interesting. The team behind the project has put together great documentation and resources to help users get it running on all platforms, including running everything on your own cloud server infrastructure with extended processing resources. I see OpenAerialMap integration was just added, so I’ll have to check that out next.

✦
✦