A common problem I encounter with computers is the everyday minor friction in workflow: the repetitive but only occasional task, or the tedious multi-step process.
Perfect example: the other day I wanted to batch resize and compress a bunch of images. It's something I've had to do before, but not an everyday problem.
When you have a problem software can solve, it has to be painful enough to warrant the effort and overhead required to build something. Given my level of knowledge, I could thrash my way through writing a shell script to do this resizing operation (probably). But it'd take me a couple hours of Googling, trying, and retrying to eventually get something that works — all for an operation that might take 7 minutes to just do manually. So rather than automate, I just deal with it.
This means dozens of daily nags go on nagging — they don't nag enough to warrant the cost of solving. And they aren't painful enough to search for and buy software to fix. So I go on muddling through with hacks, workarounds, and unanswered wishes.
But yesterday, with a few prompts in Cursor, I made (or the AI made) a shell script in 15 minutes to handle images, one I can reuse next time. I didn't even look at the code it wrote. I just typed three bullets describing what I wanted into a readme file, and out came the code. An annoying process goes away, without ever having to search around for existing tools. Even if a solution did exist, it'd probably be part of a bundle of other features I don't need; I'd be paying for the Cadillac when I only need a taxi.
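The script the AI generated isn't shown in the post, but the operation it describes is small. Here's a minimal sketch in Python, assuming the ImageMagick `magick` CLI is installed; filenames and settings are illustrative:

```python
#!/usr/bin/env python3
"""Batch-resize and compress images by shelling out to ImageMagick."""
import pathlib
import subprocess
import sys

def build_cmd(src: pathlib.Path, dst: pathlib.Path,
              max_px: int = 1600, quality: int = 82) -> list[str]:
    # "1600x1600>" only shrinks images larger than the bound (the ">"
    # flag skips smaller ones); -quality controls JPEG compression.
    return ["magick", str(src),
            "-resize", f"{max_px}x{max_px}>",
            "-quality", str(quality),
            str(dst)]

def main(folder: str) -> None:
    out = pathlib.Path(folder) / "resized"
    out.mkdir(exist_ok=True)
    for src in sorted(pathlib.Path(folder).glob("*.jpg")):
        subprocess.run(build_cmd(src, out / src.name), check=True)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        main(sys.argv[1])
```

Run as `python resize.py ~/Pictures/batch`; originals stay untouched and the shrunken copies land in a `resized/` subfolder.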
We're moving into a new phase where personal software like this might often be the simplest path to a solution. In a world where we're used to going to Google or GitHub, it's now even faster to make your own. It's cracked open new possibilities for people formerly incapable of creating their own tools.
Software used to be costly enough that those "hey, this might be cool" ideas were quickly set aside when the cost/benefit wasn't there. There's potential for this new paradigm of digital goods production to radically alter the landscape of what gets built.
Sam Arbesman draws a comparison to the oral storytelling of myth. Through generational retellings and small, effective adaptations over time, ancient tales were made more Lindy:
Over time though, each ever-retold story was sanded into a gem of a tale that could last all those years, not necessarily because the written text was preserved—to be found in a clay pot in some desert cave millennia hence, or etched into a rock so it could not be forgotten—but because there was a community devoted to its recounting.
This paper describes an experiment to merge freeform document-style text and structured computations into a single interface.
We think a promising workflow is gradual enrichment from docs to apps: starting with regular text documents and incrementally evolving them into interactive software. In this essay, we present a research prototype called Potluck that supports this workflow. Users can create live searches that extract structured information from freeform text, write formulas that compute with that information, and then display the results as dynamic annotations in the original document.
The idea is to pair the open-endedness of documents with the benefits of some schema, enabling computation and structure. Documents are powerful for their flexibility to accommodate all kinds of information, but the trade-off is a static environment, whereas applications can be dynamic and responsive. Potluck gives a look at what happens when you get some of both worlds.
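Potluck's own implementation isn't shown here, but its core pattern — a live search that extracts structured values from freeform text, plus a formula computed over them — can be sketched in a few lines (the recipe example and all names are hypothetical):

```python
import re

# A "live search": pull quantities like "2 cups flour" out of freeform notes.
INGREDIENT = re.compile(
    r"(?P<qty>\d+(?:\.\d+)?)\s+(?P<unit>cups?|tbsp|tsp)\s+(?P<name>\w+)")

def extract(text: str) -> list[dict]:
    # Each match becomes a little structured record pulled from prose.
    return [m.groupdict() for m in INGREDIENT.finditer(text)]

def scale(matches: list[dict], factor: float) -> list[str]:
    # A "formula" over the extracted structure, rendered as annotations
    # that could be overlaid back onto the original document.
    return [f"{float(m['qty']) * factor:g} {m['unit']} {m['name']}"
            for m in matches]

doc = "Mix 2 cups flour with 1 tsp salt."
print(scale(extract(doc), 2))  # doubles the recipe
```

The document stays plain text the whole time; the structure lives in the search pattern, which is what makes the gradual-enrichment workflow possible.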
Try out some of the examples. It's impressive and shows promise for the future of the low-code / build-your-own-software movement.
Reading Ryan Singer's Shape Up a few years ago was formative (or re-formative, or something) in my thinking on how product development can and should work. After that it was a rabbit hole on jobs-to-be-done, Bob Moesta's Demand-Side Sales, demand thinking, and more.
Since writing the book in 2019, he has talked about two new concepts that extend Shape Up a little further: the role of the "technical shaper" and the stage of "framing" that happens prior to shaping.
Framing is a particularly useful device for putting meat on the bone of defining the problem you're trying to solve. Shaping as the first step doesn't leave enough space for the team to really articulate the dimensions of the problem it's attempting to solve. Shaping is looking for the contours of a particular solution, setting the desired appetite for a solution ("we're okay spending 6 weeks' worth of time on this problem"), and laying out the boundaries around the scope for the team to work within.
I know that on our team, the worst project execution happens when the problem is poorly defined before we get started, or when we don't get down into enough specificity to set proper scopes. I love these additions to the framework.
When working through problems, the most impressive creators to me aren't those who divine an entire solution in their head for an hour, then slam out a perfect result (spoiler: this doesn't exist outside of the occasional savant). I love to watch people who are great at avoiding the temptation to overcomplicate: people who can break problems down into components, who can simplify complex problems by isolating parts, blocking and tackling.
I enjoyed this, from an interview with Ward Cunningham (programmer and inventor of the wiki):
It was a question: "Given what we're trying to do now, what is the simplest thing that could possibly work?" In other words, let's focus on the goal. The goal right now is to make this routine do this thing. Let's not worry about what somebody reading the code tomorrow is going to think. Let's not worry about whether it's efficient. Let's not even worry about whether it will work. Let's just write the simplest thing that could possibly work.
Once we had written it, we could look at it. And we'd say, "Oh yeah, now we know what's going on," because the mere act of writing it organized our thoughts. Maybe it worked. Maybe it didn't. Maybe we had to code some more. But we had been blocked from making progress, and now we weren't. We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well. Maybe we had been trying to impress our friends with our knowledge of computer science, whatever. But we decided to try whatever is most simple: to write an if statement, return a constant, use a linear search. We would just write it and see it work. We knew that once it worked, we'd be in a better position to think of what we really wanted.
The most impressive software engineers I've worked with have a knack for this type of chewing through work. The simplest thing usually isn't the cleanest, fewest lines of code, fewest moving parts, or the most well-tested. Simple means "does a basic function", something you can categorically check and verify, something a collaborator can easily understand.
Sometimes you just need to first do the Simplest Thing before you can find the Correct Thing.
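Cunningham's examples — an if statement, a constant, a linear search — really are this plain. A sketch of the kind of first pass he means:

```python
def find_index(items, target):
    # The simplest thing that could possibly work: no sorting, no
    # binary search, no hashing. Just walk the list and look.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```

It's trivially verifiable and trivially understandable by a collaborator; only once it works (and only if profiling demands it) does it earn replacement by something cleverer.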
Jason Lemkin with some interesting takeaways from Palantir's performance over the recent years. A couple of highlights:
#2. Its Top 20 Customers pay an average of a stunning $45m a year (!). And that's 24% higher than a year ago!
#4. Yes, Palantir is Solidly a Software Company. Gross Margins remain well above 80%. It was perhaps fair to criticize Palantir as a services business in large part in the run-up to the IPO — but no longer. With many quarters of 80%+ gross margins, that's software.
The net retention they've demonstrated is bananas: 120%+ consistently since going public in 2020. And $2B ARR. Still an enigmatic and unique business, and the expanding diversification into commercial is something a lot of people thought wouldn't happen.
App launchers have been around a long time. Way back when it was Quicksilver, then it was Launchbar. A quick keystroke would pull up a search box, and a few letters of typing would open what you wanted.
For many years the go-to has been Alfred, which added a whole category of other automations, customized searches, and an entire workflow engine for building your own. I still use it now for certain things, but lately I've been kicking the tires on a new entrant in the category called Raycast.
The basic usage of Raycast does the same thing Spotlight or Alfred does. Invoke a hotkey (in Raycast it's ⌥ + space), then type something to get a list of things to open. It adds a whole bunch of built-in macros for things like window management, local calendars, and more.
The main thing I've seen that Raycast gets you that the others don't is a "command bar" for any application. Or any one with a Raycast integration, anyway. What I mean is the cmd-K-style quick action bar that many modern apps are adopting — the likes of Superhuman, Linear, or Height. Productivity tools are rediscovering (again) the benefits of never taking your hands off the keyboard for raw speed.
Raycast takes this approach and extends existing apps with a centralized command bar. Through different Raycast integrations, from that single command bar I can do common operations on my calendar, todo list, Google Drive, GitHub, Slack, and a ton of others.
Marc Andreessen was recently interviewed by Noah Smith in his newsletter. It's a great post-pandemic update to Marc's views on technology (spoiler: he's still as optimistic as ever), following a year after his "Time to Build" essay.
When asked about the future of technology, he responds to the common criticism that tech often gives us progress in the virtual world, but not the physical one:
Software is a lever on the real world.
Someone writes code, and all of a sudden riders and drivers coordinate a completely new kind of real-world transportation system, and we call it Lyft. Someone writes code, and all of a sudden homeowners and guests coordinate a completely new kind of real-world real estate system, and we call it AirBNB. Someone writes code, etc., and we have cars that drive themselves, and planes that fly themselves, and wristwatches that tell us if we're healthy or ill.
Software is our modern alchemy. Isaac Newton spent much of his life trying and failing to transmute a base element — lead — into a valuable material — gold. Software is alchemy that turns bytes into actions by and on atoms. It's the closest thing we have to magic.
Innovations in the virtual realm don't directly change things in the physical, but they shift motivations and create new incentives for coordination.
Entropy is a state of disorder, the tendency of systems to move toward evenly distributed chaotic randomness. When an engineer creates an engine or a chip designer fabs a new chip architecture, we're pushing against the forces of entropy to maintain (and improve!) order.
What's amazing about software is that we can, through pure information (bits), resist entropy and incentivize order. It's truly an incredible feat. As Andreessen said, it's like alchemy.
In the introduction to How Innovation Works, Matt Ridley describes innovation as the "infinite improbability drive" (a term coined by Douglas Adams in Hitchhiker's Guide), a mechanism through which order is created from disorder:
Innovations come in many forms, but one thing they all have in common, and which they share with biological innovations created by evolution, is that they are enhanced forms of improbability. That is to say, innovations, be they iPhones, ideas or eider ducklings, are all unlikely, improbable combinations of atoms and digital bits of information.
It is astronomically improbable that the atoms in an iPhone would be neatly arranged by chance into millions of transistors and liquid crystals, or the atoms in an eider duckling would be arranged to form blood vessels and downy feathers, or the firings of neurons in my brain would be arranged in such a pattern that they can and sometimes do represent the concept of "the second law of thermodynamics".
Innovation, like evolution, is a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.
With code, we rearrange information in novel ways that move things in the real world. A new software program drives change in human behavior, a new savings in production cost, or connections between individuals that spur even further innovation. Ridley again, on thermodynamics:
In this universe it is compulsory, under the second law of thermodynamics, that entropy cannot be reversed, locally, unless there is a source of energy — which is necessarily supplied by making something else even less ordered somewhere else, so the entropy of the whole system increases.
Software is our method for converting mental energy into information, the lever to apply against entropy.
On the announcement of Airtable's latest round and $2.5b valuation (!), founder Howie Liu puts out a great piece on the latest round of changes in pursuit of their vision.
No matter how much technology has shaped our lives, the skills it takes to build software are still only available to a tiny fraction of people. When most of us face a problem that software can answer, we have to work around someone else's idea of what the solution is. Imagine if more people had the tools to be software builders, not just software users.
Artificial scarcity has been an effective tactic for certain categories of physical products for years. In the world of atoms, at least, scarcity can be somewhat believable if demand outstrips supply — perhaps a supply chain was underfilled to meet the demands of lines around the block. Only recently are we seeing digital goods distributed in similar ways, where the scarcity is truly 100% forced — where the scarcity is a line of code that makes it so. Apps like Superhuman or Clubhouse generate a large part of their prestige status from this approach.
Dopamine Labs, later called Boundless, provided a behavioral change API. The promise was that almost any app could benefit from gamification and the introduction of variable reward schedules. The goal of the company was to make apps more addictive and hook users. Like Dopamine Labs, a Scarcity API would likely be net-evil. But it could be a big business. What if a new software company could programmatically "drop" new features only to users with sufficient engagement, or online at the time of an event? What if unique styles could be purchased only in a specific window of time?
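As a thought experiment only, such a gate could literally be a few lines of code — every name below is invented for illustration:

```python
from datetime import datetime

def can_see_drop(engagement_score: float,
                 now: datetime,
                 window_start: datetime,
                 window_end: datetime,
                 min_engagement: float = 0.8) -> bool:
    # Artificial scarcity as a line of code: the feature "drops" only
    # for sufficiently engaged users, and only inside the event window.
    in_window = window_start <= now <= window_end
    return in_window and engagement_score >= min_engagement
```

The product decision (who deserves scarcity, and when) is entirely arbitrary; the enforcement is one boolean expression, which is exactly what makes forced digital scarcity so cheap to manufacture.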
Antonio Garcia-Martinez's review of Nadia Eghbal's book on open source, Working in Public.
It's really about how a virtualized, digital world decoupled from the physical constraints of manufacturing and meatspace politics manages to both pay for and govern itself despite no official legal frameworks nor institutional enforcement mechanisms. Every class of open source project in Eghbal's typology can be extended to just about every digital community you interact with, across the Internet.
I didn't realize until reading this piece that this movie was a commercial flop: $70m gross on a $76m budget. I remember seeing this several times in theaters, and many times after. This retrospective (from 2016) brought the film back to mind and makes me want to rewatch.
Google Earth led us to vastly overestimate the average user's willingness to figure out our map interfaces. The user experience was so novel and absorbing that people invested time into learning the interface: semi-complex navigation, toggling layers on and off, managing their own content, etc. Unfortunately, our stuff isn't so novel and absorbing and we've learned the hard way that even those forced to use our interfaces for work seem very uninterested in even the most basic interactions.
What's happening between the New York Times and psychiatrist-rationalist-blogger Scott Alexander is incredibly disappointing to see. In writing a story that includes him, they want to use his real name (which they somehow found out — Scott Alexander is a pseudonym), which seems completely unnecessary and absurd to the point of disbelief — given the Times' behavior and policies of late, there should be little benefit of the doubt given here. As a result, Scott has deleted his blog, one of the treasures of the internet.
I linked a few weeks ago to a new tool called Memex, a browser extension that touts itself as bookmarking for "power users of the web." Its primary unique differentiator is how they approach the privacy angle.
I'm a couple of weeks into using it, and it brings an interesting new approach to the world of bookmarking tools like Pinboard or Raindrop, both of which I've used a lot. Raindrop has been my tool of choice lately, but it's heavy for what I really want: a simple, fast way to toss things into a box, with tagging nomenclature to organize.
On the privacy topic from Wednesday's post, Memex is approaching their product with similar principles. It's a browser extension only, with no "cloud" element like most other services have. All of your data is stored client-side, and any attachments to the cloud have to be opted into, like syncing backups to Google Drive. It's got an open source core as well, for maximum transparency on how it works. Reading the vision document gives you a sense of where they're headed:
The long-term mission of WorldBrain.io is to enable people to overcome information overload and the influence of misinformation through collaborative online-research.
We can't research and understand all the topics we are exposed to well enough to not fall for misinformation. But we all are experts in some of those topics and could help each other understand them better — if we were able to share our existing knowledge more effectively with each other.
Decentralized knowledge management and web annotation is a movement I can get behind. I'm reminded of what Fermat's Library is doing with academic papers — creating a meta layer of knowledge connection on top of research source material. Passages highlighted in Memex could be referenced from other pages to denote connection points or similarities, building a user-generated knowledge graph on top of any internet content.
With Memex you never have to leave the browser. It overlays a small right-hand sidebar on hover with commands for bookmarking, adding tags, or displaying annotations. And they're following through on their promise for power users with keyboard shortcuts. It also offers the option of indexing your browser history, which if you're using DuckDuckGo but still want to archive your history for yourself could be useful. I don't care much one way or another about this particularly, but it's cool to have the option.
From mobile they have something interesting going. There's a "Memex Go" app that works well for quickly bookmarking things from the Share Sheet on iOS. Syncing is a paid feature that works through a pairing process with end-to-end encryption, moving data between mobile and desktop over wifi. I haven't tried this yet, but I'm looking forward to checking it out. Occasional syncing seems like all you'd need to move data between desktop and mobile, so this model could work fine.
I don't think Memex has any integrations yet with other tools, but ones that come to mind that I'd love to see at some point are with two of my favorites: Readwise and Roam. From a technical standpoint I'm not sure how one would integrate a client-side database like what Memex has with a server-side one, but perhaps there could be a "push" capability to sync data up from Memex on-demand to integration points. With Memex's highlights, perhaps I could decide if and when I want to send my highlights up to Readwise, rather than Readwise doing the pulling. In the case of Roam, even simple tools to drag highlights or bookmarks over as blocks in Roam pages would be a cool addition.
Software has multiple dimensions of elements and affordances, but the spatial dimension is underutilized in most software outside of the gaming world.
Designer John Palmer wrote this piece on how different types of applications could harness spatial relationships to improve how we think and interact.
Messaging apps are stacks of bubbles. Video calls are faces inside static rectangles. There are only so many degrees of freedom for users inside of these apps, which makes them simple to use. But this simplicity also strips away so much of the freedom we have during in-person interactions. You can type any message, but typing a message is all you can do. You can call any person, but talking directly into your camera is all you can do once you're connected. Without spatial interfaces, our current software doesn't give us much expressive control.
I found and started testing out Raindrop.io for managing bookmarks. Pinboard has been my go-to for this for years, to which I've added the resurrected Instapaper for anything like articles (to support Readwise flow).
The lack of a good mobile app for Pinboard makes that process pretty weak; and I'd say I end up adding half my bookmarks and "read later" stuff from the iPhone.
I imported all the bookmarks I had in Pinboard, which worked perfectly smoothly. Raindrop's desktop macOS app is fantastic as well.
Recommended so far. May write about it in more detail later once I get a feel for how it fits into my personal process.
Physicians hang diplomas in their waiting rooms. Some fishermen mount their biggest catch. Downstairs in Westborough, it was pictures of computers.
Over the course of a few decades beginning in the mid-40s, computing moved from room-sized mainframes with teletype interfaces to connected panes of glass in our pockets. At breakneck speed, we went from the computer being a massively expensive, extremely specialized tool to a ubiquitous part of daily life.
During the 1950s — the days of Claude Shannon, John von Neumann, and MIT's Lincoln Lab — a "computer" was a batch processing system. Models like the EDVAC were really just massive calculators. It would be another decade before the computer would be thought of as an "interactive" tool, and even longer before that idea became mainstream.
The 60s saw the rise of IBM and its mainframe systems. Moving from paper-tape time clocks to tabulating machines, IBM pushed its massive resources into the mainframe computer market. The S/360 dominated the computer industry until the 70s (and beyond, with the S/370), when the minicomputer emerged as an interim phase between mainframes and what many computer makers were pursuing: a personal, low-cost computer.
The emergence of the minicomputer should be considered the beginning of the personal computer revolution. Before that, computers were only touched by trained operators — they were too complex and expensive for students, secretaries, or hobbyists to use directly. Minis promised something different, a machine that a programmer could use interactively. In 1964, DEC shipped the first successful mini, the PDP-8. From then on, computer upstarts were sprouting up all over the country getting into the computer business.
The DEC PDP-8
One of those companies was Data General, a firm founded in 1968 and the subject of Tracy Kidder's book, The Soul of a New Machine. A group of disaffected DEC engineers, impatient with the company's strategy, left to form Data General to attack the minicomputer market. Founder Edson de Castro, formerly the lead engineer on the PDP-8, thought there was an opportunity that DEC was too slow to capitalize on with its minis. So DG designed and brought to market their first offering, the Nova. It was an affordable, 16-bit machine designed for general computing applications, and it made DG massively successful in the growing competitive landscape. The Nova and its successors sold like gangbusters into the mid-70s, when DEC brought the legendary VAX "supermini" to market.
DEC's announcement of the VAX and Data General's flagging performance in the middle of that decade provide the backdrop for the book. Kidder's narrative takes you inside the company as it battles for a foothold in the mini market not only against DEC and the rest of the computer industry, but also with itself.
The VAX was set to be the first 32-bit minicomputer, an enormous upgrade from the prior generation of 16-bit machines. In 1976, Data General spun up a project codenamed "Fountainhead," their big bet to develop a VAX killer, which would be headquartered in a newly-built facility in North Carolina. But back at their New England headquarters, engineer Tom West was already at work leading the Eclipse team in building a successor. So the company ended up with two competing efforts to create a next-generation 32-bit machine.
Data General's Eclipse S230
The book is the story of West's team as they toil, with limited company resources, against the clock to get to market with the "Eagle" (as it was then codenamed) before the competition, and before Fountainhead could ship. As the most important new product for the company, Fountainhead had drawn away many of the best engineers, who wanted to be working on the company's flagship product. But the engineers who had stayed behind weren't content to iterate on old products; they wanted to build something new:
Some of the engineers who had chosen New England over FHP fell under West's command, more or less. And the leader of the FHP project suggested that those staying behind make a small machine that would solve the 32-bit, logical-address problem and would at the same time exhibit a trait called "software compatibility."
Some of those who stayed behind felt determined to build something elegant. They designed a computer equipped with something called a mode bit. They planned to build, in essence, two different machines in one box. One would be a regular old 16-bit Eclipse, but flip the switch, so to speak, and the machine would turn into its alter ego, into a hot rod—a fast, good-looking 32-bit computer. West felt that the designers were out to "kill North Carolina," and there wasn't much question but that he was right, at least in some cases. Those who worked on the design called this new machine EGO. The individual initials respectively stood one step back in the alphabet from the initials FHP, just as in the movie 2001 the name of the computer that goes berserk—HAL—plays against the initials IBM. The name, EGO, also meant what it said.
What proceeded was a team engaged in long hours, nights and weekends, and hard iteration on a new product, racing to market before their larger, much better-funded compatriots down south. As West described it to his team, it was all about getting their hands dirty and working with what they had at hand — the definition of the scrappy upstart:
West told his group that from now on they would not be engaged in anything like research and development but in work that was 1 percent R and 99 percent D.
The pace and intensity of technology companies became culturally iconic during the 1990s with the tech and internet boom in that decade. The garage startup, living in a house together and working around the clock to build their products, became a signature of the Silicon Valley lifestyle. But the seeds of those trends were planted back in the 70s and 80s, and were on display with the Westborough team and the Eagle (which eventually went to market as the Eclipse MV/8000¹). Kidder spent time with the team on-site as they were working on the Eagle project, providing an insider's perspective of life in the trenches with the "Hardy Boys" (who made hardware) and "Microkids" (who wrote software). He observes the team's engineers as they horse-trade for resources. This was a great anecdote, a testament to the autonomy the young engineers had to get the job done however they could manage:
A Microkid wants the hardware to perform a certain function. A Hardy Boy tells him, "No way—I already did my design for microcode to do that." They make a deal: "I'll encode this for you, if you'll do this other function in hardware." "All right."
If you've ever seen the TV series Halt and Catch Fire, this book seems like a direct inspiration for the Cardiff Electric team in that show trying to break into the PC business. The Eagle team could represent any of the scrappy startups from the 2000s.
It's a surprisingly approachable read given its heavy focus on engineers and the technical nature of their work in designing hardware and software. The book won the Pulitzer in 1982, and has become a standard on the shelves of both managers and engineers. The Soul of a New Machine sparked a deeper interest for me in the history of computers, which has led to a wave of new reads I'm just getting started on.
In those days, you could always count on business products to have sufficiently boring names. ↩
I liked this newsletter post from Aron Korenblit on the no-code movement. The name overstates the problem as being with "code" in workflows, when the problem is really deeper than that:
How we feel about code — and why we want to avoid it — has in fact nothing to do with code itself. Code is a language that helps us tell inanimate objects what to do so we don't have to do it. Our real frustration lies with what usually comes with software projects: overblown budgets, long delays and expensive maintenance costs. That's what we're saying "no" to, not code.
These issues have nothing to do with code per se but are due to scope creep, poor planning and underestimating the difficulty of a task. Issues which, let me tell you, are also very, very present in projects using visual development tools (doesn't have the same ring to it, that's for sure…).
If what we're trying to communicate is that no-code helps get things done faster, we should elevate that fact in how we name ourselves instead of objecting to code itself.
We notice the same with customers of Fulcrum. We've been in the "no-code/low-code" market since 2011, and we've been seeing this for years. Most of our users are tech-literate, but a long way from being developers.
The common approach of all no-code platforms is to build visual user interfaces for dictating to the machine what you want it to do (that's all programming is, after all — a set of instructions guiding the computer on how you want to capture and process inputs to outputs). But even with drag-and-drop, configuration-based interfaces and pretty technical users, there are frequently hurdles to understanding and usage.
One of the biggest challenges is converting squishy, fungible processes that humans deal with easily into specific, articulated actions for the machine to take. I've always said that developing a useful software product is an exercise in first automating a workflow, then building in all the exception-handling required to do it reliably in the hundreds of scenarios the user could put it through.
So it's not only about the code; it's also about creating visual representations of abstract processes rapidly, then making it easy to iterate and experiment with them.
We've been doing some thinking on our team about how to systematically address (and repay) technical debt. With the web of interconnected dependencies and micro packages that exists now through tools like npm and yarn, no single person can track all the versions and relationships between modules. This post proposes a "Dependency Drift" metric to quantify how far out of date a codebase is on the latest updates to its dependencies:
Create a numeric metric that incorporates the volume of dependencies and the recency of each of them.
Devise a simple, high-level A-F grading system from that number to communicate how current a project is with its dependencies. We'll call this a drift score.
Regularly recalculate and publish for open source projects.
Publish a command line tool to use in any continuous integration pipeline. In CI, policies can be set to fail CI if drift is too high. Your drift can be tracked and reported to help motivate the team and inform stakeholders.
Use badges in source control README files to show drift, right alongside the project's continuous integration status.
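To make the proposal concrete, here's a rough sketch of what such a drift score could look like. The staleness formula (average days behind the latest release) and the grade thresholds are my own assumptions for illustration, not part of the proposal:

```python
from datetime import date

def drift_score(dependencies):
    """Hypothetical 'Dependency Drift' score: average staleness, in
    days, between the release date of each pinned version and the
    release date of the latest available version."""
    if not dependencies:
        return 0.0
    total_days = sum(
        (latest_release - pinned_release).days
        for pinned_release, latest_release in dependencies
    )
    return total_days / len(dependencies)

def drift_grade(score):
    """Collapse the numeric score into a simple A-F grade."""
    for threshold, grade in [(30, "A"), (90, "B"), (180, "C"), (365, "D")]:
        if score <= threshold:
            return grade
    return "F"

# Each dependency: (release date of the pinned version,
#                   release date of the latest version).
deps = [
    (date(2019, 11, 1), date(2020, 1, 15)),  # 75 days behind
    (date(2020, 2, 1), date(2020, 2, 1)),    # current
]
print(drift_grade(drift_score(deps)))  # average drift 37.5 days -> "B"
```

A CI step could then compare the score against a policy threshold and fail the build when drift climbs too high, which is the enforcement mechanism the list above describes.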
A technical write-up on a Google chatbot called "Meena," which they propose has a much more realistic back-and-forth response technique:
Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
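The "perplexity" objective mentioned in the excerpt has a simple definition: it's the exponential of the model's average negative log-probability on the observed next tokens. A minimal illustration (my own, not from the paper):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood), where
    token_probs are the probabilities the model assigned to each
    observed next token. Lower is better; 1.0 is perfect certainty."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every correct next word is,
# on average, "hesitating between" 4 equally likely options:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ~4.0
```

Intuitively, a perplexity of N means the model is as uncertain as if it were choosing uniformly among N next words, which is why minimizing it pushes toward more sensible responses.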
John Gruber uses the iPad's recent 10th birthday to reflect on missed opportunity and how much better a product it could have been:
Ten years later, though, I don't think the iPad has come close to living up to its potential. By the time the Mac turned 10, it had redefined multiple industries. In 1984 almost no graphic designers or illustrators were using computers for work. By 1994 almost all graphic designers and illustrators were using computers for work. The Mac was a revolution. The iPhone was a revolution. The iPad has been a spectacular success, and to tens of millions it is a beloved part of their daily lives, but it has, to date, fallen short of revolutionary.
I would agree with most of his criticisms, especially on the multitasking UI and the general impenetrability of the gesture interfaces. As a very "pro iPad" user, I would love to see a movement toward the device coming into its own as a distinctly different platform from macOS and desktop computers. It has amazing promise even outside of creativity (music, art) and consumption. With the right focus on business model support, business productivity applications could be so much better.
I didn't know this was out there: a funny yet astute and accurate observation from a software engineer at Google:
With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.
Even the tiniest, most trivial behavior of your API is not only desirable to some user out there, it's critical. We have a joke internally that goes something like:
"Don't change that {API response | setting behavior | data storage format}, my entire business model depends on it!"
The only edit I would make to the Law is that the quantity of users required to start uncovering emergent, unplanned, and assumed behaviors is much smaller than you would think.
We don't have anywhere near Google's scale of API or library usage, and we still encounter this regularly with our APIs.
An interesting technical breakdown of how Figma built their multiplayer tech (the collaboration capability where you can see other users' mouse cursors and highlights in the same document, in real time).
A fascinating paper. This research suggests the possibility that group-conforming versus individualistic cultures may have roots in diet and agricultural practices. From the abstract:
Cross-cultural psychologists have mostly contrasted East Asia with the West. However, this study shows that there are major psychological differences within China. We propose that a history of farming rice makes cultures more interdependent, whereas farming wheat makes cultures more independent, and these agricultural legacies continue to affect people in the modern world. We tested 1162 Han Chinese participants in six sites and found that rice-growing southern China is more interdependent and holistic-thinking than the wheat-growing north. To control for confounds like climate, we tested people from neighboring counties along the rice-wheat border and found differences that were just as large. We also find that modernization and pathogen prevalence theories do not fit the data.
An interesting thread to follow, but worthy of skepticism given the challenge of aggregating enough concrete data to prove anything definitively. There's some intuitively sensible argument here about the fundamental differences in subsistence practices between wheat and rice farming:
The two biggest differences between farming rice and wheat are irrigation and labor. Because rice paddies need standing water, people in rice regions build elaborate irrigation systems that require farmers to cooperate. In irrigation networks, one family's water use can affect their neighbors, so rice farmers have to coordinate their water use. Irrigation networks also require many hours each year to build, dredge, and drain, a burden that often falls on villages, not isolated individuals.
I've talked before about my astonishment with the immune system's complexity and power. This piece talks about tuft cells and how they use their chemosensory powers to identify parasites and alert the immune system to respond:
Howitt's findings were significant because they pointed to a possible role for tuft cells in the body's defenses, one that would fill a conspicuous hole in immunologists' understanding. Scientists understood quite a bit about how the immune system detects bacteria and viruses in tissues. But they knew far less about how the body recognizes invasive worms, parasitic protozoa and allergens, all of which trigger so-called type 2 immune responses. Howitt and Garett's work suggested that tuft cells might act as sentinels, using their abundant chemosensory receptors to sniff out the presence of these intruders. If something seems wrong, the tuft cells could send signals to the immune system and other tissues to help coordinate a response.
Given the massive depth of knowledge about biological processes, anatomy, and medical research, it's incredible how much we still don't know about how organisms work. Evolution, selection, and time can create some truly complex systems.
I love this story about Access and how it's still hanging on with a sizable user base after almost a decade of neglect by its parent. It goes to show that there are still gaps in the market being filled by 10-year-stale applications. Getting users to unlearn behaviors is much harder than giving them an Airtable, Webflow, or Fulcrum; there's too much comfort and muscle memory to overcome.
Its long lifespan can be attributed to how it serves the power user, as well as how simple it makes creating a relational database in so few steps ("it just works"):
Power users can be a dangerous group to help. With a little knowledge, you can make a very powerful weapon for shooting yourself in the foot. But there is a serious untapped potential here. Give a technical person a way to solve their problems that doesn't involve writing pages of code, and they can make a difference: automating small tasks, managing their own islands of data, and helping to keep their local environment organized and effective.
Today, there remains a hunger for codeless or code-light tools. Motivated people want to do their jobs without paying expensive professionals for every semicolon. But so far the only offerings we've given them are a VBA macro language from a generation ago and pricey tools like PowerApps that only work if your business signs up for a stack of Microsoft cloud products.
The low/no-code universe of applications has been expanding over the last couple of years. We're riding this wave ourselves and have been betting on it since 2011. In the market for data tools, there's a spectrum of user capability: from the totally non-technical on one end (Excel might be a stretch for these folks) to the developer user on the other (where's my SQL Server?). What stories like this demonstrate is that the "power user" middle is a lot wider than most of us have predicted.
Any business that makes money from the same customer more than once can be said to have "recurring revenue." But in the SaaS universe the term has a more specific flavor, thanks to the unique nature of the business model, value delivery, and the commitments between vendor and consumer. You may think "so what" when you hear that SaaS revenue is special or somehow better than other ways of making money; after all, the money's still green, right? But a number of benefits come with the "as-a-service" relationship between vendor and customer. Software companies fit uniquely well with a subscription-based model because of the fixed, up-front-heavy nature of the investments required to build software platforms. In a traditional business performing services or building physical products, new customers come with higher marginal costs: the cost incurred to add each new dollar of revenue. With hosted software there are certainly marginal costs (scaling servers with growth, providing support, etc.), but the gross margins are typically much higher. And if done efficiently, that margin can stay very high even at scale.
Let's review some advantages of SaaS, each of them a mutual advantage to both the vendor and customer1:
Simpler adoption
Because the customer is buying a product that already exists (not a bespoke one), there's no need to wait on complex customizations out of the gate to realize initial value. To maximize customer growth and expansion velocity, developers are motivated to create smoother implementation experiences, harnessing tools like Intercom with in-app walkthroughs, on-boarding, guides, and help content to get the customer off the ground. A traditional "old world" software company is less motivated to make on-boarding such a smooth experience, since they're often already required to do on-premise implementations and trainings for users. There are certainly enterprise SaaS products that move into this arena (i.e. non-self-service), but typically that's due to certain business workflows being reliant on custom integrations or customer data imports (think ERP systems). Also, because a customer can start small with "pilot"-sized engagements at no additional cost to the vendor, they can ramp up more comfortably.
Low initial costs
Related to adoption, the customer can control their costs transparently as they scale, seeing impact before they expand team-wide. Once initial ROI is visible, subsequent expansion is less painful and much easier to justify; after all, you have results proving that the product is useful before going all-in. The ability to hedge risk in scaling up by monitoring value returned was hard to achieve in the days before service-based products.
Reduced time to benefit
Since the customer can lower the requirements for an initial rollout, they can see benefit quickly. Rather than having to take a salesperson's word that you'll see an ROI in 6 months, a 30-day pilot can deliver answers much more quickly. Don't take the vendor at their word; use it yourself and prove the value. Imagine what it would take to realize any benefit from a completely custom-built piece of software. (Hint: a long time, or maybe never if you don't ship it. This should cross a customer's mind when they want to build instead of buy.)
Economies of scale
The SaaS vendor is responsible for hosting, improving, and developing the core systems to the benefit of many at once. The revenue benefit of individual improvements or features is realized across the entire customer base. With good execution, the economy of scale can make a new feature or capability cheaper for each customer while generating more aggregate revenue for the vendor; everyone wins. Compare this with scaling boxed software, or even self-hosted, on-site software, where human hours are required for each customer to deliberately receive new things of value. With product maturity, not all new developments provide equal value to every customer, which is where product packaging and positioning become critical to align costs and outcomes.
Continuous (versus staggered) upgrade
Any engineer knows that small, frequent updates beat large, infrequent ones when it comes to efficiency. The overhead involved with testing and shipping each release is minimized, then spread over hundreds of small deployments. With tools like continuous integration, automated testing, and rolling deployment, developers can seamlessly (and with low risk) roll out tiny incremental changes all the time, silently. Every SaaS product of note works this way today, and often only the largest customer-facing features are announced to customers at all. With many small releases instead of a few large ones, the surface area for potential problems shrinks enormously, making a catastrophic problem far less likely. Also, customers don't have to arbitrarily wait for the ArcMap 10.1 update or the annual release to receive a minor enhancement or bug fix.
Alignment of incentives
This, to me, is one of the most important factors. Two parties that share incentives make for the most advantageous economic relationships. Both vendor and customer have incentives that benefit both sides baked into the business model, and importantly, these incentives are realized early and often:
Customer Incentive: Since the customer has a defined problem for which they seek a solution (in the form of software), they're incentivized to pay proportionally for received value, security, attention, support, and utility. With a subscription pricing model, customers are happy to pay for a subscription that continues to deliver value to them month after month.
Vendor Incentive: For a vendor, the real money is made not from the first deal with the customer, but from a continued relationship over a (hopefully) long period of time. The goal is to maximize this number, known as lifetime value (LTV), across all customers; a good product-market fit and customer success strategy leads to long LTV and therefore very large revenue numbers. To realize that LTV, however, it's incumbent upon the vendor to stay focused on delivering the above: value, security, support, et al.
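The LTV arithmetic is worth making concrete. A common back-of-envelope formula (a general SaaS heuristic, not something spelled out in this post) divides margin-adjusted monthly revenue by the monthly churn rate:

```python
def lifetime_value(monthly_revenue, gross_margin, monthly_churn):
    """Back-of-envelope SaaS LTV: margin-adjusted monthly revenue
    divided by monthly churn. Expected customer lifetime is
    1 / monthly_churn months."""
    return monthly_revenue * gross_margin / monthly_churn

# A $100/month customer at 80% gross margin with 2% monthly churn:
# expected lifetime 50 months, so LTV = 100 * 0.8 * 50 = $4,000.
print(lifetime_value(100, 0.80, 0.02))  # 4000.0
```

The formula makes the incentive alignment explicit: cutting churn in half doubles LTV, which is why continued investment in customer success pays for itself.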
With these incentives in lock-step, everyone wins. After achieving product-market fit and a repeatable solution for customers, you turn attention toward continued engagement in customer success, incremental value-add through enhancements and new features, and a long-term customer relationship based on mutual exchange of value. The best customers not only drive high revenues to the top line, but also become better companies as a result of using your software. We've had this happen with Fulcrum customers, and for a product developer, it's the thing that gets you out of bed in the morning; it's why we do what we do, not just to make money, but to transform something from good to great.
Alignment in vendor-customer goals used to be harder to achieve in the pre-SaaS era. A vendor needed only to be "good enough" to secure the single-point initial purchase, and could largely shirk their end of the bargain in successive months2.
Subscription models for physical products
Subscription businesses are no longer limited to software. We now see companies operating in the physical realm moving into subscription models: Lyft Pass for transit, Blue Apron for food delivery, or even Apple's movement in this direction with its Upgrade Program for iPhones3. Once the economics make this possible (up-front capital costs are higher for physical products than for software), the subscription model often turns into a better deal for both sides.
The market is moving toward services for everything, which is a good thing for the industry all around. Okta's Businesses at Work report for 2019 finds that their customers commonly use over 100 apps with Okta within the first year of use. In fact, all of the trends they report show strong motion up and to the right. Given what I said previously about incentive alignment, I'm a believer that these trends are great for the software economy as a whole, with all parties benefiting from a healthier marketplace.
I wrote a post on this topic a while back, but thought Iâd revisit these advantages in more specific detail. ↩
Of course over time this would catch up to you, but you could get away with it far longer than you can in SaaS. ↩
Ben Thompson recently wrote about the prospects of Apple moving further in this direction â offering a subscription to the full âApple Suiteâ. ↩
I signed up for Readwise earlier this week after seeing it mentioned on Twitter somewhere. If you read a lot of ebooks on Kindle and make highlights, it's a useful tool for recall and remembering what you read. Each day it sends you a brief digest with 5 highlighted passages from books you've read in the past. I'm even getting reminders of highlights I made in books 7 or 8 years ago. Already it's made me consider going back and re-reading some things. It's a neat service.
Jason turned me onto this Chrome extension for Strava data analysis called Elevate. It's a pretty amazing tool that adds deep analytics on top of the already-rich data Strava provides natively as part of their Summit plan.
In addition to having its own metrics like this fitness/freshness curve, it overlays additional metrics into the individual activity pages on the Strava website. My favorite ones are this (which Strava has its own simpler version of) and the year-over-year comparison graph, which lets you see your progression in total mileage over time:
I love to see the consistency this year reflected visually like this. I feel like I'm doing well staying on course for hitting my goals, and this cements it. I was surprised to see how well I was doing in 2017 before the health issues struck. My long-term goal is to be able to exceed that trend in 2020 after making progress on the fitness front this year.
This group is building some interesting tools to expose and enable sharing and collaboration on academic papers.
We develop software to help illuminate academic papers. Just as Pierre de Fermat scribbled his famous last theorem in the margins, professional scientists, academics and citizen scientists can annotate equations, figures, ideas and write in the margins.
They have a tool called Margins, which allows researchers to upload, annotate, and share academic papers, and another neat one called Librarian, a Chrome extension for comments and annotations for arXiv papers.
Wireframing is a critical technique in product development. Most everyone in software does a good bit of it for communicating requirements to development teams and making iterative changes. For me, the process of wireframing is about figuring out what needs to be built as much as how. When we're discussing new features or enhancements, rather than write specs or BDD stories or something like that, I go straight to pen and paper or the iPad to sketch out options. You get a sense for how a UI needs to come together, and, for us visual thinkers, the new ideas really start to show up when I see things I can tweak and mold.
We've been using Moqups for a while on our product team to do quick visuals of new screens and workflows in our apps. I've loved using it so far: its interface is simple and quick to use, it's got an archive of icons and pre-made blocks to work with, and it has enough collaboration features to be useful without being overwhelming.
We've spent some time building out "masters" that (like in PowerPoint or Keynote) you can use as baseline starters for screens. It also has a feature called Components where you can build reusable objects, almost like templates for commonly-used UI affordances like menus or form fieldsets.
One of the slickest features is the ability to add interactions between mocks, so you can wire up simulated user flows through a series of steps.
I've also used it for things like architecture diagrams and flowcharts, which it works great for. Check it out if you need a wireframing tool that's easy to use and all cloud-based.
As pointed out in this piece from Rahul Vohra, founder of Superhuman, most indicators of product-market fit are lagging indicators. With his company he was looking for leading indicators so they could more accurately predict adoption and retention after launch. His approach is simple: polling your early users with a single question, "How would you feel if you could no longer use Superhuman?"
Too many of the methods in the product development literature orient around asking for user feedback in a positive direction, with questions like "how much do you like the product?" or "would you recommend it to a friend?" Coming at it from the counterpoint of "what if you couldn't use it?" reverses this. It makes the user think about their own experience with the product, versus a disembodied imaginary user that might use it. It brought to mind a piece of the Paul Graham essay "Startup Ideas" on what happens when you go with the wrong measures of product-market fit:
The danger of an idea like this is that when you run it by your friends with pets, they don't say "I would never use this." They say "Yeah, maybe I could see using something like that." Even when the startup launches, it will sound plausible to a lot of people. They don't want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.
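Vohra's method (building on Sean Ellis's survey technique) scores the answers by bucketing them and tracking the share who answer "very disappointed"; the benchmark commonly cited for product-market fit is 40%. A minimal sketch of the scoring, with made-up responses:

```python
from collections import Counter

def pmf_score(responses):
    """Share of respondents answering 'very disappointed' to
    'How would you feel if you could no longer use the product?'"""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

# Hypothetical survey results from 50 early users:
responses = (
    ["very disappointed"] * 22
    + ["somewhat disappointed"] * 20
    + ["not disappointed"] * 8
)
print(f"{pmf_score(responses):.0%}")  # 44% -> above the 40% bar
```

Because the metric comes from users' own loss aversion rather than hypothetical enthusiasm, it sidesteps exactly the "sum that reaction and you have zero users" trap Graham describes.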
Remote work is creeping up in adoption as companies become more culturally okay with the model, and as enabling technology makes it more effective. In the tech scene it's common for companies to hire remote, to a point (as Benedict Evans joked: "we're hiring to build a communications platform that makes distance irrelevant. Must be willing to relocate to San Francisco."). It's important for the movement that large and influential companies like Stripe take this on as a core component of their operation. Companies like Zapier and Buffer are famously "100% remote," a new concept that, if executed well, gives companies an advantage, allowing them to compete in markets they might never be able to otherwise.
Marco Arment just shipped a new update to Overcast (the best iOS podcast app) with an awesome new feature that allows you to clip sections of an episode to share. Since its early days you've been able to share a podcast episode with an anchor link to a timestamp, but this new clip-sharing ability is a major improvement that should make podcast content easier to point people to and recommend.
This doesnât just allow you to create a clip to share, it even generates a nice embeddable video (landscape, square, or portrait) for posting through social channels.
Many times I've wanted to clip segments just to save for myself. This should be super useful for archiving interesting conversations.
We recently began a corporate sponsorship of the QGIS project, the canonical open source desktop GIS. I got back into doing some casual cartography work using QGIS back in December after a years-long hiatus from using it much (I don't get to do much actual work with data these days). Bill wrote up a quick post about how we rely on it every day in our work:
As a Mac shop, the availability of a sophisticated, high-quality, powerful desktop GIS that runs natively on MacOS is important to us. QGIS fits that bill and more. With its powerful native capability and its rich third-party extension ecosystem, QGIS provides a toolset that would cause some proprietary software, between core applications and the add-on extensions needed to replicate the full power of QGIS, to cost well more than a QGIS sponsorship.
The team over at Lutra Consulting, along with the rest of the amazing developer community around QGIS have been doing fantastic work building a true cross-platform application that stacks up against any other desktop GIS suite.
This post from Hiten Shah tells the backstory of one of the more interesting startups around these days, Airtable. As the title states, Airtable's objective is to build a no-code system for data management and application-building. At a billion-dollar valuation now, they've clearly knocked it out of the park with their product-led growth strategy.
Even in the face of the 5,000-pound gorilla of Excel, they've broken things open and are beginning to attract mass-market users.
Microsoft had been the undisputed king of productivity software for 20 years by the time Airtable went into development in 2012. What Microsoft (and virtually every other software company in the world) had failed to do was create a new spreadsheet product that people wanted to use. The ubiquity of Microsoft Office guaranteed Excel's "success," but it was stagnating as a product. Aside from some relatively minor performance updates and a handful of newer features, Excel looked and felt much the same as it did when it launched in the early '90s. Microsoft's Access had fared a little better (for a 26-year-old product) but was still aimed squarely at database administrators who possessed the programming and scripting skills necessary to work with SQL databases.
I was a big del.icio.us user back in the day, pre- and post-Yahoo. For anyone unfamiliar, it was one of the first tools (before Twitter) for sharing web links and making bookmarks social.
I signed up for Pinboard around the time it launched. Creator Maciej Cegłowski had an interesting concept for making his service paid, a tactic that could allow it to generate enough revenue to be self-sustaining and avoid the acquisition & stagnation that del.icio.us suffered at the hands of Yahoo after they acquired it in 2005.
When it launched it cost around $3 to join, a one-time fee to get in the door that could fund development and hosting, but most importantly deter the spam that plagued del.icio.us over time. His idea was to increase the signup fee by a fraction of a cent with each new user, which functioned as a clever way to increase revenue, but to also incentivize those on the fence to get in early.
I stopped using bookmarking tools for a while, in favor of using Instapaper mostly to bookmark articles and read them later. But a couple of things pushed me back to Pinboard recently. First, there are all the items I want to save and remember that aren't articles, just links. Instapaper could certainly save the URL, but that's not really the service's intent. Second is the fact that I don't tend to use Instapaper's in-app reading mode anyway; most of the time I just click through and read articles on their source websites.
Since I'm keeping track of and documenting more of the interesting things I run across here on this site, Pinboard helps to keep and organize them. Pinboard's description as an "anti-social bookmarking" tool is an apt one, for me. I have all of my bookmarks set to private anyway. I'm not that interested in using it as a sharing mechanism; I've got plenty of those already between this blog, Twitter, and others.
For mobile I bought an app called Pinner that works well for adding pins right from the iOS share sheet, and also for browsing bookmarks. I'm liking this setup so far and finding it useful for archiving stuff and as a read-later tool for the flood of things I get through RSS and Twitter.
Since I got the Mavic last year, I haven't had many opportunities to do mapping with it. I've put together a few experimental flights to play with DroneDeploy and our Fulcrum extension, but outside of that I've mostly done photography and video stuff.
OpenDroneMap came on the scene a couple of years ago as a toolkit for processing drone imagery, and I've been following it loosely through the Twittersphere since. Most of my image processing has been done with DroneDeploy, since we'd been working with them on some integration between our platforms, but I was curious to take a look once I saw the progress on ODM. What specifically caught my attention was WebODM, a web-based interface to the ODM processing backend, intriguing because it'd reduce the friction in generating mosaics and point clouds, with sensible defaults and a clean, simple map interface to browse the resulting datasets.
The WebODM setup process was remarkably smooth, using Docker to stand up the stack automatically. All the prerequisites you need to get started are git, Python, and pip, which I already had. With only these three commands, I had the whole stack set up and ready to process:
Pretty slick for such a complex web of dependencies under the hood, and a great web interface in front of it all.
Using a set of 94 images from a test flight over a job site in Manatee County, I experimented first with the defaults to see what it'd output on its own. I did have a bit of overlap on the images, maybe 40% or so (you need overlap to generate quality 3D). I had to increase the RAM available to Docker and reboot everything to get it to process properly, I think because my image set was pushing 100 files.
That project with the default settings took about 30 minutes. It generates the mosaicked orthophoto (TIF, PNG, and even MBTiles), a surface model, and a point cloud. Here's a short clip of what the results look like:
This is why open source is so interesting. The team behind the project has put together great documentation and resources to help users get it running on all platforms, including running everything on your own cloud server infrastructure with extended processing resources. I see OpenAerialMap integration was just added, so I'll have to check that out next.