This piece from Anton Howes gets at one of the key insights about how innovation works: it doesn’t happen through sudden bursts of insight from thin air — it requires the combination of the right simmering ingredients and a person in search of solutions to specific problems:
Santorio’s claim, it seems, is safe. But in this lies an important lesson for all would-be inventors. The inverted flask experiment had been around for centuries, and even been understood since ancient times as being caused by hot and cold. So its application as a thermometer was extremely low-hanging fruit. The likelihood of it being interpreted as a temperature-measuring device might have increased somewhat in the mid-sixteenth century, when we find the first mentions of it being done using a glass flask rather than an opaque metal container. Yet even then, the visible rise and fall of the liquid in the open bucket, rather than the flask, could always have been noted and measured against a scale in much the same way. What Antonini’s letter also shows us is that even when a scale was applied to the experiment, an ingenious person who knew their cutting-edge science like he did could still fail to appreciate the potential of what they had done.
In this case, the “inverted flask” had existed for many years, and Santorio was actively searching for ways to measure temperature. Innovation requires the right mixture of “prior art” and willful intent in search of solutions. Progress doesn’t happen automatically!
This paper describes an experiment to merge freeform document-style text and structured computations into a single interface.
We think a promising workflow is gradual enrichment from docs to apps: starting with regular text documents and incrementally evolving them into interactive software. In this essay, we present a research prototype called Potluck that supports this workflow. Users can create live searches that extract structured information from freeform text, write formulas that compute with that information, and then display the results as dynamic annotations in the original document.
The idea is to combine the open-endedness of documents with the benefits of some schema, enabling computation and structure. Documents are powerful for their flexibility to accommodate all kinds of information, but the trade-off is a static environment, whereas applications can be dynamic and responsive. Potluck offers a glimpse of what happens when you get some of both worlds.
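The core loop Potluck describes — search for patterns in plain text, compute over the matches, annotate the results inline — can be sketched in a few lines of Python. This is not Potluck's implementation; the regex, function names, and recipe-scaling scenario are all invented here for illustration:

```python
import re

# A toy "schema": quantities with units, pulled out of plain text.
# (Hypothetical pattern; Potluck's real searches are user-defined.)
PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(cups?|tbsp|tsp|g|ml)\b")

def extract_quantities(text):
    """The 'live search' step: find structured (amount, unit) pairs in freeform text."""
    return [(float(m.group(1)), m.group(2)) for m in PATTERN.finditer(text)]

def annotate_scaled(text, factor):
    """The 'formula' step: compute over each match and render the result
    as an inline annotation next to the original text."""
    def scale(m):
        amount = float(m.group(1)) * factor
        return f"{m.group(0)} [x{factor}: {amount:g} {m.group(2)}]"
    return PATTERN.sub(scale, text)

doc = "Mix 2 cups flour with 1.5 tsp salt and 250 ml water."
print(extract_quantities(doc))   # [(2.0, 'cups'), (1.5, 'tsp'), (250.0, 'ml')]
print(annotate_scaled(doc, 2))
```

The document stays plain text throughout; the structure is a live view derived from it, which is what makes the gradual docs-to-apps enrichment possible.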
Try out some of the examples. It’s impressive and shows promise for the future of the low-code / build-your-own-software movement.
I talk all the time about trial and error. The freedom to let yourself make mistakes, and the skill to make sure they’re not too destructive, are superpowers. With every interesting innovation, company, or product, you’re seeing the late stage of a long chain of missteps and failure. As long as you have the right mindset, mistakes are learning.
We talk about this as a product team — short cycles, iteration, feedback loops — ways to navigate toward broader visions while surviving and building something increasingly useful along the way. I also talk about it with the kids. The more you practice hitting off the tee, the better you’ll get at hitting the ball. The more you draw pictures, the better you get at it. Practice through the frustration. I try to reinforce with them that everyone who’s great at something got there through an incredible volume of failure and shortcoming before reaching the skill you see today.
If you’ve ever built anything physical, like woodworking, crafting, or DIY stuff around the house, you’ll be familiar with making mistakes, often costly ones. There’s no frustration quite like taking a furniture workpiece you’ve glued up from other parts, honed, mortised, and sanded, then cutting a miter in the wrong place or cutting it down too short. Hours and hours of work can vaporize in a second. I’ve made project mistakes like this so many times, and each time there’s a part of you that wants to put it all down and just go turn on Netflix. But great creators are made by their ability to recover from these mistakes — both in the tactical methods to fix them and the mental drive to “just fix it” and power through.
Mistakes are where most of the learning is in the creative process. It’s not only through the feedback loop of trial and error either. The more mistakes you make and navigate through, the better you get at accommodating and recovering from them.
My grandfather was a hobbyist woodworker for much of his life, cranking out hundreds of heirloom pieces over the years. If you ever asked him about making mistakes, he used to say “making mistakes means you’re doing things.” No person is immune from error. By definition, if you aren’t making mistakes, you aren’t really doing anything. Or maybe nothing interesting or challenging.
New Metaphors is a project to help spur creative thinking through metaphor. It’s a deck of cards you can use in exercises to help stimulate new perspectives on an existing idea:
A metaphor is just a way of expressing one idea in terms of another. This project is a nightmare. The city is a playground. You are a gem. Creating new metaphors could help us design new kinds of product, service, or experience, and even help us think about and understand the world differently.
New Metaphors (buy a printed pack, or download for free) is a set of 150 cards (two different kinds) and some fairly simple methods for running workshops, brainstorming (individually or in groups), discussions, and other creative activities.
I’m reminded of something from David Epstein’s Range, where he writes about the importance of analogies to creative, connective thinking. Astronomer Johannes Kepler was known to use analogies to reframe problems he was working on:
Kepler was facing a problem not just new to himself, but to all humanity. There was no experience database to draw on. To investigate whether he should be the first ever to propose “action at a distance” in the heavens (a mysterious power invisibly traversing space and then appearing at its target), he turned to analogy (odor, heat, light) to consider whether it was conceptually possible. He followed that up with a litany of distant analogies (magnets, boats) to think through the problem.
Most problems, of course, are not new, so we can rely on what Gentner calls “surface” analogies from our own experience. “Most of the time, if you’re reminded of things that are similar on the surface, they’re going to be relationally similar as well,” she explained. Remember how you fixed the clogged bathtub drain in the old apartment? That will probably come to mind when the kitchen sink is clogged in the new one.
Argentina has become infamous for its decades-long struggles with inflation and economic instability. For an otherwise fairly well-off nation, it’s surprising to outsiders how deep this problem runs.
In this episode of EconTalk, Devon Zuegel talks about an article she wrote on this topic, after spending time there and investigating the problems for herself. What’s most surprising about all this is how pervasive a problem it is. Inflation touches everyone; everyone is hyper-aware of money issues and constantly thinking about techniques to avoid inflation’s negative impacts.
Often you hear product teams talking about their roadmap in investor-like terms: managing their upcoming initiatives like a financial portfolio, distributing their bets among things of various categories, sizes, and risks, the way a portfolio manager would hedge risk. Umang Jaipuria makes the case that product teams should invert this logic: concentrating their bets (and therefore money and energy) behind fewer things.
All product teams should have ONE bet they are making, and put all their effort into making that successful. The fact of the matter is that a product team is not an entire company. A product team does not need to “survive” against all odds and hedge their risk. A product team needs to build a successful product. Even if that means failing after a while and having to start anew or go into maintenance mode or join other product teams that need to scale (that dreaded phrase ‘get reorg-ed’).
He also references the world-class investor Stanley Druckenmiller, and his similar philosophy that’s helped him to generate ridiculous returns over his investing career through this type of extremely calculated, concentrated betting:
“Yeah it’s completely contrary to what they teach in business school which is if you’re highly diversified you have less risk than if you’re highly concentrated. I don’t believe that at all. As an investor when I think most people get in the most trouble is when they have stale longs or stale shorts. When you’ve got 15-20 percent of your asset base or sometimes in macro positions I’ll have two or three hundred percent. Believe me they’re not getting stale and you have to have ruthless discipline and you’re coming in every day just to quote Andy Grove “you could not be more paranoid” and you’re constantly reevaluating.”
As Druckenmiller has famously said, “It’s not whether you’re right or wrong it’s how much you make when you’re right and how much you lose when you’re wrong.”
Norway is in the planning stages of a tunnel for ships to bypass sailing around the Stad peninsula, an infamously dangerous spot with high winds, rough waters, and foul weather. It’s a 2km pathway under the base of the peninsula. Based on a rough map calculation, it’ll save ferries and other ships over 30 miles of rough sailing into the open Atlantic.
When you look at the fjord-laden coastline of Norway — a thousand miles of sliced up mountains and deep chasms — it’s sort of surprising that this hasn’t been attempted before.
Building products that address long-tail user needs (i.e. the wide variety of infrequent-but-sometimes-painful needs of specific users) requires somehow providing users an open-ended landscape to create a solution. It’s the promise of the entire “low-code” tool space. We want to create a playground with appropriate guardrails that lets users discover and build their own solutions. Since the tool-builder can’t possibly understand the intricate details of the long-tail of user problems, we want a solution to actually enable the emergence of solutions we didn’t predict or design for.
In this post, Kasey Klimes compares this sort of emergence-friendly design model to approaches like user-centered design:
Design for emergence is open-ended. There’s no room for surprise in high modern or user-centered design, unless the design is exapted for an unintended use (see “Design Exaptation” in the bottom right quadrant of the 2x2 above). Meanwhile, a key characteristic of design for emergence is that the end design may be something that the original designer never imagined. Whereas exaptation may indicate a design failure, this kind of surprise is an indication that the designer has succeeded in nurturing emergence.
Design for emergence is permissionless. It empowers people by way of its constitution even though it can never know what people will do with that power. In contrast to user-centered design, design for emergence invites the user into the design process not only as a subject of study, but as a collaborator with agency and control.
Every product has to consider its floor, ceiling, and walls. Meaning, how easy is it to get going (floor), how advanced can I get with it (ceiling), and what variety of things can I solve with it (walls). The best emergence-designed products have a low floor, wide walls, and a high ceiling.
This was an interesting post with background on the design of the original ARPANET protocols, and the layering architecture developed by its creators.
The IMPs (Interface Message Processors) were the key to interconnecting the original four ARPANET sites, providing a mechanism for message interchange between systems that couldn’t speak the same language. They needed a “translator” to sit between each site’s system and the other sites to convert messages into universally-interpretable formats. The entire architecture of ARPANET was an interesting proto-network whose uniqueness (run by a single entity, BBN / ARPA) allowed it some rigidity in the root protocol design and hierarchy. As TCP/IP was being developed, it needed to support a “network of networks” (the Internet):
The ARPANET protocols were all later supplanted by the TCP/IP protocols (with the exception of Telnet and FTP, which were easily adapted to run on top of TCP). Whereas the ARPANET protocols were all based on the assumption that the network was built and administered by a single entity (BBN), the TCP/IP protocol suite was designed for an inter-net, a network of networks where everything would be more fluid and unreliable. That led to some of the more immediately obvious differences between our modern protocol suite and the ARPANET protocols, such as how we now distinguish between a Network layer and a Transport layer. The Transport layer-like functionality that in the ARPANET was partly implemented by the IMPs is now the sole responsibility of the hosts at the network edge.
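The “translator at the edge” idea behind the IMPs can be sketched as an adapter pattern: each site keeps its local message conventions, and its IMP converts to and from a shared wire format. Everything here — the class names, the `StandardMessage` format, the toy host encodings — is invented for illustration; the real IMPs were far more involved (routing, flow control, reliability), not just format conversion:

```python
from dataclasses import dataclass

# Hypothetical shared wire format that all "IMPs" agree on.
@dataclass
class StandardMessage:
    source: str
    destination: str
    payload: bytes

class IMP:
    """Sits between one host and the network, translating the host's
    local message format into the shared one (and back)."""
    def __init__(self, site, encode, decode):
        self.site = site
        self.encode = encode   # host-local message -> wire payload
        self.decode = decode   # wire payload -> host-local message

    def send(self, dest, host_msg):
        return StandardMessage(self.site, dest, self.encode(host_msg))

    def receive(self, msg):
        return self.decode(msg.payload)

# Two sites with incompatible local conventions (toy examples):
# one host speaks lowercase ASCII, the other uppercase.
ucla = IMP("UCLA", lambda s: s.encode("ascii"), lambda b: b.decode("ascii"))
sri = IMP("SRI", lambda s: s.upper().encode("ascii"),
          lambda b: b.decode("ascii").lower())

wire_msg = ucla.send("SRI", "login")
print(sri.receive(wire_msg))  # delivered intact despite differing host conventions
```

The TCP/IP shift described above effectively moved this translation and transport responsibility out of the network's interior and onto the hosts at the edge.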