Here's a useful way of thinking about the domains of our three branches of government, from Yuval Levin's American Covenant:
It is only a slight exaggeration to say that the Congress is expected to frame for the future, the president is expected to act in the present, and the courts are expected to assess the past. These boundaries are not perfectly clean, of course.
How distinct this delineation of roles is in practice is up for debate, but it's a useful way to think about the Framers' intention in designing the balanced separation of powers.
Legislators frame laws for the future
The president acts on them today
Judges compare what's happening today and planned for tomorrow against precedents set in the past
We tend to run into trouble when any of the branches strays outside its primary domain.
When the executive is designing Big Plans for the future, or when the judiciary is issuing punishments in the present, or (in what I'd say is our worst problem today) the legislature isn't doing anything at all, we get into fraught territory.
While each of these has its primary focus on a particular time horizon, they aren't the sole arbiters of decision making with respect to their domain. Our complex array of checks allows each to assert influence over other areas. I just find this a useful compression of the general model of the system.
Charles Mann on the unseen, unappreciated wonders of modern infrastructure:
The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don't inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent them from dying of dysentery. Like almost everyone else, they rarely note the existence of the systems around them, let alone understand how they work.
The invisible yet essential layer of infrastructure he's describing becomes extremely visible during something like an extended power outage in a hurricane. Even with communications still up, generators in our yards, and food in our refrigerators, a few days without power feel like an eternity.
Western Electric was the captive equipment arm of the Bell System and produced the majority of the telephones and related equipment used in the U.S. for almost 100 years.
In 2016 we visited Aubeterre in southern France. This was inside the subterranean monolithic Church of Saint-Jean, hollowed out of the mountainside in the 7th century.
I just learned about bezoars and Goa stones (a man-made form of bezoar). A bezoar is a chunk of aggregated minerals, seeds, piths, and other indigestible material that forms in the digestive system of animals. People would shave off small bits to add to tea and other beverages for medicinal purposes.
The way humans discover things without understanding why they work is endlessly fascinating.
Love this description:
The stones made their way to England as well, and an early mention is made in 1686 by Gideon Harvey, who was sceptical of the curative value, noting that they were confected from a "jumble of Indian Ingredients" by "knavish Makers and Traffickers."
If you go back to the Permian, you'd find the Appalachians, Massif Central, Atlas Mountains, and Scottish Highlands were all part of a single range cutting through the Pangean supercontinent.
This post appeared in issue #36 of my newsletter, Res Extensa, where I write about the intersection of product design, bottom-up systems, innovation, and what we can learn from the history of technology. I'd love it if you subscribed.
The older I get, the more I appreciate two fundamental skills in every line of work:
A respect for and ability to assess trade-offs, and
Knowing how and when to make decisions
Notice it doesn't say what decision to make. Effective decision-making means knowing which resources to bring to bear, who to involve, and how to zoom out on the upsides and downsides (number 1). And just as critically, it's knowing when a decision is necessary. On most things, the faster decision is better. Whether we acknowledge it or not, most decisions are two-way doors: we can recover from the downsides. But surprisingly often it's about knowing when we don't really need a decision right now. Not deciding is itself a decision.
Austin Yang says it this way:
What most people aren't aware of is that indecision in itself is a decision. You are essentially choosing to stick with the status quo for the time being.
A key factor in how we think about a decision is what action follows. "Indecision" only feels bad if we keep harping on the subject, if we continue deliberating even after we've decided to stick with the status quo.
He raises another all-too-common problem where the follow-on action makes the decision problematic: the "fake decision":
There is one type of decision that doesn't get talked about enough. These are decisions that get made, but never acted on. I call them "fake decisions." Technically speaking, they cannot be called decisions at all. They are simply feel-good exercises to fool yourself into thinking that something has changed.
Economic historian Anton Howes writes a newsletter devoted to the topic: what gave rise to the "improving mentality"? For some reason – some combination of timing, the weird semi-isolated culture England evolved, access to natural resources, and political freedom – the British Isles were a petri dish of innovation in the 18th and 19th centuries.
Anton tells the story of Robert Bakewell, a cattle farmer who set out, through experimentation and trial and error, to improve his cattle herd.
Bakewell's story could have been an unremarkable one. He was born, farmed, and died at Dishley, much like his father before him. But Bakewell, unlike most people, caught the improving mentality, or attitude – the one thing all inventors, both then and now, have in common – which had him viewing everything around him in terms of its capacity for betterment. The improving mentality was a reframing of the status quo as a problem to solve. A habit of optimisation. A compulsion to perfect.
Through dozens of adjustments, his cows became special. But it took active effort, experimentation, and a constant drive to improve. He selectively bred his cattle to have larger backs, with the more expensive sirloin and filets, and smaller mid and lower sections, with their cheaper cuts.
Bakewell's cows and sheep became extraordinarily valuable when sold for meat, though he soon discovered he could make even more money by leasing out the young males of his breeds to other farmers so that they could improve their own – "but never as good as that of Mr Bakewell who has both the male and the female". Recognising how essential it was that he not lose his competitive advantage, he even set up his own abattoir and sold only dead meat, for fear that an unscrupulous butcher might be tempted to breed from the live animals sent to slaughter.
From Dwarkesh Patel's excellent podcast, this is a masterclass in how learning works. I love how Andy is always essentially thinking out loud, as he's in conversation, about the topic. Even a guy with his pedigree realizes that a subject like this is confoundingly complex, with no "best" way of understanding it.
One of the biggest learnings from Andy is how to be purposeful when you read. You should ask questions of the text! I know this technique, and still too often I read something knowing – sometimes consciously, usually subconsciously – that I don't truly understand what I just read. Just take a look at this session where Andy reads and makes notes on a paper in real time, and you'll see this in action.
The obvious benefit to working quickly is that you'll finish more stuff per unit time. But there's more to it than that. If you work quickly, the cost of doing something new will seem lower in your mind. So you'll be inclined to do more.
The converse is true, too. If every time you write a blog post it takes you six months, and you're sitting around your apartment on a Sunday afternoon thinking of stuff to do, you're probably not going to think of starting a blog post, because it'll feel too expensive.
Also at work, the better you get, the better you better get:
It is a truism, too, in workplaces, that faster employees get assigned more work. Of course they do. Humans are lazy. They want to preserve calories. And it's exhausting merely thinking about giving work to someone slow. When you're thinking about giving work to someone slow, you run through the likely quagmire in your head; you visualize days of halting progress. You imagine a resource – this slow person – tied up for awhile. It's wearisome, even in the thinking. Whereas the fast teammate – well, their time feels cheap, in the sense that you can give them something and know they'll be available again soon. You aren't "using them up" by giving them work. So you route as much as you can through the fast people. It's ironic: your company's most valuable resources – because they finish things quickly – are the easiest to consume.
Feedback loops, both positive and negative, are a helluva thing.
Architect Christopher Alexander articulated the notion of "pattern languages" in his 1977 book of the same name. You devise a library of rules of thumb for designing (in his case, living spaces – neighborhoods, buildings, rooms) that work well together. Rather than rigid top-down design with hard specs, a well-thought-out pattern language gives the designer room to be creative while still working from thoughtful, functional constraints. An effective pattern language is a hierarchical scheme of composable building blocks: in Alexander's original language, "Cozy Half-Enclosed Corner" works well with "Light on Two Sides", or with "Seat Spots in Light and Dark".
Pattern languages are meant to be sets of general principles. No neighborhood conforms to "The Pattern Language" as a standard.
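To make the composability concrete, here's a toy sketch (my own illustration, not from Alexander's book) of patterns as a hierarchy of composable rules of thumb, where each pattern points at the smaller patterns it works well with:

```python
from dataclasses import dataclass, field

# A toy model of a pattern language: each pattern is a named rule of thumb
# plus pointers to the smaller patterns it composes with. The pattern names
# echo the examples above; the structure itself is hypothetical.
@dataclass
class Pattern:
    name: str
    advice: str
    works_well_with: list["Pattern"] = field(default_factory=list)

light = Pattern("Light on Two Sides", "give each room daylight from two walls")
seats = Pattern("Seat Spots in Light and Dark", "offer seating in both bright and shaded spots")
corner = Pattern(
    "Cozy Half-Enclosed Corner",
    "partially enclose a sitting area on two or three sides",
    works_well_with=[light, seats],
)

def walk(pattern: Pattern, depth: int = 0) -> None:
    """Print a pattern and the sub-patterns it composes with, top-down."""
    print("  " * depth + f"{pattern.name}: {pattern.advice}")
    for sub in pattern.works_well_with:
        walk(sub, depth + 1)

walk(corner)
```

The point is the shape, not the data structure: the designer assembles the blocks freely rather than following a fixed spec.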
In the world of software, Ted Nelson's Project Xanadu is both an epic failure and a wild success. It started as a proposal in 1960, and still doesn't exist.
But as Maggie Appleton points out here, many of the project's proposed "patterns" have been embraced and implemented in the distributed ecosystem of Internet specifications and software features over the past few decades:
People are building Xanadu without knowing what Xanadu is. Which is the essence of a good pattern language; true patterns evolve naturally within systems, and are found rather than crafted.
We should think of Xanadu as a successful pattern language, rather than a failed software project.
The past couple weeks I've been experimenting tons with ChatGPT and Midjourney, figuring out how to make practical use of this new wave of products.
Tinkering around, I asked ChatGPT:
Write about some creative anachronistic historical events.
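If you'd rather script the prompt than use the chat UI, here's a minimal sketch against the 2023-era OpenAI Python client (the model name and key are placeholder assumptions):

```python
import openai  # pip install openai (the 0.x-era client)

openai.api_key = "sk-..."  # placeholder; supply your own key

# The same prompt as above, sent through the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; swap in whatever you have access to
    messages=[
        {
            "role": "user",
            "content": "Write about some creative anachronistic historical events.",
        }
    ],
)
print(response.choices[0].message.content)
```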
'Viking Age Space Telescopes', according to Midjourney
And here's a brief sampling of what I got:
Aztec Airborne Armada (1400s): The Aztecs develop an array of helium-filled airships, forming an airborne armada to defend their empire and launch surprise attacks on neighboring civilizations.
Socrates' Philosophical Podcast (399 BCE): Instead of written dialogues, Socrates engages in thought-provoking discussions through an ancient form of podcasting, spreading his philosophical ideas through the airwaves of Athens.
Marco Polo's Transcontinental Railroad (1271-1295): Marco Polo's extensive travels result in a collaborative effort among the nations he encountered to construct a massive transcontinental railroad network connecting Europe, Asia, and Africa centuries ahead of its time.
Appetite comes when you eat. Nibble and your appetite will grow.
Appetite can be the hunger for any kind of thing, not just food. Some days I wish I had the appetite to write, to read, to exercise, or even go outside.
Procrastination is the state of waiting for motivation to come. Paradoxically, the most reliable way to create motivation is to start doing the thing.
In Herculaneum, twenty meters of hot mud and ash bury an enormous villa once owned by the father-in-law of Julius Caesar. Inside, there is a vast library of papyrus scrolls.
The scrolls are carbonized by the heat of the volcanic debris. But they are also preserved. For centuries, as virtually every ancient text exposed to the air decays and disappears, the library of the Villa of the Papyri waits underground, intact.
Computer scientist Brent Seales and his team have been working for years to develop a process to decode the scrolls without physical manipulation. Because they're carbonized (think paper-thin charcoal), handling them pulverizes them into shards and dust. The latest AI boom shows promise to enable methods to finally decode the writing on these documents, using imaging techniques and computer vision.
Nat talked about this in depth during his interview a few months back with Dwarkesh Patel.
Write about some creative anachronistic historical events.
The results don't disappoint. Every single one is gold – big-budget-film-caliber material we've got here. Scroll down and enjoy…
Marco Polo's Transcontinental Railroad, according to Midjourney
Leonardo's Electric Canvas (1492): Leonardo da Vinci, known for his ingenious inventions, unveils the world's first electrically powered canvas projector, allowing him to showcase his artwork in vibrant colors and dynamic animations, centuries ahead of its time.
Napoleon's Moon Landing (1801): In a bid to establish a new world order, Napoleon Bonaparte orchestrates a daring expedition to the Moon using steam-powered spacecraft, setting the stage for lunar colonization and a galactic French empire.
Medieval Internet Symposium (1220): Scholars from across the medieval world gather for an "Internet Symposium," where they discuss ideas through telepathic communication devices, resembling a rudimentary version of the internet.
Genghis Khan's Airship Campaigns (1206-1227): Genghis Khan's vast empire extends beyond land as his horde launches airborne campaigns using intricately designed airships, enabling swift conquests across continents with unprecedented speed.
Renaissance Era Robotics (1565): As a culmination of Renaissance curiosity, artisans and scientists collaborate to create a series of clockwork automatons that exhibit human-like intelligence and serve as precursors to modern robotics.
Cleopatra's Underwater Expedition (50 BCE): Cleopatra, fascinated by the sea, leads a team of engineers and divers on an underwater archaeological expedition to explore the sunken city of Atlantis, unraveling ancient mysteries beneath the waves.
World War I: Age of Steampunk (1914-1918): The First World War takes on a different tone with both sides utilizing advanced steam-powered machinery, elaborate mechanical exosuits, and ornate weaponry, in a steampunk-inspired alternate history.
Marco Polo's Transcontinental Railroad (1271-1295): Marco Polo's extensive travels result in a collaborative effort among the nations he encountered to construct a massive transcontinental railroad network connecting Europe, Asia, and Africa centuries ahead of its time.
Industrial Revolution of Magic (1700s): The Industrial Revolution ushers in a new era of magic, where steam-driven wands power factories and alchemical engines replace conventional machines, driving technological progress through mystical means.
Joan of Arc's Space Exploration (1429): Inspired by visions of the cosmos, Joan of Arc leads a celestial expedition using a fantastical flying fortress, journeying through the stars to bring harmony to distant planets.
Da Vinci's Cyberspace Designs (1500): Leonardo da Vinci sketches out intricate plans for a virtual reality world using a combination of mirrors, lenses, and mechanical devices, offering a glimpse into a digital realm centuries ahead of its time.
Socrates' Philosophical Podcast (399 BCE): Instead of written dialogues, Socrates engages in thought-provoking discussions through an ancient form of podcasting, spreading his philosophical ideas through the airwaves of Athens.
Ancient Egyptian Steam-Powered Pyramids (2500 BCE): The ancient Egyptians harness steam power to build their pyramids, using massive steam engines to lift and transport colossal stone blocks with astonishing precision.
Shakespearean Space Odyssey (1601): William Shakespeare's play "Space Odyssey" tells the story of intrepid explorers venturing beyond Earth, grappling with cosmic conflicts and celestial dramas in a performance that takes theater to the stars.
Gutenberg's Digital Printing Press (1450): Johannes Gutenberg invents a digital version of the printing press, allowing for instant creation and dissemination of books and manuscripts, revolutionizing information distribution long before the digital age.
Aztec Airborne Armada (1400s): The Aztecs develop an array of helium-filled airships, forming an airborne armada to defend their empire and launch surprise attacks on neighboring civilizations.
Ancient Greek Teleportation Symposium (300 BCE): Greek philosophers convene a "Teleportation Symposium," discussing the ethics and implications of instantaneous travel across vast distances, shaping the course of human exploration.
Einstein's Quantum Social Media Theory (1915): Albert Einstein introduces a theory of "Quantum Social Media," predicting that interconnected networks of thoughts and ideas will shape the fabric of reality, long before the development of modern social platforms.
Viking Age Space Telescopes (800): Vikings construct massive, ornate space telescopes atop their ships, enabling them to explore distant planets and navigate the cosmos with extraordinary accuracy during their seafaring expeditions.
Caveman's Cave-Painted Animation (10,000 BCE): Prehistoric artists create animated sequences on cave walls, employing ingenious mechanisms to bring their paintings to life and share stories through early forms of motion pictures.
If you don't follow Cultural Tutor on Twitter, I'd highly recommend it. One of the few threadposters who consistently sends me down unrequested-but-fascinating rabbit holes. Architecture, classical history, music, art – always something interesting.
And if you like the social media feed, check out Areopagus, his weekly newsletter that goes deeper on a few topics each week.
From Classical Wisdom, a list of terms from Ancient Greek we should restore to use. A couple of goodies:
3. Phronesis (Greek: φρόνησις)
Phronesis is a type of wisdom or intelligence. It is more specifically a type of wisdom relevant to practical action, implying both good judgement and excellence of character and habits, or practical virtue. As such, it is often translated as "practical wisdom", and sometimes as "prudence." Thomas McEvilley has proposed that the best translation is "mindfulness."
12. Episteme (Greek: ἐπιστήμη) and Techne (Greek: τέχνη)
Episteme can refer to knowledge, science, or understanding, and comes from the verb ἐπίστασθαι, meaning "to know, to understand, or to be acquainted with". The word "epistemology" is derived from episteme.
Meanwhile Techne is often translated as "craftsmanship", "craft", or "art". While it resembles epistēmē in the implication of knowledge of principles, techne differs in that its intent is making or doing as opposed to disinterested understanding. However, Plato regularly used the two terms interchangeably, and to the ancients, techne and episteme simply mean knowing: "both words are names for knowledge in the widest sense."
Recently I've been thoroughly enjoying Tom Holland's Dominion, his epic history of Christianity and its influence on western culture. It's one of the most interesting (and well-written) works of history I've read in some time. Holland approaches the subject from a historical and classicist perspective rather than a religious or theological one, which is unusual among religious histories. And he does so respectfully of the faith itself, not with a cold, matter-of-fact historian's eye.
Because it's been so good, I've added a few of Holland's other books to the reading backlog, including works on the Persians and the fall of the Roman Empire.
Through reading more about him I also discovered his podcast The Rest Is History, hosted by him and Dominic Sandbrook, another historian, mostly focused on modern history. Their show is excellent, and every episode so far has been a pleasure. My binging streak started from the early episodes, but I've jumped around a good bit to different topics I'm more curious about.
This piece from Anton Howes gets at one of the key insights about how innovation works: it doesn't happen through sudden bursts of insight from thin air – it requires the combination of the right simmering ingredients and a person in search of solutions to specific problems:
Santorio's claim, it seems, is safe. But in this lies an important lesson for all would-be inventors. The inverted flask experiment had been around for centuries, and even been understood since ancient times as being caused by hot and cold. So its application as a thermometer was extremely low-hanging fruit. The likelihood of it being interpreted as a temperature-measuring device might have increased somewhat in the mid-sixteenth century, when we find the first mentions of it being done using a glass flask rather than an opaque metal container. Yet even then, the visible rise and fall of the liquid in the open bucket, rather than the flask, could always have been noted and measured against a scale in much the same way. What Antonini's letter also shows us is that even when a scale was applied to the experiment, an ingenious person who knew their cutting-edge science like he did could still fail to appreciate the potential of what they had done.
In this case, the "inverted flask" had existed for many years, and Santorio was actively searching for ways to measure temperature. Innovation requires the right mixture of "prior art" and willful intent in search of solutions. Progress doesn't happen automatically!
It springs rather from my conviction that, if our civilization is to survive, we must break with the habit of deference to great men. Great men may make great mistakes; and as the book tries to show, some of the greatest leaders of the past supported the perennial attack on freedom and reason.
Popper opposes the historicist great man theory, in which we attribute outsized impact on the world to singular people. Here he reminds us that deference to great men encourages hubris and overconfidence. We should remember that some of the worst assaults on freedom and open societies are also the doings of these "great men".
With the ongoing energy crisis in Europe, Anton Howes has some interesting ideas on how the European nations could convert the short-term pain of the fossil fuel shortfall into a long-term surplus based on a mix of renewable and non-renewable sources. In the article he explores why energy commodities globalized, some extremely quickly, creating a global-scale marketplace for resources (Egyptian oil competes with Venezuelan, for example), but renewables like solar and wind have not. Most energy from, say, solar is consumed very close to where it's produced, due in large part to the lack of infrastructure for transport. Renewable energy can't be transported in a fixed and stable form without much cheaper batteries, running up against the limits of chemistry and physics, rather than the ability to build pipelines and terminals.
With a globalized market, producers are "price takers" – the competitive space is so wide that it's a race to the bottom. Maintaining margins is a function of your production efficiency and infrastructure to move resources around. In more local or regional markets, like how renewables are consumed today, producers can actually be "price makers". Because of the lack of accessible competition, producers get to set prices (to a degree):
But the un-globalisable nature of renewable energy may even be an opportunity, even if right now it's a problem. In a globalised market, exporters of a commodity face competition the world over, and are thus unable to affect global prices themselves. They are what economists call price takers, rather than price makers. Even when lots of producers band together in a cartel, like the oil-producing countries of OPEC, in the medium-term they can struggle to influence world prices. They must still compete with alternatives like coal, renewables and natural gas, as well as with non-OPEC oil producers, which have been on the rise. In a globalised market, no single country or company has enough power to influence global prices. It's when markets are merely regional that some countries can have the opportunity to have enough market share to influence prices themselves – thus Russia was able to be an energy price maker for Europe this year, because its withdrawn natural gas supplies were so large and could not immediately or cheaply be replaced by alternatives or from further afield.
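The taker/maker distinction is easy to see with toy numbers. Here's a minimal sketch (my own illustration, not from the article; the demand curve and costs are made up): a price taker sells at the world price no matter what, while a regional price maker faces a downward-sloping demand curve and can search for the profit-maximizing price.

```python
# Toy contrast between a "price taker" and a "price maker".
# All numbers below are illustrative assumptions.

def taker_profit(quantity: float, world_price: float, unit_cost: float) -> float:
    """A globalized producer sells any quantity at the going world price."""
    return (world_price - unit_cost) * quantity

def maker_profit(price: float, unit_cost: float) -> float:
    """A regional producer faces simple linear demand: quantity = 100 - price."""
    quantity = max(100 - price, 0)
    return (price - unit_cost) * quantity

# The taker's only levers are cost and volume...
print(taker_profit(quantity=40, world_price=30, unit_cost=20))  # 400

# ...while the maker can pick the price that maximizes profit.
best_price = max(range(101), key=lambda p: maker_profit(p, unit_cost=20))
print(best_price, maker_profit(best_price, unit_cost=20))  # 60 1600
```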
Today's crisis situation could serve as a wake-up call to the EU to embrace an opportunity:
And it looks like Europe has one of the clearest opportunities to be the region to get there first – because of its current problems. As the supply of natural gas is still not quite globalised and the current shortages hurt Europe the most, its uniquely high natural gas prices provide especially sharp incentive for all countries in the region to invest in renewable capacity. This is, of course, already happening within each country, as Italy for example installs many more solar panels, and France looks at expanding its nuclear capacity. But I don't just mean the incentive for individual countries to invest in themselves. I also mean each country's incentive to invest in the rest of the region as well – much like the Netherlands in the sixteenth century invested in the agriculture of Poland, Scandinavia, Germany and England in order to feed both itself and Renaissance Italy. Grain at the time was still only traded regionally, so it made perfect sense to expand the entire continent's supply wherever there was plenty of land to spare.
Sam Arbesman shared this post in his newsletter – highlights from the annual letter of Maxis (the makers of SimCity, The Sims, SimEarth, SimTower, etc.).
As much as it seems like the simulation tech Maxis developed over the years was all about top-down, "god mode" design – especially in SimCity – they're just as impressive for the emergent gameplay they pioneered. SimLife was literally about watching evolution play out, followed up by an even-more-advanced experiment with evolution in Spore.
Historian David McCullough died last week at age 89. If you've never read his work, it's some of the best, most readable and engaging history you can find. I've read a few of his books over the years, like The Great Bridge (about the building of the Brooklyn Bridge), 1776 (the Revolutionary War), and his biography John Adams. Looking back on his bibliography, all of his others are on my reading list.
It's always unfortunate to lose such a critical voice in American culture, but at least his books will stand the test of time and be read by generations.
Here's a quote from John Adams, pulled from McCullough's biography, one that clearly resonated with McCullough:
"The longer I live, the more I read, the more patiently I think and the more anxiously I inquire, the less I seem to know. Do justly. Love mercy. Walk humbly. This is enough."
I just finished Robert Macfarlane's Underland, a book about all things "underworld" – catacombs, cave exploration, underground rivers, tree root networks, and geologic time. He ties these stories together with historical backgrounds of each place, globetrotting from the Slovenian Dolomites to Greenland to Norwegian ocean caves. It's an excellent read. Highly recommended if you like nature writing and narrative nonfiction!
In the final chapter he visits the west coast of Finland, specifically the Onkalo spent nuclear fuel repository, a 500m-deep network of man-made caves designed to house up to 6,500 tons of spent nuclear fuel rods for 100,000 years. Here and at other locations like Yucca Mountain, mankind is inserting its own strata into the geologic record. The author describes the lengths the architects go to in protecting future generations from the lethal material buried within. To reach the deadly uranium cache, you'd bore through layers and layers – granite, gneiss, bentonite, copper, iron, zirconium. Layers of protective housings to keep in the radioactive waste.
But the strangest aspect of the story involves a collection of Finnish folklore called the Kalevala:
The Kalevala is a long poem of many voices and many stories which – like the Iliad and the Odyssey – grows out of diverse and deep-rooted traditions, from Baltic song to Russian storytelling. It existed chiefly as a mutable oral text for more than a thousand years, until in the nineteenth century the Kalevala was collected, edited and published by the Finnish scholar Elias Lönnrot, giving us the mostly fixed version we now have.
There's a story in the Kalevala about a hero called Väinämöinen, who has an underground encounter eerily related to what's happening today at Onkalo:
Partway through the poem, Väinämöinen is given the task of descending to the underland. Hidden in the Finnish forests, he is told, is the entrance to a tunnel that leads to a cavern far underground. In that cavern are stored materials of huge energy: spells and enchantments which, when spoken, will release great power. To approach this subterranean space safely Väinämöinen must protect himself with shoes of copper and a shirt of iron, lest he be damaged by what it contains. Ilmarinen forges them for him. Clad in these insulating metals Väinämöinen approaches the tunnel mouth, which is disguised by aspens, alders, willows and spruce. He cuts down the trees to reveal the entrance. He enters the tunnel and finds himself in a deep "grave", a "demon lair". He has stepped, he realizes, into the throat of a buried giant called Vipunen whose body is the land itself.
Vipunen warns Väinämöinen not to bring to the surface what is buried in his caverns. He speaks of the "grievous pain" of excavation. Why have you entered "my guiltless heart, my blameless belly", Vipunen asks, "to eat and to gnaw / to bite, to devour"? He warns Väinämöinen that he will end up visiting terrible violence upon humans if he continues on his course, that he will become "a windborne disease / wind-borne, water driven / shared out by the gale / carried by chill air". He threatens to imprison Väinämöinen by means of a containment spell so powerful that it is unlikely ever to be broken.
How does a centuries-old Finnish myth sound like it presages something dangerous buried underground in the 21st century? There's even reference to copper as protection, and to carrying a "windborne disease." Ancient myths contain more practical knowledge than we think. It's no wonder theories abound about aliens visiting earth in the past, or about advanced civilizations existing 50,000 years ago.
This was the first episode I've listened to of Patrick O'Shaughnessy's new podcast Business Breakdowns. He and Alex Rampell dive deep on the history of Visa and its unique business model. Alex saw Visa's business first-hand after his company TrialPay was acquired by Visa in 2015.
So much good background here on how it started as a local credit card program in Fresno and evolved into the network backbone between banks.
They also mentioned a book on the history of the credit card (and other financial innovations of the same era) called A Piece of the Action, which sounds super interesting.
Steven Sinofsky is writing a book on his time at Microsoft and the rise of the PC. He joined Microsoft during its ascendancy in 1989, starting as a software engineer and moving up to leading product on Office for most of his time with the company.
So far the first few parts of the book are excellent, as expected. Sinofsky was instrumental in so many of Microsoft's key product businesses. He's written so much great stuff on his blog over the years.
I'm eating up each chapter as he puts them up. Here's a great bit about his offer to come work at Microsoft in 1989, and a call from Bill Gates himself working the college recruitment phones:
"Hi, Steve, this is Bill Gates."
"Hello. Thank you for calling, and so sorry for the confusion. I thought a friend of mine…"
"So, David gave me this list of like ten people and I'm supposed to call all of them and convince them to work at Microsoft. You should come work at Microsoft. Do you have any questions?" (I always thought this was the best part of the call – him telling me he was just cranking through a list. Transparency.)
"I'm definitely excited and thinking about it. I don't really have any questions."
"Well, why haven't you accepted yet? You have a good offer."
"I'm considering other things. I have been really interested in government service."
"Government? That's for when you're old and stupid."
(No, really, he said that.)
"At Microsoft we have amazing things going on in multimedia. Have you seen all the things we are doing with CD-ROMs and video? We are going to make a whole encyclopedia on a CD-ROM, 650 megabytes with videos, maps, quizzes, and more."
"I haven't. I use a Macintosh and workstations. I used MS-DOS at my summer job and Windows 1.0, but it was pretty slow."
"Well, Microsoft makes more money on Macintoshes than Apple does because of our apps – our word prosser [sic], Word, is super good. OS/2 runs in protect mode, which the Mac does not do. Do you have any more questions?"
"Not really."
"I'm glad we got to talk. The offer is super good. Bye."
Julian Lehr is onto something here. All modern organizations are plagued with the problem of managing internal documentation. We have ample tools and keep squishing the problem from one place to another: wikis, search, tasks – it's a game of whack-a-mole to find the right version of a document. He ponders at what size it makes sense to invest in a "digital librarian":
A friend at Stripe recently suggested – half-jokingly – that we should hire a librarian to organize all our internal data and documentation. The more I think about it, the more I like the idea. Perhaps every company should hire a Chief Notion Officer once it hits 100 employees??
In Erik Larson's book The Splendid and the Vile, he tells the story of Britain during the Blitz of 1940-41.
In May of 1940, a strategic lapse by the Germans allowed the British to evacuate 330,000 Allied soldiers from the French coast in the famous Dunkirk evacuation. An assemblage of 800 mostly ragtag vessels was able to slip those hundreds of thousands through air and U-boat attack to safety across the Channel.
There's an anecdote in the book that I'd never thought about before, with respect to Britain's response as they prepared for what they thought would be an inevitable amphibious invasion.
Here's Larson:
One thing Churchill did not address in his speech was an underappreciated element of the Dunkirk evacuation. To those who cared to look, the fact that more than three hundred thousand men had managed to cross the channel in the face of concerted aerial and ground attack carried a darker lesson. It suggested that deterring a massive German invasion force might be more difficult than British commanders had assumed, especially if that force, like the evacuation fleet at Dunkirk, was composed of many hundreds of small ships, barges, and speedboats.
Wrote General Edmund Ironside, commander of Britain's Home Forces, "It brings me to the fact that the Bosches may equally well be able to land men in England despite [RAF] bombing."
He feared, in effect, a reverse Dunkirk.
Given how successful the British were at a haphazard troop movement with mostly civilian boats, imagine what Germany could've accomplished moving a powerful (and much larger) invasion force in a dispersed fashion along the Kent coast, then pushing inland through rural areas to regroup and move on London. If Germany had known in the summer of '40 how ineffective the Blitz air raids would be, maybe it would've happened.
Vicki Boykis on the impossibility of true breadth and depth of technical expertise:
What used to distinguish senior people from junior people was the depth of knowledge they had about any given programming language and operating system.
What distinguishes them now is breadth and, I think, the ability to discern patterns and carry them across multiple parts of a stack, multiple stacks, and multiple jobs working in multiple industries. We are all junior, now, in some part of the software stack. The real trick is knowing which part that is.
I've written lately about the nature of innovation, and this is a good addition from Works in Progress. The mixture of basic scientific research and the tinkering of inventors is not a mechanical, proportional relationship. Impactful, innovative results can follow foundational discoveries immediately, or arrive decades later. And often we create new inventions without even understanding how they work:
The impact of science on invention is long-term and often impossible to foresee. There are some times, certainly, when scientific pursuits have obvious applications: when Robert Koch identified the bacteria responsible for tuberculosis, he must have known that this would someday help us prevent or cure the disease. But when Bohr peered into the structure of the atom, or when Rutherford and Curie investigated the nature of radiation, it is doubtful that they expected their work to lead to nuclear power or MRI scans.
Investments in science, then, if motivated by long-term progress, cannot be prioritized by immediate practical impact. It requires, in Bush's words, "the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown."
Back in the summer, Tanner Greer wrote an excellent post on our current cultural stagnation and inability to get things done. "On Cultures That Build" (my thoughts) made the case that, rather than pulling ourselves together and getting to work to invent, create, and solve problems, the standard approach is the "appeal to management" (one of my favorite aphorisms of 2020). He follows it up here with a look at Battle Cry of Freedom, a history of the Civil War.
With the election of Abraham Lincoln they could point to one who had risen from a log cabin to the White House. "I am not ashamed to confess that twenty five years ago I was a hired laborer, mauling rails, at work on a flat-boat – just what might happen to any poor man's son!" Lincoln told an audience at New Haven in 1860. But in the free states a man knows that "he can better his condition . . . there is no such thing as a freeman being fatally fixed for life, in the condition of a hired laborer."
A pointer somewhere on Twitter led to this post from the Slate Star Codex archives, discussing a paper that supposedly debunks the Fermi paradox:
Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one-parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that's quite the paradox, isn't it?
No. In this case the mean is meaningless. It's not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.
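The arithmetic is easy to verify. Here's a minimal sketch of the coin-flip model from the quote (my own illustration): the mean is 5 billion, yet zero civilizations shows up half the time, so observing zero carries no paradox at all.

```python
import random

# Coin-flip model: heads -> 10 billion civilizations, tails -> none.
# The mean is 5 billion, but the distribution is bimodal, so a count
# of zero is an entirely ordinary outcome.
N_HEADS = 10_000_000_000
trials = 100_000
samples = [N_HEADS if random.random() < 0.5 else 0 for _ in range(trials)]

print(f"mean civilizations: {sum(samples) / trials:.3e}")     # about 5e9
print(f"P(observing zero):  {samples.count(0) / trials:.2f}")  # about 0.50
```

The paper's fuller argument swaps the coin for wide uncertainty distributions over each Drake parameter, but the punchline is the same: a large mean is fully compatible with a large probability that we're alone.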
Investor Esther Dyson published this piece in her Release 1.0 newsletter in 1994¹. It's a look forward at what the market for content and digital goods would become with the rise of the internet.
These were the days of CompuServe and AOL, when you had to pay by the hour for access to the net. Software was still sold in a box, still on diskettes, and effectively all media was still consumed in print. Even search engines were in their very early days.
Dyson is prescient here, with some amazingly accurate predictions about how value would shift to different parts of the value chain for digital works:
Intellectual property that can be copied easily likely will be copied. It will be copied so easily and efficiently that much of it will be distributed free in order to attract attention or create desire for follow-up services that can be charged for.
The way to become a leading content provider may be to start by giving your content away. This "generosity" isn't a moral decision: it's a business strategy.
She references what eventually would become the open source movement, SaaS companies, and subscription publishing. An astonishingly forward-looking view of how intellectual property would evolve, and one that also assumes an internet explosion that wasn't obviously inevitable at the time.
I still would love to see a publication (looking at you, Increment) that curates and republishes historically important articles from the history of computers.
¹ The back issues are all available from O'Reilly, as well. An excellent historical archive of the early days of the internet era.
One of the key insights coming out of the progress studies movement seems like a simple idea on the surface, but it's an important core thesis: progress is not an inevitability. We don't see new inventions, innovations, and improvements to quality of life by accident. They're the result of deliberate effort by people searching for new improvements to life. Using names like "Moore's Law" perhaps makes it sound like computer chip improvements "just happen," but researchers at Intel or TSMC would beg to differ on how automatic those developments were.
For at least the last 150 years, steady, expansive progress has been the default. Since the Industrial Revolution, scientific discovery has marched forward, and since the days of the Enlightenment, science and progress have been generally accepted as net benefits to humanity¹.
I think what we see today isn't so much a reversal of this position as a sense of passivity, of taking progress for granted – a sense that scientific advancement just happens to us without deliberate effort.
Jason Crawford had some interesting thoughts on this subject on Roots of Progress, first posing a question on where progress comes from: does progress have substantive causes, or does it come from flukes and strokes of luck?
Most of the arguments in response supporting the second case fell into two categories:
Failure of imagination – "I can't imagine any big breakthroughs, so they must be flukes or strokes of luck", or
Materialism – "Progress happens through exploitation of resources, so it will peter out as we run out of physical material"
Exploring innovation's inner workings is helpful in understanding the issues with these two arguments.
In his phenomenal book How Innovation Works, Matt Ridley describes innovation as a gradual, bottom-up, evolutionary process of thousands of small steps forward. We tend to look back on the history of progress and point to pillar breakthroughs like Orville and Wilbur at Kitty Hawk, Edison's lightbulb, Marconi's radio, or Pasteur's vaccines, treating each as a Big Bang moment of inspiration that happened in one fell swoop. Ridley tells us this is a flaw in human reasoning; we love narratives and stories, so we spice up the reality of how these inventions came to be. The truth on the ground was much more gradual and dispersed in each of these cases. Hundreds of precursor steps had to happen, contributed by hundreds of different individuals dispersed around the globe. The Big Bang invention story takes as given the source branches lower on the tree that sprouted these successor innovations.
It's not as if innovators aren't actively pursuing discovery, as if solutions just fell in their laps while they sat in their living rooms. The Wright Brothers knew that they were trying to get a flying machine off the ground. What they didn't predict, though, was the impact that flight would have on global economics, war, trade, recreation, and every other dimension of modern life. The same is typically true of other instances of progress. From our position in the 21st century, it seems obvious that flight would have massive ramifications for the global order. But this is easy to see in hindsight. As Ridley puts it:
Technology is absurdly predictable in retrospect, wholly unpredictable in prospect.
I think Jason raises a good point on why we have trouble imagining where the next breakthrough will come from:
The historical reason is that the big breakthroughs of the past were not easy to imagine or predict before they happened. In a different context, Eliezer Yudkowsky points out that even the creators of inventions such as the airplane or the nuclear reactor felt that their breakthroughs were fifty years out, or even impossible, shortly before they happened. Now is no different.
One of the biggest factors in this trick of hindsight is that innovation in situ is a gradual phenomenon. Only in hindsight do we look back on it as a "eureka" process of going from 0 to 1 in a flash of inspiration. I think if you reframe your understanding of progress and innovation around a steady, gradual march of deliberate advances, you begin to see why waiting around for it to happen in big bursts is an incorrect model.
Another insight from Ridley is that innovation is rooted in trial and error. In a world where we've become hyper-concerned with risk and protecting against downsides (just look at our expanding regulatory complex – only growing, never shrinking), we slow ourselves down from making the errors necessary for progress. Edison, a man who turned innovation into a product in itself, had this to say on his process, acknowledging error as a baked-in prerequisite for discovery:
I've not failed, I've just found 10,000 ways it won't work.
Regarding the point on material constraints, it's worth reading into the concept of dematerialization. The best work on the topic I know of is Andrew McAfee's More From Less, which dives deep on how much modern progress is able to not only continue but in most cases accelerate, all while using fewer resources than once required. Take the simple aluminum can: the first ones weighed 85 grams; modern refinement and manufacturing processes have reduced that to 11. The book is filled with cases like this of dematerialization outpacing our increase in consumption. It's not a universal law of innovation, but rather a pattern that we see with continued progress².
Getting back to Jason's original question – I believe that progress does have substantive causes. But between innovation's gradual nature and the sea of trial and error, it's hard to notice it while it's happening. While you're sitting in the present without the clarity of hindsight, it can feel like progress comes from flukes and strokes of luck³. But the deliberate effort and small victories add up (and compound) over time to enormous progress.
One of the goals of the progress studies movement is to expose what the sources of innovation are, to teach people how innovations came to be. And it's important to recognize that innovation is intentional. It happens because we choose to work on making our lives better.
A final quote from the beginning of How Innovation Works (emphasis mine):
Innovation, like evolution, is a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance – and that happen to be useful.
¹ For most of human history, the unknown was treated as mystical and divine, rather than something that could be analyzed, understood, and deliberately improved.
³ To be clear, serendipity does play a role. Teflon, famously, was accidentally discovered by Roy Plunkett in his time at DuPont. He was attempting to create a new refrigerant, and ended up coating the inside of a pressurized bottle with the slick material. But he was putting in work and seeking a discovery; he just ended up with a different one. As one of my favorite quotes from explorer Roald Amundsen goes: "Victory awaits him who has everything in order – luck, people call it."
Historian Anton Howes on the push/pull dynamics between monarchs and parliaments, and the gradual building of state capacity in 16th century Britain.
It's easy to imagine that governments were always as bureaucratic as they are today. Certain policies, like the widespread granting of monopolies in the seventeenth century, or the presence of a powerful landed aristocracy, seem like archaic products of a past that was simply more corrupt. The fact that governments rarely got involved with healthcare or education before the mid-nineteenth century seems the product of a lack of imagination, or perhaps yet another product of our ancestors' venality – simply what happens when you put the war-hungry knights and nobles in charge.
In reality, rulers of the age had little central authority they could wield in building the state's capacity. Sure, they could will certain things into or out of law, but if they wanted to build a capability of the state to deliver goods or services, they were largely at the mercy of their subjects.
In a world without bureaucracy, when state capacity was relatively lacking, it's difficult to see what other options monarchs would have had. Suppose yourself transported to the throne of England in 1500, and crowned monarch. Once you bored of the novelty and luxuries of being head of state, you might become concerned about the lot of the common man and woman. Yet even if you wanted to create a healthcare system, or make education free and universal to all children, or even create a police force (London didn't get one until 1829, and the rest of the country not til much later), there is absolutely no way you could succeed.
Gradually, convening parliaments (which weren't always continuous, but called and dismissed on whims) was discovered to be a mediating function for delegating capacity needs like tax collection and other locale-specific functions.
It really is amazing how recent certain innovations are – true state capacity in the modern sense isn't more than a couple of centuries old, and that capacity has expanded enormously even in the postwar era.
Corporate research was a big deal in the mid-20th century. In this piece, Ben Southwood inspects why we no longer have modern equivalents to research centers like Xerox PARC or Bell Labs.
An interesting point here on what might be demotivating large organizations to invest too much in deep research:
Another possible answer is that non-policy developments have steadily made spillovers happen faster and more easily. Technology means faster communication and much more access to information. An interconnected and richer world doing more research means more competitors. And while all of these are clearly good, they reduce the technology excludability of new ideas, even if legal excludability hasn't changed, and so may have reduced the returns to innovation by individual businesses.
Anton Howes looks back to the 1500s and connects Sebastian Cabot's planned search for a Northeast Passage to China to the birth of the first joint-stock corporation.
One of my favorite games hit its 20th birthday. Two decades ago Deus Ex was an amazing achievement in open narrative, emergent game design.
In this piece, Rock Paper Shotgun gathered up a bunch of the original team from John Romero's Ion Storm studio to look back on the process of creating the game:
Harvey Smith (lead designer): We were very influenced by three games: Thief, System Shock 2 and Half-Life. There was a lot of discussion around whether it was more elegant to get through a level without being spotted and killing everybody. But there was also political discussion about whether there was inherent bias in a game that gives you a mission to kill a bunch of enemies without questioning it. And of course, there is.
On private emotions being thrown into the public sphere:
People escape the Dunbar world for obvious reasons: life there appears prosaic and uninspiring. They find a digital interface and, like Alice in Through the Looking-Glass, enter a new realm that glitters with infinite possibilities. Suddenly, you can flicker like a spark between the digital and the real. The exhilarating sensation is that you have been taken to a high place and shown all the kingdoms of the world: "These can be yours, if. . . ." If your video goes viral. If you gain millions of followers. If you compose that devastating tweet that will drive Donald Trump from the White House. There is, however, an entrance fee. Personal identity must be discarded.
Deirdre McCloskey on the boom of progress over the past 200 years:
The Great Enrichment came from human ingenuity emancipated. Ordinary people, emboldened by liberalism, ventured on extraordinary projects – the marine chronometer, the selective breeding of cotton seed, the band saw, a new chemistry – or merely ventured boldly to a new job, the New World, or going west, young man. And, crucially, the bold adventurers, in parallel with liberations in science, music, and geographical exploration, came to be tolerated and even commended by the rest of society, first in Holland in the 17th century and then in Britain in the 18th.
A partnership between Esri, Facebook, and the OpenStreetMap community to polish up and release datasets readily compatible with OSM (tagging and licensing).
I linked a couple weeks ago to Stephen Kotkin's discussion with Lex Fridman. That was so interesting to me that I went out looking for other interviews and lectures of his on YouTube, and found this great one from Dartmouth in 2017, the centennial of the Russian Revolution.
Scott Alexander wrote this review of Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind, which presents a fascinating idea about when the "theory of mind" emerged in human history. When did human beings realize their emotions, beliefs, and knowledge were internal to themselves, and not spiritual injections from the gods or voices of disembodied forces?
The book makes the claim that there was a turning point in theory of mind perception during the Bronze Age, between 1500 and 750 BC.
As you go about your day, you hear a voice that tells you what to do, praises you for your successes, criticizes you for your failures, and tells you what decisions to make in difficult situations. Modern theory-of-mind tells you that this is your own voice, thinking thoughts. It says this so consistently and convincingly that we never stop to question whether it might be anything else.
If you don't have theory of mind, what do you do with it? Children don't have theory of mind, at least not very much of it, and more than half of them have imaginary friends. Jaynes has done some research on the imaginary friend phenomenon, and argues that a better term would be "hallucinatory friend" – children see and hear these entities vividly. The atheoretical mind is a desperate thing, and will comply with any priors you give it to make sense of its experiences. If that prior is that the voice in your head is a friend – or god – it will obediently hallucinate a friend or god for you, and modulate its voice-having accordingly.
A 2017 commencement address from Mihir Desai, critiquing the phenomenon of infinite optionality and lack of commitment pushed by modern universities:
I've lost count of the number of students who, when describing their career goals, talk about their desire to "maximize optionality." They're referring to financial instruments known as options that confer the right to do something rather than an obligation to do something. For this reason, options have a "Heads I win, tails I don't lose" character – what those in finance lovingly describe as a "nonlinear payoff structure." When you hold an option and the world moves with you, you enjoy the benefits; when the world moves against you, you are shielded from the bad outcome since you are not obligated to do anything. Optionality is the state of enjoying possibilities without being on the hook to do anything.
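The "nonlinear payoff" here is just the call-option formula: payoff = max(price - strike, 0). A minimal sketch with made-up numbers (my own illustration, not Desai's):

```python
# Call-option payoff: gains when the price moves above the strike,
# nothing extra lost when it falls below -- "heads I win, tails I
# don't lose". All numbers are illustrative.
def call_payoff(price: float, strike: float) -> float:
    return max(price - strike, 0.0)

for price in (80, 100, 120, 140):
    print(f"price {price}: payoff {call_payoff(price, strike=100)}")
# 80 and 100 pay 0; 120 pays 20; 140 pays 40 -- upside without downside.
```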
In this piece from a few years ago, historian Anton Howes wrote about what drives innovation. Is it part of human nature to pursue innovation? Or is it not a naturally occurring phenomenon? He makes the case that innovation is not inevitable:
The more I study the lives of British innovators, the more convinced I am that innovation is not in human nature, but is instead received. People innovate because they are inspired to do so – it is an idea that is transmitted. And when people do not innovate, it is often simply because it never occurs to them to do so. Incentives matter too, of course. But a person needs to at least have the idea of innovation – an improving mentality – before they can choose to innovate, before they can even take the costs and benefits of innovation into account.
Since I've recently been reading René Girard's work on mimetic theory, I wonder if there isn't a layer here related to mimicry and how humans are driven to copy one another. Does "innovation culture" perhaps have little to do with what is or isn't in our nature? The question is: what makes people explore and seek out new ways of doing things, versus accepting their lot, content with the status quo of life as it is?
Of course incentives can drive inventive behavior; if a person perceives a payoff for thinking up a new way to move goods, smelt copper, or build tools, they'll be driven to tread new ground and look for different ways to do things. They'll experiment, they'll try, they'll fail, and they'll sometimes succeed, earning back all their losses and then some.
Is there a way to look at this through a Girardian lens? Incentives have been around forever, since humans could watch, absorb their surroundings, and see what is and isn't working for members of their immediate families, tribes, and communities. But then why was there so little technological innovation until the last 500 years? And most of that since the Industrial Revolution?
Maybe localized incentive wasn't enough. Maybe the town blacksmith coming up with a slightly different tool or technique was not enough evidence for his compatriots to adopt his improvements. At small scales, without reach into a wider network, resistance to change could be harder to overcome. Maybe there had to be a critical mass of incentive as evidence of how to improve tools, processes, and systems. Perhaps innovation rewarded at large enough scale, combined with wider spread of knowledge through new communication channels (the printing press, the telegraph), is what triggered the industrial age.
We've clearly shifted to a culture (today a global one) where innovation is known, highly valued, promoted, and encouraged.
Part of me believes that Howes is right, that the status quo-as-equilibrium norm is the default state of the world. But that still doesn't explain the massive phase shift at around 1700. If acceptance of normalcy is human nature, what made us suddenly break out on a hockey-stick run of innovative change for the last 300 years?
I wonder if it wasn't a function of critical mass — an aggregation of enough successful innovation happening over short spans of time, in a few select geographies, combined with communication tools to spread the word, and enhanced by the human tendency to mimic others. The viral spread of knowledge and our mimetic desires suddenly broke us out of stasis into a new world where invention is now all around us, encouraged and rewarded much more than it was for the previous 20,000 years.
I delved back into the archives of these mixtapes The Rub put together, starting back in 2007. When these came out I downloaded the mp3s, and for a bunch of them I edited them into individual tracks so I could get them on my iPod more easily. I still have them in my iTunes library after all these years.
This one from 1982 is particularly great, but the whole series is fantastic.
I ran across this interview with physicist David Deutsch, with his thoughts on Brexit. A lot of great stuff here on resilience, error correction, individualism vs. collectivism, Karl Popper, and Britain's first-past-the-post system.
On Roots of Progress, Jason Crawford is now diving into the history of agriculture, with an interesting change to his process for writing on the history of technological discovery.
In this series, he's approaching it with "the garage door up" — writing shorter-form posts in the open as he studies things like the stages of agriculture, where enclosures come from, and other concepts.
My goals are: to bring to the surface more of my half-formed thoughts, by forcing myself to write about them; to create a new type of content for you, my audience; to model good epistemic norms; and to get early pointers, references, feedback—and pushback.
Again, this is an experiment. Risks: lowering signal-to-noise ratio; overwhelming some parts of my audience with too much content. If you don't want to read a bunch of shorter, more informal posts, feel free to skim/skip them and just read my occasional long-form comprehensive summaries, which I will continue to write every few weeks or so.
Morgan Housel draws similarities between the current crisis and global response to World War II:
Pandemics kill people and recessions ruin people. Saying they have silver linings is a step too far.
But I wonder if the best map we have that tells us what to expect next is the kind of extreme cooperation, solidarity, and empathy we last saw in the 1940s.
And I wonder if we'll look back at COVID-19 as one of the worst things to happen to us, yet triggering something positive that couldn't be achieved any other way.
Physicians hang diplomas in their waiting rooms. Some fishermen mount their biggest catch. Downstairs in Westborough, it was pictures of computers.
Over the course of a few decades beginning in the mid-40s, computing moved from room-sized mainframes with teletype interfaces to connected panes of glass in our pockets. At breakneck speed, we went from the computer being a massively expensive, extremely specialized tool to a ubiquitous part of daily life.
During the 1950s — the days of Claude Shannon, John von Neumann, and MIT's Lincoln Lab — a "computer" was a batch processing system. Models like the EDVAC were really just massive calculators. It would be another decade before the computer would be thought of as an "interactive" tool, and even longer before that idea became mainstream.
The 60s saw the rise of IBM and its mainframe systems. Having moved from time clocks and tabulating machines into computing, IBM pushed its massive resources into the mainframe market. The S/360 dominated the computer industry until the 70s (and further on with the S/370), when the minicomputer emerged as an interim phase between mainframes and what many computer makers were pursuing: a personal, low-cost computer.
The emergence of the minicomputer should be considered the beginning of the personal computer revolution. Before that, computers were only touched by trained operators — they were too complex and expensive for students, secretaries, or hobbyists to use directly. Minis promised something different: a machine that a programmer could use interactively. In 1965, DEC shipped the first successful mini, the PDP-8. From then on, upstarts were sprouting up all over the country to get into the computer business.
The DEC PDP-8
One of those companies was Data General, a firm founded in 1968 and the subject of Tracy Kidder's book, The Soul of a New Machine. A group of disaffected DEC engineers, impatient with the company's strategy, left to form Data General and attack the minicomputer market. Founder Edson de Castro, formerly the lead engineer on the PDP-8, thought there was opportunity that DEC was too slow to capitalize on with their minis. So DG designed and brought to market their first offering, the Nova. It was an affordable, 16-bit machine designed for general computing applications, and it made DG massively successful in the growing competitive landscape. The Nova and its successor sold like gangbusters into the mid-70s, when DEC brought the legendary VAX "supermini" to market.
DEC's announcement of the VAX and Data General's flagging performance in the middle of that decade provide the backdrop for the book. Kidder's narrative takes you inside the company as it battles for a foothold in the mini market, not only against DEC and the rest of the computer industry, but also with itself.
The VAX was set to be the first 32-bit minicomputer, an enormous upgrade from the prior generation of 16-bit machines. In 1976, Data General spun up a project codenamed "Fountainhead," their big bet to develop a VAX killer, which would be headquartered in a newly-built facility in North Carolina. But back at their New England headquarters, engineer Tom West was already at work leading the Eclipse team in building a successor. So the company ended up with two competing efforts to create a next-generation 32-bit machine.
Data General's Eclipse S230
The book is the story of West's team as they toil with limited company resources against the clock to get to market with the "Eagle" (as it was then codenamed) before the competition, and before Fountainhead could ship. As the most important new product for the company, Fountainhead had drawn away many of the best engineers, who wanted to be working on the company's flagship product. But the engineers who stayed behind weren't content to iterate on old products; they wanted to build something new:
Some of the engineers who had chosen New England over FHP fell under West's command, more or less. And the leader of the FHP project suggested that those staying behind make a small machine that would solve the 32-bit, logical-address problem and would at the same time exhibit a trait called "software compatibility."
Some of those who stayed behind felt determined to build something elegant. They designed a computer equipped with something called a mode bit. They planned to build, in essence, two different machines in one box. One would be a regular old 16-bit Eclipse, but flip the switch, so to speak, and the machine would turn into its alter ego, into a hot rod—a fast, good-looking 32-bit computer. West felt that the designers were out to "kill North Carolina," and there wasn't much question but that he was right, at least in some cases. Those who worked on the design called this new machine EGO. The individual initials respectively stood one step back in the alphabet from the initials FHP, just as in the movie 2001 the name of the computer that goes berserk—HAL—plays against the initials IBM. The name, EGO, also meant what it said.
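The mode-bit scheme Kidder describes amounts to a single flag that selects between two machine personalities. Here is a hypothetical sketch of the concept; the widths and names are invented for illustration, not Data General's actual design:

```python
# A hypothetical sketch of the "mode bit" idea: one flag switches the machine
# between two personalities. Widths and names are illustrative only, not
# Data General's actual design.

class DualModeCPU:
    def __init__(self) -> None:
        self.mode32 = False  # the mode bit: False = 16-bit Eclipse, True = 32-bit alter ego

    def effective_address(self, addr: int) -> int:
        # In 16-bit mode, addresses wrap at 2^16; in 32-bit mode, at 2^32.
        mask = 0xFFFFFFFF if self.mode32 else 0xFFFF
        return addr & mask

cpu = DualModeCPU()
print(hex(cpu.effective_address(0x12345678)))  # 0x5678 -- the regular old Eclipse
cpu.mode32 = True                              # flip the switch...
print(hex(cpu.effective_address(0x12345678)))  # 0x12345678 -- the "hot rod"
```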
What proceeded was a team engaged in long hours, nights and weekends, and hard iteration on a new product to race to market before their larger, much better funded compatriots down south. As West described it to his team, it was all about getting their hands dirty and working with what they had at hand — the definition of the scrappy upstart:
West told his group that from now on they would not be engaged in anything like research and development but in work that was 1 percent R and 99 percent D.
The pace and intensity of technology companies became culturally iconic during the 1990s with that decade's tech and internet boom. The garage startup living in a house together, working around the clock to build the product, became a signature of the Silicon Valley lifestyle. But the seeds of those trends were planted back in the 70s and 80s, and on display with the Westborough team and the Eagle (which eventually went to market as the Eclipse MV/8000). Kidder spent time with the team on-site as they were working on the Eagle project, providing an insider's perspective of life in the trenches with the "Hardy Boys" (who made hardware) and "Microkids" (who wrote software). He observes the team's engineers as they horse-trade for resources. This was a great anecdote, a testament to the autonomy the young engineers had to get the job done however they could manage:
A Microkid wants the hardware to perform a certain function. A Hardy Boy tells him, "No way—I already did my design for microcode to do that." They make a deal: "I'll encode this for you, if you'll do this other function in hardware." "All right."
If you've ever seen the TV series Halt and Catch Fire, this book seems like a direct inspiration for the Cardiff Electric team in that show trying to break into the PC business. The Eagle team could represent any of the scrappy startups from the 2000s.
It's a surprisingly approachable read given its heavy focus on engineers and the technical nature of their work in designing hardware and software. The book won the Pulitzer in 1982, and has become a standard on the shelves of both managers and engineers. The Soul of a New Machine sparked a deeper interest for me in the history of computers, which has led to a wave of new reads I'm just getting started on.
In those days, you could always count on business products to have sufficiently boring names. …
This is a presentation from Fernando Corbató from the 1990 ACM Turing Award lectures. Corbató was one of the creators of both the Compatible Time-Sharing System (CTSS) at MIT and Multics, the operating system that influenced Unix.
Fernando Corbató at MIT
He describes the challenges in those programs with their novel approaches: you always encounter failure states when pushing the edge. His takeaways from his experience building "ambitious systems":
First, the evolution of technology supports a rich future for ambitious visions and dreams that will inevitably involve complex systems.
Second, one must always try to learn from past mistakes, but at the same time be alert to the possibility that new circumstances require new solutions.
And third, one must remember that ambitious systems demand a defensive philosophy of design and implementation. Or in other words, "Don't wonder if some mishap may happen, but rather ask what one will do about it when it does occur."
I love these stories from the early days of computers. Already the creators were encountering problems similar to those we still deal with today:
One important consequence of developing CTSS was that for the first time users had persistent on-line storage of programs and data. Suddenly the issues of privacy, protection and backup of information had to be faced. Another byproduct of the development was that because we operated terminals via modems, remote operation became the norm. Also the new-found freedom of keeping information on-line in the central file system suddenly made it especially convenient for users to share and exchange information among themselves.
And there were surprises too. To our dismay, users who had been enduring several hour waits between jobs run under batch processing, were suddenly restless when response times were more than a second. Moreover many of the simplifying assumptions that had allowed CTSS to be built so simply such as a one level file system, suddenly began to chafe. It seemed like the more we did, the more users wanted.
Boy, does that sound familiar!
Here's a version from the MIT website with his slides included.
In 1972 at Xerox PARC, Butler Lampson wrote this memo to the Xerox leadership making the case to produce the Alto computer from Chuck Thacker's original design. Amazing to think how significant this piece of communication was in the subsequent progress on personal computers and the internet.
At this stage in '72, the ARPANET had only been live a couple years, and Bob Metcalfe's Ethernet design was still in the future. And the closest thing to a personal computer at this point was Wes Clark's LINC, designed during his time at Lincoln Lab. Two of the critical points Lampson used as justification concerned networking potential and personal computing:
Distributed computing. We can very easily put in an Aloha-like point-to-point packet network between Alto's, using a coax as the ether (or microwave with a repeater on a hill for home terminals). We can then do a large variety of experiments with dozens of machines. It is easy to try experiments which depend on the independence of the participants as well as those which use specialized components which must cooperate to accomplish anything. In particular, we can set up systems in which each user has his own files and communications is done solely for the interchange of sharable information, and thus shed some light on the long-standing controversy about the merits of this scheme as against centralized files.
Personal computing. If our theories about the utility of cheap, powerful personal computers are correct, we should be able to demonstrate them convincingly on Alto. If they are wrong, we can find out why. We should, for example, be able to satisfy heavy Lisp users such as Warren and Peter with an Alto. This would also take a big computing load away from Maxc. It should also be quite easy to simulate the hardware configuration of other proposed personal computers (e.g., different memory hierarchies) and thus to validate those designs. This is important because more compact machines will require a much larger investment in engineering development and more precise optimization of the memory system.
The "Maxc" refers to PARC's emulation of the PDP-10.
Pivotal moments can come down to a single well-articulated idea.
Today on the nerdy computer history feed, we've got a 1982 video from Bell Labs: The UNIX System: Making Computers More Productive.
Most of the video has Brian Kernighan explaining the structure of UNIX and why it's different from its contemporary operating systems. I should do more work with the keyboard in my lap and my feet on the desk.
Navigating a Linux shell looks almost identical to this today, 50 years later.
I liked this quote from John Mashey, a computer scientist who worked on UNIX at Bell:
Software is different from hardware. When you build hardware and send it out, you may have to fix it because it breaks, but you don't demand, for example, that your radio suddenly turn into a television. And you don't demand that a piece of hardware suddenly do a completely different function, but people do that with software all of the time. There's a continual demand for changes, enhancements, new features that people find necessary once they get used to a system.
Continuing my dive into the history of computers, I ran across this extended, detailed article covering the development and boom of the minicomputer industry.
A technical piece on restoring Alan Kay's Xerox Alto, which he donated to Y Combinator. An amazing piece of technology history, and one that inspired so many future developments in computing — graphical user interfaces, WYSIWYG text editing, bitmapped graphics, the mouse, and Ethernet for connectivity.
Xerox built about 2000 Altos for use in Xerox, universities and research labs, but the Alto was never sold as a product. Xerox used the ideas from the Alto in the Xerox Star, which was expensive and only moderately successful. The biggest impact of the Alto was in 1979 when Steve Jobs famously toured Xerox and saw the Alto and other machines. When Jobs saw the advanced graphics of the Alto, he was inspired to base the user interfaces of the Lisa and Macintosh systems on Xerox's ideas, making the GUI available to the mass market.
The specification for Ethernet was proposed in 1973 by Bob Metcalfe as a medium to connect the expanding network of computers at Xerox PARC. This was a schematic he drew as part of the memo proposing the technology to connect the machines together:
PARC was installing its own Xerox Alto, the first personal computer, and EARS, the first laser printer. It needed a system that would allow additional PCs and printers to be added without having to reconfigure or shut down the network. It was the first time that computers were small enough for hundreds to be in the same building, and the network had to be fast to drive the printer.
Metcalfe circulated his plan in a memo titled "Alto Ethernet." It contained a rough schematic drawing and suggested using coaxial cable for the connections and using data packets like Hawaii's AlohaNet or the Defense Department's Arpanet. The system was up and running Nov. 11, 1973.
It's amazing how simply many foundational technologies start out: a basic comms medium meant to connect computers to a shared printer. Now the same tech is the backbone of almost every local network.
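The Aloha-style packet scheme the memo borrowed has a simple core: stations share one medium, and when two transmit at once they collide, wait a random interval, and retry with a growing backoff window. A toy simulation of that idea (my illustration, not anything from Metcalfe's memo):

```python
import random

# A toy simulation of the Aloha-style idea Ethernet borrowed: stations share
# one medium, collide when they pick the same transmission slot, and back off
# into a window that doubles after every collision.

def pick_slot(attempt: int) -> int:
    """Choose a slot from a window that grows exponentially with each collision."""
    return random.randrange(2 ** min(attempt + 1, 10))

def rounds_until_clear(stations: int = 2) -> int:
    """Count rounds until every station lands on a distinct slot."""
    attempt = 0
    while True:
        slots = [pick_slot(attempt) for _ in range(stations)]
        if len(set(slots)) == stations:  # no collision this round
            return attempt + 1
        attempt += 1                     # collision: widen the window, retry

print(rounds_until_clear())  # usually just a few rounds for two stations
```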
This tool lets you plunge back into computing history and read the RFCs published over the years since the early days of the ARPANET. I've been reading Where Wizards Stay Up Late, which tells the interesting story behind how RFCs came to be the medium for proposing standards on the internet.
A few notable ones:
Steve Crocker's RFC 1 on IMP software (ARPANET's "Interface Message Processors", basically dedicated computers for network communication queueing)
A great annotated Twitter thread from Steven Sinofsky, who was leading the launch of Windows 7 when the iPad was announced.
19/ The iPad and iPhone were soundly existential threats to Microsoft's core platform business. Without a platform Microsoft controlled that developers sought out, the soul of the company was "missing."
20/ The PC had been overrun by browsers, a change 10 years in the making. PC OEMs were deeply concerned about a rise of Android and loved the Android model (no PC maker would ultimately be a major Android OEM, however). Even Windows Server was eclipsed by Linux and Open Source.
21/ The kicker for me, though, was that keyboard stand for the iPad. It was such a hack. Such an obvious "objection handler." But it was critically important because it was a clear reminder that the underlying operating system was "real"…it was not a "phone OS".
A fun story from Jimmy Maher about the 1991 partnership with IBM that moved Apple from the Motorola 88000 chips to PowerPC. It was a savvy deal that kept the Macintosh (and Apple) alive and kicking long enough to bridge into their transition back to Steve Jobs's leadership, and the eventual transition of the Mac lineup to Intel in 2006.
While the journalists reported and the pundits pontificated, it was up to the technical staff at Apple, IBM, and Motorola to make PowerPC computers a reality. Like their colleagues who had negotiated the deal, they all got along surprisingly well; once one pushed past the surface stereotypes, they were all just engineers trying to do the best work possible. Apple's management wanted the first PowerPC-based Macintosh models to ship in January of 1994, to commemorate the platform's tenth anniversary by heralding a new technological era. The old Project Cognac team, now with the new code name of "Piltdown Man" after the famous (albeit fraudulent) "missing link" in the evolution of humanity, was responsible for making this happen. For almost a year, they worked on porting MacOS to the PowerPC, as they'd previously done to the 88000. This time, though, they had no real hardware with which to work, only specifications and software emulators. The first prototype chips finally arrived on September 3, 1992, and they redoubled their efforts, pulling many an all-nighter. Thus MacOS booted up to the desktop for the first time on a real PowerPC-based machine just in time to greet the rising sun on the morning of October 3, 1992. A new era had indeed dawned.
The revenue downturn IBM plummeted into during the Wintel explosion of the early 90s was stunning. Just look at these numbers:
In 1991, when IBM first turned the corner into loss, they did so in disconcertingly convincing fashion: they lost $2.82 billion that year. And that was only the beginning. Losses totaled $4.96 billion in 1992, followed by $8.1 billion in 1993. IBM lost more money during those three years alone than any other company in the history of the world to that point; their losses exceeded the gross domestic product of Ecuador.
Google Maps just had its 15th birthday. This post from Elizabeth Reid, one of the original team on Maps back in 2005, reflects on the history of the product from its first iteration.
On Feb 8, 2005, Google Maps was first launched for desktop as a new solution to help people "get from point A to point B." Today, Google Maps is used by more than 1 billion people all over the world every month.
It was the early days of Web 2.0, and Google's launch of the Maps API was one of the keys to the "mashup" movement that sparked a new wave of startups.
As a geographer, I use Google Maps and Earth every day, both for fun and work. Maps is likely up there with Wikipedia at the top of my web visit history over the last 15 years.
Hard to believe they launched Street View all the way back in 2007. Today it still surprises me how much data has been collected, and the amazing feeling of zooming into any place to see it from the ground level.
As I've been reading more into the history of technology, specifically computers and the Internet, I'll go on side trails through Wikipedia or the wider 'net back to many of the source papers that were the seeds of certain innovations.
I've read about the IBM 700 series of mainframes, Vannevar Bush's seminal piece on a "memex" device (precursor idea to hypertext), and Claude Shannon's original work on information theory.
The latest gold mine I've found is on YouTube. I created a "Tech History" playlist where I've been logging clips and documentaries on various bits of computer history. Click the icon top-right to see all the videos in the list.
This is an essay from EconTalk's Russ Roberts with some takeaways from Adam Smith's The Theory of Moral Sentiments. It's a book on topics like altruism and what drives human morality, and it's clear that much of what drives us is unmeasurable: satisfaction, respect, honor, meaning. When economists try to design policy taking into account only what can be measured, a massive piece of the problem is missing. Roberts, on the unmeasurable:
But not everything that is important can be quantified. I worry that as economists, we too often are like the drunk at 1 am looking for his keys under the glare of a streetlight. You go over to help and when you fail to find the keys you ask the drunk if he's sure he lost them here. Oh no, he responds. I'm not sure where I lost them. But the light's better here.
Our natural desire to focus where the light is brightest, where the data are available, where things can be quantified, can cause us to miss the most important issues to be taken into account.
On the impacts of minimum wage increases, something for which we have data on both total wages and net job gains and losses, but not so much on other non-quantifiable effects:
By cutting off the bottom rungs of the employment ladder for the least skilled among us, you get rid of the feeling a person has of agency, of pride, of excellence, mastery, and a sense of belonging. You've made that person feel unlovely and unloved. It's the wrong way to help poor people it seems to me. But almost no one talks about these intangible impacts of the minimum wage on the people who struggle the most to become productive workers.
There isn't a variable for dignity in the data set.
J.C.R. Licklider's seminal 1960 paper on what would eventually become the personal computer.
Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.
Venkatesh Rao has assembled a most compelling explanation for how the internet polarization machine works:
The semantic structure of the Internet of Beefs is shaped by high-profile beefs between charismatic celebrity knights loosely affiliated with various citadel-like strongholds peopled by opt-in armies of mooks. The vast majority of the energy of the conflict lies in interchangeable mooks facing off against each other, loosely along lines indicated by the knights they follow, in innumerable battles that play out every minute across the IoB.
Almost none of these battles matter individually. Most mook-on-mook contests are witnessed, for the most part, only by a few friends and algorithms, and merit no overt notice in either Vox or Quillette. Beyond a local uptick in cortisol levels, individual episodes of mook-on-mook violence are of no consequence.
I have a working draft post on this topic for sometime in the future. This is one of my favorites from the Stratechery archives — on corporate cultures and how they impact company strategy:
As with most such things, culture is one of a company's most powerful assets right until it isn't: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so.
The Slate Star Codex review of Turchin and Nefedov's Secular Cycles, which seeks to understand patterns in technological and social development, and underlying causes for expansion and stagnation periods.
I'm currently reading the fantastic book The Dream Machine, a history of the creation of personal computers, and a biography of this man, JCR Licklider. This is a talk from an ACM conference in 1986 where he discusses his work on interactive computing. A wonderful little bit of history here.
This is a neat clip from Walt Disney's Disneyland TV series. Wernher von Braun explains the future technology that'll take us to the Moon, in 1955, several years before the Mercury program even began.
Maksim Stepanenko's notes on Will and Ariel Durant's The Lessons of History. I've got this one on the shelf, and these nuggets make me want to pick it up now to read.
While working on some Lego sets with the kids, I wanted to know if some extra parts we had were from the sets they got, since everything had gotten mixed up. Since it'd been years since I had any Legos, I thought there might be databases out there to look up parts by number. Lo and behold, I found this one where you can input an individual part ID and also find out what other sets contain the same part. A tool that I would've eaten up as a kid, cataloging our Legos and searching for "custom" parts.
You may have thought the entire 14th century was pretty bad, or maybe 1918 with its flu pandemic and millions of war casualties, but how about the 6th:
A mysterious fog plunged Europe, the Middle East, and parts of Asia into darkness, day and night—for 18 months. "For the sun gave forth its light without brightness, like the moon, during the whole year," wrote Byzantine historian Procopius. Temperatures in the summer of 536 fell 1.5°C to 2.5°C, initiating the coldest decade in the past 2300 years. Snow fell that summer in China; crops failed; people starved. The Irish chronicles record "a failure of bread from the years 536–539." Then, in 541, bubonic plague struck the Roman port of Pelusium, in Egypt. What came to be called the Plague of Justinian spread rapidly, wiping out one-third to one-half of the population of the eastern Roman Empire and hastening its collapse, McCormick says.
That sort of worldwide famine caused by devastating volcanic eruptions would've been impossible to deal with. And the Plague of Justinian was no small thing either, thought to have killed up to 25% of the global population.
The Remembrance of Earth's Past trilogy (translated by Ken Liu and featured here) is one of the best sci-fi works there is, regardless of origin or era. I also read and enjoyed Liu's Paper Menagerie collection of short stories. I didn't realize how involved he was personally in bringing so much new material here, and introducing so many Chinese authors to wider audiences:
He has found sci-fi stories in unusual corners of the internet, including a forum for alumni of Tsinghua University. Chinese friends send him screenshots of stories published on apps that are hard to access outside of China. As an emissary for some of China's most provocative and boundary-breaking writers, Liu has become much more than a scout and a translator. He's now a fixer, an editor and a curator — a savvy interpreter who has done more than anyone to bridge the imagination gap between the world's current, fading superpower and its ascendant one.
His job as a translator, given the sensitivities of the material and the players involved, is a complex one:
"It's a very tricky dance of trying to get the message that they're trying to convey out, without painting the writers as dissidents," Liu told me over coffee one day, as we sat in the kitchen of his home in Massachusetts. "A lot of Chinese writers are very skilled at writing something ambiguously, such that there are multiple meanings in the text. I have to ask them, how explicit do you want me to be in terms of making a certain point here, because in the original it's very constrained, so how much do you want me to tease out the implications you're making? And sometimes we have a discussion about exactly what that means and how they want it to be done."
We've not scratched the surface much on Slack's Shared Channels feature, but where we have, it definitely makes staying plugged in with important tangential networks (like customers and partners) dead simple and much more engaging.
This network analysis uses some interesting visualizations to show the topology of the network, with its subnetworks creating a connection graph of communication pipes.
Also on an hourly basis, these mini-networks from the outer ring get sucked into the internal mega-network, as connections are formed between organizations on the inside and the outside. The overall result is a roiling sea of proto-networks surrounding an ever-expanding network of super-connected teams.
One of the great things about YouTube is being able to find gems of history like Doug Engelbart's "Mother of All Demos" presentation from 1968. How amazing it must've been to see something like this live, 50 years ago:
The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.
I love this story about Access and how it's still hanging on with a sizable user base after almost a decade of neglect by its parent. It goes to show that there are still gaps in the market for software being filled by decade-stale applications. Getting users to unlearn behaviors is much harder than giving them an Airtable, Webflow, or Fulcrum — too much comfort and muscle memory to overcome.
Its long lifespan can be attributed to how well it serves the power user, as well as how simple it makes creating a relational database in so few steps ("it just works"):
Power users can be a dangerous group to help. With a little knowledge, you can make a very powerful weapon for shooting yourself in the foot. But there is a serious untapped potential here. Give a technical person a way to solve their problems that doesn't involve writing pages of code, and they can make a difference — automating small tasks, managing their own islands of data, and helping to keep their local environment organized and effective.
Today, there remains a hunger for codeless or code-light tools. Motivated people want to do their jobs without paying expensive professionals for every semicolon. But so far the only offerings we've given them are a VBA macro language from a generation ago and pricey tools like PowerApps that only work if your business signs up for a stack of Microsoft cloud products.
The low/no-code universe of applications has been expanding over the last couple of years. We're riding this wave ourselves and have been betting on it since 2011. In the market for data tools, there's a spectrum of user capability: from the totally non-technical on one end (Excel might be a stretch for these folks) to the developer user on the other (where's my SQL Server?). What stories like this demonstrate is that the "power user" middle is a lot wider than most of us have predicted.
Some top-notch baseball geekery, with Jason Snell comparing the graphics overlays from Fox, MLB Network, and ESPN's telecasts. I've thought about this, too, but have to give it to the ESPN one, with Fox right up there.
Scott Alexander's review is an excellent in-depth look at this book on meditation. I'm still making my way through it, but it's definitely a fantastic soup-to-nuts guide so far.
To an objective observer in the 16th century, which site would have been the best bet to predict the flowering of the Industrial Revolution, based on contemporaneous evidence?
In fact, England in 1550 was not even close to being Europe's preeminent naval power. It was Hispania, not Britannia, who ruled the waves. Even on maps made in England and for the use of the English government, the ocean off the west coast of England and to the south of Ireland was labelled The Spanish Sea. The foreign maps agreed. The North Sea, too, was the Oceanus Germanicus, or German Sea. It gives an idea of who controlled what. And England of course came close to catastrophe in 1588, when the Spanish decided to launch an invasion — it was largely only stopped by the weather. Despite having always been on an island, English policymakers only seriously began to appreciate Britain's geographical potential for both defence and commerce in the late sixteenth century.
It took until the mid-17th century for that promise to start taking hold in England. By then, its growth and expansion had begun overtaking its neighbors.
Why does it take so long for new technologies with seemingly obvious positive benefits to get adopted? This example of the speed with which the polio vaccine was adopted and administered is incredible, but an outlier:
The polio vaccine is an outlier in the history of new technology because of the speed at which it was adopted. It is perhaps the lone exception to the rule that new technology has to suffer years of ignorance before people take it seriously. I don't know of anything else like it.
You might think it was quickly adopted because it saved lives. But a far more important medical breakthrough of the 20th century — penicillin — took almost 20 years from the time it was discovered until it was used in hospitals. Ten times as many died in car accidents as from polio in the early 1950s, but it took half a century for seat belts to become a standard feature in cars.
The point of the post is to highlight why things usually go in the other direction, with new technologies taking years or decades to be adopted — sometimes not even until the next generation of children grows up with them. Wouldn't it be great to see more positive progress embraced like the cure for polio?
Since I've been following the progress studies movement and Jason Crawford's Roots of Progress blog, it was cool to see video of his talk on the history of steel from a San Francisco meetup a few weeks ago.
Through a Twitter thread I ran across this running catalog of resources on the history of the tech industry — books, articles, movies, and more. A definitive list of content. There are some great recommendations here that I'd never heard of, especially in the books and podcasts sections.
I've got a copy of The Dream Machine that I'm planning on digging into next, a history of personal computing and biography of JCR Licklider.
I'm a historian of innovation. I write mostly about the causes of Britain's Industrial Revolution, focusing on the lives of the individual innovators who made it happen. I'm interested in everything from the exploits of sixteenth-century alchemists to the schemes of Victorian engineers. My research explores why they became innovators, and the institutions they created to promote innovation even further.
This connects nicely with the recent "progress studies" movement.
This is another great one from last year on Jason Crawford's Roots of Progress project, in which he dives into advancements in human progress. In this post he covers a brief background on cement, one of the oldest of mankind's technological discoveries:
Stone would be ideal. It is tough enough for the job, and rocks are plentiful in nature. But like everything else in nature, we find them in an inconvenient form. Rocks don't come in the shape of houses, let alone temples. We could maybe pile or stack them up, if only we had something to hold them together.
If only we could—bear with me now as I indulge in the wildest fantasy—pour liquid stone into molds, to create rocks in any shape we want! Or—as long as I'm dreaming—what if we had a glue that was as strong as stone, to stick smaller rocks together into walls, floors and ceilings?
This miracle, of course, exists. Indeed, it may be the oldest craft known to mankind. You already know it—and you probably think of it as one of the dullest, most boring substances imaginable.
From reading this I added Concrete Planet to the reading list. I'm overdue for some more history of technology reading this year.
On the power of starting with no baggage, sunk costs, or past poor decisions:
Part of the reason the economy recovered slowly after the financial crisis was that businesses, spooked by the recession, relentlessly reviewed their costs. "I just see business after business after business which has rationalized so that it can protect its balance sheet and earning power while utilizing fewer people," Charlie Munger said in 2010.
So began a new life for the concept of zero-based budgeting.
Zero-based budgeting is the idea that each year's budget should be created from scratch, rather than using the previous year's budget as a baseline. Jimmy Carter made it famous during the 1970s — the last time business and government budgets were as strained as they were in recent years. The number of large companies using zero-based budgeting has increased 50% since 2008, according to Deloitte. Campbell Soup, Kellogg, and Kraft now use it.
It's based on a profoundly sane idea: Things change and evolve, so the phrase "this is how we've always done it" should be replaced with "what do we need right now?" A full reset, unburdened by the past.
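The mechanical difference between the two approaches is tiny, even if the philosophy isn't. A minimal sketch of the contrast, using made-up numbers:

```python
# A minimal sketch with made-up numbers: incremental budgeting anchors on last
# year's figures, zero-based budgeting rebuilds from zero and funds only the
# lines someone justified this year.

last_year = {"travel": 100_000, "software": 50_000, "print_ads": 80_000}

# Incremental: last year plus an across-the-board adjustment.
incremental = {line: round(amount * 1.03) for line, amount in last_year.items()}

# Zero-based: every line starts at zero; only justified requests get funded.
justified = {"travel": 40_000, "software": 60_000}  # nobody made a case for print ads
zero_based = {line: justified.get(line, 0) for line in last_year}

print(incremental)  # {'travel': 103000, 'software': 51500, 'print_ads': 82400}
print(zero_based)   # {'travel': 40000, 'software': 60000, 'print_ads': 0}
```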
You can't (and shouldn't) always reboot at the first sign of trouble. We should weigh the costs and benefits of the full reset and use it when time and space permit:
There is a counter to this: The secret to investing is enduring uncomfortable situations, so selling at the first hint of imperfection is usually regretted. But its opposite — an unshakeable anchor to past decisions — is perhaps worse. The saying, "Our favorite holding period is forever," should be replaced with, "Our favorite holding period is until the facts change."
I saw this Nightline interview clip with Steve Jobs from a recent Steven Sinofsky post.
In this clip is his famous "bicycle for the mind" quote about the personal computer.
This is a 21st century bicycle that amplifies a certain intellectual ability that man has. And I think that after this process has come to maturity, the effects that it's going to have on society are going to far outstrip even those that the petrochemical revolution has had.
Hard to believe Jobs was this prescient at age 26, when computers were still considered to be hobbyist toys.
This is a fascinating article from a 2004 issue of Engineering & Science that investigates the "Tunnel of Samos", constructed on the eponymous Greek island 2500+ years ago.
One of the greatest engineering achievements of ancient times is a water tunnel, 1,036 meters (4,000 feet) long, excavated through a mountain on the Greek island of Samos in the sixth century B.C. It was dug through solid limestone by two separate teams advancing in a straight line from both ends, using only picks, hammers, and chisels. This was a prodigious feat of manual labor. The intellectual feat of determining the direction of tunneling was equally impressive. How did they do this? No one knows for sure, because no written records exist. When the tunnel was dug, the Greeks had no magnetic compass, no surveying instruments, no topographic maps, nor even much written mathematics at their disposal. Euclid's Elements, the first major compendium of ancient mathematics, was written some 200 years later.
Since the engineers of the time dug the tunnel from both directions simultaneously, meeting in the middle, it required advanced surveying and mathematical techniques to get the geometry just right. An error in elevation, direction, or position of even a couple of degrees or inches would mean widely missing the mark where the two headings were to join. The author investigates a few possible methods for how the ancient Greeks could've completed the feat. Only a few techniques and simple tools would've been required, and a research team was able to recreate a potential method. Amazing that with such primitive technology they could've completed such a monumental project.
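The candidate methods all boil down to measuring around the mountain what you can't measure through it. Here is a sketch of the traverse idea, one hypothesis the article explores (often credited to Hero of Alexandria); the leg lengths below are invented for illustration:

```python
import math

# A sketch of the traverse hypothesis: walk around the mountain in right-angle
# legs, sum the east/north components, and the net vector gives both crews the
# same heading to dig along from opposite ends. Leg lengths are invented.

legs = [(0, 300), (450, 0), (0, 250), (380, 0), (0, -120)]  # (east, north) in metres

east = sum(e for e, _ in legs)
north = sum(n for _, n in legs)

bearing = math.degrees(math.atan2(east, north))  # degrees east of north
distance = math.hypot(east, north)
print(f"dig on a bearing of {bearing:.1f} deg; portals are {distance:.0f} m apart")
```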
Roots of Progress has an interesting deep dive on why a (relatively) simple invention like the bicycle took so long, even though the principles behind its components had been understood for a long time. There's an interesting inventory of potential hypotheses about why it took until the late 1800s.
Early iterations of human-powered transport looked like inventors trying to replicate the carriage, with devices that looked like "horseless carriages": someone providing power, another person steering. The first breakthrough toward something that looked like a modern bicycle (at least in form factor) came from German inventor Karl von Drais, who modeled his design on the horse rather than the carriage:
The key insight was to stop trying to build a mechanical carriage, and instead build something more like a mechanical horse. This step was taken by the aforementioned Karl von Drais in the early 1800s. Drais was an aristocrat; he held a position as forest master in Baden that is said to have given him free time to tinker. His first attempts, beginning in 1813, were four-wheeled carriages like their predecessors, and like them failed to gain the support of authorities.
It seems cultural and economic factors make the most sense as explanations, versus technological or environmental ones:
In light of this, I think the deepest explanation is in general economic and cultural factors. Regarding economic factors, it seems that there needs to be a certain level of surplus to support the culture-wide research and development effort that creates inventions.
Like most breakthroughs viewed with the advantage of hindsight (and in this case 150 years of it), the invention seems so obvious we struggle to imagine why the people of the 18th or 19th centuries wouldn't have worked on the problem. Combine the non-obviousness with a lack of cultural motivation and an unfriendly economic environment, and it's not surprising it took so long.
I tried this out the other night on a run. The technique makes some intuitive sense: it should reduce impact, or at least level it out side to side. Surely to notice any result you'd have to do it consistently over distance. But I've had some right knee soreness whose origin I don't totally know, so I thought I'd start trying this out. I found it takes a lot of concentration to keep it up consistently. I'll keep testing it out.
A neat historical, geographical story from BLDGBLOG:
Briefly, anyone interested in liminal landscapes should find Snell's description of the Drowned Lands, prior to their drainage, fascinating. The Wallkill itself had no real path or bed, Snell explains, the meadows it flowed through were naturally dammed at one end by glacial boulders from the Ice Age, the whole place was clogged with "rank vegetation," malarial pestilence, and tens of thousands of eels, and, what's more, during flood season "the entire valley from Denton to Hamburg became a lake from eight to twenty feet deep."
Turns out there was local disagreement on flood control:
A half-century of "war" broke out among local supporters of the dams and their foes: "The dam-builders were called the 'beavers'; the dam destroyers were known as 'muskrats.' The muskrat and beaver war was carried on for years," with skirmishes always breaking out over new attempts to dam the floods.
Here's one example, like a scene written by Victor Hugo transplanted to New York State: "A hundred farmers, on the 20th of August, 1869, marched upon the dam to destroy it. A large force of armed men guarded the dam. The farmers routed them and began the work of destruction. The 'beavers' then had recourse to the law; warrants were issued for the arrest of the farmers. A number of their leaders were arrested, but not before the offending dam had been demolished. The owner of the dam began to rebuild it; the farmers applied for an injunction. Judge Barnard granted it, and cited the owner of the dam to appear and show cause why the injunction should not be made perpetual. Pending a final hearing, high water came and carried away all vestige of the dam."
This is something we launched a few months back. There's nothing terribly exciting about building SSO features in a SaaS product — it's table stakes for moving up in the world with customers. But for me personally it's a signal of success. Back in 2011, having customers large enough to need SAML seemed so far in the future. Now we're there, rolling it out for enterprise customers.
We took the kids over to Kennedy Space Center on Saturday on the way up to Jacksonville: a quick stopover in Titusville Friday night, then the morning over at the Cape.
I always loved visiting KSC when I was younger. We had the opportunity to go and see multiple launches over the years, including a couple of Space Shuttle launches. Visiting again brought back memories since they've got several things there that haven't changed much over the years. On the way in you get to walk through the Rocket Garden, which showcases some of the rockets of NASA's past programs, including the Redstone and Atlas rockets from Mercury, the Titan from Gemini, and the Saturn 1B from early Apollo lying prone in the back. Of course the kids loved the chance to sit inside the Mercury capsule.
There's a new (since my last visit) exhibit called "Journey to Mars" where they've got a nice presentation on the past and future of Mars exploration missions. The highlights here were the replicas of Sojourner, Spirit, Opportunity, and Curiosity. It's amazing in that display to see the increasing size and complexity of the robotics we've been able to successfully land and use on Mars. Makes you wonder what we'll do next.
One of the other classics from when we used to visit as kids is the astronaut memorial. There's nothing terribly spectacular about it, but it's an important reminder of the sacrifices made throughout the nation's research, experimentation, and launch attempts to explore space. We had to go check this out for nostalgia's sake. I still find it impressive, and it reminds me of the times when our manned space program was active and astronauts were like mini celebrities.
The last part we checked out was the new Space Shuttle exhibit with the old Atlantis orbiter on display. It reminded me of how advanced the shuttle really was for its time, as a technology of the early 80s, and also how large the orbiters were. In hindsight the whole thing seemed pretty impractical, but the set of dozens of exhibits serves as a reminder of all the amazing things that were enabled by routine trips to orbit on the Space Shuttle. It was cool to get to stand up close to something that went into space 33 times.
For our next visit I'd love to go out to the Saturn V exhibit, to which you have to take a bus tour. Maybe it'd be a better time then for the kids to get out on the deeper tour that takes you to the VAB and the launch pads. I always loved getting to see that stuff.
Another great piece from Morgan Housel, this one on what we can learn from history.
My personal favorite from the list: Progress happens too slowly for people to notice; setbacks happen too fast for people to ignore.
If you want to measure the progress of medicine, looking at the last year will do you little good. Any single decade won't do much better. But looking at the last 50 years will show something extraordinary — the age-adjusted death rate per capita from heart disease has declined more than 70% since 1965, according to the National Institute of Health. A 70% decline in heart-disease death is enough to save something like half a million American lives per year. Picture the population of Atlanta saved every year. But since that progress happened so slowly, it captures less attention than quick, sudden losses like terrorism or plane crashes. We could have a Hurricane Katrina five times a week, every week — imagine how much attention that would receive — and it would not offset the number of annual lives saved by the decline in heart disease in the last 50 years.
This post is part 3 in a series about my history in product development. Check out the intro in part 1 and all about our first product, Geodexy, in part 2.
Back in 2010 we decided to halt our development of Geodexy and regroup to focus on a narrower segment of the marketplace. With what we'd learned in our go-to-market attempt on Geodexy, we wanted to isolate a specific industry we could focus our technology around. Our tech platform was strong; we were confident in that. But at the peak of our efforts with taking Geodexy to market, we were never able to reach a state of maturity to create traction and growth in any of the markets we were targeting. Actually, targeting is the wrong word — truthfully, that was the issue: we weren't "targeting" anything because we had too many targets to shoot at.
We needed to take our learnings, regroup on what was working and what wasn't, and create a single focal point we could center all of our effort around — not just the core technology, but also our go-to-market approach, marketing strategy, sales, and customer development.
I don't remember the specific genesis of the idea (I think it was part internal idea generation, part serendipity), but we connected on the notion of field data collection for the property inspection market. So we launched allinspections.
That industry had the hallmarks of one ripe for us to show up with disruptive technology:
Low current investment in technology — Most folks were doing things on paper, with lots of transcribing and printing.
Lots of regulatory basis in the workflow — Many inspections are done as a requirement by a regulatory body. This meant consistent, widespread needs that crossed geographic boundaries, and an "always-on" use case for a technology solution.
Phased workflow with repetitive process and "decision tree" problems — a perfect candidate for digitizing (see the sketch after this list).
Very few incumbent technologies to replace — If there were competitors at all, they were Excel and Acrobat.
Smartphones ready to amplify a mobile-heavy workflow — Inspections of all sorts happen in situ somewhere in the field.
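On that "decision tree" point: the reason these inspections digitize so well is that each answer opens or prunes later questions, so the form logic can be encoded directly. A hypothetical sketch, with invented field names (not from allinspections):

```python
# A hypothetical sketch of the "decision tree" shape these inspection forms
# take: each answer opens or prunes later questions. Field names are invented
# for illustration.

FORM = {
    "roof_covering": {
        "question": "Roof covering type?",
        "ask_if": lambda answers: True,  # always asked
    },
    "roof_age": {
        "question": "Roof age in years?",
        "ask_if": lambda answers: answers.get("roof_covering") == "shingle",
    },
}

def next_questions(answers: dict) -> list:
    """Return the questions still to ask, given the answers collected so far."""
    return [
        spec["question"]
        for field, spec in FORM.items()
        if field not in answers and spec["ask_if"](answers)
    ]

print(next_questions({}))                            # ['Roof covering type?']
print(next_questions({"roof_covering": "shingle"}))  # ['Roof age in years?']
print(next_questions({"roof_covering": "tile"}))     # [] (tile skips the age question)
```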
While the market for facility and property inspections is immense, we opted to start on the retail end of the space: home inspections for residential real estate. There was a lot to like about this strategy for a technology company looking to build something new. We could identify individual early adopters, gradually understand what made their business tick, and index on capability that empowered them. There was no need immediately to worry about selling to massive enterprise organizations, which would've put a heavy burden on us to build "box-checking" features like hosting customization, access controls, single sign-on, and the like. We used a freemium model, which helped attract early usage, then shifted to a free-trial model later on after some early traction.
Overall the biggest driver that attracted us to residential was the consistency of the work. Anyone who's bought property is familiar with the process of getting a house inspected before closing, but that sort of inspection is low volume compared to those associated with insurance underwriting. Our first mission was this: to build the industry-standard tool for performing these regulated inspections in Florida — wind mitigation, 4-point, and roof certification. These were (and still are) done by the thousands every day. They were perfect candidates for us for the reasons listed above: simple, standard, ubiquitous, and required. There was a built-in market for automating the workflow around them and improving the data collected, which we could use as a beachhead to get folks used to using an app to conduct their inspections.
Our hypothesis was that we could apply the technology for mobile data collection we'd built in Geodexy and "verticalize" it around the specialty of property inspection, with features oriented around that problem set. Once we could spin up enough technology adoption for home inspection use cases at the individual level, we could then bridge into the franchise operations and institutions (even the insurance companies themselves) to standardize on allinspections for all of their work.
We had good traction in the early days with inspectors. It didn't take us long to connect with a half-dozen tech-savvy inspectors in the area to work with as guinea pigs to help us advance the technology. Using their domain expertise in exchange for usage of the product, we were able to fast-forward our understanding of the inspection workflow: from original request handling and scheduling, to inspecting on-site, to delivering the report to the customer. Within a year we had a pretty slick solution and 100 or so customers who swore by the tool for getting their work done.
But it didn't take long to run into friction. Once we'd exhausted the low-hanging fruit of the early adopter community, it became harder and harder to find more of the tech-savvy crowd willing to splash some money on something new and different. As you might expect, the community of inspectors we were targeting were not technologists. Many of these folks were perfectly content with their paperwork process and enjoyed working solo; they had no interest in building a true business around their operation or growing into a company with multiple inspectors covering wider geographies. Others were general contractors doing inspections as a side gig, so it wasn't even their core day-to-day job. With that kind of fragmentation, it was difficult to reach the economies of scale we needed to sell something at the price point where we needed to be. We had some modest success pursuing the larger nationwide franchise organizations, but our sales and onboarding strategy wasn't conducive to getting those deals beyond the small pilot stage. It was still too early for that. We wanted to get to B2B customer sizes and margins, but we were ultimately still selling a B2C application. Yes, a home inspector has a business that we were selling to, but the fundamentals of the relationship share far more in common with a consumer product relationship than a corporate one.
By early 2012 we'd stalled out on growth at the individual level. A couple of opportunities to partner with inspection companies on a comprehensive solution for carriers failed, partially for technical reasons, but also because of the immaturity of our existing market. We didn't have a reference base sizable enough to jump all the way up to selling 10,000 seats without enormous burden and too much overpromising on what we could do.
We shut down operations on allinspections in early 2012. We had suspected for a while that this would have to happen, so it wasn't a sudden decision. But it always hurts to walk away from something you poured so much time and energy into.
I think the biggest takeaway for me at the time, and in the early couple of years of success on Fulcrum, was how relatively little the specifics of your technology matter if you mess up the product-market fit and go-to-market steps in the process. The silver lining in the whole affair was (like many things in product companies) that there was plenty to salvage and carry on to our next effort. We learned an enormous amount about what goes into building a SaaS offering and marketing it to customers. Coming from Geodexy, where we never even reached the stage of having a real "customer success" process to deal with, allinspections gave us a jolt of appreciation for things like identifying the "aha moment" in the product, increasing usage, tracking feature usage to diagnose engagement gaps, and ultimately, getting on the same page as the customer when it comes to the final deliverable. It takes working with customers and learning the deep corners of the workflow to identify where the pressure points are in the value chain: the things that keep the customer up at night when they don't have a solution.
And naturally there was plenty of technology to bring forward with us to our next adventure. The launch of Fulcrum actually pre-dates the end of allinspections, which tells you something about how we were thinking at the time. We weren't thinking of Fulcrum as the "next evolution" of allinspections, necessarily, but we were thinking about going bigger while fixing some of the mistakes made a year or two prior. While most of Fulcrum was built ground-up, we brought along some code, but a whole boatload of lessons learned on systems, methods, and architecture that helped us launch and grow Fulcrum as quickly as we did.
Retrospectives like this help me think back on past decisions and process some of what we did right and wrong with some separation. That separation can be a blessing, making it possible to remove personal emotion or opinion from what happened and look at it objectively, so it can serve as a valuable learning experience. Sometime down the road I'll write about the next evolution that led to where we are today.
¹ Since the mid-2000s, all three of these inspection types have been required for insurance policies in Florida.
I loved this piece, a history of the spreadsheet from Steven Levy originally written in 1984.
It's a great retrospective that demonstrates how much impact spreadsheets had on business, even though we now consider them a fact of life and a given foundation of working with numbers on computers:
Ezra Gottheil, 34, is the senior product-design planner at Lotus. He shows up for work in casual clothes, and his small office is cluttered with piles of manuals and software. When I visited Gottheil he gave me a quick introduction to electronic spreadsheeting. Computer programs are said to use different "metaphors" to organize their task; a program might use the metaphor of a Rolodex, or a file cabinet. When you "boot" almost any spreadsheet program into your personal computer, you see little more than some letters running across the top of the display screen and some numbers running down the side. This serves to indicate the grid of a ledger sheet, the metaphor used by Lotus and other best-selling spreadsheets like VisiCalc, Multiplan, and SuperCalc. The "cursor," a tiny block of light on the screen that acts like a kind of electronic pencil, can be moved (by a touch of the computer keyboard) to any cell on the spreadsheet in order to "input" numbers or formulas. By placing in the cells either figures or formulas that adjust figures according to different variables, it is possible to duplicate the relationships between various aspects of a business and create a "model." The basic model for a restaurant, for example, would include expenses such as salaries, food and liquor costs, and mortgage or rent payments; revenues might be broken down into "bar" and "food," perhaps even further by specific dishes. Every week, the figures would be updated, the formulas reworked if necessary (perhaps the price of olive oil had risen), and the recalculated model would provide an accurate snapshot of the business.
Here we sit 30 years later, and the basics of the spreadsheet and its fundamental means of interaction have hardly changed.
It's interesting that tools like Observable are now taking some of the same principles of interactivity that we've all used in spreadsheets for decades and applying them to code: edit code or data and watch as your output dynamically changes.
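That recalculation loop is easy to demonstrate. Here's a toy sketch of the spreadsheet model in plain Python: cells hold either values or formulas, and reading a cell recomputes through whatever it depends on. The names are invented for illustration; this is neither a real spreadsheet engine nor Observable's API.

    class Sheet:
        def __init__(self):
            self.cells = {}

        def set(self, name, value):
            # A cell holds either a constant or a formula
            # (a function of the sheet itself).
            self.cells[name] = value

        def get(self, name):
            v = self.cells[name]
            # Formulas are recomputed on every read, so edits to the
            # cells they depend on show up immediately -- the
            # spreadsheet's "recalculate" behavior.
            return v(self) if callable(v) else v

    # The restaurant model from the quote, in miniature:
    s = Sheet()
    s.set("bar", 12_000)
    s.set("food", 30_000)
    s.set("expenses", 28_000)
    s.set("profit", lambda s: s.get("bar") + s.get("food") - s.get("expenses"))
    print(s.get("profit"))     # 14000
    s.set("expenses", 31_000)  # the olive oil got more expensive
    print(s.get("profit"))     # 11000

Real spreadsheets (and Observable) are smarter about this, tracking the dependency graph so only affected cells recompute, but the edit-and-watch loop is the same.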
One of my favorite tech figures, a16z's Steven Sinofsky, gives a history of "Clippy," the helpful anthropomorphic office supply from Microsoft Office. As the product leader of the Office group in the 90s, he offers some interesting background on how Clippy came to be. I found most fascinating the time-machine look back at what personal computing was like back then: how different it was to develop a software product in a world of boxed software.
Everyone makes fun of it today, but Clippy did presage the world of AI-powered "assistant" technology that everyone is now getting familiar with.
I started with the first post in this series back in January, describing my own entrance into product development and management.
When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack, everything was completely different. A few years back, Tony wrote up a retrospective post detailing the history of what led us down the path we took, and how Geodexy came to be:
After this experience, I realized there was a niche to carve out for Spatial Networks but I'd need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.
Tony's post was an excellent look back at the business origin of the product: the "why we decided to do it" piece. What I wanted to cover here was more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course, without real market feedback (as in, will someone actually buy what you've built, versus saying they would buy it one day), it's hard to truly gauge the success potential.
Back then, the modern web frameworks in use today, like Rails and its peers, existed but were few and not yet mature. It's astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile¹. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and back then that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation a long time ago. We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked native integration of richer data sources, primarily photos and GPS data. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn't make for a smooth user experience. We still needed to come up with a better way to make it happen, for ourselves, and certainly before we went to market touting the workflow advantages to other customers.
In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and short-circuiting a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which, looking back from 10 years on, sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.
What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. Those were still the early days of the App Store, and I don't recall how mature the Android Market (predecessor to Google Play) was back then, but we didn't have the resources to start off with two mobile clients anyway. We actually had a functioning BlackBerry app first, which tells you how different the mobile platform landscape looked a decade ago².
Geodexy's mobile app for iOS was, on the other hand, an excellent window into the potential that iOS development unlocked for us as a platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked: offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new wave of platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.
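The essential pattern in that mobile client, offline-first capture synced up through a REST API, is simple to sketch. Here's a minimal, hypothetical version in Python; the endpoint, field names, and storage details are invented for illustration and aren't Geodexy's actual implementation.

    import json
    import sqlite3
    import urllib.request

    db = sqlite3.connect("queue.db")
    db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, body TEXT)")

    def collect(record):
        # Save locally first, so field work never blocks on having a signal.
        db.execute("INSERT INTO pending (body) VALUES (?)", (json.dumps(record),))
        db.commit()

    def sync(api_url):
        # Push queued records; drop each one only after the server accepts it.
        for row_id, body in db.execute("SELECT id, body FROM pending").fetchall():
            req = urllib.request.Request(
                api_url, data=body.encode(), headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req) as resp:
                if resp.status in (200, 201):
                    db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        db.commit()

    collect({"form": "site_survey", "lat": 27.77, "lon": -82.64, "photos": 3})
    # sync("https://api.example.com/records")  # whenever a connection is available

Everything else in that generation of the stack (form designer, server API) builds around the same capture-then-sync loop.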
We didn't get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that would tie all of the technology we'd built to a single, albeit large, market: the property inspection business. That's what led us to launch allinspections, which I'll continue the story on later.
In an odd way, it's pleasing to think back on the challenges (or things we considered challenges) at the time and how they contrast with today. We focused so much attention on things that, in the long run, aren't terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that, I think, stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.
¹ As painful as this sounds, we actually had a decent tool built on WM. But the usability of it was terrible, which, if you can recall the time period, was par for the course for mobile applications of all stripes.
This is a great breakdown of the different elements of LiDAR technology, looking at three broad areas: beam direction, distance measurement, and frequencies. They compare the tech of 10 different companies in the space to see how each is approaching the problem.
Taking off from the Wikibooks project, this team is aiming to generate books from Wikipedia content using ML techniques.
Given the advances in artificial intelligence in recent years, is there a way to automatically edit Wikipedia content so as to create a coherent whole that is useful as a textbook? Enter Shahar Admati and colleagues at the Ben-Gurion University of the Negev in Israel. These guys have developed a way to automatically generate Wikibooks using machine learning. They call their machine the Wikibook-bot. "The novelty of our technique is that it is aimed at generating an entire Wikibook, without human involvement," they say.
In the spirit of yesterday's post on the Earth of the past, this interactive map lets you browse through time to see what oceans and landmasses looked like, going back as far as 750 million years. Try typing in your address to see whether you'd have been a resident of Gondwana or Laurasia if you took your time machine back to the Triassic.
When I read Annals of the Former World some years back, the hardest thing to wrap my head around with geologic time was the sheer scale of what "100 million years" looks like. No matter how many of the comparisons, scale bars, or timelines I see, it's still mind-blowing to think about continents converging, separating, and reconverging repeatedly throughout history.
Every year since the pre-Stone Age era, visualized as a time lapse on a map.
This is amazing, and it puts into context what was developing where over time. I know when I read the history of one culture, like Ancient Greece, it's hard to keep in mind what was happening elsewhere in the world during the same time period. This video could be a good reference point to pull up to get a sense of what happened during, before, and after any period in human history.
It's also hard to believe that in 3000 BC the global population estimate was only 30 million people, or roughly the population of modern Nepal.
I'm making my way through The Federalist, which has been on my reading list forever; my interest was rekindled last year reading Alexander Hamilton.
For those who don't know, it's a collection of essays written by the trio of Hamilton, James Madison, and John Jay to convince the populace of the need to ratify the then-draft US Constitution.
Up to Federalist No. 25, the focus is on (a) the utility and importance of the "union of states" as a concept worth pursuing and cementing, and (b) the insufficiency of the Articles of Confederation to do the job of maximizing the combined strength of the states (for various reasons outlined in the essays).
One of the biggest takeaways so far, somewhat unexpected to me, is the depth of research done by the authors to make their case. They draw on a rich historical record to identify the pros and cons of what's been tried in past institutions, what's worked, and what hasn't. On the history of the Swiss Confederacy:
The connection among the Swiss cantons scarcely amounts to a confederacy; though it is sometimes cited as an instance of the stability of such institutions. They have no common treasury; no common troops even in war; no common coin; no common judicatory; nor any other common mark of sovereignty. They are kept together by the peculiarity of their topographical position; by their individual weakness and insignificancy; by the fear of powerful neighbors, to one of which they were formerly subject; by the few sources of contention among a people of such simple and homogeneous manners; by their joint interest in their dependent possessions; by the mutual aid they stand in need of, for suppressing insurrections and rebellions, an aid expressly stipulated and often required and afforded; and by the necessity of some regular and permanent provision for accommodating disputes among the cantons.
Looking to historical evidence to validate or reject aspects of governing models helped guide us to the right approach for our new government; the record of trial and error is immensely helpful if you respect and understand the context. The varied governance structures of history allowed the Federalists to make a strong case for centralization (but just the right amount of it). The Founders also sought to maximize the freedom of individuals and of the states, which they thought crucial to a stable system.
I make no apology for having dwelt so long on the contemplation of these federal precedents. Experience is the oracle of truth; and where its responses are unequivocal, they ought to be conclusive and sacred.
"Experience is the oracle of truth." Model new systems around what has previously worked, make adjustments, and ensure the system is an "anti-fragile" one that responds and gets stronger over time.
I recently finished reading Spycraft, Robert Wallace and Keith Melton's chronicle of the CIA's spy tech divisions, specifically OTS (the Office of Technical Services), the division responsible for creating technical espionage gear: things like eavesdropping devices, dead drop containers, secret writing, disguises, and document forgery.
The story of OTS is fascinating and full of all sorts of straight-out-of-the-movies espionage games and tactics. The book is chock-full of anecdotes of crazy operations, from the group's inception with OSS during World War II through the years of the Cold War. For evidence, look no further than the Agency's project codenamed "Acoustic Kitty," a harebrained scheme to implant a listening device in a stray cat in order to eavesdrop on meetings of an Asian statesman with a penchant for cats that wandered in and out of the meeting areas.
The book tells stories of operations in Moscow involving several famous Soviet spies, and the field tradecraft and technical tools that CIA case officers used to communicate with their agents. With its focus on technical devices of the Cold War era, much of the book describes audio taps and clandestine audio recording.
Toward the end of the book, there's an extensive section reviewing the principles of tradecraft (things like the use of disguises, assessment of recruitment targets, covert communications, and concealment devices) in the context of the technology and devices involved.
The book does occasionally stray into the weeds, getting somewhat dry in parts, but it's well worth the read for those interested in history and espionage.
I've been reading a lot lately about sociocultural geography: how people interact with their environments and with one another across space and time. The topic is more relevant than ever, with today's borderless conflicts, asymmetric warfare, and behind-the-scenes technology leveling the playing field for groups at all levels. On a journey across the internet reading and watching various things about human geography, I stumbled upon this fantastic piece by Adam Curtis on his BBC blog.
It tells the story and background of counterinsurgency doctrine from its inception in revolutionary communist China and Indochina to implementation in modern-day Iraq and Afghanistan. Fascinating stuff.
The post begins with some background on David Galula, the French military theorist popularly credited as the father of counterinsurgency warfare. During his time as a military attaché in China in the 1940s, he observed Mao's communist guerrillas firsthand, taking to heart the tactics they used against the Kuomintang: in short, they turned the population to their side.
The meat of the article's background on the history of counterinsurgency comes in several documentary clips about the actions of the French government during Algeria's War of Independence in the late 50s and early 60s. Galula and the French instituted an experimental "village reeducation" program in the Aures Mountains region (a refuge for opposition forces), with French soldiers living and working with the locals. Questioning and interrogation of the now-moderately-friendly villagers rapidly devolved into torture and cruelty.
If you could persuade the local people to come over to your side - then that would leave the insurgents who lived among the people drastically weakened. And that meant you could destroy them.
But to do that you had to identify the insurgents - and that meant getting information from your new "friends," the local villagers. But sometimes they didn't want to give that kind of information, possibly because they were frightened, or they might even be an insurgent themselves, just pretending to be a villager.
And that led to the French soldiers finding ways to persuade the villagers to tell them who was an insurgent. It was called torture.
The first true American experiment with counterinsurgency tactics happened in the midst of Vietnam. Galula's theories, along with the work of a couple of economists (including Samuel Popkin's The Rational Peasant), produced a hybrid approach to fighting in the jungles of Vietnam that fused Galula's traditional battle for "hearts and minds" with "selective incentives" (i.e., money for information from villagers). After a number of village "pacification" experiments, the CIA's Phoenix Program was put into place not only to tell friend from foe, but to target and kill the enemy. And just as in Algeria, the plan mutated into what some former participants describe as a "full blown torture and assassination campaign."
The article wraps up with a clip of Petraeus in Baqubah, Iraq, during the 2007 surge, reviewing the fruits of our revival of counterinsurgency. The net long-term effects of the modern COIN approach remain to be seen, but let's hope it doesn't metastasize into the horrific programs of previous conflicts.
I think we still have a long way to go in perfecting the right balance of support, direct involvement, and advisement; and in the messy, protracted, and stateless conflicts of today, we certainly won't get anywhere with a standoff approach. Getting down in the trenches is a requirement.
For further reading, take a look at an original research work from David Galula, published by the RAND Corporation in 1963, analyzing the pacification campaign in Algeria. A couple of other works I'll be checking out along these same lines are David Kilcullen's Counterinsurgency and the film The Battle of Algiers, which I've always wanted to see and which Curtis mentions in his article.
Gentle reader, thou must learne the Alphabet, to wit, the order of the Letters as they stand, perfectly without booke, and where every Letter standeth: as b neere the beginning, n about the middest, and t toward the end. Nowe if the word, which thou art desirous to finde, begin with a then looke in the beginning of this Table, but if with v looke towards the end. Againe, if thy word beginne with ca looke in the beginning of the letter c but if with cu then looke toward the end of that letter. And so of all the rest. &c.