Some technologies arrive unpredicted, then evolve. Others are predicted but don't seem to materialize (or not yet). Then there are those that are expected AND appear. The unexpected tend to be the most disruptive: no one's had the chance to prepare.
But the expected, if they do finally arrive, have been ruminated on for a long time. When we eventually realize the expected, we're more prepared socially for their impacts. Though often we're wrong about their societal impacts until they show up.
Kevin Kelly writes about this in the context of AI, a technology long-predicted, but always with a bent
toward the negative. Toward the destructive social consequences of creating artificial beings.
Artificial beings (robots, AI) are in the Expected category. They have been so long anticipated that no other technology or invention was as widely or thoroughly anticipated before it arrived. What invention might even be second to AI in terms of anticipation? Flying machines may have been longer desired, but relatively little thought was put into imagining what their consequences might be. From the start of the machine age, by contrast, humans have not only expected intelligent machines, but have expected significant social ramifications from them as well. We've spent a full century contemplating what robots and AI would do when they arrived. And, sorry to say, most of our predictions are worrisome.
Here's the example list from Arthur C. Clarke's 1963 book, Profiles of the Future:
Charles Mann on the unseen, unappreciated wonders of modern infrastructure:
The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don't inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent them from dying of dysentery. Like almost everyone else, they rarely note the existence of the systems around them, let alone understand how they work.
The invisible, yet essential, layer of infrastructure he's describing becomes extremely visible during something like an extended power outage in a hurricane. Even with our communications still up, generators in our yards, and food in our refrigerators, a few days without power feel like an eternity.
Western Electric was the captive equipment arm of the Bell System and produced the majority of the telephones and related equipment used in the U.S. for almost 100 years.
A group of researchers just won the Scroll Prize, a project to read the ancient Herculaneum papyri, burned and buried in the eruption of Vesuvius in 79 AD.
Using AI and computer vision techniques they were able to discern text from the rolled, charred, and brittle papyrus. An unbelievable feat.
This interview was one of the best overviews and deep dives on the current state of AI / machine learning I've heard yet. Daniel was at Apple during the early work on machine learning in iOS, and Nat Friedman was CEO of GitHub during their development of the excellent Copilot product.
Nat on the previously-predicted tendency toward centralization in AI:
The centralization/decentralization thing is fascinating because I also bought the narrative that AI was going to be this rare case where this technology breakthrough was not going to diffuse through the industry and would be locked up within a few organizations. There were a few reasons why we thought this. One was this idea that maybe the know-how would be very rare, there'd be some technical secrets that wouldn't escape. What we found instead is that every major breakthrough is incredibly simple and it's like you could summarize on the back of an index card or ten lines of code, something like that. The ML community is really interconnected, so the secrets don't seem to stay secret for very long, so that one's out, at least for most organizations.
Daniel on the importance of the right interface for widening AI applications:
We're in this new era where new user interfaces are possible and it's somewhere in between the spectrum of a GUI and a voice or text user interface. I don't think it'll be text just because in the domain of images, sure, all mistakes are actually features, great, but the issue that you have is in real domains, like you mentioned legal, tax, where productive work is made, mistakes are bad. The issue with text is of one observation we always had from Apple is unlike a GUI, the customer does not understand the boundaries of the system. So unless, to Nat's point, if you have AGI and it's smarter than a human, great. Up until that point, you need something that has this feature that the GUI has, which is amazing. The GUI only shows you buttons you can press on, it doesn't have buttons that don't work, usually.
David Carboni makes a great point in this piece: successful powerhouse businesses, paragons of scaling up (your Netflixes, Googles, Ubers, et al), could never build the disruptive, fast-moving products that made them successful from their positions today:
Admired and respected as towering giants of our digital world, our hero companies emanate an almost mythical quality. The scale, power and inspiration they command are the stuff of legend. Glib statements about "business" distort their stories into gaudy two-dimensional caricatures whilst organisations seeking Digital Transformation aspire to emulate what they see in this theatre. Paradoxically our heroes would be the first to point out they wouldn't be able to build themselves as they stand today.
So much of what's required to actually scale to Google or Netflix level was fundamentally unknown, and actually nonexistent, when they ran into their scaling frictions. Due to their unique needs to deliver their products to hundreds of millions of simultaneous users, Netflix builds Chaos Monkey and Google creates Kubernetes. There's nothing wrong with these tools; they solve problems that are nearly one-of-a-kind, for businesses in a class of very few.
The envy of their ability to scale, and an overconfidence that "I'll need this one day, too", tempts startups to build for scale way too early. But the cause-effect on their success is misattributed. These megascalers didn't get to hypergrowth because they built deployment automations, CI/CD magic, or microservice'd their architecture. They did those things because their early quick-and-dirty, unscalable experimentation helped them find generational product-market fit first.
From where they sit today, their inventions for scale would be active impediments to disruptive innovation.
Success is a messy business, exploratory, trying, failing, scratching your head, learning something new, trying to think different.
Geospatial analytics company Descartes Labs recently sold to private equity, in what former CEO Mark Johnson calls a "fire sale." This post is his perspective on the nature of the business over time, their missteps along the way in both company identity and fundraising, and some of the shenanigans that can happen as stakeholders start to head for the exits.
Not knowing much about Descartes' actual business, either the original vision of the product or its actual delivery over the years, I don't have much specific perspective to offer. But this story is a recurring theme in the world of spatial, earth observation, and analytics startups that have come and gone over the past 10 years or so. These businesses are built on extremely capital-intensive investments in satellites, space-based sensors, and data, which are major hurdles that cause many of them to get sideways in their fundraising structures very early in their business journeys.
The early years of a startup are always extremely volatile, with pivots and adjustments happening along the way as the company navigates the idea maze, looking for product-market fit. I think the heavy capital required up front compels funders to expect too much too soon in the product development process. There's a chicken-and-egg problem: the PMF search in these kinds of businesses costs many millions. If you're building a SaaS project management tool, you can wander around looking for fit for years with only a few people and limited seed money. But in satellite startups, the runway you need for product-market experimentation is enormously expensive. Funding pools that large also saddle the business with aggressive expectations for customer counts, growth, and revenue. With revenue targets set but no repeatable PMF, many of these startups do whatever they can to find dollars, which often leads to doing what are effectively custom services deals for one or a few customers. That's necessary to make money, of course, and it's not valueless for product validation. But it's too narrow to function as true PMF. Stay in this awkward state too long and you end up stuck down the wrong hallways of the idea maze, and you'll never find the fitness you need to build a lasting business. Bill wrote a great post on this recently, about the identity struggle between being a solutions, services, or product company.
The best thinking on the topic of EO and satellite data companies is my friend Joe Morrison's newsletter, "A Closer Look". He leads product for Umbra, a startup specializing in SAR (synthetic aperture radar) imagery. He's done a lot more thinking than me on this topic and has thoughtful takes on the satellite and geo market in general.
New forms of technology tend not to materialize from thin air. The nature of innovation takes existing known technologies and remixes, extends, and co-opts them to create novelty.
Gordon Brander refers to it in this piece as "exapting infrastructure." As in the case of the internet, it wasn't nonexistent one day then suddenly connecting all of our computers the next. It wasn't purposely designed from the beginning as a way for us to connect our millions of computers, phones, and smart TVs. In fact, many types of computers and the things we do with them evolved as a consequence of the expansion of the internet, enabled by interconnection to do new things we didn't predict.
Former railroad corridors are regularly reused as cycling trails
"Exaptation" is a term of art in evolutionary biology for the phenomenon of an organism using a biological feature for a function other than the one it was adapted for through natural selection. Dinosaurs evolved feathers for insulation and display, which were eventually exapted for flight. Sea creatures developed air bladders for buoyancy regulation, later exapted into lungs for respiration on land.
In the same way, technologies beget new technologies, even seemingly unrelated ones. In the case of the internet, early modems literally broadcast information as audio signals over phone lines intended for voice. Computers talked to each other this way for a couple of decades before we went digital native. We didn't build a web of copper and voice communication devices to make computers communicate, but it could be made to work for that purpose. Repurposing the existing, already-useful network allowed the internet to gain a foothold without much new capital infrastructure:
The internet didn't have to deploy expensive new hardware, or lay down new cables to get off the ground. It was conformable to existing infrastructure. It worked with the way the world was already, exapting whatever was available, like dinosaurs exapting feathers for flight.
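The modem trick is concrete enough to sketch in code. Early acoustic modems used frequency-shift keying: each bit becomes a short burst of one of two audio tones, which is how digital data rode over lines built for voice. This toy Python sketch borrows the Bell 103 originate-channel tone frequencies (1070/1270 Hz), but everything else (framing, noise handling, real timing) is simplified away:

```python
import math

# Bell 103-style FSK: a "space" (0) tone and a "mark" (1) tone.
SPACE_HZ, MARK_HZ = 1070, 1270
SAMPLE_RATE = 8000
SAMPLES_PER_BIT = SAMPLE_RATE // 300  # roughly 300 baud

def modulate(bits):
    """Turn a bit string into audio samples, one tone burst per bit."""
    samples = []
    for bit in bits:
        freq = MARK_HZ if bit == "1" else SPACE_HZ
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by checking which tone each burst correlates with."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]

        def energy(freq):
            # Correlate the burst against sine/cosine at this frequency.
            re = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
                     for n, s in enumerate(chunk))
            im = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                     for n, s in enumerate(chunk))
            return re * re + im * im

        bits.append("1" if energy(MARK_HZ) > energy(SPACE_HZ) else "0")
    return "".join(bits)
```

The point of the sketch is the exaptation: nothing about the phone network changed; the bits just dressed up as sound.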
Just like biological adaptations, technologies evolve slowly. When we're developing new technologies, protocols, and standards, we'd benefit from less greenfield thinking and should explore what can be exapted to get new tech off the ground. Enormous energy is spent trying to brute-force new standards from the ground up when we would often be better off bootstrapping on existing infrastructure.
Biology has a lot to teach us about the evolution of technology, if we look in the right places.
Identity management on the internet has been broken for years. We all have 800 distinct logins to different services, registered to different emails with different passwords. Plus your personal data exists in a morass of data silos, each housing a different slice of your personal information, each under a different ToS, subject to differing privacy regulations, and ultimately not owned by you. You sign up for a user account on a service so it can identify you uniquely and provide functionality tailored to you. Service providers getting custody of your personal data is a side effect that's become an accepted social norm.
In this piece, Jon Stokes references core power indicators in public finance like capital ratios or assets under management that help tell us when an institution is getting too big:
As a society, we realized a long time ago that if we let banking go entirely unregulated, then we end up with these mammoth, rickety entities that lurch from crisis to crisis and drag us all down with them. So when we set about putting regulatory limits on banks, we used a few simple, difficult-to-game numbers that we could use as proxies for size and systemic risk.
The "users table" works as an analogous metric in tech: the larger the users table gets (the more users a product has), the more centralized and aggregated their control and influence. Network effects, user lock-in, and power over privacy policies expand quadratically with the scope of the user base.
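That quadratic expansion is just the arithmetic of pairwise connections: a network of n users has n(n-1)/2 possible links, so a ten-fold jump in the users table means roughly a hundred-fold jump in potential connections. A quick sketch (the user counts are made up for illustration):

```python
def possible_connections(n_users: int) -> int:
    """Metcalfe-style count of distinct pairs in a network of n users."""
    return n_users * (n_users - 1) // 2

# Ten-fold user growth yields roughly hundred-fold connection growth,
# which is why the size of the users table is a decent proxy for power.
for n in (1_000, 10_000, 100_000):
    print(n, possible_connections(n))
```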
As Stokes points out, web3 tech built on Ethereum will gradually wrest back control of the users table with a global, decentralized replacement controlled by no-one-in-particular, wherein users retain ownership of their own identity:
Here's what's coming: the public blockchain amounts to a single, massive users table for the entire Internet, and the next wave of distributed applications will be built on top of it.
Dapps on Ethereum are so satisfying to use. The flow to get started is so smooth: a couple of clicks and you're in. There's no sign-up page, and no way for services to contact you (presumably unless they build something to do so and you opt in to giving your information). Most of my dapp usage has been in DeFi, where you visit a new site, connect your wallet, and seconds later you can make financial transactions. It's wild.
The global users table decentralizes the authentication and identity layers. You control your identity and your credentials, and grant access to applications if you choose.
Take the example of a DeFi application like Convex. When I visit the app, I first grant the service access to interact with my wallet. Once I'm signed in, I can stake tokens I own, or claim rewards from staking pools I've participated in, proportional to my share of the pool. All of the data that represents my balances, staking positions, and earned rewards lives in the smart contracts on the Ethereum blockchain, not in Convex's own databases. Services like this will always need to maintain their own application databases for aspects of their products. But the critical change with the global users table is that the user interaction layer exists on-chain and not in a siloed database, with custody completely in the hands of the person with the keys to the wallet.
If more services use the dapp model and build on the public, on-chain global users table, what will the norms become around extending that table with additional metadata? With some systems like ENS (the Ethereum Name Service, decentralized DNS), subdomains and other addresses associated with an ENS address are properties written on the blockchain directly. This makes sense for something like name services, where they're public by design. But other use cases will still require app developers to keep their own attributes associated with your account that don't make sense on the public, immutable blockchain. I may want GitHub to know my email address for receiving notifications from the app, but I may not want that address publicly attributed to my ETH address.
Web3 is so new that we haven't figured out yet how all this shakes out. The most exciting aspect is how it overturns the custody dynamics of user data. Even though this new world moves the users table out of the hands of individual companies, everyone (users and companies) will benefit over the long term. Here's Stokes again:
If you want to build a set of network effects that benefit your company specifically, it won't be enough to simply cultivate a large users table or email list; no, you'll have to offer something on-chain that others are also incentivized to use, so that the thing you're uniquely offering spreads and becomes a kind of currency.
Incentives for app developers will realign in a way that produces more compelling products and a better experience for users.
Byrne Hobart wrote this piece in the inaugural edition of a16z's new publication, Future. On bubbles and their downstream effects:
Bubbles can be directly beneficial, or at least lead to positive spillover effects: The telecom bubble in the '90s created cheap fiber, and when the world was ready for YouTube, that fiber made it more viable. Even the housing bubble had some upside: It created more housing inventory, and since the new houses were quite standardized, that made it great training data for "iBuying" algorithms, the rare case where the bubble is low-tech but the consequences are higher-tech. But, even so, there's always the question of price: how can you tell when it's worth the hype?
There's something special that happens when you allow your kids to treat hobbies like serious endeavors instead of playtime or games. Paul Graham's latest:
Instead of telling kids that their treehouses could be on the path to the work they do as adults, we tell them the path goes through school. And unfortunately schoolwork tends to be very different from working on projects of one's own. It's usually neither a project, nor one's own. So as school gets more serious, working on projects of one's own is something that survives, if at all, as a thin thread off to the side.
It's a bit sad to think of all the high school kids turning their backs on building treehouses and sitting in class dutifully learning about Darwin or Newton to pass some exam, when the work that made Darwin and Newton famous was actually closer in spirit to building treehouses than studying for exams.
My interests in history and tech trace straight back to my time in high school building computers to play Civilization II. Personal projects have long-term benefit if nurtured.
On the heels of finishing Schelling's collection of essays on game theory, I read this piece from Vitalik Buterin on legitimacy, a force that underpins any successful coordination game, of which the world of cryptocurrencies and DAOs are prime examples.
In almost any environment with coordination games that exists for long enough, there inevitably emerge some mechanisms that can choose which decision to take. These mechanisms are powered by an established culture that everyone pays attention to these mechanisms and (usually) does what they say. Each person reasons that because everyone else follows these mechanisms, if they do something different they will only create conflict and suffer, or at least be left in a lonely forked ecosystem all by themselves. If a mechanism successfully has the ability to make these choices, then that mechanism has legitimacy.
I just got a new Mac Mini with the M1 Apple silicon.
The experience so far is stunning performance compared to my previous 16" MacBook Pro. I was using an i9 with 16GB RAM, and this Mini blows it out of the water on responsiveness (and every other category).
A little reading on user experiences with the M1 had me interested in upgrading to any machine with the latest SoC. One of my main drivers was the noise and heat generated by the MBP, which runs in constant turbo mode no matter what I'm doing; it basically never stops running full tilt, so I needed to get away from that. My office is in the corner of the house and doesn't get great HVAC coverage with the door closed, so between that and the west-facing windows, the heat-radiating laptop can't have helped.
With the M1 Mini and a nice USB-C dock with a built-in fan that it sits on top of, I haven't heard a sound from the machine at all.
11/10 so far. It's wild that such an affordable, portable desktop machine has owned everything pre-M1 in performance.
Morgan Mahlock wrote recently about the promise of Stripe Press, Stripeâs book publishing outfit:
Within the legacy publishing industry, Stripe's young publishing press is refreshing - it is Sutherland's electric cover art on a dusty, tired bookshelf. An Authoritative Look at Book Publishing Startups in the United States by Thad McIlroy states, "Book publishing has never been a technology-adept industry; indeed it is historically technology-averse. This is a challenge for the (minority of) startups targeting existing publishing companies." Stripe Press is different because it was born from a technology company. It is a strategic asset because it allows Stripe to shape and share influential knowledge with its interconnected ecosystem of entrepreneurs, businesses, authors, and technologists.
Her post gives a good summary of why Stripe Press is exciting for the book publishing industry. The catalog sits at only 10 titles today, but I believe 4 of those were released this year. The pace has been increasing, and they keep elevating the quality bar.
They're not only attracting original works like Nadia Eghbal's excellent Working in Public (2020), but also breathing new life into notable books from the past. Both Martin Gurri's Revolt of the Public (published in 2014, one of my favorites this year) and Donald Braben's Scientific Freedom (2007), to name two examples, saw relatively small initial publishing runs. The editorial staff over at Stripe is doing amazing work to bring these books back into wider circulation, using a spotless curatorial eye for the noteworthy and influential.
Stripe Press is, of course, producing excellent books for us to read, and giving authors writing about technology a channel for getting their work out there. But it's also a marketing channel for Stripe.
Content marketing, my favorite of the marketings
I have a soft spot for quality content. The best content marketing doesn't feel anything like marketing. Its value is so deep you don't even think about what you're giving in return to its creator.
The tech companies of the last 20 years didn't invent content marketing, though our scene talks about it more than any other. Even Ben Franklin used a content marketing play when he published Poor Richard's Almanack as a way to promote his printing business.
The scene is now full of companies that embrace the multichannel returns they can drive through quality, helpful content. A few favorites of mine:
Not only does Stripe do a stellar job at the traditional CM channels (blog, help guides, developer documentation, email), they went further than anyone and became a book publisher1.
What differentiates Stripe as a publishing house from the HarperCollinses or Hachettes is that it's not their core business, but a component that drives other parts of the business. Direct sales revenue is only one channel of value they're deriving from putting this catalog in print. Stripe sees their Press group as a content marketing strategy, especially to raise global interest in technology, pushing their mission to "raise the GDP of the internet." At the most tactical level, the Press catalog increases interest in tech, which creates more founders, who then start companies that become Stripe customers.
Stripe flywheels
I linked a while back to Max Olson's excellent post Advantage Flywheels, which presents a great framework for analyzing the causal loops that power businesses. Irrespective of Press, Stripe's built a fantastic advantage with feedback loops combining in powerful ways. Using Max's same architecture of flywheel archetypes, I took a stab at drawing out what Stripe's machinery looks like, with its products in blue:
At its core, Stripe serves developers who build applications which expand in usage and generate financial transactions.
Spinning off from those central inputs and outputs are several flywheels that create momentum that feeds back into the core business. Radar does fraud detection, which improves with masses of transaction data. Billing and Sigma are tools that improve finance management and reporting. Atlas helps founders incorporate and get started, thereby generating more customers for Payments, Issuing, and more. That's where I see book publishing fitting into the machine: as a mechanism to expand the TAM for internet businesses.
Press is unique in this regard for a tech content strategy. Normally something like a blog, video channel, or newsletter would be tied more directly to the "more developers" nexus, but for Stripe, book publishing is playing a longer game. Even though this feedback loop has a long time delay (publishing a book won't make a new founder overnight), I believe it's a powerful one. The best strategies serve more than one function; Press is a brand builder, a recruiting tool, a direct revenue driver (from book sales), and most importantly, a way to increase the number of people interested in technology over the long term. Founder Patrick Collison himself described this exact strategy in response to a Hacker News thread:
The vast majority of Stripe employees (and there are now more than 1,000) work on our core functionality today. But we see our core business as building tools and infrastructure that help grow the online economy. ("Increase the GDP of the internet.") When we think about that problem, we see that one of the main limits on Stripe's growth is the number of successful startups in the world. If we can cheaply help increase that number, it makes a lot of business sense for us to do so. (And, hopefully, doing so will create a ton of spillover value for others as well.)
Stripe's long been known for its writing culture, so I suppose it's also not surprising that a company of readers and writers would want to make books.
When you pop the hood on a strong business like Stripe, you're always likely to find interesting systems dynamics: multiple outputs feeding other inputs. It's fascinating that an old, traditional business like publishing could be done in a novel way like this. They're positioned to bring in new innovations for authors (and readers) that they haven't scratched the surface on yet; it's still just paper books. If there's room for innovation in writing books, Stripe will find it.
I have to wonder here how much the Collison brothers' bibliophilia plays a role in the decision to launch a publishing house. Can't be coincidental. ↩
Ben Thompson follows up his 2017 piece with an update on the state of bundling strategies from some of the big tech and media companies.
I liked this description of where Disney+ fits into Disney's overall strategy with the service:
While Disney's hand was certainly forced by the COVID pandemic, the company's overall goal is to maximize revenue per customer via its highly differentiated IP; to that end, just as Disney+ is a way to connect with customers and lure them to Disney World or a Disney Cruise, it is equally effective at serving as a platform for shifting the theater window to customers' living rooms.
Corporate research was a big deal in the mid-20th century. In this piece, Ben Southwood inspects why we no longer have modern equivalents to research centers like Xerox PARC or Bell Labs.
An interesting point here on what might be demotivating large organizations to invest too much in deep research:
Another possible answer is that non-policy developments have steadily made spillovers happen faster and more easily. Technology means faster communication and much more access to information. An interconnected and richer world doing more research means more competitors. And while all of these are clearly good, they reduce the technology excludability of new ideas, even if legal excludability hasn't changed, and so may have reduced the returns to innovation by individual businesses.
Jerry Brito writes about the growth of independent writing on Substack, prompted by a Mike Solana tweet:
From a technical perspective, Substack does not belong on Solana's list next to Bitcoin and Signal. Signal is a company, but they have almost no information about their users: no names, no messages. Bitcoin is not a company, but instead a permissionless decentralized network, and "it" can't decide who can use it or for what. Substack, on the other hand, is a centralized service that permissions who's allowed on and what they can do, and it is subject to official and market pressures.
Comparisons to YouTube or Twitter are closer than to BTC or Signal, for sure. But even though Substack is a centralized platform, the risks are lower in the text and email medium; there's high portability to move to other platforms at will. If you can move your content and your subscriber list, you can bring your audience. Substack's primary advantages, the ones that are hard (though not impossible) to replicate today on your own hosted system, are its publishing tools and monetization layer. Trying to disintermediate YouTube yourself would be hard, and transporting your Twitter network isn't possible. SMTP, hypertext, and DNS are still open.
The problem with "best tool for the job" thinking is that it takes a myopic view of the words "best" and "job." Your job is keeping the company in business, god damn it. And the "best" tool is the one that occupies the "least worst" position for as many of your problems as possible.
It is basically always the case that the long-term costs of keeping a system working reliably vastly exceed any inconveniences you encounter while building it. Mature and productive developers understand this.
Matt Haughey went nuts on a custom lighting setup for his home office. I ran across this while searching for some wirelessly controllable LEDs for my office bookshelf. Mine won't be this crazy, but I wish I had the patience to do something like this.
This is the second episode of the "Torch of Progress" series that the Progress Studies for Young Scholars program is putting on, hosted by Jason Crawford. Tyler Cowen is unbelievably prolific in the projects he's got going on, so it's great to see him making the time for things like this.
Read more here from last year on the progress studies movement.
On Roots of Progress, Jason Crawford is now diving into the history of agriculture, with an interesting change to his process about writing on the history of technological discovery.
In this series, he's approaching it with "the garage door up", writing shorter-form posts in the open as he studies things like the stages of agriculture, where enclosures come from, and other concepts.
My goals are: to bring to the surface more of my half-formed thoughts, by forcing myself to write about them; to create a new type of content for you, my audience; to model good epistemic norms; and to get early pointers, references, feedback, and pushback.
Again, this is an experiment. Risks: lowering signal-to-noise ratio; overwhelming some parts of my audience with too much content. If you don't want to read a bunch of shorter, more informal posts, feel free to skim/skip them and just read my occasional long-form comprehensive summaries, which I will continue to write every few weeks or so.
Today on the nerdy computer history feed, we've got a 1982 video from Bell Labs: The UNIX System: Making Computers More Productive.
Most of the video has Brian Kernighan explaining the structure of UNIX and why it's different from its contemporary operating systems. I should do more work with the keyboard in my lap and my feet on the desk.
Navigating a Linux shell looks almost identical to this today, 50 years later.
I liked this quote from John Mashey, a computer scientist who worked on UNIX at Bell Labs:
Software is different from hardware. When you build hardware and send it out, you may have to fix it because it breaks, but you don't demand, for example, that your radio suddenly turn into a television. And you don't demand that a piece of hardware suddenly do a completely different function, but people do that with software all of the time. There's a continual demand for changes, enhancements, new features that people find necessary once they get used to a system.
Continuing my dive into the history of computers, I ran across this extended, detailed article covering the development and boom of the minicomputer industry.
Discovering Readwise a few months ago caused me to resurrect my long-dormant Instapaper account. Instapaper was my go-to "read later" service, but I also used it as a general bookmark archive. After a while I'd fallen into only using it for the latter, which then made me go back to Pinboard, since the single function of bookmark tagging is its specialty. I'm still using Pinboard heavily to archive interesting things, but I've found a new use for Instapaper with Readwise's integration.
Readwise's main feature is to sync all of the highlighted passages from your Kindle (via your Amazon account) and send you a daily digest of 5 highlights from previous reads, with the goal of increasing retention of things you read. Any high-volume reader is well familiar with the problem of forgetting most of what you read, certainly any details beyond the basic gist of a book.
I didnât know how much I wanted a tool for this until I started using it.
With its Instapaper integration, it'll sync articles and their highlights into your Readwise archive, which then can be included in your daily reminder digests. Over the years I've toyed with tools like Evernote or Google Keep for clipping quotes or passages from web content, but none of them stuck for me or were that useful. The information going into an archive solves only part of the problem. What you want is a way to remember and reference those bits you clip from the web.
A related feature Readwise supports that I've used a few times now is archiving Twitter threads. Replying to a thread with @readwiseio save thread will store those posts in your Readwise account and include them in your daily highlight reviews alongside Kindle and article content. It works best for threads on time-insensitive topics like history, advice, business strategy, etc.
The Instapaper support has filled a gap: bookmarking articles becomes more useful when you can play back the interesting things you read that are worth remembering.
Ben Thompson's Stratechery is one of the must-read newsletters out there. I've been a subscriber and avid reader for 4 years now, and I think I've read every post he's published since then. Lately I'm finding I get behind on keeping up with his pace of output on the members-only Daily Update feed. So it was exciting to see the launch of this new channel where he's creating a podcast version of the Daily Update for subscribers.
For the past week I've been listening to the posts rather than reading them, which has made it much easier to contend with the inbox flow. Even though there are typically diagrams and charts in his posts, he's embedding those as chapter art inside the podcast episodes, so you still have access to the visual aids while listening.
This is a great addition to an already-valuable subscription to Stratechery. It'd be great to see other content producers experiment like this with alternative formats to make their material more widely accessible.
The specification for Ethernet was proposed in 1973 by Bob Metcalfe as a medium to connect the expanding network of computers at Xerox PARC. This was a schematic he drew as part of the memo proposing the technology to connect the machines together:
PARC was installing its own Xerox Alto, the first personal computer, and EARS, the first laser printer. It needed a system that would allow additional PCs and printers to be added without having to reconfigure or shut down the network. It was the first time that computers were small enough for hundreds to be in the same building, and the network had to be fast to drive the printer.
Metcalfe circulated his plan in a memo titled "Alto Ethernet." It contained a rough schematic drawing and suggested using coaxial cable for the connections and using data packets like Hawaii's AlohaNet or the Defense Department's Arpanet. The system was up and running Nov. 11, 1973.
It's amazing how simply many foundational technologies start out: a basic comms medium meant to connect PARC's computers to a shared printer. Now the same tech is the backbone of almost every local network.
A great annotated Twitter thread from Steven Sinofsky, who was leading the launch of Windows 7 when the iPad was announced.
19/ The iPad and iPhone were soundly existential threats to Microsoft's core platform business. Without a platform Microsoft controlled that developers sought out, the soul of the company was "missing."
20/ The PC had been overrun by browsers, a change 10 years in the making. PC OEMs were deeply concerned about a rise of Android and loved the Android model (no PC maker would ultimately be a major Android OEM, however). Even Windows Server was eclipsed by Linux and Open Source.
21/ The kicker for me, though, was that keyboard stand for the iPad. It was such a hack. Such an obvious "objection handler." But it was critically important because it was a clear reminder that the underlying operating system was "real"… it was not a "phone OS".
A fun story from Jimmy Maher about the 1991 partnership with IBM that moved Apple from the Motorola 88000 chips to PowerPC. It was a savvy deal that kept the Macintosh (and Apple) alive and kicking long enough to bridge into their transition back to Steve Jobs's leadership, and the eventual transition of the Mac lineup to Intel in 2006.
While the journalists reported and the pundits pontificated, it was up to the technical staff at Apple, IBM, and Motorola to make PowerPC computers a reality. Like their colleagues who had negotiated the deal, they all got along surprisingly well; once one pushed past the surface stereotypes, they were all just engineers trying to do the best work possible. Apple's management wanted the first PowerPC-based Macintosh models to ship in January of 1994, to commemorate the platform's tenth anniversary by heralding a new technological era. The old Project Cognac team, now with the new code name of "Piltdown Man" after the famous (albeit fraudulent) "missing link" in the evolution of humanity, was responsible for making this happen. For almost a year, they worked on porting MacOS to the PowerPC, as they'd previously done to the 88000. This time, though, they had no real hardware with which to work, only specifications and software emulators. The first prototype chips finally arrived on September 3, 1992, and they redoubled their efforts, pulling many an all-nighter. Thus MacOS booted up to the desktop for the first time on a real PowerPC-based machine just in time to greet the rising sun on the morning of October 3, 1992. A new era had indeed dawned.
The revenue downturn IBM plummeted into during the early-90s Wintel explosion was stunning. Just look at these numbers:
In 1991, when IBM first turned the corner into loss, they did so in disconcertingly convincing fashion: they lost $2.82 billion that year. And that was only the beginning. Losses totaled $4.96 billion in 1992, followed by $8.1 billion in 1993. IBM lost more money during those three years alone than any other company in the history of the world to that point; their losses exceeded the gross domestic product of Ecuador.
The Kindle launched in 2007, making ebooks accessible as a format thanks not only to a compelling device, but also a marketplace for content. Suddenly most books were available instantly for $10 apiece. No more trips to the store, no more expensive hardcovers and paperbacks, and importantly, no more paper taking up shelf space. As much as I love the Kindle, I have a growing list of gripes about the experience. Like with John Gruber's recent post on the iPad, the criticism comes from a place of love for the platform, and disappointment with how little innovation there's been over 13 years.
I still prefer the paperback format for pure experience, but the practicality of Kindle nearly always wins out. With Readwise I've gotten so used to heavily highlighting in my books, and it's too much work to annotate in paper format when I've then got to transfer the notes somewhere else to ever see them again.
I'd used the Kindle iOS app since the beginning, but didn't buy a Kindle device until 2015 (the Paperwhite, third generation). I use both the app and the device every single day, so over time I've built up a backlog of feature requests and documented shortcomings. There's great opportunity for Amazon to make some amazing improvements.
But first, letâs start with the things Amazonâs done right.
What Amazon has gotten right
Whispersync - After acquiring Audible in 2008 (audiobooks) and Goodreads in 2013 (social network for readers), they've added some integration between the platforms. Whispersync started as their cloud service for syncing progress between devices for ebooks. A few years ago they extended this to sync progress between the text and audio versions, if you own both. For times when I've read books that I have on both platforms, this is a fantastic feature. Works pretty reliably, and is a neat technology.
X-Ray - I first saw this on Prime Video. The best description of X-Ray is that it's like the old "Pop-Up Video" show on VH1, which would show "did you know?"-style annotations on top of music videos. In video it allows you to see, in real time, which actors are on screen and quickly look up their filmographies and whatnot. X-Ray for Kindle is similar: it breaks down common terms and keywords, themes, and subjects, with ways to navigate to those parts of the book.
One-tap purchasing - This is always a delightful process. Search for a book (or see one recommended) and in one tap it's downloading. I've bought dozens of books on a whim this way.
Highlighting & annotation - I've been an avid book highlighter for years. Readwise now raises the value of annotations 10x. In the Kindle iOS app, the share sheet on a highlighted passage also lets you save a slick shareable screenshot of your highlight for social media.
Audible narration - This is more technically cool than practical. If you own audio and text versions, you can download the audio inside of the Kindle mobile app. When playing the narration, it moves the text along with it. I've never used this in practice, but it's impressive.
Plenty of things to love. But now it's time for my personal recommendations.
Requests for the Kindle platform
Tighter social integration with Goodreads - Both the Kindle device and mobile apps now connect to your account on Goodreads. They can see your "to-read" list, can mark things as read or currently reading, and can sync progress. But they haven't done much of anything with the social aspects of Goodreads. I'd like to be able to see highlights my friends made in a book, and maybe put comments on those highlights directed at specific friends. It could spark conversation around book topics you might not know had mutual resonance between you and a friend. Goodreads in general hasn't gotten a lot of love since Amazon made the acquisition, but its integration with the live reading experience is one of the biggest places to expand into. It'd make the service more purposeful and engaging.
Progress adjustments - When reading books on multiple platforms, it's possible for your "furthest read" progress to get out of whack (for example, if you flip ahead to look at a footnote - more on those in a second). Then the waterline for where you've reached in the book gets baked in and is impossible to adjust. It'd be nice to have a quick interface to enter the desired furthest-read point that resyncs everywhere.
Better footnotes - If you've read many nonfiction books (or a heavy footnoter like DFW), you've been annoyed by the inconsistency in how footnotes are formatted in ebooks. Most of the time, tapping a footnote zooms you to the end of the book. They've recently added contextual back buttons to return to where you were, but if you flip around pages near the footnote, it's possible to end up resetting your furthest progress point to 98%, where the footnotes are at the end. Some books (feels like a minority) have more functional overlay footnotes: when you tap those links, a small popover appears at the bottom with the footnote text without leaving the page. This is an improvement even over most paper books. The former problem, footnotes at the end of the ebook, is actively worse than page-flipping in paper formats.
More consistent formatting - This one may be largely out of Amazon's control; I don't know much about the process of authoring ePub/mobi files. But Amazon could certainly do more to provide an "IDE" for authors and publishers to apply best practices for the platform when converting their works into ebook format. It seems like after 13 years there'd be much less of this inconsistency than I see from book to book. Footnotes are screwy, and progress measurement is all over the place. Some books mark the 100% point at the end of the main text, some at the full end of the file (after the index/glossary). Page numbers are also an inconsistent mess.
Deep-linked references - The one I'm most interested in. Imagine this: you tap a citation link that displays a popover on the screen, then tapping a particular citation deep-links into an interactive "clip" from the source material's ebook, also showing links to add that source to your wishlist, or even buy it for your library. It could even let you highlight from books you don't yet own, and create a separate shelf on your device of referenced works you might be interested in reading in full. Over the years they've added both dictionary and Wikipedia lookup on selected text. I see this as a similar way to bridge into related, adjacent content. It would benefit readers and, if well executed, Amazon and publishers by more widely referring users to other works.
Semantic web of references - If citations and references were deeply linked, you could also build a reference graph. If I'm reading Thomas Sowell's A Conflict of Visions, I could pull up a tab that shows all works referenced within, and also all works that reference it. Go both ways with it. Picking through bibliographies is frequently how new things get added to my reading list. This would give readers an exposed graph of related works or authors they may find interesting.
Book lending - This is probably a long shot, but it'd be neat to be able to temporarily "lend" access to a book to, say, a friend on Goodreads, with a customizable "return" date that revokes access and returns the book to you. Perhaps you could cap the limit at 60 days or something. It could give the social reading experience more of that feeling of sharing knowledge and reading experiences with friends. It could also show your highlights and annotations, like someone reading a highlighted hardcover book you lend them.
Reading metrics - When did I start a book? When did I finish? How many days did it take to read? How many pages did I read each day? Data nerds like me would eat this up. Probably not of mass-market interest, understandably. You could add gamification here, but I'd be hesitant about that, since the purity of reading doesn't need any more distractions keeping you from deep immersion in something. Twitter and Instagram are already doing a great job of stealing users' attention away from books.
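The "semantic web of references" request is, at bottom, a bidirectional graph. Here's a minimal sketch of what that index could look like, with hypothetical titles standing in for real catalog entries:

```python
from collections import defaultdict

class ReferenceGraph:
    """Bidirectional citation index: what a book cites, and what cites it."""

    def __init__(self):
        self.cites = defaultdict(set)     # book -> works it references
        self.cited_by = defaultdict(set)  # work -> books that reference it

    def add_reference(self, book, work):
        self.cites[book].add(work)
        self.cited_by[work].add(book)

    def references_of(self, book):
        return sorted(self.cites[book])

    def referenced_in(self, work):
        return sorted(self.cited_by[work])

# Hypothetical bibliography data.
g = ReferenceGraph()
g.add_reference("A Conflict of Visions", "The Wealth of Nations")
g.add_reference("A Conflict of Visions", "Leviathan")
g.add_reference("Basic Economics", "The Wealth of Nations")

print(g.references_of("A Conflict of Visions"))  # works it cites
print(g.referenced_in("The Wealth of Nations"))  # works that cite it
```

Going "both ways" is then just the two lookups: one tab for the bibliography, one for everything that cites the book you're reading.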
Have any active Kindle users out there formulated their own lists like this? I'd love to hear others' ideas. Maybe with enough of a conversation about them, Amazon could respond positively.
I've been doing a lot of thinking lately on our strategic objectives - where we are today, where we want to be in a few years, and the tactics in between to navigate us to a long-term maximum (and hopefully avoid compelling, but ultimately sacrificial, local maxima). One of the most efficient ways to set up a business for successful long-term goals is to shrewdly align the go-to-market in ways that go around your competitors entirely, versus having to compete head-to-head.
Germane to this topic is this piece I'd bookmarked at some point from Gwern, an excellent deep analysis of an idea Joel Spolsky wrote about back in 2002: that smart technology companies seek to commoditize their complements. If you haven't read any of Gwern's essays before, you're in for a treat. One of the most information-dense writers out there.
In general, a company's strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the "commodity price" - the price that arises when you have a bunch of competitors offering indistinguishable goods. So:
Smart companies try to commoditize their productsâ complements.
If you can do this, demand for your product will increase and you will be able to charge more and make more.
This idea is explored more in Peter Thiel's Zero to One, which posits that capitalism and competition are opposites: one generates profit, the other destroys it. If you can commoditize everything surrounding your single chokepoint advantage, you can build a business with a deep, wide moat.
Related: check out Ben Thompson's writing on "smiling curves," where the profits in a value chain are unevenly distributed to the layers in a stack.
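Spolsky's logic is easy to see with a toy model. This sketch assumes invented numbers and a simple linear demand curve - nothing here comes from the essay itself:

```python
# Toy model of "commoditize your complements": demand for a product is a
# function of the *total* cost to the buyer (product + complement), so
# driving the complement down to its commodity price raises your demand.

def product_demand(product_price, complement_price, base=1000, sensitivity=4):
    """Units sold, assuming a simple linear demand curve (invented numbers)."""
    total_cost = product_price + complement_price
    return max(0, base - sensitivity * total_cost)

product_price = 100

# Complement sold at a premium vs. driven down to a commodity price.
demand_premium = product_demand(product_price, complement_price=100)
demand_commodity = product_demand(product_price, complement_price=20)

print(demand_premium, demand_premium * product_price)      # units, revenue
print(demand_commodity, demand_commodity * product_price)  # units, revenue
```

With the complement at $100, the seller moves 200 units; at a $20 commodity price, 520 units. Same product price, more than 2.5x the revenue.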
Google Maps just had its 15th birthday. This post from Elizabeth Reid, one of the original Maps team back in 2005, reflects on the history of the product from its first iteration.
On Feb 8, 2005, Google Maps was first launched for desktop as a new solution to help people "get from point A to point B." Today, Google Maps is used by more than 1 billion people all over the world every month.
It was the early days of Web 2.0, and Google's launch of the Maps API was one of the keys to the "mashup" movement that sparked a new wave of startups.
As a geographer, Google Maps and Earth are tools I use every day, both for fun and work. Maps is likely up there with Wikipedia at the top of my web visit history over the last 15 years.
Hard to believe they launched Street View all the way back in 2007. It still surprises me how much data has been collected, and the amazing feeling of zooming into any place to see it from ground level.
As I've been reading more into the history of technology, specifically computers and the Internet, I'll go on side trails through Wikipedia or the wider 'net back to many of the source papers that were the seeds of certain innovations.
I've read about the IBM 700 series of mainframes, Vannevar Bush's seminal piece on a "memex" device (the precursor idea to hypertext), and Claude Shannon's original work on information theory.
The latest gold mine I've found is on YouTube. I created a "Tech History" playlist where I've been logging clips and documentaries on various bits of computer history. Click the icon top-right to see all the videos in the list.
Benedict Evans does a talk each year assessing the state of the tech industry, macro trends, and where we are in the technology adoption lifecycle for big, trendy technologies like VR and AI.
This year's deck from the Nasdaq event in Davos covers some interesting ground. He has sober takes on things like regulation, the "break up big tech" movement, privacy, and also how we analyze particular companies that cross borders from bits to atoms, like WeWork, Uber, and others.
In this video interview from the event, he answers the question "what is a tech company?" in an interesting way:
Sometimes when people say "is that a tech company?" they're actually saying, "should that be valued like a tech company?", and that really means "is that a high growth, high margin company with defensible margins?"
J.C.R. Licklider's seminal 1960 paper on what would eventually become the personal computer.
Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.
Venkatesh Rao has assembled a most compelling explanation for how the internet polarization machine works:
The semantic structure of the Internet of Beefs is shaped by high-profile beefs between charismatic celebrity knights loosely affiliated with various citadel-like strongholds peopled by opt-in armies of mooks. The vast majority of the energy of the conflict lies in interchangeable mooks facing off against each other, loosely along lines indicated by the knights they follow, in innumerable battles that play out every minute across the IoB.
Almost none of these battles matter individually. Most mook-on-mook contests are witnessed, for the most part, only by a few friends and algorithms, and merit no overt notice in either Vox or Quillette. Beyond a local uptick in cortisol levels, individual episodes of mook-on-mook violence are of no consequence.
I have a working draft post on this topic for sometime in the future. This is one of my favorites from the Stratechery archives - on corporate cultures and how they impact company strategy:
As with most such things, culture is one of a company's most powerful assets right until it isn't: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so.
The Slate Star Codex review of Turchin and Nefedov's Secular Cycles, which seeks to understand patterns in technological and social development, and the underlying causes for periods of expansion and stagnation.
I'm currently reading the fantastic book The Dream Machine, a history of the creation of personal computers, and a biography of this man, JCR Licklider. This is a talk from an ACM conference in 1986 where he discusses his work on interactive computing. A wonderful little bit of history here.
"Tech domination", monopolies, regulation - lots of concepts, fears, and proposed remedies are all getting confused these days in tech. Benedict Evans had this piece of sober analysis to peel apart the differences between companies being rich, dominant in their product space, or dominant in the wider industry.
The tech industry loves to talk about "moats" around a business - some mechanic of the product or market that forms a fundamental structural barrier to competition, so that just having a better product isn't enough to break in. But there are several ways that a moat can stop working. Sometimes the King orders you to fill in the moat and knock down the walls. This is the deus ex machina of state intervention - of anti-trust investigations and trials. But sometimes the river changes course, or the harbour silts up, or someone opens a new pass over the mountains, or the trade routes move, and the castle is still there and still impregnable but slowly stops being important. This is what happened to IBM and Microsoft. The competition isn't another mainframe company or another PC operating system - it's something that solves the same underlying user needs in very different ways, or creates new ones that matter more.
An interesting detailed analysis on SpaceXâs Starlink project, which intends to put tens of thousands of microsatellites in orbit to provide a blanket of global internet connectivity.
Starlink's world-spanning internet will bring high quality internet access to every corner of the globe. For the first time, internet availability will depend not on how close a particular country or city comes to a strategic fiber route, but on whether it can view the sky. Entrepreneurs the world over will have unfettered access to the global internet irrespective of their own variously incompetent and/or corrupt government telco monopolies. Starlink's monopoly-breaking capacity will catalyze enormous positive change, bringing, for the first time, billions of humans into our future global cybernetic collective.
I don't follow international markets closely enough to keep up with this, but interesting to see this take on Hong Kong's relative stagnation in recent years, especially as compared to other nearby mainland China cities like Shenzhen and Guangzhou:
Despite the transformation the global economy has undergone, Hong Kong's business landscape remains largely unchanged - the preserve of a small body of property developers and conglomerates, most of them tycoon-owned, who rose to prominence long before the handover. Indeed, one of the most striking things of the city's history for nearly three decades has been its failure to produce a single major new business.
Responsibility can be attributed to the Basic Law, the mini-constitution that has guided Hong Kong's governance since its return to China. Passed by China's parliament seven years before the handover, it came with a built-in bias aimed at preserving Hong Kong's late-colonial features: a low-tax, capitalist economy, externally very open but domestically protectionist, and overseen by an executive-led government with little formal accountability.
Interesting thoughts in Dan Wang's annual letter. On China, trade, and tech.
These are not trivial achievements. But neither are they earth-shattering successes. Consider first the internet companies. I find it bizarre that the world has decided that consumer internet is the highest form of technology. It's not obvious to me that apps like WeChat, Facebook, or Snap are doing the most important work pushing forward our technologically-accelerating civilization. To me, it's entirely plausible that Facebook and Tencent might be net-negative for technological developments. The apps they develop offer fun, productivity-dragging distractions; and the companies pull smart kids from R&D-intensive fields like materials science or semiconductor manufacturing, into ad optimization and game development.
The internet companies in San Francisco and Beijing are highly skilled at business model innovation and leveraging network effects, not necessarily R&D and the creation of new IP. (That's why, I think, the companies in Beijing work so hard. Since no one has any real, defensible IP, the only path to success is to brutally outwork the competition.) I wish we would drop the notion that China is leading in technology because it has a vibrant consumer internet. A large population of people who play games, buy household goods online, and order food delivery does not make a country a technological or scientific leader.
With the recent announcement from the Twitter team of bluesky, a research effort looking at creating a protocol standard out of Twitter, this piece is a timely look at a topic on a lot of minds in tech: the risks of the mega platforms, and what to do about them.
There are some great details here explaining the differences between the two. The idea of newer communications platforms morphing into networks of disparate systems with shared protocol standards certainly gets my decentralization nerves tingling:
A protocol-based system, however, moves much of the decision making away from the center and gives it to the ends of the network. Rather than relying on a single centralized platform, with all of the internal biases and incentives that that entails, anyone would be able to create their own set of rules - including which content they do not want to see and which content they would like to see promoted. Since most people would not wish to manually control all of their own preferences and levels, this could easily fall on any number of third parties - whether they be competing platforms, public interest organizations, or local communities. Those third parties could create whatever interfaces, with whatever rules, they wanted.
More freedom without diminishing the overall power of the technology (at least not too much) is always preferable to a world where we try to create mass consensus when that's an impossible achievement.
I'm glad to see that later in the piece there's analysis of the business model issue. Companies like Facebook and Twitter didn't shy away from protocols in favor of centralized walled gardens for technical reasons, or because protocols aren't advanced enough. It's the simple fact that aggregating the largest audience possible is more efficient for monetization. It's not necessarily in any one inventor's interest to work on the protocol problem - why not make a new technology proprietary and closed instead of working on a protocol that advantages a marketplace of competitors?
Protocol standards are also painfully slow to evolve and expand once they gain wide adoption, for logical, and mostly appropriate, reasons. Getting features added to something like NNTP, SMTP, or HTTP takes years if it happens at all. For core communications infrastructure you can make the case for stability and compatibility, but that must be balanced with new innovations and ideas.
I agree broadly with decentralization for technical and sustainability reasons, but it remains to be seen how exactly to get from here to there.
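To make the "rules at the ends of the network" idea concrete, here's a minimal sketch of client-side filtering. The rule names and post fields are hypothetical; the point is only that the filter set lives with the user (or a chosen third party), not with a central platform:

```python
# Sketch of protocol-style moderation: each user composes their own filter
# rules, and the client applies them locally. All field names are invented.

def hide_keyword(keyword):
    """Rule: drop posts whose text contains the given keyword."""
    return lambda post: keyword not in post["text"].lower()

def min_account_age(days):
    """Rule: drop posts from accounts younger than `days` days."""
    return lambda post: post["author_age_days"] >= days

def passes(post, rules):
    return all(rule(post) for rule in rules)

# A user (or a third-party service they subscribe to) picks whatever mix
# of rules they want - a different user could pick an entirely different set.
my_rules = [hide_keyword("outrage"), min_account_age(30)]

posts = [
    {"text": "Daily outrage thread", "author_age_days": 400},
    {"text": "A quiet history thread", "author_age_days": 400},
    {"text": "Hello world", "author_age_days": 2},
]

visible = [p for p in posts if passes(p, my_rules)]
print([p["text"] for p in visible])  # ['A quiet history thread']
```

Swapping in a different `my_rules` list is the whole mechanism: no central operator has to agree on one set of rules for everyone.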
AWS's re:Invent conference just wrapped last week. Since we're so deep into AWS technologies, I keep an eye out each year for the trends visible in Amazon's product launches. They move at breathtaking speed to fill out their offering suite and keep their current momentum as the leader in the cloud space. They're really nailing the bundling and scale economics that the likes of Microsoft and Oracle were so successful at in years past. When going upmarket, having a product for every problem outweighs the need for having the highest quality in any individual product line. Enterprises often value the ability to buy everything they need from a single vendor higher than the quality of the products (what Ben Thompson has referred to as the "one throat to choke" phenomenon).
Here are a handful of the announcements I found most interesting, in no particular order:
With Outposts, AWS has finally relented to the customer base that's been reluctant to move to the cloud for the past decade. With the scale they have now, they've been able to productize a managed service that puts an "AWS-in-a-box" type of modular system into a customer's datacenter, ideally giving the best of all worlds of security, compliance, and exposure to the AWS services and APIs. It'll be interesting to see what kind of adoption this gets.
SageMaker is their service for creating, training, and deploying ML models. It's really an umbrella brand name for about a dozen sub-products covering various pieces of the ML workflow. SageMaker Studio is intended to be a full "IDE"-style interface for working with everything you've built in SageMaker. A clear indication that this is one of their big strategic plays going forward: lowering the barrier to doing ML and having customers new to the space learn with and expand from the AWS platform from the start.
Rekognition is AWSâs computer vision service, with endpoints for analyzing video and image data for objects, sentiment, content moderation, and search. One of the barriers for image classification tasks has been the ability to tailor the models to recognize other domain-specific content (like âwhat kind of part is this?â from a list of parts the customer builds). It now lets you upload your own custom labeled image datasets for training custom Rekognition models.
This isnât really a service or expansion on one like the others in the list. This is more a knowledge base of content from Amazon engineers on how they internally build and operate software at scale.
A post from Balaji Srinivasan from a couple years back on Twitter Moments. He had some interesting points on the likely traffic and comps to other news outlets. Twitter is still huge in the timely news space. I liked this point on the opportunity here (especially for Twitter, which has done little with the platform for 5+ years):
Whenever we see a technology with empirical traction whose importance is neglected or even derided, it's a useful signal of an investment or entrepreneurship opportunity. Good examples that I've been personally involved with include Soylent and Bitcoin, where skepticism, mockery, and outright hostility was eventually followed by huge returns for those who bet against the crowd.
"Waldenponding" is a phrase coined by Rao to describe the growing backlash against hyperconnectedness, driving people to disconnect completely and long for a life of lower information overload and deeper meaning – a reincarnation of Thoreau's idea from Walden. This podcast interview is about an essay Rao wrote last year arguing against this idea, a contrarian viewpoint given that the "right" or "intelligent" thing to do is widely considered to be disconnecting from the vapid, toxic environments of Twitter and Facebook. He makes a compelling case about a continuum of information-light vs. information-dense sources of data, and of high and low latency, arguing for the merits of connectedness and low-latency social media sources – the "Global Social Computer in the Cloud," as he says. On one end you have low latency/shallowness (tweets) and on the other high latency/richness (1000-page history books).
I love this idea. Anyone making arguments for diverse spectra of material and owning your own attention has my interest. In the essay he describes the view of the waldenponder as someone convinced that the Googles and Facebooks of the world have attention-hacked our brains and that their platforms are poison, an argument he refutes by saying that that's "giving the attention hackers too much credit". I would agree – people have more agency than they admit. After all, it's easier to blame your inability to put down your phone or close Instagram on an algorithm than on your own decision making.
Great interview and fantastic, thought-provoking essay.
Strasburg tipping his pitches almost ended the Nats' run:
He remembered the game Strasburg pitched in Arizona on August 3. The Diamondbacks pounded Strasburg for nine runs in less than five innings. The D-Backs knew what was coming. The Nationals broke down the tape and discovered Strasburg was tipping his pitches by the way he reached into his glove to grip the baseball near his waist, just before he raised his hands to the set position.
An annotated version of Mike Migurski's workshop on RapiD and Disaster Maps from the NetHope Summit. Facebook's work on this stuff looks primed to change the way everyone is doing OpenStreetMap contribution.
I've never used TikTok, but it's been a fascinating tech story to follow its insane growth over the last 8–12 months. With the current geopolitical climate and the fact that it's owned by the Chinese company ByteDance, it seemed like this CFIUS investigation was inevitable.
I just got the latest version of the iPad Pro, opting for the 11″ model instead of the previous-generation 12.9″ one that I've been using for 2 years. Some brief thoughts so far on a week's worth of usage:
The iPad
So far the smaller form factor takes a little getting used to, but the weight and size are a huge improvement in portability. When this iPad is the only thing in my bag, it's so light the bag almost feels empty. I also love the ability to one-hand the device without feeling like I'm about to drop it. One of the downsides of the 12.9″ size is that using it sans keyboard as a reading device (especially in portrait mode) is unwieldy. The 11″ size can be comfortably used in one hand for reading. You also still get all of the iPadOS multitasking features for split-screen productivity apps, which was one of the biggest drivers for originally going with the Pro model.
Keyboard Folio & Pencil
I got the Smart Keyboard Folio and the new Pencil to go with it, and both are pretty major improvements over the same two products from a generation ago. The smaller-size keyboard is taking a little adjustment, but it's not too bad. I love the feel of the keys on Apple's iPad keyboards, and this one is an incremental improvement in tactile feel over the last generation. The new version of the Pencil seems to have less latency in sketching, which makes writing and drawing feel more natural than it did – even though the Pencil, ever since version 1, has been leaps and bounds better than any other stylus hardware ever made. With the magnetic docking and inductive charging, it's also nice to have a Pencil that's always at full charge, ready to go. Too often I'd get out the old one after a period of not using it only to find it dead. It's a quick charge, but taking up the Lightning port to charge it was always annoying.
Since I made the switch, I've been doing a lot more work on the iPad versus the MacBook Pro. Even with multitasking, the "modal" nature of app usage on an iPad seems to keep my mind more focused, with less alt-tabbing between various windows. While not impossible, it's much harder to end up in the trap of 50 open browser tabs on an iPad than on a full laptop. There's also the fact that I don't have a heating element on my lap while using it, like the superheated aluminum case on a MBP when Chrome, Slack, and other memory-heavy apps are churning hard.
So far, so good. This week, with some travel abroad, I'll give it a shot as the primary device and see how it feels.
Why does it take so long for new technologies with seemingly obvious positive benefits to get adopted? This example on the speed with which the polio vaccine was adopted and administered is incredible, but an outlier:
The polio vaccine is an outlier in the history of new technology because of the speed at which it was adopted. It is perhaps the lone exception to the rule that new technology has to suffer years of ignorance before people take it seriously. I don't know of anything else like it.
You might think it was quickly adopted because it saved lives. But a far more important medical breakthrough of the 20th century – penicillin – took almost 20 years from the time it was discovered until it was used in hospitals. Ten times as many died in car accidents as from polio in the early 1950s, but it took half a century for seat belts to become a standard feature in cars.
The point of the post is to highlight why things usually go in the other direction, with new technologies taking years or decades to be adopted – sometimes not until the next generation of children grows up with them. Wouldn't it be great to see more positive progress embraced like the cure for polio?
Since I've been following the progress studies movement and Jason Crawford's Roots of Progress blog, it was cool to see video of his talk on the history of steel from a San Francisco meetup a few weeks ago.
I've been looking for a smooth way to dictate notes and thoughts hands-free from my phone, particularly while running or driving.
When I run I typically wear one AirPod and have my phone inaccessible in a waistband pouch on my back. Since I'm usually listening to audiobooks while running, I don't have an easy way to log thoughts or perform the audio equivalent of highlighting things.
I never use Siri at all except for a couple of easy, reliable Shortcuts for dictation. I thought this was a perfect candidate to explore the "Hey Siri" activation support with custom commands from the Shortcuts app (formerly Workflow).
This shortcut from MacStories provides a simple base for appending to a note in the Notes app. It's good, but for my use case I need to be able to do this completely hands-off. Using Shortcuts to capture and send workflow data around typically requires access to the app, forcing the device to be unlocked for it to work. This could still be convenient enough for, say, dictations in the car where the phone is in its mount or nearby, but in my waist pouch it's totally inaccessible. I don't want to have to mess with anything at all while I'm in motion running, so I needed something else.
So I logged into Zapier to see what I could do with its webhook trigger. If you send data to Zapier, it makes it easy to connect to hundreds of different web services using custom multi-step workflows. Mine was going to be simple: dictate note → append text to a Google Doc.
I created a document called "Scratchpad" in my G Suite account to house any speech dictations. All I want is a temporary placeholder where I can record thoughts to get back to later. Each new dictated note appends a new line with the content. I use a workflow like this to add tasks in Todoist, but I needed something looser and more flexible.
Create the Zap
On the Zapier side, I created a zap with a webhook trigger first. This gives you a URL to copy and bring over into the Shortcuts app.
Create the Shortcut
Create a new Shortcut with these three steps:
Dictate Text – to capture the speech-to-text data
URL – to set the base URL for the Zapier webhook (copied from your zap)
Get Contents of URL – this is what assembles the data into a POST request to the webhook endpoint
The only things you need to do here are paste in the zap URL, set it to the POST method, and edit the "Request Body" property. I added a note property and inserted the value of Dictated Text, which passes the transcription from your dictation.
Set Up the Zap
Once that's done, creating the zap on the Zapier side is only two steps: a webhook trigger:
And an "Append to Document" action event with Google Docs:
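Under the hood, everything the Shortcut and zap do reduces to a single HTTP POST against the webhook. Here's a rough sketch in Python of what that request looks like – the webhook URL is a placeholder (Zapier generates the real one when you create the trigger), while the `note` property matches the one set in the Request Body above:

```python
import json
from urllib import request

# Placeholder URL -- use the one Zapier generates for your zap's webhook trigger
ZAP_WEBHOOK_URL = "https://hooks.zapier.com/hooks/catch/XXXXXX/XXXXXX/"

def build_note_request(dictated_text: str) -> request.Request:
    """Assemble the same POST the Shortcut's 'Get Contents of URL' step sends:
    a JSON body with a single 'note' property holding the transcription."""
    body = json.dumps({"note": dictated_text}).encode("utf-8")
    return request.Request(
        ZAP_WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A dictated thought becomes one small JSON payload
req = build_note_request("Look up the polio vaccine adoption timeline")
print(req.get_method(), json.loads(req.data))
```

On the Zapier side, the webhook trigger exposes that `note` field for mapping into the Google Docs "Append to Document" action.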
I've been using this for a couple of days for ad-hoc comments while listening to books. It's been a convenient way to quickly jot things down, like I do when I'm reading paperbacks or on Kindle. The only downside is that Siri mishears things a lot compared to Google Assistant, which we use a good bit around the house. The dictation is usually passable – it's informal and usually close enough that when I review the notes, I recall what I was trying to say and can correct it. If I ever end up with a backlog of unreviewed notes in there, though, the error rate on dictation will probably leave me stumped.
Through a Twitter thread I ran across this running catalog of resources on the history of the tech industry – books, articles, movies, and more. A definitive list of content. There are some great recommendations here that I'd never heard of, especially in the books and podcasts sections.
I've got a copy of The Dream Machine that I'm planning on digging into next – a history of personal computing and biography of J.C.R. Licklider.
I'm a historian of innovation. I write mostly about the causes of Britain's Industrial Revolution, focusing on the lives of the individual innovators who made it happen. I'm interested in everything from the exploits of sixteenth-century alchemists to the schemes of Victorian engineers. My research explores why they became innovators, and the institutions they created to promote innovation even further.
This connects nicely with the recent "progress studies" movement.
This is another great one from last year on Jason Crawford's Roots of Progress project, in which he dives into advancements in human progress. In this post he covers a brief background on cement, one of the oldest of mankind's technological discoveries:
Stone would be ideal. It is tough enough for the job, and rocks are plentiful in nature. But like everything else in nature, we find them in an inconvenient form. Rocks don't come in the shape of houses, let alone temples. We could maybe pile or stack them up, if only we had something to hold them together.
If only we could—bear with me now as I indulge in the wildest fantasy—pour liquid stone into molds, to create rocks in any shape we want! Or—as long as I'm dreaming—what if we had a glue that was as strong as stone, to stick smaller rocks together into walls, floors and ceilings?
This miracle, of course, exists. Indeed, it may be the oldest craft known to mankind. You already know it—and you probably think of it as one of the dullest, most boring substances imaginable.
From reading this I added Concrete Planet to the reading list. I'm overdue for some more history-of-technology reading this year.
This is an interesting startup out of Tel Aviv called ECOncrete that's creating a new concrete recipe and technology that's safer for coastal wildlife and actually strengthens over time in the water.
They create different shapes and textures to match the surface patterns of local rocks and corals, so that marine plants, algae, and other animals are more attracted to it.
We've been exploring options for adding a CMS to our Jekyll-powered website for Fulcrum over the last couple of weeks, looking for ways to add more content-editor-friendly capabilities without having to overhaul everything under the hood or move to a full hosted CMS like WordPress. The product and design teams responsible for the technical development of the website all prefer the simplicity and flexibility of static site generators, but understand the relative opacity of learning git, command lines, and the vagaries of something like Jekyll for team members who are just writing content.
One of the options we've been looking at is Netlify CMS, along with their deployment and hosting platform as a GitHub Pages replacement. Their CMS is open source, and it's attractive because of how simple it is to wire up to your static site with a single YAML file. Essentially all you need to do is define your content types in the configuration, then the CMS generates all of the editing UI for creating new or editing existing markdown files.
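As a sketch of how little wiring that takes – the folder paths and field names here are hypothetical, modeled on Netlify CMS's Jekyll examples – a blog's posts collection in `admin/config.yml` might look like:

```yaml
backend:
  name: git-gateway   # authenticate editors through Netlify Identity
  branch: master

media_folder: "assets/images"

collections:
  - name: "posts"
    label: "Blog Posts"
    folder: "_posts"          # where Jekyll keeps markdown posts
    create: true              # let editors create new posts
    slug: "{{year}}-{{month}}-{{day}}-{{slug}}"
    fields:
      - { label: "Title", name: "title", widget: "string" }
      - { label: "Publish Date", name: "date", widget: "datetime" }
      - { label: "Body", name: "body", widget: "markdown" }
```

Each entry under `fields` becomes a form control in the generated editing UI, and saving an entry commits a markdown file back to the repo.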
To kick the tires, I set it up locally for this site, and also ended up migrating the hosting for the entire site over to Netlify. The transition was totally seamless; now I've got my site running with the latest and greatest Jekyll and other libraries, I've added a CMS for when I want to quickly make edits or posts without involving a git workflow, and Netlify's CDN is blazing fast. I love that none of the rest of my workflow using a git repo, markdown, or Jekyll has to change – all pushes to master trigger automated tests and deploys on Netlify.
There are some other things there I'm going to experiment with, especially the options for post-processing operations like minifying CSS and JavaScript, as well as lossless image compression, both in service of page-speed improvements.
I recently learned that you can pair your AirPods with the Apple TV, which I've been doing for the last couple of weeks. With two kids sleeping nearby, plus noise from the kitchen, it's impossible to get the volume loud enough to make out dialog in most shows. Because of this we always have the captions on for everything. But this new discovery solves that problem, and it makes it easy to get up and walk away for a minute without having to pause anything.
This guide shows how to connect them. Holding down the Play/Pause button on the Siri Remote pulls up an output-source selector, like what you get with AirPlay menus. My AirPods showed up in there the first time with no Bluetooth pairing required – probably some iCloud account magic happening to bypass that handshake process. After they're paired, you can use the volume control on the Siri Remote or even the volume controls on your iPhone inside the Remote app. Very slick experience.
iOS 13 has support for pairing multiple sets of AirPods to a single device. If this comes to tvOS, it'll be fantastic for both of us to be able to watch without noise issues.
Another fun one from the Primitive Technology channel; I previously linked to his videos a few months back. This time he builds a stacked wall of clay bricks around a new thatched hut. The patience and craftsmanship required to build the things he does is truly admirable.
I think we'd all be mentally healthier if we spent more time disconnecting and creating things. If only I had the Queensland jungle in my backyard!
For a long time I've used the full 1Password desktop app and the browser plugin that installs alongside it for support inside of Chrome. But recently I set up the 1Password X browser extension they first released a couple of years ago, and I'm converted. Since access to accounts is most useful in a web browser context, implementing it as an extension makes sense. I don't know much about the technical backend or the advantages of building a Chrome extension versus a "thick client" browser plugin, but it seems like a clear benefit to conform to the browser's best practice for building add-ons, and extensions are the way to go in Chrome. One of their big motivations here was deepening cross-platform support, since you can install Chrome (and Firefox) on so many OS platforms, including Linux.
The full features of the 1Password desktop app are available from within the extension – access to multiple vaults and all your accounts, editing and organizing your accounts, and creating new ones. In addition to the same handy integration for filling 2FA codes and the helpful password generator for new sites, X adds a built-in form-filling utility, similar to the "autofill" capability browsers have had for a long time, but with access to your 1Password account if you've got it unlocked. It even supports an inline generator and account-creation wizard for when you're signing up for new services. In my experience, one of the biggest barriers to getting new users to understand and use 1Password is that they don't add the new accounts they sign up for into their vault. Helping users make sure things are always added (and updated!) in their vault is one of the key steps to reaching the "wow" moment as a user. Once you've got a few dozen (or in my case hundreds) of entries set up and well organized in your vault, it's magical to never have to worry about losing access to accounts.
The one thing that'll take getting used to is that with the X extension I can't unlock the vault with the Touch ID sensor on my MacBook Pro anymore. It's been surprising how much I must've relied on this, as well as the Cmd-\ shortcut to autofill. You never realize how baked-in a behavior is until you upset the routine! This should just be a muscle-memory thing to get used to.
One of the things I admire about 1Password is that it's clear their product team are all constant users of their own product. Every time I think of something that'd be slick, it seems they've already thought of it, or if not, they eventually build it. And not only that, they'll even go the extra mile and tie in keyboard shortcuts and all the other accoutrements that demonstrate that they themselves are power users of their product.
My appreciation for their effort doesn't stop at the technology or product. From a business standpoint, I admire what they've been able to do with their pivot from desktop app to SaaS with their Business and Family plan offerings. Many app developers have made moves over the last few years toward subscription pricing, sometimes with mixed results. I've always been a fan of SaaS models for services I rely on – without continuous funding, how will they make their excellent product even better? It's not just about changing the billing model from perpetual to recurring, either; they've actually converted to a hosted service that offers something distinctly different than what a desktop app can do.
A few months ago I joined the advisory board of the Suncoast Developers Guild, a code school and developer community here in St. Pete. Our company has been involved with this group since they first launched as the Iron Yard campus back in 2014.
We've had a successful experience connecting with the local community through this channel, supporting students looking to shift careers into software work and recruiting them onto our team. Five people currently on our dev and product teams came out of those cohorts of front-end or full-stack development grads.
Through my role as an advisor there, we're working on a few things that we hope will expand the footprint of the Guild and bring more companies into the fold. I've long been an advocate for "non-traditional" education paths and hands-on experience over formal education, but many companies are still stuck in the world of looking for those 4-year degrees – they don't know how to recruit, vet, and measure skillsets without a GPA and the letters alongside it (BS / MS).
I wrote this post a number of years ago, targeted at people new to (or thinking about) jobs in the programming world. I'm not a developer myself, but I work with them every day and have spent plenty of time in the community, enough to know how to identify and hire the skills I look for in creators. Going back and reading it this morning, I still agree with everything I wrote. Supporting the SDG is part of my effort to have skin in the game on this perspective. Helping match the right core passions and mental tools to the companies that need them is what it's about, regardless of the path one takes to get there.
I saw this Nightline interview clip with Steve Jobs from a recent Steven Sinofsky post.
In this clip is his famous "bicycle for the mind" quote about the personal computer.
This is a 21st-century bicycle that amplifies a certain intellectual ability that man has. And I think that after this process has come to maturity, the effects that it's going to have on society are going to far outstrip even those that the petrochemical revolution has had.
Hard to believe Jobs was this prescient at age 26, when computers were still considered to be hobbyist toys.
I had my main blog/website on Tumblr back when it first launched in 2007, which I used for a number of years before migrating it over to this current self-managed iteration on GitHub back around 2011¹. At the time I loved Tumblr's middle ground between the long-form-friendly full WordPress blog and the short-form nature of Twitter. Tumblr's "tumblelog" concept easily supported either mode depending on what you wanted to post. Their post editor was fantastic (and still is, in my opinion), especially back in the days before Medium, when WYSIWYG editors were all pretty terrible. It was the place I learned to use Markdown in everyday writing, which I still use everywhere today, even in my own personal note text files.
Though I haven't been a user of Tumblr in years, I have some negative and positive feelings about this news. The negative is, of course, that Verizon is treating it like a fire-sale write-down, with the previously $1.1bn acquisition from 2013 degrading to a sale to Automattic for a rumored price of "less than $3m". It's astonishing that something could lose that much value in the marketplace in such a short period of time.
The upside here is that there's no better owner and future shepherd of the product than its new one. Automattic has been one of the best community-oriented companies for 15 years, with a publishing platform that powers a quarter of the internet. It's sad to see Tumblr lose so much of its former self, but maybe it'll see a revitalization under new ownership.
I still have that Tumblr account up, but stopped posting to it quite a few years ago. ↩
I ran across this piece about a month ago, but avoided it after sensing spoilers for the book I was in the middle of at the time.
Is our universe an empty forest or a dark one? If it's a dark forest, then only Earth is foolish enough to ping the heavens and announce its presence. The rest of the universe already knows the real reason why the forest stays dark. It's only a matter of time before the Earth learns as well.
This is also what the internet is becoming: a dark forest.
In response to the ads, the tracking, the trolling, the hype, and other predatory behaviors, we're retreating to our dark forests of the internet, and away from the mainstream.
It's a thought-provoking article that probably resonates with many internet citizens these days. Since certain former central venues of internet participation (message boards, social media, Reddit, comment threads, chatrooms) have become hives of polarization, negativity, and hypersensitivity, many of us just don't participate like we used to. I still tweet occasionally, but nothing like the more unfiltered content I and many others would've posted in the early days of the service. More and more I favor writing long-form to a more patient audience here (if one even exists), and otherwise most interaction happens in person or within closely-knit networks.
The dark forests grow because they provide psychological and reputational cover. They allow us to be ourselves because we know who else is there. Compared to the free market communication style of the mass channels – with their high risks, high rewards, and limited moderation – dark forest spaces are more Scandinavian in their values and the social and emotional security they provide. They cap the downsides of looking bad and the upsides of our best jokes by virtue of a contained audience.
Like the author of the piece says, it's like the universe in Liu's Remembrance trilogy: announcing your presence can only cause bad things to happen, so retreat to the private, protected spaces. It'll be interesting to see how these conversation venues evolve as this "dark forest" retreat continues.
Wearables have become such a big market these days that there's a wide variety of options to pick from if you want to monitor activity metrics. From basic Fitbit step counters to more ruggedized outdoor watches to full-blown smartwatches, there's a device for everyone.
I've been a devoted user of Garmin's activity-tracking watches for years now, starting out with the Forerunner 220. A couple of years ago I upgraded to the fēnix 5 model, one of their highest-end watches.
I used the 220 for about 3 years for run tracking. It was always reliable for me – water/sweat resistant, with long enough battery life and accurate GPS data. Because I also wanted to monitor heart rate during activities, I used the chest-strap HR monitor to feed that data to the watch. It worked reliably for a long while, but I think the contacts got corroded and the data started to get wonky after a time. I'd see huge surges in HR for no reason that would suddenly drop back down to normal.
I've now been using the fēnix for a couple of years and have loved it – one of the better devices I've ever owned. After a good experience with Garmin's Forerunner series, I felt confident enough that I'd get benefit out of one of the higher-end models. Let's walk through some of its best features.
Multisport Activity Tracking
One of the things I didn't like about the Forerunner was that it only supported recording run activities. The fēnix supports over a dozen activity types, indoor and outdoor, like cycling, climbing, swimming, and more. The Forerunner would still log GPX tracks that could be exported and treated however you want, but when synced to Garmin Connect or Strava, every activity would be considered a "run". With the fēnix, when you select a different activity type, it gets picked up accurately in both sync services and treated differently for metrics reporting.
There are some differences between activity types in terms of instant feedback on the watch display. For example, runs and rides can have different "lap" lengths to notify you of progress along an activity. Advanced features like HR zones, pacing, and others also differ in how they're fed back to you while you're active.
I'm interested in incorporating swimming into my workout routine and seeing how that would work with the watch.
HR Monitor
Having the HR monitor built into the device has some great advantages: mostly that it's always on and always available. I like that I get passive tracking of heart rate all the time, so I can see resting heart rate during the day and during sleep (more on sleep tracking in a moment). I don't have a good sense of the accuracy of the on-wrist optical sensor, but it seems generally consistent with what I used to see with the chest strap. To me it's mostly important to have relative consistency between activities, and to see the number in real time during activities. When I'm running I usually switch the watch display to view HR, which tracks amazingly closely with how I feel during a run. I can see when I'm on the limit, so I typically use that readout to pace myself.
Battery Life
This is one of the best features of the fēnix, to me. Garmin reports 2 weeks of passive usage and 24 hours of active usage, which tracks pretty closely with my experience. What I tell people is that it lasts so long I usually don't remember exactly when I last charged it. This is the main reason the Apple Watch has never interested me. I like the idea of richer apps on a wristwatch (especially with the phoneless-but-still-connected capability of the Series 3), but having to charge something every night is a nonstarter for me.
Sleep Tracking
Given that I wear the watch all the time, sleep tracking is an easy side benefit. Ever since reading Why We Sleep recently, I'm more interested in prioritizing long enough sleep cycles (which with children simply means going to bed early). The watch reports not only sleep time but also sleep stages: through some combination of heart rate monitoring and movement tracking, it buckets your sleep time into deep, light, and REM stages. I don't need hyper-accurate reporting, so this is a slick feature to get for free with an exercise tracker.
A rare example of 8+ hours of sleep
I've heard about the Oura ring as well for more detailed sleep tracking, but it's a bit pricey for something I don't have a big problem with right now. If I want to get more sleep, the simple solution is to prioritize it (which I don't do well).
Smartwatch Capability
Through Bluetooth pairing, the fēnix also supports push notifications from the phone. This can be convenient sometimes, but I've honestly never used it much. Probably the most utility for me is quick access to turn-by-turn directions while in the car or on my bike. Quick readout of SMS and instant messages is convenient, too.
Strava Integration
You can set up Garmin Connect to sync with a number of services, including Strava, which is the only one I use for activity tracking. The main Strava-tied feature I like is that any Segments you add to your favorites transfer to the watch for live progress tracking. It's a feature called Live Segments, and it's cool because it gives you live feedback on your performance against your previous efforts and the KOMs from your friends. I love the ability to challenge myself on my own personal records on common routes.
The syncing works pretty flawlessly with both Garmin Connect and Strava. I've never had a problem making sure my data is up to date.
Any Downsides?
It's been a rock-solid device for me, overall, with no major drawbacks.
The custom charging connector is probably the only downside, and it's not too acute given the long battery life and the rarity of needing to charge. It would be much smarter for Garmin to use USB-C or micro-USB, and I don't know what would motivate a custom interface. Given that the connector plugs in perpendicular to the watch back, it's possible there's not enough thickness to fit the receptacle for a USB-type connector. Regardless, needing a special cable to charge is an annoyance. I have to keep one at home and a spare at the office so I can charge anywhere.
Overall it's a very solid device, and I'd consider buying other Garmin devices down the road.
Roots of Progress has an interesting deep dive on why a (relatively) simple invention like the bicycle took so long, even though the principles behind a bicycle's components were well understood for a long time. There's an interesting inventory of potential hypotheses about why it took until the late 1800s.
Early iterations of human-powered transport looked like inventors trying to replicate the carriage, with devices that resembled "horseless carriages": one person providing power, another steering. The first breakthrough toward something that looked like a modern bicycle (at least in form factor) came from German inventor Karl von Drais, who modeled his design on the horse rather than the carriage:
The key insight was to stop trying to build a mechanical carriage, and instead build something more like a mechanical horse. This step was taken by the aforementioned Karl von Drais in the early 1800s. Drais was an aristocrat; he held a position as forest master in Baden that is said to have given him free time to tinker. His first attempts, beginning in 1813, were four-wheeled carriages like their predecessors, and like them failed to gain the support of authorities.
It seems cultural and economic factors make the most sense as explanations, versus technological or environmental ones:
In light of this, I think the deepest explanation is in general economic and cultural factors. Regarding economic factors, it seems that there needs to be a certain level of surplus to support the culture-wide research and development effort that creates inventions.
Like most breakthroughs viewed with the advantage of hindsight (and in this case 150 years of it), the invention seems so obvious we struggle to imagine why the people of the 18th or 19th centuries wouldn't have worked on the problem. Combine the non-obviousness with a lack of cultural motivation and an unfriendly economic environment, and it's not surprising it took so long.
Yesterday was Neuralink's unveiling of what they've been working on. Their team of engineers, neurosurgeons, and computer science experts are working on a "neural lace" brain-computer interface.
Elon Musk announced the launch of a company to work on this problem back in 2016. Seeing this amount of progress, it's clear now that the science fiction story of a cybernetic implant looks like a possible near-future reality. The idea itself conjures images of Neuromancer's console cowboys and Effinger's "moddies": neural augmentations that enable things like plugging into the matrix and personality modification.
Neuralink's near-term intent is to use the lace as an assistive technology for those with motor impairments and other medical conditions. But there are moonshot goals to "increase the bandwidth" between computers and the human mind.
The whole idea gives new meaning to the famous Steve Jobs quote:
What a computer is to me is the most remarkable tool that we've ever come up with, and it's the equivalent of a bicycle for our minds.
If Neuralink is successful, instead of being limited by the bandwidth of the inputs (keyboard, mouse, touchscreen) and outputs (pixels and sound waves), we'll have a massive two-way digital pipeline in between. A supersonic jet for the mind.
I always enjoy conversations with Marc Andreessen and Ben Horowitz. This interview (conducted by Slack founder Stewart Butterfield) reviews their experiences as founders back in the pre-bubble era and compares and contrasts that thematically with the tech landscape today.
At the recent WWDC, Apple announced an overhaul of their Maps product, including millions of miles of fresh data from their vehicle fleet, along with a new Street View-like feature called "Look Around". Even though it's exciting to see them invest in mapping, ever catching the quality of Google Maps seems like a bridge too far. Om Malik compares the relative positions of the two to that of Bing and Google in search. Apple is approaching Maps as an application first, when really maps are about data:
Why do I think Google Maps will continue to trump Apple despite the latter's fancy new graphics and features? Because when it comes to maps, the key metrics are navigation, real-time redirection, and traffic information. Google's Waze is a powerful weapon against all rivals. It has allowed Google to train its mapping algorithms to become highly effective and personal (not to mention how much intelligence might have been shared with Waymo).
I would add point of interest data to this list as a key metric. That used to be purchased from commercial providers, scraped from the internet, and mapped manually, but now the fleet of vehicles (and Google's users searching for places) provide a continuous stream of validation and updates to place data. With the combination of Google Maps, the Android OS, and soon a fleet of autonomous Waymo vehicles, it seems like Google will continue to be an unstoppable data juggernaut.
Geoffrey Moore's Crossing the Chasm is part of the tech company canon. It's been sitting on my shelf unread for years, but I've long known the general nature of the problem it illuminates. We've even experienced some of its highlighted phenomena first hand in our own product development efforts, bringing Geodexy, allinspections, and Fulcrum to market.
In principle, the advice laid out rings very logical; nothing comes out of left field or contradicts conventional wisdom. It helps to create a concrete framework for thinking about the "psychographic" profile of each customer type, in order from left to right on the curve:
Innovators
Visionaries
Pragmatists
Conservatives
Laggards
It's primarily addressed to high-tech companies, most of which in the "startup" camp are somewhere left of the chasm. The challenge, as demonstrated in the book, is to figure out what parts of your strategy, product, company org chart, and go-to-market need to change to make the jump across the chasm to expansion into the mainstream on the other side.
There are important differences between each stage in the market cycle. As a product transitions between stages, there are evolutions that need to take place for a company to successfully mature through the lifecycle to capture further depths of the addressable market. Moore's model, however, distinguishes the gap between steps 2 and 3 as dramatically wider in terms of the driving motivations of customers, and ultimately the disconnect of what a product maker is selling from what the customer believes they are buying.
The danger of the chasm is made more extreme by the fact that many companies, after early traction and successes with innovators and visionaries, are still young and small. A company like that moving into a marketplace of pragmatists will encounter much larger, mature organizations with different motivations.
The primary trait distinguishing the visionary from the pragmatist is willingness to take risk. Where a visionary is willing to bet on a new, unproven product, staking some of their own social and political capital on the success of new high-tech solutions, the pragmatist wants a solution proven before they invest. They want social proof, case studies, and other forms of evidence that demonstrate ROI in organizations that look like their own: not only other companies of roughly their size, but ones in their specific industry vertical, doing the same kind of work. In other words, only a narrow field of successes works well as demonstrable examples of value for them.
Knowing about this difference between market phases, how would a product creator prepare to capture the pragmatist customer? One is left with a dilemma: how can I demonstrate proof within pragmatic, peer organizations when they all want said proof before buying in? We have our own product that's (from my vantage point) in the early stages of traction right of the chasm, so many of the psychographics the book uses to define the majority market ring very true in interactions with these customers.
Presented with this conundrum, Moore's strategy for what to do is, in short, all about beachheads. He uses the successful Allied landings at Normandy on D-Day as an analogy for the approach. Even if you have a broadly-applicable product, relevant to dozens of different industries, the hyper-targeted marketing campaign needed to connect with the pragmatist takes so much time and energy that you won't have the resources to run one for every market. The beachhead will be successfully taken and held only if you go deep enough into a single vertical to hold onto that early traction until you can secure additional adjacent customers. Only then can you worry about moving inland and taking more territory.
All in all it was a worthwhile, quick read. Nothing revelatory was uncovered that I wasn't already aware of in broad strokes. However, it's one of those books that's foundational for anyone building a B2B software product. Understanding the dynamics and motivations of customers, and how they evolve with your product's growth, is essential to building the right marketing approach.
This is a neat interactive tool to visualize distortion due to map projection using Tissot's indicatrix, a mathematical model for calculating the amount of warp at different points:
Nicolas Auguste Tissot published his classic analysis on the distortion on maps in 1859 and 1881. The basic idea is that the intersection of any two lines on the Earth is represented on the flat map with an intersection at the same or a different angle. He proved that at almost every point on the Earth, there's a right angle intersection of two lines in some direction which are also shown at right angles on the map. All the other intersections at that point will not intersect at the same angle on the map, unless the map is conformal, at least at that point.
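To make the idea concrete with one projection: Mercator is conformal, so both principal scale factors at a point equal sec(latitude), and the indicatrix stays a circle that simply grows toward the poles. A minimal sketch of that math (the function name is mine, not from the tool):

```python
import math

def mercator_scale_factor(lat_deg: float) -> float:
    """Principal scale factor of the Mercator projection at a latitude.

    Because Mercator is conformal, the factor is the same in every
    direction, so Tissot's indicatrix is a circle of radius sec(lat)
    relative to its size at the equator.
    """
    return 1.0 / math.cos(math.radians(lat_deg))

# Distortion grows quickly away from the equator; since both axes of
# the indicatrix scale together, area exaggeration goes as the square.
for lat in (0, 30, 60, 80):
    radius = mercator_scale_factor(lat)
    area_exaggeration = radius * radius
```

At 60° latitude the factor is exactly 2, which is why Greenland balloons so dramatically on Mercator world maps.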
A typeface designed to mimic the National Park Service signs that are carved using a router bit.
Perfect timing on finding this one. I've been working on a cartography project to simulate a USGS-style topographic map in QGIS, and this could work perfectly in that design. Excellent work from the Design Outside Studio.
SpaceX is developing a space-based broadband internet system of 24 satellites. The design of this hardware looks incredible. I hope it gets traction and sparks a consumerization of this sort of tech. Between projects like this and the work of Planet and others with microsatellites, that industry seems like it's on the cusp of some big things.
I just ran across this YouTube channel called Primitive Technology, created by an Australian from the North Queensland bush country who builds things using only Stone Age technology. He makes his own charcoal, fires clay hardware, makes tools, and supplies himself with mud, clay, wood, and everything else right out of the local environment.
Each one is silent, with the work speaking for itself. Turn on captions to see embedded explainers describing what he's doing. An easy YouTube rabbit hole.
I loved this piece, a history of the spreadsheet from Steven Levy originally written in 1984.
Itâs a great retrospective that demonstrates how much impact spreadsheets had on business, even though we now consider them a fact of life and a given foundation of working with numbers on computers:
Ezra Gottheil, 34, is the senior product-design planner at Lotus. He shows up for work in casual clothes, and his small office is cluttered with piles of manuals and software. When I visited Gottheil he gave me a quick introduction to electronic spreadsheeting. Computer programs are said to use different "metaphors" to organize their task; a program might use the metaphor of a Rolodex, or a file cabinet. When you "boot" almost any spreadsheet program into your personal computer, you see little more than some letters running across the top of the display screen and some numbers running down the side. This serves to indicate the grid of a ledger sheet, the metaphor used by Lotus and other best-selling spreadsheets like VisiCalc, Multiplan, and SuperCalc. The "cursor," a tiny block of light on the screen that acts like a kind of electronic pencil, can be moved (by a touch of the computer keyboard) to any cell on the spreadsheet in order to "input" numbers or formulas. By placing in the cells either figures or formulas that adjust figures according to different variables, it is possible to duplicate the relationships between various aspects of a business and create a "model." The basic model for a restaurant, for example, would include expenses such as salaries, food and liquor costs, and mortgage or rent payments; revenues might be broken down into "bar" and "food," perhaps even further by specific dishes. Every week, the figures would be updated, the formulas reworked if necessary (perhaps the price of olive oil had risen) and the recalculated model provides an accurate snapshot of the business.
Here we sit 30 years later, and the basics of the spreadsheet and its fundamental means of interaction have hardly changed.
It's interesting that tools like Observable are now taking some of the same principles of interactivity that we've all used in spreadsheets for decades and applying them to code: edit code or data and watch as your output dynamically changes.
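That spreadsheet-style reactivity can be modeled in a few lines. This is my own toy sketch (Observable itself is JavaScript and tracks dependencies far more cleverly): cells hold either values or formulas over other cells, and formula cells recompute from current inputs whenever they're read.

```python
class Sheet:
    """Minimal spreadsheet: cells hold values or formulas over other cells."""

    def __init__(self):
        self.values = {}
        self.formulas = {}

    def set_value(self, cell, value):
        self.formulas.pop(cell, None)   # a plain value replaces any formula
        self.values[cell] = value

    def set_formula(self, cell, fn):
        # fn receives the sheet so it can read other cells
        self.formulas[cell] = fn

    def get(self, cell):
        # Formulas are recomputed on every read, so edits to their
        # inputs are reflected immediately -- the "recalc" behavior.
        if cell in self.formulas:
            return self.formulas[cell](self)
        return self.values[cell]

sheet = Sheet()
sheet.set_value("A1", 2)
sheet.set_value("A2", 3)
sheet.set_formula("A3", lambda s: s.get("A1") + s.get("A2"))
```

Reading A3 yields 5; change A1 to 10 and the next read of A3 yields 13 without touching A3 itself, which is the whole reactive trick.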
Like many in the Twitterverse, I love the platform. It provides my main interface for following what's happening, along with staying connected to interests both personal and professional.
Jumping off something James wrote yesterday, I've felt similarly about Twitter's utility the last year or so. It feels like I'm experiencing some sort of content creep, probably a function of the increasing number of accounts I follow and the neighboring universe of likes and retweets from that expanding footprint, which generates a massive amount of noise in the algorithmic feed.
I don't spend a ton of time on Twitter anymore, but I do look at it multiple times a day. Unlike some, I actually like the algorithmic feed. The idea of seeing things adjacent to the folks I follow is attractive, but it's gotten overwhelming: toxic content, topics I personally don't want to see on Twitter (or at all), and an echo chamber effect when high-profile events happen. I need to make the time (as James did) to prune my follow list of the unnecessary. I did also discover muting topics recently, which has helped tone down the stuff I don't care about, at least somewhat.
Twitter's had its Lists feature since 2009, but I barely got into using it before abandoning it and never going back. The process for adding and removing from lists and general list consumption has always been terrible, as if Twitter is likely to kill the feature at some point. James's recommendation of TweetDeck definitely makes consuming the list feeds more manageable. I'm going to give that a try and set up a couple of topic-based lists to see how that works.
Since I do so many of my runs at night (even as late as 10-10:30pm), I've always been mindful of being visible for safety. Until we moved last month, I used to drive down to the Coffee Pot Bayou area and run on what's called the North Bay Trail, since runs in my old neighborhood were boring. That whole route was on a dedicated trail set back from the street, so visibility was less of an issue. Now that I'm doing most runs in the neighborhood, even though the sidewalks are good, there are plenty of crossings that can be sketchy in the dark. So I bought a headlamp to try out.
I havenât gotten to use it yet, but will likely be doing some runs in the evenings over the next week.
One feature I really like, and that makes it a multitasker, is its red light mode. It's meant for outdoor activities, preserving night vision while checking maps or looking around your tent, but I've already found it useful for reading in bed at night. Usually I'll only read my Kindle Paperwhite in bed since it's got a nice low-power backlight, but this is great because it allows me to read paperbacks in the dark, as well.
This latest piece from Steven Sinofsky considers product strategy on two axes:
What problem is being solved and
How it is solved
The spectrum he paints here runs from the most conservative (old things in old ways, "incrementing") to the most forward-leaning (new things in new ways, "inventing"). No approach in this matrix is "the answer" in all cases; each has its merits based on timing, product type, stage, customer set, sales approach, or business model. A product team growing over the course of 5 to 10 years also shouldn't necessarily stay in a single quadrant of the matrix. It's helpful to use as a lens to view your own team through.
Why is this so tricky? Because almost nothing we use is entirely new, an invention. Facebook was not the first social network. Instagram wasn't the first way to share photos. Google wasn't the first search engine with ads or ad network. Windows wasn't the first graphical OS. Word, Excel and more were hardly the first productivity tools in those categories.
Yet those products were all very innovative. Innovative products are a portfolio of new and old that lead to creative solutions. Marco Iansiti at Harvard Business School once taught me, innovation = invention + impact. The impact can be to solve new problems, changing market perspectives on categories, or causing customers to consider new ways to use technology.
What this brings front of mind, when I think about my own past efforts or other products I follow in the market, is that teams focus too much on the new things rather than the new ways. Different approaches to tech platform architecture, deployment strategy, or even productization (pricing and packaging) can often be keys to unlocking new customers or growth with existing ones.
One of my favorite tech figures, a16z's Steven Sinofsky, gives a history of "Clippy", the helpful anthropomorphic office supply from Microsoft Office. As the product leader of the Office group in the 90s, he gives some interesting background on how Clippy came to be. I found most fascinating the time-machine look back at what personal computing was like back then, and how different it was to develop a software product in a world of boxed software.
Everyone makes fun of it now, but Clippy did presage the world of AI-powered "assistant" technology that everyone is getting familiar with today.
Wolfe's work, particularly his Book of the New Sun "tetralogy", is some of my favorite fiction. He passed away just a couple weeks ago, and this is a great piece on his life leading up to becoming one of the most influential American writers. I recommend it to everyone I know interested in sci-fi. Even reading this made me want to dig up The Shadow of the Torturer and start reading it for a third time:
The language of the book is rich, strange, beautiful, and often literally incomprehensible. New Sun is presented as "posthistory": a historical document from the future. It's been translated, from a language that does not yet exist, by a scholar with the initials G.W., who writes a brief appendix at the end of each volume. Because so many of the concepts Severian writes about have no modern equivalents, G.W. says, he's substituted "their closest twentieth-century equivalents" in English words. The book is thus full of fabulously esoteric and obscure words that few readers will recognize as English: fuligin, peltast, oubliette, chatelaine, cenobite. But these words are only approximations of other far-future words that even G.W. claims not to fully understand. "Metal," he says, "is usually, but not always, employed to designate a substance of the sort the word suggests to contemporary minds." Time travel, extreme ambiguity, and a kind of poststructuralist conception of language are thus all implied by the book's very existence.
Zoom was in the news a lot lately, not only for its IPO, but also for the impressive business they've put together since founding in 2011. It's a great example of how you can build an extremely viable and healthy business in a crowded space with a focus on solid product execution and customer satisfaction. This profile of founder Eric Yuan goes into the core culture of the business and the grit that made the success possible.
The folks over at FullStackTalent just published this Q&A with Tony in a series on business leaders of the Tampa Bay area. It gives some good insight into how we work, where we've come from, and what we do every day. There's even a piece about our internal "GeoTrivia", where my brain full of useless geographical information can actually get used:
Matt: What's your favorite geography fun fact?
Tony: Our VP of Product, Coleman McCormick, is the longest-reigning champion of GeoTrivia, a competition we do every Friday. We just all give up because he [laughter], you find some obscure thing, like what country has the longest coastline in Africa, and within seconds, he's got the answer. He's not cheating, he just knows his stuff! We made a trophy, and we called it the McCormick Cup.
Disney recently announced details on their upcoming "Disney+" direct-to-consumer streaming service at their Investor Day. It's big news for everyone in the tech and media scene, since Disney is one of very few content companies with enough leverage, purely from differentiated content, to make a strong competing tech play against Netflix, Amazon, and others.
Most others in the traditional media space have no chance of competing on a tech level with the likes of Netflix or YouTube, but Disney has enough of its own unique IP to create its own garden and draw away enough attention to be interesting. Between Disney Animation, Pixar, Marvel, ESPN, Lucasfilm, and others, that's a moat of pure creative content property that could give them enough breathing room to catch up on the technical side and build a direct-to-consumer business.
This piece from Stratechery digs into what's interesting here if Disney actually plays the long game. The shift away from affiliate fees and traditional distribution will mean foregoing near-term revenue from cable carriers and other licensees in service of building a growing base of direct relationships with consumers, akin to the machine Netflix has built. But a key difference is how Disney+ fits into the overall Disney machine:
This is the only appropriate context in which to think about Disney+. While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable (the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney's studios), but the larger project is Disney itself.
By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.
Anyone who works at a successful company with a large distributed staff can attest that remote-first is the future for knowledge-work organizations. The more we expand the remote team at our company, the better we get at realizing all of its benefits. It seems inevitable to me that there'll be a tipping point where all new tech companies begin as remote-centric groups. Naval, the founder of AngelList (a key player in recruiting and hiring infrastructure for startups):
"We're going to see an era of everyone employing remote tech workers, and it's not too far away. In fact, now's the time to prepare for it. But I think in the meantime, the companies that are going to do the best job at it are the ones that are remote companies or that have divisions internally that are remote. It's going to be done through lengthy trials. It's going to be done through new forms of evaluating whether someone can work remotely effectively."
Jan Chipchase of Studio D posted these fun, creative, realistic, and sometimes scary speculations on what sorts of behavioral side effects could play out with the proliferation of autonomous vehicles. See also the follow-on 15 more concepts.
The practice of what we currently call parking will obviously change when your vehicle is able to park and drive itself. Think of your vehicle autonomously cruising the neighbourhood to be washed, pick up groceries and recharge its batteries whilst you're off having lunch. What is the optimal elasticity of your autonomous vehicle to you? What are the kinds of neighbourhoods it likes to drive around in when you're not using it? This is an especially pertinent question when a vehicle is considered a sensing platform: the technology to autonomously negotiate the city can collect rich data for other uses.
While the batch of feature enhancements isn't mind-blowing, I'm glad to see Apple continuing to evolve these. AirPods are the best product they've released since the iPhone. I use mine for hours every single day, far more than I ever used any previous headphones. I recently got one of these Qi wireless chargers for my office, so I'll be glad to have the inductive charging for the AirPods, too. Of course the extra battery life will be a huge plus.
Email is seeing a resurgence in an age when everyone's been crying that email is dead. The comeback is not so much in intra-office communication (though it's still alive and well in most organizations, Slack has overtaken email in ours), but in email as a publishing medium.
Newsletters have become a popular means for connecting with readers, helping publishers (and especially independent writers) cut through the noise that pervades social media channels. The constant waterfall of clickbait-ish content makes it hard for deep analysis or thoughtful writing to stand out.
Blogs are still around, but since they require engaging readers deeply enough to get them to visit your site, it's challenging to compete with Facebook and Twitter for attention share.
I still prefer a combination of RSS feeds and Pinboard bookmarks for managing my own feeds (plus Twitter), but I also find some of the new email content folks are putting together to be a nice compromise with the traditional blog. It's sort of the best of both worlds: the longer-form content subscription that blogs and RSS give you, combined with a direct approach that delivers one thing per day or week to a place you'll always see: the email inbox.
Here's a summary of email newsletters I've been enjoying, all of which I read consistently (otherwise I'd unsubscribe!):
The Exponential View: Azeem Azhar on technology, business, trends, and society. Full of interesting links and commentary.
Stratechery ($): The strategy and business of tech, by Ben Thompson. One of the best reads for keeping up with macro industry trends. Lots of original analysis on a variety of topics.
Product Habits: Links about building products, marketing, and startups. Put together by Hiten Shah.
Axios PM: Axios is doing some interesting things with the traditional news model. I use Axios PM as a daily touchpoint on what's happening in the wider world of news. Delivered each afternoon.
FT World News ($): International perspective on the news from around the world.
Daily Stoic: Ryan Holiday's daily bite of stoicism. Always a good reminder to snap back to reality.
Cleaning the Glass ($): One of my favorites, with deep analysis of basketball topics from Ben Falk, former analytics guy for the Sixers and Blazers.
These run the gamut; some are free, some I pay for personally, and some we have corporate subscriptions to.
It's interesting to see these trends ebb and flow. Even as social media platforms like Twitter and Facebook cross the decade mark, having been large, mature platforms for about that long, people are still figuring out how to make use of them on both sides, producers and consumers. Authors are rediscovering that email still provides one of the most predictable form factors for connecting directly with a reader, without having to go through gatekeepers.
This is an old announcement, but new to me. Cloudflare now hosts privacy-centric DNS at 1.1.1.1, available to all:
We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.
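For the curious, a resolver like 1.1.1.1 speaks ordinary UDP DNS. As a rough sketch of what's on the wire, here's a minimal RFC 1035 query packet built by hand; the function name is my own, and actually sending it (via `socket.sendto` to ("1.1.1.1", 53)) is left out.

```python
def build_dns_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet for an A record (RFC 1035)."""
    header = (
        qid.to_bytes(2, "big")    # transaction ID
        + b"\x01\x00"             # flags: standard query, recursion desired
        + b"\x00\x01"             # QDCOUNT: one question
        + b"\x00\x00" * 3         # ANCOUNT, NSCOUNT, ARCOUNT: zero
    )
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        len(label).to_bytes(1, "big") + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + b"\x00\x01" + b"\x00\x01"  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
```

Fire that datagram at 1.1.1.1 port 53 and the response comes back with the same transaction ID and the answer records appended, which is the entire protocol in miniature.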
The Mars rover Opportunity is now out of commission. This Twitter thread from Jacob Margolis goes through a timeline of what happened to the rover. It first landed and began exploring the Martian surface in 2004, and the system exceeded its planned operational lifespan by "14 years and 46 days". An incredible feat of engineering.
I don't post much about politics here, preferring to keep most of that to myself. I did find this piece an interesting perspective on the rise of a particular flavor of socialist-oriented ideology, and the too-common notion that so much should be guided, directed, or outright owned by government. On the risk of regulatory capture vs. the value of the market:
Bureaucracy at any level provides opportunities for special interests to capture influence. The purest delegation of power is to individuals in a free market.
This is an interesting interview with Been Kim of Google Brain on developing systems for seeing how trained machines make decisions. One of the major challenges with neural network-based deep learning systems is that the decision chain used by the AI is a black box to humans. It's difficult (or impossible) for even the creators to figure out what factors influenced a decision, and how the AI "weighted" the inputs. What Kim is developing is a "translation" framework that gives operators better insight into the decision chain of an AI:
Kim and her colleagues at Google Brain recently developed a system called "Testing with Concept Activation Vectors" (TCAV), which she describes as a "translator for humans" that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of "stripes" when making a decision.
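The core measurement behind this is roughly a directional derivative: how much does the class score move if you nudge an internal activation along the concept's direction? Here's a hand-wavy numerical sketch of that idea, heavily simplified from the real method; `zebra_logit` and `stripes_cav` are toy stand-ins for a real model's layer output and a learned concept vector.

```python
import math

def concept_sensitivity(logit_fn, activation, cav, eps=1e-3):
    """Approximate the directional derivative of a class logit along a
    concept activation vector (CAV). A positive value means the concept
    (e.g. "stripes") pushes the score for the class (e.g. "zebra") up.
    """
    norm = math.sqrt(sum(c * c for c in cav))
    unit = [c / norm for c in cav]
    nudged = [a + eps * u for a, u in zip(activation, unit)]
    return (logit_fn(nudged) - logit_fn(activation)) / eps

# Toy linear "model": here the sensitivity reduces to the projection
# of the weight vector onto the concept direction.
weights = [1.0, 2.0]
def zebra_logit(act):
    return sum(w * a for w, a in zip(weights, act))

stripes_cav = [1.0, 0.0]
sensitivity = concept_sensitivity(zebra_logit, [0.5, 0.5], stripes_cav)
```

Aggregating the sign of this quantity over many examples is, loosely, how a TCAV-style score reports "how much the model cares about stripes".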
For the last 7 days I've only been using the iPad. I've had a 12.9" iPad Pro for about a year, but have only used it in "work mode" occasionally so I don't have to lug the laptop home all the time. Most of what I do these days doesn't require full macOS capability, so I'm experimenting with developing the workflow to go tablet-only.
Slack, G Suite apps, mail, calendar, Zoom, Asana, and 1Password covers about 85% of the needs. There are a few things like testing Fulcrum, Salesforce, any code editing, that can still be challenging, but they partially work depending on what Iâm trying to do.
Iâm really enjoying it now that Iâve gotten a comfort level with navigating around and multitasking features. I find that the âone app at a timeâ nature of iOS helps me stay on track and focus on deeper tasks â things like writing documents, planning, and of course being able to sketch and diagram using the Pencil, which I do a ton of. Iâve liked Notability so far of the drawing apps Iâve tested for what I need.
One of the biggest things I had to figure out a solution for was being able to write and publish to this website efficiently. Since I use Jekyll and GitHub Pages under the hood, I hadnât found a simple solution to manage the git repository and preview posts. Iâll go deeper on that workflow in a future post, because itâs a pretty comfortable setup (for me) that others might find useful.
Overall Iâm liking working on iPad more and more. It gets easier as I accrue knowledge of tips, tricks, and other workflows.
As computing platforms get more complex and critical to daily life, maintaining secure usage gets more challenging.
I’ve written about this before, but it’s a known mantra in the product and IT space that security and usability are inversely proportional. That is, a gain in one is a loss in the other. This has long been visible in enterprise software that is perceived as annoying or frictional in the pursuit of security (password rotation every n days, no reuse allowed, complexity requirements). It’s what gives employees a bad taste in their mouths about enterprise systems, among other things. That reduction in usability begets bad behavior on the part of users — the proverbial Post-It note on the monitor with the last 3 passwords in clear text.
Those of us who make software never want to compromise on usability, but as realists we recognize the need for secure data and privacy. There are exciting developments lately that might be closing this gap.
Password managers like 1Password have already done a lot to maintain secure computer usage behavior by simplifying the “secure defaults” — primarily not reusing passwords across services and enabling realistic use of longer, random strings. Two-factor authentication adds a wrinkle in usability that (unlike many other auth wrinkles) affords a powerful layer of security, albeit with a cost. The two-factor support within 1Password makes it shockingly smooth to deal with, though. So much so that I enable two-factor auth on any service that offers it, without hesitation.
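Those one-time codes come from the standard TOTP algorithm (RFC 6238, built on HMAC): the app and the service share a secret, and both derive a short code from it and the current 30-second time window. A minimal sketch, with the function name and defaults being my own choices but the algorithm being the standard one:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, the common default)."""
    # Shared secrets are usually handed out as base32; re-pad and decode.
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    # Counter = number of complete periods elapsed since the Unix epoch.
    counter = int(time.time() if at is None else at) // period
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: base32 of the ASCII secret "12345678901234567890" at t=59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because both sides only need the secret and a clock, a password manager can generate and autofill the code locally with no extra round trip, which is what makes the 1Password flow so smooth.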
What got me thinking about this topic again was a specific new addition to the personal security workflow. I just got an iPhone XS; it’s my first experience with Face ID (which deserves a healthy dose of praise in its own right). But the real breakthrough is the integration of 1Password into the built-in Password Autofill facility in iOS 12.
Here’s a before and after example of signing into GitHub on my phone:
Before: Go to GitHub, see that I’m signed out, switch to 1Password, copy password, return to GitHub, paste credentials, tap sign in, go back to 1Password, copy 2FA code, go back and paste it in, success.
After: Go to GitHub, tap “Passwords” in the browser, Face ID, pick account, it autofills, paste 2FA code, success.
This seems like trivial stuff, but given how many seconds/minutes of each day I spend doing this process, it’s a big deal. Before, making this process smoother would have required a dent in its security. Now we get to have a friction-free process without the compromise.
Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We’ve come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I’d step back and recall its origins from a product management perspective.
We created Fulcrum to address a need we had in our own business, and quickly realized its application to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew a data collection platform wasn’t a brand new invention, we knew we could bring a novel solution combining our strengths, and that other existing tools on the market had fundamental holes in areas we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn’t see elsewhere:
Use a mobile-first design approach — Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
Make disconnected, offline use seamless to a mobile user — They shouldn’t have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
Put location data at the core — Everything geolocated. (requires #1)
Enable business analysis with spatial relationships — Even though we’re geographers, most people don’t see the world through a geo lens, but should. (requires #3)
Make it cloud-centric — In 2011 desktop software was well on the way out, so we wanted a platform we could host in the cloud with APIs for everything. Composing the system from building-block primitives let us scale horizontally on the infrastructure.
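Structurally, the offline-first idea (#2) boils down to capturing records into a local queue unconditionally and uploading whenever connectivity allows. A minimal sketch of that shape, where every name and type is a hypothetical illustration rather than Fulcrum’s actual code:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Record:
    form: str
    lat: float
    lon: float       # location at the core: every record is geolocated
    fields: dict

@dataclass
class OfflineQueue:
    """Offline-first capture: records queue locally, flush when connected."""
    send: Callable[[Record], bool]   # returns True when an upload succeeds
    pending: list = field(default_factory=list)

    def capture(self, record: Record) -> None:
        # Capturing always succeeds, even with no signal in the field.
        self.pending.append(record)

    def flush(self) -> int:
        """Try to upload everything queued; keep whatever still fails."""
        remaining, sent = [], 0
        for rec in self.pending:
            if self.send(rec):
                sent += 1
            else:
                remaining.append(rec)
        self.pending = remaining
        return sent
```

The field user never has to fiddle: capture is a local write, and the sync layer retries in the background until the central hub has every record.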
Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we feel these gaps are worth closing for ourselves, the fusion of these ideas will create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a similar body of logistical and information management challenges we did.
Fulcrum wasn’t our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we’d built a couple of applications: one that never went to market and stayed completely internal, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we’d done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we’d learned the previous years and build something new, with the foundational approach above as our guidebook.
We started building in early spring, and launched in September 2011. It was free accounts only, didn’t have multi-user support, and there was only a simple iOS client with no web UI for data management — suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I’d call “idea fusion”: the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can’t become great without doing these things in concert.
I think it’s natural for creators to favor one path over the other — either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum, with little guidance from customers on what pains actually look like. The key I’ve learned is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging to do, and it assumes your vision is perfectly clear.
On the other hand, waiting at the beck and call of your users to dictate exactly what to build works well in the early days when you’re looking for traction, but without an opinion about how the world should be, you likely won’t do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they’re uninventive, but because designing tools isn’t their mission or expertise. They’re on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphally) attributed to Henry Ford: “If I’d asked customers what they wanted, they would’ve asked for a faster horse.” In order to invent the car, you have to envision a new product completely unlike the one your customer is even asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.
We’ve tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers and our own vision for the future of work gets much harder. What constitutes a “big idea” gets even bigger, and the compulsion to treat near-term customer pains becomes ever more attractive (because, if you’re doing things right, you have more of them, holding larger checks).
When I look back to the early ’10s at the genesis of Fulcrum, it’s amazing to think about how far we’ve carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we’re solving has matured over 7 years, but hasn’t fundamentally changed in its roots.
For the final weekly column of his long career, Walt Mossberg talks about what he calls “ambient computing”, the penetration of IoT, AR, VR, and computers throughout our lives:
I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought. Your whole home, office and car will be packed with these waiting computers and sensors. But they won’t be in your way, or perhaps even distinguishable as tech devices. This is ambient computing, the transformation of the environment all around us with intelligence and capabilities that don’t seem to be there at all.
Great piece from Chris Anderson on the prospects of the commercial drone space. He makes great points about the true success of the technology being its penetration into business applications:
Although it might surprise you, I hope the future of drones is boring. As the CEO of a drone company, I obviously stand to gain from the rise of drones, but I don’t see that happening if we are focused on the excitement of drones. The sign of a successful technology is not that it thrills but that it becomes essential and accepted, fading into the wallpaper of modernity. Electricity was once a magic trick, but now it is assumed. The internet is going the same way. My end goal is for drones to be thought of as just another unsexy industrial tool, like agricultural machinery or generators on construction sites — as obviously useful as they are unremarkable.
Another good reminder from Fred Wilson on the importance of focus. He suggests setting no more than 3 “big efforts” in a year, the “must dos”. More than that means lying to yourself and losing steam on the ones you really care about:
But regardless of whether you have two, three, or four big efforts this year, you should test all of your initiatives against the “must do” vs “can do” test. Just because you can do something doesn’t mean you should. I’ve written about the importance of strategy and saying no. Strategy isn’t saying no. It is figuring out what is the most important thing for your company and deciding to focus on it and say no to everything else.
A couple years ago I bought a Kindle Paperwhite, after moving almost exclusively to ebooks when the Kindle iPhone app launched with the App Store. I read constantly, and always digital books, so I thought I’d write up some thoughts on the Kindle versus its app-based counterparts like the Kindle apps, iBooks, and Google Books, all of which I’ve read a significant amount with. For a long time I resisted the Kindle hardware because I wasn’t interested in a reflective-only reading surface. The Paperwhite’s backlit screen and low cost made it easy for me to justify buying. I knew I’d use the heck out of it if I got one.
I had a brief stint with iBooks when Apple launched it back in 2010. At the time, the Kindle apps for iOS were seriously lacking in handling the finer details of the reading experience: you couldn’t modify margins or typeset layout, iBooks had better font selection, highlighting and notetaking worked inconsistently, and the brightness controls were poor. But eventually the larger selection available on Kindle and Amazon’s continued feature development in their app brought me back.
Buying the Paperwhite was a great investment. The top reasons are its portability, backlit screen, and battery life.
When I say “portability”, it’s not about comparison to the iPhone (obviously the ultimate in portable, always-with-you reading), but with physical books. Prior to the Kindle, I’d do probably 1/3 of my reading on paper, and that’s now dropped almost to zero¹. Even with the leather case I use, it’s so lightweight I can carry it everywhere, and I don’t need to bring paper books with me on trips or airplanes anymore. It’s light enough to be unnoticeable in a backpack, and even small enough to fit in some jacket pockets.
The backlit screen is great, giving the advantages of eInk combined with the ability to read in darkness. The best thing about that screen is the fidelity of brightness control you get versus an iOS device. In full darkness you can tune the backlight down to nearly zero and still read without disturbing anyone else. With my iPad, even the minimum brightness setting can light up the room if it’s really dark.
The battery life on eInk devices is unbelievable. In two years I’ve probably charged the Kindle a dozen times total. When it’s in standby mode it uses effectively zero power, and even in use (if the backlight’s not turned up) the drain is minimal. I almost forget that it’s electronic at all. In a world where everything seems to need charging, it’s great to have some technology that doesn’t.
I’d be remiss if I didn’t mention the beauty of accessing the massive library of books directly from the device. With a few taps I can have a new book purchased and downloaded, reading it in seconds. Using the iOS version for so long, I’d been missing out on this. Thanks to Apple’s IAP policies and Amazon (justifiably) not wanting to share revenue with Apple for book sales, the app is only a reader; there’s no integrated buying experience. I dealt with this by going out and buying titles through a browser session, but I didn’t realize the smoothness I was missing until I had it integrated on the Kindle.
Amazon has long been an acquirer of other companies, but doesn’t have a great track record of integrations. They bought Audible and Goodreads long ago (2008 and 2013 respectively), both of which I’ve used for years. Only recently have they integrated any of that into the Kindle experience. On their iOS apps they launched a “narration” feature that’ll play back the audio in sync with the pages if you own audio and text versions (a little goofy, but at least they’re integrated). There aren’t many titles I own in both audio and text versions, but the ability to sync progress between the two formats is really nice. On the Goodreads front, the integration on the Kindle is fantastic. I have access to my “want to read” list right on the home screen for quick access.
With so many devices and quirky pieces of technology, it’s nice to have something reliable and simple that does one job consistently well.
I only read physical books if they aren’t available in e-format, or they’re nonfiction or reference books with heavy use of visuals. ↩
Great post from Benedict Evans on the state of voice computing in 2017. On wider answer domains and creating the uncanny valley:
This tends to point to the conclusion that for most companies, for voice to work really well you need a narrow and predictable domain. You need to know what the user might ask and the user needs to know what they can ask.
This has been the annoyance with voice UIs. For me, Siri was the first commonplace voice interface I tried for day-to-day things. The dissonance between “you can say a few things” and “ask me anything” has been the issue with Siri. Apple set false expectations of the technology that end up creating a letdown. Evans makes a good point on the combination of selecting the right problem and narrowing the domain:
This was the structural problem with Siri - no matter how well the voice recognition part worked, there were still only 20 things that you could ask, yet Apple managed to give people the impression that you could ask anything, so you were bound to ask something that wasn’t on the list and get a computerized shrug. Conversely, Amazon’s Alexa seems to have done a much better job at communicating what you can and cannot ask. Other narrow domains (hotel rooms, music, maps) also seem to work well, again, because you know what you can ask. You have to pick a field where it doesn’t matter that you can’t scale.
With the expansion of this tech in Google Now, Alexa, Siri and others, the problem becomes “what can I ask?” rather than the technical conversion of speech to text and text to command. “Ask me anything” is a non-starter, because right now you know the failure rate on any given question will be high. This is what happened with Siri and many users; it only takes a few failures of what we perceive as simple answers to switch us off entirely. I gave up on Siri years ago, and I wonder how hard it’ll be for Apple to reframe the perception of the technology to restore that confidence.
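The structural point above can be sketched in a few lines. This is not any vendor’s actual stack, just a hypothetical assistant with a small set of known intents and an explicit fallback for everything else:

```python
# Hypothetical narrow-domain assistant: a handful of known intents,
# and an explicit "computerized shrug" for anything out of domain.
INTENTS = {
    "set a timer": "Timer set.",
    "play music": "Playing music.",
    "what's the weather": "It's 75 and sunny.",
}

SHRUG = "Sorry, I can't help with that."

def answer(query: str) -> str:
    q = query.lower()
    # Match the query against the known, advertised phrases only.
    for phrase, response in INTENTS.items():
        if phrase in q:
            return response
    return SHRUG

print(answer("Please set a timer for the tea"))  # Timer set.
print(answer("What is the meaning of life?"))    # Sorry, I can't help with that.
```

When the product advertises exactly this list, users stay in-domain and it feels reliable; when it advertises “ask me anything”, nearly every query lands on the shrug, which is Evans’s point about Siri versus Alexa.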
The latest episode of Debug had a great discussion with Don Melton and Jim Ray on Safari’s development, web standards, and the state of web advertising.
The link jumps right to the discussion of ads and the ethics of content blocking in the current publishing landscape. Rene Ritchie has some interesting thoughts on the subject and wrote a piece on iMore about it. Ads, JavaScript embeds, and trackers are toeing the line on privacy (if not crossing it), but more simply than that, they’re making the Internet slower and more obnoxious to use. Yet ad networks, however scummy, are keeping the sites we like afloat.