Weekend Reading: Koestler on Awareness, 21st Century Alchemy, and the Gini Coefficient

July 30, 2021 • #

🔮 The Nightmare That Is a Reality

In early 1944, journalist Arthur Koestler had learned of the horrors of the Holocaust taking place in Europe. He wrote this essay, originally published in the New York Times, calling attention to the atrocities in a climate where most of the media dismissed the reports as exaggeration or conspiracy.

At present we have the mania of trying to tell you about the killing, by hot steam, mass-electrocution and live burial of the total Jewish population of Europe. So far three million have died. It is the greatest mass-killing in recorded history; and it goes on daily, hourly, as regularly as the ticking of your watch.

We say, “I believe this,” or, “I don’t believe that,” “I know it,” or “I don’t know it”; and regard these as black-and-white alternatives. Now in reality both “knowing” and “believing” have varying degrees of intensity. I know that there was a man called Spartacus who led the Roman slaves into revolt; but my belief in his one-time existence is much paler than that of, say, Lenin. I believe in spiral nebulae, can see them in a telescope and express their distance in figures; but they have a lower degree of reality for me than the inkpot on my table.

Even during the war, the levels of denial were palpable. People didn’t believe the available evidence of what the Nazi regime was doing. He closes the essay with a remarkably prescient observation about what happens when communications become pervasive: we have more evidence than ever before, and yet still have trouble separating fact from fantasy:

Our awareness seems to shrink in direct ratio as communications expand; the world is open to us as never before, and we walk about as prisoners, each in his private portable cage. And meanwhile the watch goes on ticking. What can the screamers do but go on screaming, until they get blue in the face?

See also episode #40 of The Portal, where Eric Weinstein discusses the essay.

⚗️ 21st Century Alchemy

Alex Crompton writes about the supply problem in venture investing:

There are way more investors than there are companies that make investors money. By some estimates, less than 1% of the companies investors fund generate over 75% of the profits across the entire industry.

Since investors are always competing for opportunities from the same supply of founders and companies, only a few tactics can differentiate you and find the truly great returns — primarily access, exposure, and quality of selection (picking the winners from the group).

But at his firm EF, a unique sort of incubator, the focus is on generating supply. If you can generate founders no one else is finding (because they’d otherwise never found companies), you create a type of alchemy that spawns ideas which would never get off the ground on their own.

This is what we’re doing at EF. We are taking in raw materials — hundreds of extraordinary people from across the world every year — and putting them through an iterative, data driven methodology. We are experimenting all the time: collecting information about the founders we support; understanding their qualitative experience; and learning what works and doesn’t work. From the moment we first make contact, we are building a methodology to get them from Day minus 100 to Day 1 of something valuable.

📊 Against Overuse of the Gini Coefficient

Vitalik Buterin on the Gini coefficient’s problems when measuring distributions in crypto:

A typical resident of a geographic community spends most of their time and resources in that community, and so measured inequality in a geographic community reflects inequality in total resources available to people. But in an internet community, measured inequality can come from two sources: (i) inequality in total resources available to different participants, and (ii) inequality in level of interest in participating in the community.
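
To make Buterin’s point concrete, here’s a quick sketch in plain Python, using the standard mean-difference formula for the Gini coefficient (the community numbers are invented purely for illustration):

```python
def gini(values):
    """Gini coefficient, computed from sorted values using the
    equivalent closed form: G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n,
    where x_i are sorted ascending and i runs from 1 to n."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

# A geographic community: everyone's resources are roughly similar.
town = [90, 100, 110, 100, 95, 105]

# An internet community: the same core holders, plus many casual
# participants who hold a token or two out of mild curiosity.
crypto = town + [1] * 20

print(round(gini(town), 3))    # 0.036 — genuine near-equality
print(round(gini(crypto), 3))  # 0.745 — "inequality" appears
```

The core holders are identical in both measurements; the dramatic jump in the second comes almost entirely from mixing in low-interest participants, which is exactly Buterin’s objection to applying the Gini coefficient naively to internet communities.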

Taking Back Our User Accounts

July 28, 2021 • #

Identity management on the internet has been broken for years. We all have 800 distinct logins to different services, registered to different emails with different passwords. Plus our personal data exists in a morass of data silos, each housing a different slice of our personal information, each under a different ToS, subject to differing privacy regulations, and ultimately not owned by us. You sign up for a user account on a service so it can identify you uniquely and provide functionality tailored to you. Service providers getting custody of your personal data is a side effect that’s become an accepted social norm.


In this piece, Jon Stokes references core indicators in finance, like capital ratios or assets under management, that help tell us when an institution is getting too big:

As a society, we realized a long time ago that if we let banking go entirely unregulated, then we end up with these mammoth, rickety entities that lurch from crisis to crisis and drag us all down with them. So when we set about putting regulatory limits on banks, we used a few simple, difficult-to-game numbers that we could use as proxies for size and systemic risk.

The “users table” works as an analogous metric in tech: the larger the users table gets (the more users a product has), the more centralized and aggregated their control and influence. Network effects, user lock-in, and power over privacy policies expand quadratically with the scope of the user base.

As Stokes points out, web3 tech built on Ethereum will gradually wrest back control of the users table with a global, decentralized replacement controlled by no one in particular, wherein users retain ownership of their own identity:

Here’s what’s coming: the public blockchain amounts to a single, massive users table for the entire Internet, and the next wave of distributed applications will be built on top of it.

Dapps on Ethereum are so satisfying to use. The flow to get started is smooth — a couple of clicks and you’re in. There’s no sign-up page, and no way for services to contact you (unless, presumably, they build something to do so and you opt in to sharing your information). Most of my dapp usage has been in DeFi, where you visit a new site, connect your wallet, and seconds later you can make financial transactions. It’s wild.

The global users table decentralizes the authentication and identity layers. You control your identity and your credentials, and grant access to applications if you choose.

Take the example of a DeFi application like Convex. When I visit the app, I first grant the service access to interact with my wallet. Once I’m signed in, I can stake tokens I own, or claim rewards from staking pools I’ve participated in, proportional to my share of the pool. All of the data that represents my balances, staking positions, and earned rewards lives in the smart contracts on the Ethereum blockchain, not in Convex’s own databases. Services like this will always need to maintain their own application databases for aspects of their products. But the critical change with the global users table is that the user interaction layer exists on-chain and not in a siloed database, with custody completely in the hands of the person with the keys to the wallet.

If more services use the dapp model and build on the public, on-chain global users table, what will the norms become around extending that table with additional metadata? With some systems like ENS (the Ethereum Name Service, decentralized DNS), subdomains and other addresses associated with an ENS address are properties written on the blockchain directly. This makes sense for something like name services, where they’re public by design. But other use cases will still require app developers to keep their own attributes associated with your account that don’t make sense on the public, immutable blockchain. I may want GitHub to know my email address for receiving notifications from the app, but I may not want that address publicly attributed to my ETH address.
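
One way that hybrid could look in practice: identity and authentication stay on-chain (the wallet address), while each app keeps a small, private side table of the attributes that don’t belong on a public ledger. Here’s a minimal sketch in Python with SQLite (the schema, addresses, and email are hypothetical, purely illustrative):

```python
import sqlite3

# On-chain: the wallet address IS the user's identity; the app stores
# no credentials. Off-chain: a thin side table of app-specific,
# private attributes (like a notification email) keyed by that address.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE account_metadata (
        eth_address TEXT PRIMARY KEY,  -- identity lives on-chain
        email       TEXT               -- private, app-specific attribute
    )
""")

def link_email(address: str, email: str) -> None:
    """Attach a private attribute to an on-chain identity."""
    db.execute(
        "INSERT OR REPLACE INTO account_metadata VALUES (?, ?)",
        (address, email),
    )

def email_for(address: str):
    """Look up the app-local attribute for a given wallet address."""
    row = db.execute(
        "SELECT email FROM account_metadata WHERE eth_address = ?",
        (address,),
    ).fetchone()
    return row[0] if row else None

# Hypothetical addresses, for illustration only.
link_email("0xAbC...123", "me@example.com")
print(email_for("0xAbC...123"))  # the app knows my email...
print(email_for("0xDeF...456"))  # ...but nothing about anyone else
```

The point is the split: nothing in this table can authenticate anyone. It only decorates an identity whose source of truth lives elsewhere, on-chain.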

Web3 is so new that we haven’t figured out yet how all this shakes out. The most exciting aspect is how it overturns the custody dynamics of user data. Even though this new world moves the users table out of the hands of individual companies, everyone will benefit (users and companies) over the long-term. Here’s Stokes again:

If you want to build a set of network effects that benefit your company specifically, it won’t be enough to simply cultivate a large users table or email list — no, you’ll have to offer something on-chain that others are also incentivized to use, so that the thing you’re uniquely offering spreads and becomes a kind of currency.

Incentives for app developers will realign in a way that produces more compelling products and a better experience for users.

Making Cast Iron

July 27, 2021 • #

I’m a sucker for a How It’s Made episode, and this tour of the Lodge factory combines that with my Food YouTube-watching obsession.

What’s amazing here is to see the reuse at work, and how few inputs are required to go from raw materials to kitchen cookware. Scrap metal, pig iron, sand, and heat come together to make products that can last generations if cared for properly. Some of the most useful tools out there are some of the simplest. In an age when we have infinite gadgets to do every specialized thing, it’s cool to see Lodge’s business booming for the most basic and versatile of cooking tools.

One side observation on systems:

When I watch episodes of How It’s Made, I try and visualize the system diagram of inputs, outputs, and operations (like what you’d find in Donella Meadows’s excellent Thinking in Systems — I know, I must be fun to watch TV with). The most interesting production lines are those that reduce and reuse throughout the process, and maximize what can be done per unit of area. Lodge’s facility is about as reductionist as you can get while maintaining the throughput they do.

On Effectiveness vs. Efficiency

July 26, 2021 • #

“Efficiency is doing things right; effectiveness is doing the right things.”

— Peter Drucker

People throw around these two words pretty indiscriminately in business, usually not making a distinction between them. They’re treated as interchangeable synonyms for broadly being “good” at something.

We can think about effectiveness and efficiency as two dimensions on a grid, often (but not always) in competition with one another. More focus on one means less on the other.

That Drucker quote is a pretty solid one-line distinction. But like many quotes, it’s more concerned with being pithy and memorable than with being helpful.

“Doing things right” is too amorphous. I’d define the two dimensions like this:

  • Efficiency is concerned with being well-run, applying resources with minimal waste; having an economical approach
  • Effectiveness is a focus on fit, fitting the right solution to the appropriate problem, being specific and surgical in approach

Where would speed fit into this? Many people think of velocity of work as an aspect of efficiency, but it’s also both a result of and an input to effectiveness. When a team of SEAL operators swoops in to hit a target, we’d say that’s just about the pinnacle of being “effective”, and swiftness is a key factor driving that effectiveness.

Let’s look at some differences through the lens of product and company-building. What does it mean to orient on one over the other? Which one matters more, and when?

A company is like a machine — you can have an incredibly efficient machine that doesn’t do anything useful, or you can have a machine that does useful things while wasting a huge amount of energy, money, and time.

With one option, our team leans toward methods and processes that efficiently deploy resources:

  • Use just the right number of people on a project
  • Create infrastructure that’s low-cost
  • Build supportive environments that get out of peoples’ way
  • Instrument processes to measure resource consumption
  • Spend less on tools along the way

With this sort of focus, a team gets lean, minimizes waste, and creates repeatable systems to build scalable products. Which all sounds great!

On the other dimension, we apply more attention on effectiveness, doing the right things:

  • Spend lots of time listening to customers to map out their problems (demand thinking!)
  • Get constant feedback on whether or not what we’re making helps customers make progress
  • Test small, incremental chunks so we stay close to the problem
  • Make deliberate efforts, taking small steps frequently, not going too far down blind alleys with no feedback

Another great-sounding list of things. So what do we do? Clearly there needs to be a balance.

Depending on preferences, personality types, experiences, and skill sets, different people will tend to orient on one of these dimensions more than the other. People have comfort zones they like to operate in. Each stage of product growth requires a different mix of focuses and preferences, and the wrong match will kill your company.

If you’re still in search of the keys to product-market fit — hunting for the right problem and the fitting solution — you want your team focused on the demand side. What specific pains do customers have? When do they experience those pains? What things within our reach could function as solutions? You want to spend time with customers and rapidly probe small problems with incremental solutions, testing the validity of your work. That’s all that matters. This is Paul Graham’s “do things that don’t scale” stage. Perfecting your machine’s efficiency is wasted effort until you’re solving the right problems.

A quick note on speed, and why I think it’s critical to being effective: if you’re laser-focused on moving carefully and deliberately to solve the right problem at the right altitude, but you can’t move quickly, you won’t have a tight enough feedback loop to iterate sufficiently in the time you have. Essential to the effectiveness problem is the ability to rapidly drive signal back from users to validate your direction.

When you find the key that unlocks a particular problem-solution pair, then it’s time to consider how efficiently you can expand it to a wider audience. If your hacked-together, duct-taped solution cracks the code and solves problems for customers, you need to address the efficiency with which you can economically expand to others. In the early to mid-stage, effectiveness is far and away the more important thing to focus on.

The traditional definition of efficiency refers to achieving maximum output with the minimum required effort. When you’re still in search of the right solution, the effort:output ratio barely matters. It only matters insofar as you have the required runway to test enough iterations to get something useful before you run out of money, get beat by others, or the environment changes underneath you. But there’s no benefit to getting 100 miles/gallon if you’re driving the wrong way.

Getting this balance wrong is easy. There’s a pernicious tendency among many engineers, particularly in pre-product-market-fit products: they like to optimize things. You need to forcefully resist spending too much time on optimization, rearchitecting, refactoring, and the like until it’s the right time (i.e. the go-to-market-fit stage, or thereabouts). As builders or technologists, most of us bristle at the idea of doing something the quick and dirty way. We have that urge to automate, analyze, and streamline things. That’s not to say there’s zero space for this: if you spend literally zero time on a sustainable foundation, then when your product clicks and it’s time to scale up, you’ll be building on unstable ground (see the extensive literature on technical debt).

There’s no “correct” approach here. It depends on so many factors. As Thomas Sowell says, “there are no solutions, only trade-offs.” In my first-hand experience, and from sideline observations of other teams, companies are made by favoring effectiveness early and broken by ignoring efficiency later.

Elyse 6.0

July 14, 2021 • #

Well a lot has happened since Elyse’s 5.0 mark!

That birthday happened in the middle of the pandemic while we were still (mostly) isolated. Since then she started kindergarten remotely, switched to going in-person, then switched schools at spring break, learned how to ride a bike, broke her arm and wore a cast for a month — plus all the other changes kids go through at that age.


Since we moved in March she’s been loving the new school with more classmates in the local neighborhood. Hopefully for the 2021-22 school year it’ll be even closer to fully back to normal again. We’re gradually finding our local attractions in the new locale, but we’ve also got a lot more space to make the house itself a fun hangout for the summer.

On to first grade!

Weekend Reading: DeFi Yields, Cloudflare's Internet, and Standards in Logistics

July 2, 2021 • #

📈 How Are DeFi Yields So High?

This is a great primer on yield farming in DeFi from Nat Eliason. Seeing the insane 1000% APYs on some DeFi products, you have to wonder if it’s a Ponzi scheme (hint: sometimes it probably is). But there are plenty of legitimate and relatively reliable projects growing right now, which bodes well for the movement.
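
For intuition on where quadruple-digit numbers can come from: farms usually advertise APY (compounded) rather than APR (simple), and a reward rate that is merely high becomes absurd once compounded daily. A quick sketch in Python (the 240% APR is invented for illustration):

```python
import math

def apr_to_apy(apr: float, periods_per_year: int) -> float:
    """Convert a simple annual rate (APR) to the compounded
    annual yield (APY) at a given compounding frequency."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

def apr_to_apy_continuous(apr: float) -> float:
    """Limit of compounding infinitely often: APY = e^apr - 1."""
    return math.exp(apr) - 1

# A hypothetical farm paying 240% APR in reward tokens:
apr = 2.40
print(f"{apr_to_apy(apr, 1):.0%}")          # annual compounding: 240%
print(f"{apr_to_apy(apr, 365):.0%}")        # daily compounding: ~994%
print(f"{apr_to_apy_continuous(apr):.0%}")  # continuous: ~1002%
```

The same underlying reward rate quadruples on paper just by changing the compounding assumption — worth remembering before treating a headline APY as a Ponzi tell on its own.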

☁️ Cloudflare’s Intelligent Design

Cloudflare has such an interesting approach to building the “pipes and wires” of the internet, a business most people wouldn’t think of as glamorous (even though it’s technically extremely complex). The only other companies out there building and shipping products as quickly are Stripe and Amazon, a comparison Byrne Hobart calls out:

Their “workers” product lets customers write code and then deploy it to the edge around the world; they can be location-agnostic, both in the technical sense that packets won’t take a needlessly roundabout path to users and in the legal sense that if they run something in a country that requires data to be stored locally, it will be stored locally. They originally built this as an internal tool for deploying their own code, then started letting customers use it. And then they turned that decision into an abstraction: “And so we implemented what we internally and somewhat cheekily called the Bezos Rule. And what the Bezos Rule is, is the exact same rule that Amazon put in place when they were developing AWS, which is, any API or any development tool that we build for ourselves and for our own team, we also are then going to make available to our customers.” Cloudflare built an uptime factory, then workers became an uptime factory factory, and with the Bezos rule they’ve codified the production of such things: an uptime factory factory factory. They are no doubt adding new layers of recursion even now.

🚢 Ever Given, Supply Chains, and the Physical World

A great overview of the state of logistics from Flexport founder Ryan Petersen.

With demand for goods rising around the world, our shipping infrastructure is hitting scaling limits and bottlenecks that will be hard problems in the coming years. Petersen considers inconsistent standards and fragmentation to be major challenges to surmount:

Our computers, laptops, tablets, phones, and more can all connect quickly to the information we seek thanks to standardization. And while today’s global trade network is kind of like an internet of physical goods, it’s missing a standard like HTTP. The same way data passes between devices via the internet, goods pass between ocean ports, airports, warehouses, and other entities to reach their final destination. Without a logistics standard to act as a request-response protocol, all the players — suppliers, drayage, ports, warehouses, buyers — have to stitch their networks together manually.

Information gets lost; layers of redundancy, designed as backups given low visibility, slow the exchange: connections end up being very brittle. Let’s say there’s a shipment scheduled to arrive in Long Beach on Tuesday. But which terminal exactly and what pier number? What time is pickup? How long before late charges are incurred? Finding these answers is labor-intensive and imprecise. Logistics managers end up consulting different sources on websites, via email, or in person.

The dirty secret of the industry is that no one really knows where their stuff is. But if global trade were like the network of information as it is on the internet, we could simply type or speak into a search bar to ask and answer these questions, precisely.

This is not about the desired features of such a system, but rather about the need for standardization, the need for a universal language for global trade. Once this exists, the physical world, like software, becomes searchable, programmable, accessible — connecting a patchwork of country-specific regulations and more.

Interface points between ships, terminals, carriers, and suppliers should follow standards, like APIs for the physical world. But standards are one of the hardest coordination problems to solve. The most powerful and versatile standards are adopted organically. How can you get thousands of freight forwarders speaking the same language?

Ethereum Name Service

July 1, 2021 • #

I’ve bought a couple of domains on ENS recently, a decentralized, on-chain version of DNS running on Ethereum.

Go to ens.domains and connect your ETH wallet, then you can search for available names just like you would on a normal domain registrar. With ENS, your domain name is essentially an NFT, with addresses and TXT records nested underneath.

I have no immediate need for this, other than the convenient reverse lookup to your ETH address, which is neat. But if you believe in the future of web3 and Ethereum, it’s a good time to invest in the internet real estate of the next generation.

If it continues to be successful, ENS will disintermediate registrars from domain name purchases. Auctions to buy names from existing holders don’t need to be black box offers and escrowed negotiations; we could have auctions on OpenSea for popular addresses.

Subscribe here to receive my newsletter, Res Extensa, a digest of my latest posts, links, and updates. Currently bi-weekly.