Weekly Links: LiDAR, WannaCry, and OSM Imagery

May 18, 2017 • #

🗺 LiDAR Data for DC Available as an AWS Public Dataset

LiDAR point cloud data for Washington, DC, is available for anyone to use on Amazon Simple Storage Service (Amazon S3). This dataset, managed by the District of Columbia’s Office of the Chief Technology Officer (OCTO), with the direction of OCTO’s Geographic Information System (GIS) program, contains tiled point cloud data for the entire District along with associated metadata.

This is a great move by the District to make high-value open data available.

🖥 WannaCry and the Power of Business Models

Ben Thompson breaks down the blame game of the latest zero-day attack on Windows systems. This article makes a great case that the business model is to blame rather than Microsoft, its customers, the government, or someone else. A SaaS business model naturally aligns incentives for everyone:

I am, of course, describing Software-as-a-service, and that category’s emergence, along with cloud computing generally (both easier to secure and with massive incentives to be secure), is the single biggest reason to be optimistic that WannaCry is the dying gasp of a bad business model (although it will take a very long time to get out of all the sunk costs and assumptions that fully-depreciated assets are “free”). In the long run, there is little reason for the typical enterprise or government to run any software locally, or store any files on individual devices. Everything should be located in a cloud, both files and apps, accessed through a browser that is continually updated, and paid for with a subscription. This puts the incentives in all the right places: users are paying for security and utility simultaneously, and vendors are motivated to earn it.

🛰 DigitalGlobe Satellite Imagery Launch for OpenStreetMap

DG is opening up access to imagery for tracing in OpenStreetMap, giving the project a powerful new resource for more basemap data. Especially cool for HOTOSM projects:

Over the past few months, we have been working with several of our partners that share the common goal of improving OpenStreetMap. To that end, they have generously funded the launch of a global imagery service powered by DigitalGlobe Maps API. This will open more data and imagery to aid OSM editing. OSM contributors will see a new DigitalGlobe imagery source, in addition to imagery provided by our partners, Bing and Mapbox.

📷 Updating Google Maps with Deep Learning

If you’re in the mapping space, seeing any of this R&D that Google is doing is mind-boggling.

Weekly Links: Podcast Edition

May 4, 2017 • #

🚗 The Man Behind Uber

The Daily is the New York Times’ daily radio show, which I’ve been enjoying lately. This episode is a companion to their recent piece on Travis Kalanick, Uber’s CEO.

🚢 Containers

Containers is an audio documentary on global trade and container shipping. Alexis Madrigal dives into the processes that bring things like coffee from a farm in Ethiopia to your local hipster coffee shop.

🚀 Nukes

The crew from Radiolab looks at the nuclear arsenal chain of command. When they were first invented, atomic weapons were treated like any other military munition: military leadership had the authority to use them as they would conventional weapons. Over time we implemented the system we have now, which requires presidential authorization.

Weekly Links: Cartography's Future, Interactive Maps, and Building Moats

April 27, 2017 • #

🚙 Cartography in the Age of Autonomous Vehicles

An excellent, extremely detailed analysis from Justin O’Beirne on how maps and cartography might evolve if autonomous vehicles negate our need for turn-by-turn navigation.

We can’t apply today’s maps to tomorrow’s cars – but this is exactly what those who think cartography is dying are doing. (It’s not that we’ll no longer be navigating, it’s that we’ll be navigating different things – and we’ll need new kinds of maps to help us.)

🌎 Few Interact With Our Interactive Maps–What Can We Do About It?

Brian Timoney’s done some great writing on this topic over the last few years. In the GIS world, enormous amounts of money are spent by governments to build and host map portals. The goals are typically noble (transparency, openness, providing access to citizens), but the results are mixed. Much of the spend is in making the information interactive. The dirty secret is that people don’t actually interact with these maps. He proposes a number of ideas for how to get the best of both worlds: lower costs to create with the same (or higher) consumer engagement. For example, static maps cost much less to create and could even do better at directing a reader to the right information:

Just because you’re publishing a map to the web, doesn’t mean it has to be a web map. If a user is only going to spend 10-15 seconds with your map without interacting, why spend two weeks wrestling with your Javascript? And the great thing is the focus a static map brings–a single view, a single story: don’t bury the lede.

💡 The New Moats

Jerry Chen from Greylock thinks “systems of intelligence” will be the next business model for software companies to create defensible value. He differentiates “systems of record” and “systems of engagement” as two layers in a stack of software applications that have existed since the dawn of the IT revolution in the 1990s.

These AI-driven systems of intelligence present a huge opportunity for new startups. Successful companies here can build a virtuous cycle of data because the more data you generate and train on with your product, the better your models become and the better your product becomes. Ultimately the product becomes tailored for each customer which creates another moat, high switching costs.

Aerial imagery with the Mavic

April 24, 2017 • #

At work we’ve been building an integration between Fulcrum and DroneDeploy, a service for automating drone flight and data capture for aerial imagery. It’s compatible with the Mavic, so I gave it a shot with some test flights over my house.

The idea is simple: use DroneDeploy to draw the area you want to survey on a map, and their app handles building the flight plan, sending it to the drone, and flying the waypoints to take all the photos. You then take the pictures from the drone’s storage and upload them to your DroneDeploy project for processing. It stitches them into a single mosaic and does a few other data processing functions to give you maps of NDVI plant health, elevation, and even a 3D model of the scene.

Aerials of my house

This data is from a 3-minute flight over my house at about 150 feet. The post-processed scene reports 0.75 acres at 0.6 in/pixel resolution. Only 13 stills were required to create this image. It’s pretty impressive for a few minutes of setup and a few minutes of flying. In the full-res images you can actually see Elyse and me clearly standing in the backyard. She was a little spooked as it took off, but loved the landing!
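As a rough sanity check on that resolution figure, here’s a quick back-of-the-envelope ground sample distance calculation. The camera numbers are approximate Mavic Pro specs I’m assuming (a 1/2.3" sensor and 4000-pixel-wide stills), not anything reported by DroneDeploy:

```python
# Back-of-the-envelope ground sample distance (GSD) check.
# Camera specs below are assumed approximations for the Mavic Pro,
# not values from the DroneDeploy report.
sensor_width_mm = 6.17      # 1/2.3" sensor width
focal_length_mm = 4.7       # roughly a 26-28mm equivalent lens
image_width_px = 4000       # stills are about 4000 x 3000
altitude_m = 150 * 0.3048   # 150 ft flight altitude in meters

# Ground distance covered by a single pixel
gsd_m = (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)
gsd_in = gsd_m * 39.37

print(f"GSD ≈ {gsd_in:.2f} in/pixel")  # prints ≈ 0.59, in line with the 0.6 in/pixel reported
```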

Weekly Links: OSM on AWS, Fulcrum Editor, & Real-time Drone Maps

April 21, 2017 • #

Querying OpenStreetMap with Amazon Athena 🗺

Using Amazon’s Athena service, you can now query OpenStreetMap data interactively, right from a console. No need to use the complicated OSM API; this is pure SQL. I’ve taken a stab at building out a replica OSM database before and it’s a beast. The dataset now clocks in at 56 GB zipped. This post from Seth Fitzsimmons gives a great overview of what you can do with it:

Working with “the planet” (as the data archives are referred to) can be unwieldy. Because it contains data spanning the entire world, the size of a single archive is on the order of 50 GB. The format is bespoke and extremely specific to OSM. The data is incredibly rich, interesting, and useful, but the size, format, and tooling can often make it very difficult to even start the process of asking complex questions.

Heavy users of OSM data typically download the raw data and import it into their own systems, tailored for their individual use cases, such as map rendering, driving directions, or general analysis. Now that OSM data is available in the Apache ORC format on Amazon S3, it’s possible to query the data using Athena without even downloading it.
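For a sense of what this looks like in practice, here’s a rough sketch of kicking off one of those Athena queries from Python with boto3. The database, table, and column names (an "osm" database with a "planet" table holding a tags map) and the results bucket are assumptions for illustration, not details from the post:

```python
# Sketch: running an Athena query over the OSM data with boto3.
# Table/column names and the output bucket are illustrative assumptions.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
SELECT tags['name'] AS name, lat, lon
FROM planet
WHERE type = 'node'
  AND tags['amenity'] = 'cafe'
  AND lat BETWEEN 27.9 AND 28.1
  AND lon BETWEEN -82.6 AND -82.4
LIMIT 100;
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "osm"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/osm/"},
)

# Poll until the query finishes, then print the first page of results.
query_id = execution["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```

Athena bills by data scanned, so the nice part of the ORC-on-S3 setup is that a filtered query like this touches a small slice of the planet file instead of the whole 50+ GB archive.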

Introducing the New Fulcrum Editor 🔺

Personal plug here: this is something that’s been in the works for months. We just launched Editor, the completely overhauled data editing toolset in Fulcrum. I can’t wait for the follow-up post to explain the nuts and bolts of how this is put together. The power and flexibility are truly amazing.

Real-time Drone Mapping with FieldScanner 🚁

The team at DroneDeploy just launched the first live aerial imagery product for drones. Pilots can now fly imagery and get a live, processed, mosaicked result right on a tablet as soon as their mission is completed. This is truly next-level stuff for the burgeoning drone market:

The poor connectivity and slow internet speeds that have long posed a challenge for mapping in remote areas don’t hamper Fieldscanner. Designed for use in the fields, Fieldscanner can operate entirely offline, with no need for cellular or data coverage. Fieldscanner uses DroneDeploy’s existing automatic flight planning for DJI drones and adds local processing on the drone and mobile device to create a low-resolution Fieldscan as the drone is flying, instead of requiring you to process imagery into a map at a computer after the flight.

Mavic Pro First Impressions

April 19, 2017 • #

I bought a Mavic Pro a couple weeks ago and just got a chance to take my first flights this past weekend. In short, it’s the most impressive technology product I’ve used in years. I’ve never owned any drone, so this is pretty cool for someone in the mapping industry. Let’s dive in.

Mavic Pro

Ever since going out to fly aerial mapping missions with some partners of ours a couple of months back, I’d wanted to buy one of DJI’s drones — either the larger Phantom 4 Pro or the smaller Mavic. Extensive research led me to choose the Mavic over the P4 for its portability and nearly equivalent technical specs. It’s so close in most of its capabilities, but its compactness is remarkable. I got the kit with the carrying bag, and it’s so small you could literally take it anywhere. I love the prospect of having this as a photography platform while traveling.

I did my first test flight in the backyard, plopped it down on the patio and kicked on the drone and remote control. Everything linked up right away and the DJI Go app was “Ready to Fly”. It’s so simple it seems like you’re doing something wrong. It feels like there should be more configuration. As long as you’ve got a clear GPS signal and you’re in “beginner” mode, you can just take off.

My first reaction was how easy it is to fly. You don’t have to do anything and the drone just hovers. Let go of the controls at any time and it stays put. The controller sensitivity feels smooth and intuitive; I was strafing sideways, rotating, and descending to create cool sweeping shots within 2 minutes. With a little practice you could do pro-level photography with this. Landing was just as easy: you descend where you want to land, and as you approach the ground the drone halts at about 18” using its collision detection sensors. With another long hold on the left stick, it initiates the landing sequence and slowly touches down. I also tried the “Return to Home” feature, which is enabled as long as you let the drone get a good locked home location before takeoff. It’s so cool to see it work. The drone can be away from you, and when you tap Return to Home in the app, the drone comes home and makes a smooth and careful landing. In a couple of tests it came home and landed within a 5-10 foot radius of the takeoff point.

Next is the software. The DJI Go app is what you use when you dock your device with the controller to get the live video, heads-up display, and settings controls, and it’s an amazing piece of software. I hadn’t used earlier versions, but in version 4, you can control everything from the app. The video feed from the drone and the HUD view of all the needed metrics (altitude, bearing, distance) look great. Triggers on the sides of the remote snap photos and start recording video. DJI has honed the system down to the simplicity of a video game. I’ve only done a couple of flights, but the video and photo quality is excellent. 4K video from this tiny airframe and camera is a stunning feat.

One of my flights was in about 15-knot winds, and the little guy held up well. The camera’s gimbal was rock steady even in breezy conditions. I noticed a tiny bit of jitter when flying into the teeth of the wind, but not enough to make a difference. I flew one mission of aerial imagery with DroneDeploy, but I’ll dive deeper on that in a future post when I can do more flights.

A few other things on the docket to try:

  • Object detection and tracking — you can lock onto a moving object and the drone and camera will follow. When I find a use case for it I’ll try it out and report back. Looks neat from videos I’ve seen.
  • Flying at high altitude — so far I haven’t gone above about 150 feet.
  • Flying at longer ranges — haven’t yet gone farther than a few hundred yards away, but the range on this thing is huge. When I get more confident with it I’d like to do some longer flights for cool video. Thinking about our Florida Keys trip to Marathon in June!

Weekly Links: Tensor Processing, Amazon, and Preventing Traffic Jams

April 13, 2017 • #

Google’s “Tensor Processing Unit” 💻

Google has built their own custom silicon dedicated to AI processing. The power efficiency gains with these dedicated chips are estimated to have saved them from building a dozen new data centers.

But about six years ago, as the company embraced a new form of voice recognition on Android phones, its engineers worried that this network wasn’t nearly big enough. If each of the world’s Android phones used the new Google voice search for just three minutes a day, these engineers realized, the company would need twice as many data centers.

Jeff Bezos’ Annual Letter to Shareholders 📃

An excellent read. Their philosophy of experimentation comes through. I liked this bit, on the “velocity” of decision making:

Day 2 companies make high-quality decisions, but they make high-quality decisions slowly. To keep the energy and dynamism of Day 1, you have to somehow make high-quality, high-velocity decisions. Easy for start-ups and very challenging for large organizations. The senior team at Amazon is determined to keep our decision-making velocity high. Speed matters in business – plus a high-velocity decision making environment is more fun too. We don’t know all the answers, but here are some thoughts.

First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you’re wrong? I wrote about this in more detail in last year’s letter.

Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.

How not to create traffic jams, pollution and urban sprawl 🚘

The Economist analyzes the state of parking economics. The gist: free or low-cost parking equals congestion and more drivers roaming for longer. Some great statistics in this piece:

As San Francisco’s infuriated drivers cruise around, they crowd the roads and pollute the air. This is a widespread hidden cost of under-priced street parking. Mr. Shoup has estimated that cruising for spaces in Westwood village, in Los Angeles, amounts to 950,000 excess vehicle miles travelled per year. Westwood is tiny, with only 470 metered spaces.

Weekly Links: Cars, AI Doctors, and the Mac Pro's Future

April 6, 2017 • #

Cars and Second Order Consequences 🚙

The cascading effect of a world with no human drivers is my favorite “what if” to consider with the boom of electric, autonomous car development. Benedict Evans has a great analysis postulating several tangential effects:

However, it’s also useful, and perhaps more challenging, to think about the second and third order consequences of these two technology changes. Moving to electric means much more than replacing the gas tank with a battery, and moving to autonomy means much more than ending accidents. Quite what those consequences would be is much harder to predict: as the saying goes, it was easy to predict mass car ownership but hard to predict Walmart, and the broader consequences of the move to electric and autonomy will come in some very widely-spread industries, in complex interlocked ways.

A.I. versus M.D. 💊

Siddhartha Mukherjee looks at the potential for AI in medicine, specifically as a diagnostic tool. Combine processing and machine learning with sensors everywhere, and things get interesting:

Thrun blithely envisages a world in which we’re constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer’s. A steering wheel would pick up incipient Parkinson’s through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun’s world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

This piece is one of the best explanations of neural networks I’ve read.

The Mac Pro Lives

If you follow the Apple universe, you’ve surely heard the frustration of professional Mac users who’ve felt abandoned as Apple neglected its pro hardware for three years. They’re resurrecting the lineup now with a redesigned Mac Pro. The craziest bit about this story is that Apple is coming out of its shell to talk about a new product months before launch, to a handful of select journalists.

Kindle

April 4, 2017 • #

A couple of years ago I bought a Kindle Paperwhite, after moving almost exclusively to ebooks when the Kindle iPhone app launched with the App Store. I read constantly, and always digital books, so I thought I’d write up some thoughts on the Kindle versus its app-based counterparts like the Kindle apps, iBooks, and Google Books, all of which I’ve read a significant amount with. For a long time I resisted the Kindle hardware because I wasn’t interested in a reflective-only reading surface. The Paperwhite’s backlit screen and low cost made it easy for me to justify the purchase. I knew I’d use the heck out of it if I got one.

I had a brief stint with iBooks when Apple launched that back in 2010. At the time, the Kindle apps for iOS platforms were seriously lacking in handling the finer details of the reading experience. You couldn’t modify margins or typeset layout, iBooks had better font selection, highlighting and notetaking worked inconsistently, and the brightness controls were poor. But eventually the larger selection available on Kindle and Amazon’s continued feature development in their app brought me back.

Buying the Paperwhite was a great investment. The top reasons are its portability, backlit screen, and battery life.

When I say “portability”, it’s not in comparison to the iPhone (obviously the ultimate in portable, always-with-you reading), but to physical books. Prior to the Kindle, I’d do probably 1/3 of my reading on paper, and that’s now dropped almost to zero1. Even with the leather case I use, it’s so lightweight I can carry it everywhere, and I don’t need to bring paper books with me on trips or airplanes anymore. It’s light enough to be unnoticeable in a backpack, and even small enough to fit in some jacket pockets.

The backlit screen is great: it gives you the advantages of eInk combined with the ability to read in darkness. The best thing about that screen is the fidelity of brightness control you get versus an iOS device. In full darkness you can tune the backlight down to nearly zero and still read without disturbing anyone else. With my iPad, even the minimum brightness setting can light up the room if it’s really dark.

The battery life on eInk devices is unbelievable. In two years I’ve probably charged the Kindle a dozen times total. When it’s in standby mode it uses effectively zero power, and even in use (if the backlight’s not turned up) the drain is minimal. I almost forget that it’s electronic at all. In a world where everything seems to need charging, it’s great to have some technology that doesn’t.

I’d be remiss if I didn’t mention the beauty of accessing the massive library of books directly from the device. With a few taps I can have a new book purchased and downloaded, and be reading it in seconds. Having used the iOS version for so long, I’d been missing out on this. Thanks to Apple’s IAP policies and Amazon (justifiably) not wanting to share revenue with Apple for book sales, the app is only a reader; there’s no integrated buying experience. I just dealt with this by buying titles through a browser session, but I didn’t realize how much smoothness I was missing out on until I had buying integrated with the Kindle.

Amazon’s long been an acquirer of other companies, but doesn’t have a great track record of integrations. They bought Audible and Goodreads long ago (2008 and 2013 respectively), both of which I’ve used for years. Only recently have they integrated any of that into the Kindle experience. On their iOS apps they launched a “narration” feature that’ll play back the audio in sync with the pages if you own audio and text versions (a little goofy, but at least they’re integrated). There aren’t many titles I own both audio and text versions of, but the ability to sync progress between the two formats is really nice. On the Goodreads front, the integration there on the Kindle is fantastic. I have access to my “want to read” list right on the home screen for quick access.

With so many devices and quirky pieces of technology, it’s nice to have something reliable and simple that does one job consistently well.

  1. I only read physical books if they aren’t available in e-format, or they’re nonfiction or reference books with heavy use of visuals. 

Weekly Links: AI, APFS, and MBA Mondays

March 30, 2017 • #

Trying out a new thing here to document 3 links that caught my interest over the past week. Sometimes they might be related, sometimes not. It’ll be an experiment to journal the things I was reading at the time, for posterity.

The Arrival of Artificial Intelligence 🔮

Good piece from Ben Thompson comparing the current developmental stage of machine learning and AI with the formative years of Claude Shannon and Alan Turing’s initial discoveries of information theory. They figured out how to take mathematical logic concepts (Boolean logic) and merge them with physical circuits — the birth of the modern computer. With AI we’re on the brink of similar breakthroughs. Thompson does well here to make clear the distinctions between Artificial General Intelligence (what most people think of when they hear the term, things like Skynet) and Narrow Intelligence (which is all we have currently, AIs that can replicate human thinking in a narrow problem set).

The New APFS Filesystem 📱

Apple announced their new APFS file system at last year’s WWDC, and this week launched it as part of the iOS 10.3 update. Their HFS+ file system is now 20 years old, but file systems aren’t something that you change lightly. They’re the core data storage and retrieval engine for computers, and massively complex. APFS is engineered with encryption as a first-class feature and also includes enhancements for SSD-based storage. The most amazing thing to me about this story is the guts it takes to make a seismic change like this to millions of devices in one swoop. It’s the sort of change that is 100% invisible to the average iPhone owner if it works, and could brick millions of phones if it doesn’t. Working at a software company building mission-critical software, I know it takes serious planning, testing, and skill to deploy risky changes like this and move your platform forward. Kudos to Apple for pulling off such a monumental and thankless change.

Fred Wilson’s MBA Mondays 💼

I’ve read Fred Wilson’s AVC blog for some time, but only through post links that make the rounds. Recently I discovered his archive of “MBA Mondays” articles covering tons of business topics. He’s got pieces on budgeting, cash flow, equity, M&A, unit economics — tons of great stuff from someone learning and practicing all of this in reality. Much more digestible than textbook business school material. I’m gradually making my way through the archive from the beginning and really enjoying it.