Touch ID and Security

September 17, 2015 • #

I recently wrote a review on the Fulcrum blog for one of my favorite pieces of software, 1Password. It’s a password management app that helps you organize the hundreds of passwords, codes, and other secure data you typically have lying around in emails, documents, and post-it notes on your desk1.

I’m a heavy user of 1Password on my iPhone to look up accounts while I’m mobile. Because your 1Password vault is only as secure as your master password, the natural tendency is to use a long, complex, intricate passphrase to unlock the vault. And on the iPhone, you want your vault to re-lock pretty rapidly so the door to your digital safe isn’t left swinging open while your phone’s sitting on the table. The net result is having to constantly type a hard-to-type passphrase on a hard-to-type-on device. No good and no fun.

Touch ID

My problems were solved a few weeks ago when I finally enabled the Touch ID functionality in 1Password 5, which lets you access your vault with your fingerprint instead of typing the 30-character password2. After using it like this for a few days, it seemed less secure to me, since it was no longer even requiring my impressively-complicated password to get in. So I dug into the documentation to find out how secure the Touch ID implementation in 1Password really is, and how Touch ID works in iOS.

The app documentation has a great article outlining exactly how Touch ID works within 1Password. For a long time the app has had a “PIN Code” feature: a quick-access code for unlocking the vault shortly after you’ve unlocked it with your master password. The Touch ID feature works similarly, and the data is still encrypted with the master password. It’s designed explicitly as a mechanism for adding convenience to the process, which is a critical component of maintaining good security practices:

“Just as Apple has designed Touch ID not as a replacement for a device passcode, we do not use Touch ID in 1Password as a replacement for your Master Password. Touch ID is a convenience mechanism that provides a way to quickly unlock 1Password after there has been a full unlock (with your Master Password).”

The intersection of convenience and security is interesting. They’re fundamentally opposed: a totally secure system is extremely inconvenient to access, while a convenient one is insecure. The best systems strike a balance somewhere in the middle. The problem with highly secure but inconvenient systems is that they entice users to defeat the security of the whole system by taking shortcuts. Think of the corporate IT environment with all the bells and whistles on security—password strength requirements, required resets every month, no password reuse, minimum lengths—it’s this massive inconvenience that results in the post-it note on the monitor with the keys to the kingdom written on it.

The security of how Touch ID’s technology works is another matter, one of hardware and storage. With the release of the A7 processor in 2013, Apple introduced something called the Secure Enclave3, which allows applications to store bits completely outside the scope of the kernel on a physically isolated area of the chip. This is where biometrics get stored, along with cryptographic data for other applications. Apple’s technical documentation about Touch ID security covers in minute detail exactly how iOS devices store your fingerprint data on the Secure Enclave, and the ultimate reason why Touch ID is actually more secure than not using it:

“Since security is only as secure as its weakest point, you can choose to increase the security of a 4-digit passcode by using a complex alphanumeric passcode. To do this, go to Settings > Touch ID & Passcode and turn Simple Passcode off. This will allow you to create a longer, more complex passcode that is inherently more secure.”

This is a key point that’s relevant at the OS level and within apps like 1Password or banking apps using biometrics. If, because of the convenience factor, biometrics enable people to keep their encryption passphrases more secure at the core, then we’re all better off.

  1. It’s utterly essential to modern computing, so go buy it right now if you don’t have it already.

  2. The AgileBits team released this functionality a year ago, but for some reason I never bothered to try it.

  3. Apple has an in-depth security document covering Secure Enclave and the entire security architecture of iOS and the hardware. Worth a read if you can stomach the geeky stuff.

Addresses and Geocoding: Do New Systems Improve What We Have?

August 8, 2015 • #

There’s been a boom in the last couple of years of big tech companies trying to reach the periphery of the globe and bring Internet access to people without connectivity. Facebook is launching giant solar-powered drones with lasers, Google is floating balloons with antennae into the stratosphere, and smartphones are cheaper than ever.

The success rate of these projects is hard to quantify; it’s too early to tell. But for the mapping industry, it’s a fact that billions of people don’t have access to the kinds of map data we have in the US or Europe, and the immaturity of infrastructure and public services, like managed street addresses and quality map data, is holding back the advance of mobile location-based services. E-commerce companies like Amazon and logistics providers like UPS and FedEx rely on quality geographic data to conduct business. Cities like Lagos, Dhaka, and Kinshasa are enormous, booming urban centers, but they still don’t have reliable addressing systems for navigating city streets.

House number address

Given the combination of expanding connectivity to disconnected places and the vacuum of reliable geodata, a number of services have sprung up in recent years with systems for global wayfinding and geocoding. The particular focus here is to bring a mechanism for providing addresses to places where there are no other alternatives. When I first read that people were building new systems for geocoding it piqued my interest, so I dug into them to see what they’re all about, and what they might be bringing to the table that we don’t already have.

The Problem

The first step in understanding the problem at hand is to lay down some definitions that differentiate an “address” from a “coordinate”. An address is an identifier for a place where a person, organization, or the like is located or can be found, while a coordinate is a group of numbers used to indicate position in space.

This fundamental difference is important because addresses only truly matter where there are people, but coordinates are universal identifiers for anywhere on the globe. A location in the center of the North Atlantic has a position in any global geographic coordinate system, but having a human-readable address isn’t important; it’s unnecessary for everyday use. Coordinate or grid systems can function as addresses, but the reverse isn’t always the case.

I thought I’d compare some different geocoding systems to see where the pros and cons are. Are they really necessary, or can we make use of existing proliferated systems without reinventing this wheel?

The “neo-addressing” systems

Coordinates in several systems

These systems all provide similar capabilities, with a primary focus of providing memorable human-friendly identifiers for places. There are others out there in the wild, but I’ll just talk about some of the prominent ones I’ve run across:

  • Mapcode - Created by a Dutch non-profit founded by former TomTom employees
  • what3words - A system based on a global grid of 3m x 3m squares, with identifiers composed of triplets of everyday words
  • Open Location Code - An open source system developed and sponsored by Google

Each of these geocoding services has a similar set of objectives: to make addresses algorithmically derivable for anywhere on Earth, to assign shorter and more memorable codes than coordinate systems or postal codes, and to reduce ambiguity in the codes (avoiding lookalike characters like “O” and “0”, or using distinctly different words and phrases). The interesting thing with all of them is that by deriving codes deterministically, the result can be controlled and made more human-friendly. In the case of what3words, it generates shorter and more memorable word combinations in areas with higher population density. So lives.magma.palace will take you to Philadelphia’s Independence Hall, while conservatory.thrashing.incinerated will get you to the remote Arctic islands of Svalbard. This is a clever method of optimizing the pool of words for usage frequency, and obviously not something that can be controlled with traditional coordinate systems.

Algorithmic systems can also allow a user to shorten the code for a less granular location. With OLC, you can knock off the last couple characters and get a larger area containing the original location. 76VVQ9C6+ encompasses the few city blocks around our building. 76VVQ9C6+9M gets you right to my office. Because it represents an area rather than only a point, truncating to get successively larger areas is possible. Truncating a lat/lon coordinate moves the point entirely.
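To make the truncation behavior concrete, here’s a simplified sketch of an OLC-style encoder in Python. This is a toy version of my own, not Google’s reference library (which also handles padding, validation, and code shortening), but it shows why chopping characters off the end only coarsens the location: each pair of characters refines the latitude/longitude cell by another factor of 20.

```python
# Toy Open Location Code-style encoder (not the official library).
# Each character pair divides the previous lat/lng cell by 20, so a
# truncated code simply names a larger containing area.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"  # 20 symbols; no O/0 or I/1 confusion

def olc_encode(lat, lng, length=10):
    lat += 90.0   # shift latitude into 0..180
    lng += 180.0  # shift longitude into 0..360
    code = ""
    resolution = 20.0  # degrees covered by the first character pair
    for _ in range(length // 2):
        lat_digit = int(lat / resolution)
        lng_digit = int(lng / resolution)
        code += OLC_ALPHABET[lat_digit] + OLC_ALPHABET[lng_digit]
        lat -= lat_digit * resolution
        lng -= lng_digit * resolution
        resolution /= 20.0
    # the '+' separator always follows the 8th character
    return code[:8] + "+" + code[8:]
```

Run on the example coordinate used later in this post, `olc_encode(27.79987, -82.63402)` yields a code beginning `76VVQ9`, and encoding the same point at any shorter length produces a prefix of the longer code, which is exactly the property that makes truncation work.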

The what3words approach seems the most creative and truly memorable method, though it sounds sort of gimmicky. They’ve done a lot to account for things like offensive words, avoiding homophones, removing ambiguous combinations, and even providing the system in several languages.

Spreading adoption for any of these systems will be an enormous challenge. They all seem to be different varieties of the same wheel. If I were developing mapping applications, which system should I support? All of them? Software developers will have to buy into one or more new systems, and users will have to understand how they work.

Another issue is one of ownership. If a new addressing scheme requires a special algorithm or code library for calculating coordinates, it should be in the public domain and serve as an open standard (if anyone expects adoption to grow). In the age of open source, no platform developer is going to license a proprietary system for generating coordinates with so many open alternatives out there. Both OLC and Mapcode have an open license, but what3words is currently proprietary.

Let’s compare these tools to what existing coordinate schemes we already have.

Existing models, grids, and coordinate systems

USGS topographic map

Addresses in the classic sense of “123 Main St” make sense for navigation, particularly after a hundred years of usage and understanding. When I’m searching for “372 Woodlawn Court” in my car, there are conventions about addressing that help me get there without knowing specific geographic coordinates–odd numbers on one side and even on the other, numbers following a sequence in a specific direction–so people can still do some of the wayfinding themselves. Naturally this relies on having a trusted, known address format, but nonetheless, a new geocoding system needs to be valuable for everyone, not just for places without modern address systems.

How do new means of addressing physical space stack up to the pre-existing constructs we’ve had for decades (or centuries)? Do the benefits outweigh the costs of adopting something new?

Here are several of the common coordinate systems used globally for navigation and mapping:

  • Plain latitude and longitude - in decimal or degree-minute-second format
    • Example: 27.79987, -82.63402 or 27°47’59.5314” N 82°38’2.472” W
    • Pro: In use for centuries, supported across any mapping tools
    • Con: Lengthy coordinates needed to get accurate locations
  • UTM (Universal Transverse Mercator) - a grid-based map projection that segments the world into 60 east/west “zones” of 6° each, with coordinates expressed as a number of meters north of the equator and east of the zone’s central meridian (“northing” and “easting”)
    • Example: 17N 339031 3076104
    • Pro: Uses meters for measurement, great for orienteering with paper maps, nearby coordinates can be compared to measure distance easily
    • Con: Long coordinates, requires knowledge of reference zones to find position, some tools don’t support
  • MGRS (Military grid reference system) - another grid-based standard used by NATO militaries, similar to UTM, but with different naming conventions
    • Example: 17R LL 39031 76104
    • Pro: Same as UTM, somewhat more intuitive scheme with smaller grid cells
    • Con: Same as UTM
  • Geohash - an encoded system similar to the ones mentioned earlier, but the underlying algorithm has been in the public domain since 2008, and there are existing tools that already support it
    • Example: dhvnpsg9zz2
    • Pro: Existing algorithm-based system, open standard, short codes
    • Con: Not human-readable
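Geohash is simple enough to sketch in a few lines, which is part of its appeal. Below is a minimal encoder of my own (published implementations use the same algorithm but add decoding and neighbor lookups): it alternates longitude and latitude bits, halving the matching cell with each bit, and packs every 5 bits into one base-32 character. Like OLC, chopping characters off the end yields a larger containing cell.

```python
# Minimal Geohash encoder. Bits alternate longitude/latitude, each bit
# halving the current cell; every 5 bits become one base-32 character.
# A prefix of a geohash always contains the full hash's cell, so
# shorter codes name larger areas.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, length=11):
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    code = []
    bits, bit_count = 0, 0
    even = True  # even-numbered bits encode longitude
    while len(code) < length:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits = bits * 2 + 1  # point is in the upper half
            rng[0] = mid
        else:
            bits = bits * 2      # point is in the lower half
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            code.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(code)
```

Run on the decimal coordinate from the first bullet above, `geohash_encode(27.79987, -82.63402)` produces a hash beginning `dhvnpsg`, matching the Geohash example, and requesting any shorter length returns a prefix of the longer code.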

MGRS grid coverage in the US

These systems have some distinct advantages over building something new (and naturally some disadvantages). But I think the gains from algorithmic libraries and services like those mentioned above aren’t enough to justify convincing millions of people to adopt something new.

If you look back at the primary benefits of Open Location Code or what3words, the main one is memorability. I’ll grant that what3words has a leg up in this department, but the others, not so much. Is 17RLL3861573116 really that much worse than 76VVQ9F6+4V? Neither is very human-friendly to me, but at least something like MGRS has an existing worldwide base of understanding, users, and tools supporting it.

I would concede that memorability and reduced ambiguity could help replicate the ease-of-use we get with classic addresses. But in the days of ubiquitous GPS, smartphones, and apps, people don’t realistically memorize anything about location anymore. We punch everything into a mapping app or the in-car navigation system. Given that, what benefit is left in inventing a new way of expressing location?

I think it’s wiser to expand adoption of existing, widely supported systems like MGRS or UTM before we start asking citizens of developing countries to adopt systems that no one else is using yet, even if those systems do come with some new benefits.

Other Interesting Reading

If you’re interested in reading more background on some of these systems, check out these links:

The Craft of Baseball

July 17, 2015 • #

I’m a baseball fan from way back, and grew up as a Braves fan during the early years of their 1990s NL East dominance. As much as I always enjoyed following the sport as a casual fan, I’d never studied the game much, nor its history beyond the bits that are conventional knowledge to anyone with an interest in the sport (the seminal records, player achievements, and legends of the game). I’ve been on a kick lately of reading about sports I enjoy—baseball and soccer—and have picked up a few books on the subjects to find out what I’ve been missing.

Dodger Stadium

I just finished reading George Will’s Men at Work: The Craft of Baseball, his 1989 book that dives deep on the strategy of the game. He sits down with four professional baseball men to analyze the sport and its component parts: managing with Tony La Russa, hitting with Tony Gwynn, fielding with Cal Ripken, Jr., and pitching with Orel Hershiser. One of the first things that attracted me to this as a re-primer for a newfound interest in baseball is that it’s not new. The book is over 25 years old, so most of the players mentioned in the text are ones I grew up watching.

The book offers a deep analysis of the tactics of baseball games. Rather than write about the specifics as an armchair expert, the author leaves most of the opinion about the elements of the game to the actual practitioners. He poses the question and lets La Russa’s 2,700 wins or Gwynn’s 3,000 hits do the talking. Will does pepper in some of his own opinions on things like the practicality of the designated hitter rule (he thinks pitchers hitting in the NL is a waste of time), and that Walter Johnson is hands-down the best pitcher to have played the game (a bold position, but not a surprising one). But it’s by no means a book of opinion on the game.

Baseball men

He spends a lot of the book’s introduction emphasizing the differences between baseball and other sports. No one would deny that baseball is extremely different from the other Big Three US sports, all of which are “get the object to the other side to score” games. All of those sports have depths of complexity in and of themselves, but the important differentiation isn’t about which sport is “harder” or innately “better”. He points out that baseball is the only sport where the defense initiates every play—pitcher throwing to batter. As a result, no matter how dominant or overpowering a particular hitter is, he only gets 1 of every 9 team at-bats. One offensive player simply can’t dominate the entire game on behalf of his team if the other eight are consistently striking out. In football or basketball, the ball can be dished to the same running back or power forward on every play if he’s dominating. The only player on the baseball field who can dominate is the pitcher, a part of the defense.

I love these dynamics of baseball games, with each pitch functioning as a set piece with strategies set up for each hitter, count, baserunner position, batter tendency, and stadium configuration. A typical baseball game consists of 300 pitches or more, so the intricate interlock of the game’s components is incredibly complex when trying to compete at the big league level, for 162 games a season.

Orel Hershiser

The theme throughout the book, touched on by each of the professionals, is that baseball is, fundamentally, a game of attrition. There are more opportunities to fail and go into a slump than there are to succeed, even for the cream of the crop. Even the winningest managers in the modern era (La Russa, Bobby Cox, Joe Torre) racked up 2,000 losses in their careers. At the end of the day, baseball is a game of failure, and excelling at the game is an exercise in minimizing failure as much as it is about success. There’s an excellent anecdote at the start of the book from Warren Spahn, the Braves’ left-handed legend, speaking at a dinner at the US Capitol with a host of congressmen:

Spahn was one of a group of former All-Stars who were in Washington to play in an old-timers’ game. Spahn said: “Mr. Speaker, baseball is a game of failure. Even the best batters fail about 65 percent of the time. The two Hall of Fame pitchers here today (Spahn, 363 wins, 245 losses; Bob Gibson, 251 wins, 174 losses) lost more games than a team plays in a full season. I just hope you fellows in Congress have more success than baseball players have.”

The pros that get on top are the ones that overcome the ridiculous rate of failure to edge out the competition.

Much is said in the game about “luck” as an immovable fixture of the sport. You can’t watch a broadcast or listen to a manager’s press conference without them talking about luck or misfortune. Analysts in the last 10 to 15 years have created an entire science out of developing statistics that remove luck from the equation when measuring a pitcher, fielder, or hitter’s effectiveness on the field. Part of the reason luck becomes an interesting “metric” when analyzing the sport is the sheer number of individual events in a baseball season—pitches, hits, strikeouts, runs, stolen bases, the list goes on and on. A full league-wide season is 2,430 games, not including the playoffs, so there’s an enormous amount of data streaming out continuously, ripe for analysis.

“Luck is the residue of design.” -Branch Rickey

Because of this, baseball is a game of numbers and averages (with a “steadily thickening sediment of statistics”, in Will’s words). Lots of current baseball writing and analysis is overrun by esoteric sabermetricians hyperanalyzing the game in such ridiculous detail that casual fans wouldn’t even understand the meaning of the numbers. Look at stats like wins above replacement (WAR), batting average on balls in play (BABIP), or ultimate zone rating (UZR) and try to understand their meanings without detailed study. With Men at Work, I liked that Will’s approach was closer to the surface in reflecting on the practical aspects of the game, rather than the in-the-weeds examination of player performance and team contribution that’s become commonplace in the post-Moneyball era. There’s certainly no shortage of statistics or an appreciation of their importance to the sport, but they take a backseat to the observable strategies and decision-making processes of a La Russa or Hershiser. My favorite part about baseball statistics has always been looking at historical trends in player output, and many of the old school numbers work just fine for seeing individual and team performance.

I highly recommend Men at Work to anyone interested in baseball, and particularly more avid fans of the sport. This book deepened my appreciation of the game, and now makes me think differently about strategies unfolding on the field.