Archive of posts with tag 'Open Source'
Muse, a tool for journaling prompts →
January 29, 2025 • So far this year I've gone headlong into what's possible with AI development, specifically using Cursor. I've been using it to write code, write blogs, and even write this post.
The potential for "personal software" is profound here. There've been a few times already where I wanted a simple script for automating something, and it wasn't even worth bothering to look around for pre-existing ones. I just had Cursor help me make my own. So I've decided to ship something once a month: a full project hosted and usable around a single idea.
When it comes to journaling and Morning Pages, I occasionally reference a list of journaling prompts I keep to help me get going. So in that spirit I built this tool, Muse.
Using a bulleted list of prompts in a markdown file, Muse will randomly display a prompt with each roll of the dice. I decided to build it with React and Expo to potentially make it available on mobile.
I'm not sure how much I'll use it, but it's a fun little project.
The source is on GitHub here. You can modify the prompts.md file to edit or add any of your own prompts, too.
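I haven't dug into Muse's actual implementation, but the core loop it describes (parse the bulleted markdown, pick one prompt at random per "roll") can be sketched in a few lines of plain JavaScript; the function names here are my own, not Muse's:

```javascript
// Pull top-level bullets out of a markdown string, one prompt per "- " line
function parsePrompts(markdown) {
  return markdown
    .split("\n")
    .filter((line) => /^\s*[-*]\s+/.test(line))
    .map((line) => line.replace(/^\s*[-*]\s+/, "").trim())
    .filter((line) => line.length > 0);
}

// "Roll the dice": return one prompt chosen uniformly at random
function rollPrompt(prompts) {
  return prompts[Math.floor(Math.random() * prompts.length)];
}

const md = `# Prompts
- What am I avoiding right now?
- Describe yesterday in three sentences.
- What would make today great?`;

const prompts = parsePrompts(md);
console.log(rollPrompt(prompts));
```

In a React/Expo app the roll would just set state with the chosen prompt on a button press.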
gpt-researcher →
June 19, 2024 • GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.
The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent Plan-and-Solve and RAG papers, GPT Researcher addresses issues of speed, determinism and reliability, offering a more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.
I've been tinkering around with this tool for doing deeper topic research. It uses the ChatGPT API and GPT-4o to conduct multi-phase, agent-based research. It adds sources and citations, and can be directed with specific tuning. Very interesting.
Ancient Myths and Open Source →
May 3, 2023 • Sam Arbesman draws parallels between open source and the oral storytelling of myth. Through generations of retellings and small, effective adaptations over time, ancient tales were made more Lindy:
Over time though, each ever-retold story was sanded into a gem of a tale that could last all those years, not necessarily because the written text was preserved—to be found in a clay pot in some desert cave millennia hence, or etched into a rock so it could not be forgotten—but because there was a community devoted to its recounting.
OpenStreetMap is Having a Moment →
November 20, 2020 • My friend Joe Morrison wrote an excellent piece on the current state of corporate investment in OpenStreetMap:
The four companies in the inner circle — Facebook, Apple, Amazon, and Microsoft — have a combined market capitalization of over six trillion dollars.¹ In almost every other setting, they are mortal enemies fighting expensive digital wars of attrition. Yet they now find themselves eagerly investing in and collaborating on OSM at an unprecedented scale (more on the scale later).
The post references Jennings Anderson's analysis of corporate contributions to the project (check out the paper), which have skyrocketed in the past couple of years. Mapbox was an early major contributor, but not the first. What blew my mind from Jennings's State of the Map talk was the scale that teams at Apple and Amazon have achieved, not just in use of OSM but in contributions back to the data. Just look at the number of users on Apple's data team.
I haven't kept up with the latest happenings in OSM over the last couple of years, but corporate involvement (or any kind of official involvement taking the project beyond hobbyists) is ultimately healthy for the growth of the project. It seems like the scale and quality have expanded over the last 5 years to "irreplicable" levels for everyone except the largest players (ahem, Google).
My prediction is that automated extraction is going to overtake a still-unknown but major chunk of imagery-to-map tracing-style mapping. There are still lots of corrections and details that are hard to detect from imagery, but projects like RAPiD are super interesting.
Weekend Reading: Software Builders, Scarcity, and Open Source Communities
September 19, 2020 • 👨‍💻 We Need More Software Builders, Not Just Users
On the announcement of Airtable's latest round and $2.5b valuation (!), founder Howie Liu puts out a great piece on the latest round of changes in pursuit of their vision.
No matter how much technology has shaped our lives, the skills it takes to build software are still only available to a tiny fraction of people. When most of us face a problem that software can answer, we have to work around someone else's idea of what the solution is. Imagine if more people had the tools to be software builders, not just software users.
🔒 Scarcity as an API
Artificial scarcity has been an effective tactic for certain categories of physical products for years. In the world of atoms, scarcity can at least be somewhat believable if demand outstrips supply — perhaps a supply chain was underfilled to meet the demands of lines around the block. Only recently are we seeing digital goods distributed in similar ways, where the scarcity is truly 100% forced, where the scarcity is a line of code that makes it so. Apps like Superhuman or Clubhouse generate a large part of their prestige status from this approach.
Dopamine Labs, later called Boundless, provided a behavioral change API. The promise was that almost any app could benefit from gamification and the introduction of variable reward schedules. The goal of the company was to make apps more addictive and hook users. Like Dopamine Labs, a Scarcity API would likely be net-evil. But it could be a big business. What if a new software company could programmatically "drop" new features only to users with sufficient engagement, or online at the time of an event? What if unique styles could be purchased only in a specific window of time?
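A hypothetical scarcity gate like the ones imagined above could be little more than a predicate over engagement and a time window; every name and threshold in this sketch is invented for illustration:

```javascript
// Invented example: gate a feature "drop" on engagement and a time window
function canAccessDrop(user, drop, now) {
  const inWindow =
    now >= drop.opensAt && now < drop.closesAt; // limited-time window
  const engagedEnough =
    user.sessionsLast30Days >= drop.minSessions; // engagement threshold
  return inWindow && engagedEnough;
}

const drop = {
  opensAt: Date.parse("2020-09-19T00:00:00Z"),
  closesAt: Date.parse("2020-09-20T00:00:00Z"),
  minSessions: 10,
};

const user = { sessionsLast30Days: 14 };
const during = Date.parse("2020-09-19T12:00:00Z");
const after = Date.parse("2020-09-21T00:00:00Z");

console.log(canAccessDrop(user, drop, during)); // true
console.log(canAccessDrop(user, drop, after)); // false
```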
💥 The Glory of Achievement
Antonio Garcia-Martinez's review of Nadia Eghbal's book on open source, Working in Public.
It's really about how a virtualized, digital world decoupled from the physical constraints of manufacturing and meatspace politics manages to both pay for and govern itself despite no official legal frameworks nor institutional enforcement mechanisms. Every class of open source project in Eghbal's typology can be extended to just about every digital community you interact with, across the Internet.
Mermaid Diagrams →
July 12, 2020 • Mermaid is a neat project that adds a custom syntax layer to Markdown to support rendering diagrams like flowcharts and sequences.
Roam just recently added support for Mermaid in Roam graphs, which renders your diagram live as you edit the markup in a page. This is useful for adding visuals to pages, but I'll have to mess around and learn its syntax to get comfortable.
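For a taste of that syntax, a minimal Mermaid flowchart is just a few lines of markup in a fenced block (node labels here are my own example):

```mermaid
graph TD
  A[Write markup] --> B{Renders?}
  B -->|Yes| C[Live diagram]
  B -->|No| D[Fix syntax]
  D --> A
```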
Digital Gardeners →
July 8, 2020 • The "digital garden" concept is gaining in popularity. I've seen a dozen of them recently, with groups like the Roam community taking to publishing their own open notebooks.
Maggie Appleton (an awesome, prolific member of the #roamcult) created this small library of resources for creating your own garden, along with several examples of others in the community. I still have the idea in my backlog of side projects to look at spinning up an open notebook like this.
Here's Maggie's illustration of where the digital garden sits on the spectrum of content. All of her illustrations are excellent.
Weekend Reading: Beastie Boys, Links, and Screencasting
May 2, 2020 • 🎥 Beastie Boys Story
We watched this a couple nights ago. It's hard to tell how objectively good it was, but I loved the heck out of it as a decades-long fan.
🔗 Linkrot
I'll have to try out this tool that Tom built for checking links. When I've run those SEM tools that check old links, I get sad seeing how many are redirected, 404'd, or dead.
📹 Screencasting Technical Guide
This is an excellent walkthrough on how to make screencasts. I've done my own tinkering around with ScreenFlow to make a few things for Fulcrum. It's something I want to do more of eventually. A good resource for gear + tools, preparation, and editing.
Weekend Reading: Tagging with Turf, Mars Panorama, and Kinds of Easy
March 7, 2020 • 🗺 turf-tagger
Bryan put together this neat little utility for merging point data with the attributes of containing polygons via spatial join queries. It uses Turf.js to do the geoprocessing in the browser.
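Turf handles the heavy lifting in his tool; as a dependency-free illustration of the same spatial-join idea (not his actual code), a basic ray-casting point-in-polygon test is enough to tag points with a containing polygon's attribute:

```javascript
// Ray-casting point-in-polygon test (ring is an array of [x, y] pairs)
function pointInRing([x, y], ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const crosses =
      yi > y !== yj > y && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Tag each point with an attribute from the first polygon containing it
function tagPoints(points, polygons, field) {
  return points.map((pt) => {
    const hit = polygons.find((poly) => pointInRing(pt.coords, poly.ring));
    return { ...pt, [field]: hit ? hit.properties[field] : null };
  });
}

const polygons = [
  { ring: [[0, 0], [10, 0], [10, 10], [0, 10]], properties: { zone: "A" } },
];
const points = [{ coords: [5, 5] }, { coords: [20, 20] }];

console.log(tagPoints(points, polygons, "zone"));
// point [5,5] gets zone "A"; point [20,20] gets zone null
```

Turf's own `tag` helper wraps essentially this operation for GeoJSON FeatureCollections.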
🔭 Mars Curiosity High-Res Panorama
Amazing photography of the Mars surface:
NASA's Curiosity rover has captured its highest-resolution panorama yet of the Martian surface. Composed of more than 1,000 images taken during the 2019 Thanksgiving holiday and carefully assembled over the ensuing months, the composite contains 1.8 billion pixels of Martian landscape. The rover's Mast Camera, or Mastcam, used its telephoto lens to produce the panorama; meanwhile, it relied on its medium-angle lens to produce a lower-resolution, nearly 650-million-pixel panorama that includes the rover's deck and robotic arm.
⚖ Different Kinds of Easy
- "Easy" because there's a delay between benefit and cost.
The cost of exercising is immediate. Exercise hurts while you're doing it, and the harder the exercise the more the hurt. Investing is different. It has a cost, just like exercising. But its costs can be delayed by years.
Whenever there's a delay between benefit and cost, the benefits always seem easier than they are. And whenever the benefits seem easier than they are, people take risks they shouldn't. It's why there are investing bubbles, but not exercise bubbles.
Why Hasn't Open Source Software Disrupted Esri? →
November 14, 2019 • This is a great post from my friend Joe Morrison assessing the state of the open source software movement in GIS against its biggest corporate competitive friction point, Esri.
Even as QGIS and comparable open source projects have invested in better branding, that doesn't mitigate the ultimate killer in the quest to replace Esri at most large organizations: misaligned incentives. The typical IT department at a large company or government agency is defensive by nature due to the asymmetrical risk of adopting new technology; you can get fired for championing a new software that winds up becoming a costly failure, but it's tough to get fired for preserving the status quo.
He nails just about every point I'd make in a realistic comparison of the challenges facing not only open source projects, but any new entrant competing in an already-mature space.
I think inertia and incentives are massive forces, bigger than most of us technologists realize at first, and they have to be overcome before you'll get any traction in a wide marketplace. Even if you're coming at a problem in a largely new way (creating a new category of software), users will still prefer to find the closest comparable analog to stack you up against, even if unfairly. To reach beyond the "early adopters" and tinkerers, you have to somehow overcome these challenges — get good at telling your story, be 10X better (3X or 5X usually isn't enough), and have the persistence and patience to chew away at the problem for a long time. The chasm is real, and taking the wheel away from the incumbents is an enormous undertaking.
Weekend Reading: Blot, Hand-Drawn Visualizations, and Megafire Detection
November 9, 2019 • 📝 Blot.im
Blot is a super-minimal open source blogging system based on plain text files in a folder. It supports markdown, Word docs, images, and HTML — just drag the files into the folder and it generates web pages. I love simple tools like this.
📐 Handcrafted Visualization: Precision
An interesting post from Robert Simmon of Planet. These examples of visualizations and graphics of physical phenomena (maps, cloud diagrams, drawings of insects, planetary motion charts) were all hand-drawn, in an era when specialized photography and sensing weren't always options.
A common thread between each of these visualizations is the sheer amount of work that went into each of them. The painstaking effort of transforming a dataset into a graphic by hand grants a perspective on the data that may be hindered by a computer intermediary. It's not a guarantee of accurate interpretation (see Chapplesmith's flawed conclusions), but it forces an intimate examination of the evidence. Something that's worth remembering in this age of machine learning and button-press visualization.
I especially love that Apollo mission "lunar trajectory" map.
🔥 The Satellites Hunting for Megafires
Descartes Labs built a wildfire detection algorithm and tool that leans on thermal-spectrum data from NASA's GOES weather satellites to detect wildfires by temperature:
While the pair of GOES satellites provides us with a dependable source of imagery, we still needed to figure out how to identify and detect fires within the images themselves. We started simple: wildfires are hot. They are also hotter than anything around them, and hotter than at any point in the recent past. Crucially, we also know that wildfires start small and are pretty rare for a given location, so our strategy is to model what the earth looks like in the absence of a wildfire, and compare it to the situation that the pair of GOES satellites presents to us. Put another way, our wildfire detector is essentially looking for thermal anomalies.
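Their production model is surely far more sophisticated, but the thermal-anomaly idea they describe reduces to a baseline comparison. A toy version, with made-up threshold values, might look like:

```javascript
// Toy thermal-anomaly detector: flag a reading as a possible fire when it
// exceeds the pixel's recent baseline by a margin (the margin is made up).
function isThermalAnomaly(historyKelvin, currentKelvin, marginKelvin = 15) {
  const baseline =
    historyKelvin.reduce((sum, t) => sum + t, 0) / historyKelvin.length;
  return currentKelvin > baseline + marginKelvin;
}

const history = [295, 296, 294, 295, 297]; // typical readings for this pixel
console.log(isThermalAnomaly(history, 340)); // hot spike: true
console.log(isThermalAnomaly(history, 298)); // warm afternoon: false
```

The real system models the expected temperature per location and time of day, so the "baseline" is a function rather than a simple mean.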
mapwith.ai →
October 22, 2019 • In a conversation yesterday I learned about this project called RapiD, led by Facebook, which uses computer vision to detect features for mapping in OpenStreetMap. They're working on a fork of the iD editor that does assistive feature detection, presenting an editor with generated geometries to add to the map. I messed around with it in the test areas they're supporting so far, and it's a clever combination of computer-assisted detection and human-based mapping. This shows some promise to enhance OSM contributor technology and lower the barrier to entry for new editors.
bookish.tech →
October 18, 2019 • This is a handy tool for searching book metadata, built by Tom MacWright. It's a "universal converter" for book IDs in various services, like Goodreads, OpenLibrary, and WorldCat.
It's open source and also has an API. I'll have to see about factoring these other IDs into my own Library reading log.
Search the Archives
September 5, 2019 • Since I've been posting here so frequently, it's gotten challenging to scroll through the archive to find links to things I wrote about before. Last night I worked on implementing a simple site search page that searches the title, text, and tags of posts to find relevant content. This is a short post on how I put that together.
I use Jekyll to manage the site content and generation, with all of my posts written as markdown files with some custom front-matter to handle things like tagging, search-friendliness, and other values used in templating the site. There are also a couple of other custom things built using Jekyll's "collections" feature for other content types, like my Books section.
I ran across a library called Lunr that provides client-side search over an index generated when your site builds. It's small and simple, outputting the index as a JSON document that a search can then comb through to return data from posts or other content types. This provided exactly what I wanted: something lightweight that works with the Jekyll and GitHub hosting I already use, without having to change anything, add third-party indexing services, or resort to clunky Google Site Search. I wanted something native to my own site.
The best implementation I found that matched what I wanted was from Katy DeCorah. Using her version as a starter, I customized it to fit my site and to index and return the specific data I wanted to appear in search results. The outcome looks nice and is certainly simple to use. Right now it still only supports searching the post archives, but that's good enough for now. I'm still exploring ways to browse my archives by other dimensions like tags, but I want to do that in a way that's useful and as lightweight as possible.
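The site uses Lunr for this, but the underlying idea, an inverted index built ahead of time and searched entirely client-side, can be sketched without the library (this is an illustration of the concept, not the site's actual code):

```javascript
// Minimal inverted index over post title/text/tags (the idea behind Lunr)
function tokenize(text) {
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

function buildIndex(posts) {
  const index = new Map(); // token -> Set of post ids
  for (const post of posts) {
    const tokens = tokenize(`${post.title} ${post.text} ${post.tags.join(" ")}`);
    for (const token of tokens) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token).add(post.id);
    }
  }
  return index;
}

function search(index, posts, query) {
  const hits = tokenize(query).map((t) => index.get(t) || new Set());
  // A post must match every query token
  const ids = hits.reduce(
    (acc, set) => acc.filter((id) => set.has(id)),
    posts.map((p) => p.id)
  );
  return posts.filter((p) => ids.includes(p.id));
}

const posts = [
  { id: 1, title: "Search the Archives", text: "Lunr and Jekyll", tags: ["open-source"] },
  { id: 2, title: "MapSCII", text: "maps in the terminal", tags: ["maps"] },
];

const index = buildIndex(posts);
console.log(search(index, posts, "jekyll search").map((p) => p.id)); // [1]
```

Lunr adds stemming, field boosting, and relevance scoring on top of this basic shape, and serializes the index to JSON at build time.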
Head to the /search page and check it out.
Weekend Reading: tracejson, Euclid, and Designing at Scale
August 24, 2019 • 🛰 tracejson
An extension to the GeoJSON format for storing GPS track data, including timestamps. GPX has been long in the tooth for years, but it works and is supported by everything. This approach could have some legs if application developers are interested in a new, more efficient way of doing things. I know I'd like to explore it for Fulcrum's GPS-based video capability. Right now we do GPX and our own basic JSON format for storing the geo and elapsed-time data to match video frames with location. This could be a better way.
💡 Byrne's Euclid
This is a gorgeous web recreation of Oliver Byrne's take on Euclid's Elements. A true work of art.
🎨 Design Tooling at Scale
This post from the Dropbox design team dives into how a large team with a complex product created a design system for a consistent language. It goes into how they organize the stack of design elements and structures using Figma for collaboration.
Weekend Reading: Terrain Mesh, Designing on a Deadline, and Bookshelves
August 17, 2019 • 🌐 MARTINI: Real-Time RTIN Terrain Mesh
Some cool work from Vladimir Agafonkin on a library for RTIN mesh generation, with an interactive notebook to experiment with it on Observable:
An RTIN mesh consists of only right-angle triangles, which makes it less precise than Delaunay-based TIN meshes, requiring more triangles to approximate the same surface. But RTIN has two significant advantages:
- The algorithm generates a hierarchy of all approximations of varying precisions — after running it once, you can quickly retrieve a mesh for any given level of detail.
- It's very fast, making it viable for client-side meshing from raster terrain tiles. Surprisingly, I haven't found any prior attempts to do it in the browser.
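The error metric behind those hierarchical approximations is simple at its core: compare the true elevation at the midpoint of a triangle's hypotenuse against the linear interpolation of its endpoints. A standalone sketch of that test (not MARTINI's actual code, and on a synthetic terrain function):

```javascript
// Core RTIN refinement test: a triangle needs splitting when the terrain at
// the hypotenuse midpoint deviates from the interpolation of its endpoints.
function midpointError(elevationAt, ax, ay, bx, by) {
  const mx = (ax + bx) / 2;
  const my = (ay + by) / 2;
  const interpolated = (elevationAt(ax, ay) + elevationAt(bx, by)) / 2;
  return Math.abs(elevationAt(mx, my) - interpolated);
}

// Synthetic terrain: a flat plane plus an 8-unit bump at (2, 2)
const elevationAt = (x, y) => 100 + (x === 2 && y === 2 ? 8 : 0);

// Hypotenuse from (0,0) to (4,4): its midpoint (2,2) sits on the bump
console.log(midpointError(elevationAt, 0, 0, 4, 4)); // 8

// With maxError = 5 this triangle gets split; with maxError = 10 it's kept.
```

MARTINI precomputes these errors bottom-up for every triangle in the hierarchy, which is what makes retrieving a mesh at any error threshold nearly free afterward.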
👨🏽‍🎨 Design on a Deadline: How Notion Pulled Itself Back from the Brink of Failure
This is an interesting piece on the Figma blog about Notion and their design process in getting the v1 off the ground a few years ago. I've been using Notion for a while and can attest to the craftsmanship in its design and user experience. All the effort put in and iterated on really shows in how fluid the whole app feels.
📘 Patrick Collison's Bookshelf
I'm always a sucker for a curated list of reading recommendations. This one's from Stripe founder Patrick Collison, who seems to share a lot of my interests and curiosities.
MapSCII →
July 19, 2019 • I love that people out in the open source world still build things like this. It's an xterm-compatible renderer for browsing interactive maps from a terminal console.

I even tried it from a console on an EC2 instance I've got. A quick telnet command gets you connected, with the a and z keys to zoom and the arrow keys to pan:
telnet mapscii.me
Funnily enough, I could even see this being useful if you needed to reference a map while on a command line on a remote server. Mostly it's a clever toy, but impressive nonetheless.
Open Source Game Clones →
June 10, 2019 • I don't play games at all anymore these days, but I ran across this repository of open source clones of classic PC games.
Some notable ones from the list that we used to play:
- 0 AD — Age of Empires
- C-Evo — Civilization II
- Interstate Outlaws — Interstate '76
- Jedi Outcast Linux — Jedi Knight II: Jedi Outcast
- OpenRCT — RollerCoaster Tycoon
- OpenSkyscraper — SimTower
The world of open source software (and GitHub, specifically) really is amazing. Kids out there the age I was when I played these can now go online and look at the source code to see how they're made.
Assisted Tasks for OpenStreetMap →
May 30, 2019 • The Humanitarian OpenStreetMap Team has been working on an experimental version of the Tasking Manager tool that incorporates deep learning-assisted mapping projects.

The OSM community has long been (and still largely is) averse to machine-based mapping, as it's counter to the founding ethos of the project being "created by mappers, where they live." But if the project is to survive and still see adoption and usage in commercial applications, there has to be effort to improve the depth of coverage and the refresh rate to stay competitive with commercial providers like Google and HERE, who are out robotically and mechanically mapping the world, actively trying to remove humans from the loop.
It's good to see HOT building on work already done by Facebook, Mapbox, and Development Seed to combine ML capabilities with OSM to improve the map at scale.
Discovering QGIS
May 29, 2019 • This week we've had Kurt Menke (of Bird's Eye View GIS) in the office providing a guided training workshop for QGIS, the canonical open source GIS suite.
#qgis workshop selfie @spatialnetworks ... a great group hard at work! pic.twitter.com/fp1jNsQMFL
— Kurt Menke (@geomenke) May 28, 2019
It's been a great first two days covering a wide range of topics from his book, Discovering QGIS 3.
The team attending the workshop is a diverse group with varied backgrounds. Most are GIS professionals using this as a means to get a comprehensive overview of the basics of "what's in the box" with QGIS. All of the GIS folks have the requisite background using Esri tools from their training, but some of us who have been playing in the FOSS4G space longer have been exposed to and used QGIS for years for getting work done. We've also got a half dozen folks in the session from our dev team who know their way around Ruby and Python, but don't have any formal GIS training in their background. This is a great way to get folks exposure to the core principles and technology in the GIS professional's toolkit.
Kurt's course is an excellent overview that covers the ins and outs of using QGIS for geoprocessing and analysis, and touches on lots of the essentials of GIS (the discipline) along the way. All of your basics are in there — clips, unions, intersects, and other geoprocesses, data management, editing, attribute calculations (with some advanced expression-based stuff), joins and relates, and a deep dive on all of the powerful symbology and labeling engines built into QGIS these days¹.
The last segment of the workshop is going to cover movement data with the Time Manager extension and some other visualization techniques.
1. Hat tip to Nyall Dawson of North Road Geographics (as well as the rest of the contributor community) for all of the amazing development that's gone into the 3.x release of QGIS! ↩
Contours in OpenDroneMap →
May 15, 2019 • One of my favorite ongoing open source projects is OpenDroneMap. I haven't gotten to use it much recently, but I've followed development over the last couple of years. Outside of some loose testing a while ago, I haven't flown my Mavic for any imagery collection since. I need to go out to the waterfront nearby and fly some new data so I can kick the tires some more on ODM's latest stuff.

Piero just announced completion of contour support in WebODM, the web front-end to the ODM processing suite. This is some powerful geospatial software, and a great example of what a committed team can do with open source. The new contour capability leverages the GRASS engine as well as GDAL to process the data, then lets you export to GIS. I can't wait to try it out.
A Growing Geo Community: Takeaways from FOSS4G NA 2019 →
April 23, 2019 • I had a great time last week out in San Diego for this year's FOSS4G-NA. This post I wrote for the company blog runs down general conference takeaways, plus a wave-tops overview of the best sessions I attended. It was a fantastic event, as always. The content was great, but the real win is getting to hang out with friends old and new and stay in touch with what everyone's working on:
The key to getting the most out of a FOSS4G is the people. I hadn't attended one since 2013, but each one I've been to proves this out. The connections I've made at past events have been incredibly rewarding and lucrative over the past 8 years. I tell everyone thinking about going that to get the most out of it, you should be an active participant in the community. Sure, the sessions are great content, but forming human relationships and expanding your network is the overarching goal. Such a positive, energetic crowd to spend time with!
Next up is the international show in Bucharest, then 2020 in Calgary.
FOSS4G North America 2019
April 11, 2019 • Next week Joe and I will be out in San Diego for FOSS4G-NA 2019. This'll be my first one since, I think, 2012. There's always an excellent turnout and a strong base of good folks to catch up with. This year they've put together a B2B and Government Theme day to kick it off, which to my knowledge is a new thing for an event typically focused on the eponymous free, open source, and community-driven projects.

I thumbed through the agenda to pick out some topics I'm interested in catching this year:
- Open source for utilities and telecom
- OpenStreetMap and WikiData
- Open source in higher education
- PDAL
- OpenDroneMap
- "Digital twin" technology for infrastructure
Sponsoring QGIS →
March 22, 2019 • We recently began a corporate sponsorship of the QGIS project, the canonical open source desktop GIS. I got back into doing some casual cartography work with QGIS in December after a years-long hiatus from using it much (I don't get to do much actual work with data these days). Bill wrote up a quick post about how we rely on it every day in our work:
As a Mac shop, the availability of a sophisticated, high-quality, powerful desktop GIS that runs natively on MacOS is important to us. QGIS fits that bill and more. With its powerful native capability and its rich third-party extension ecosystem, QGIS provides a toolset that would cause some proprietary software, between core applications and the add-on extensions needed to replicate the full power of QGIS, to cost well more than a QGIS sponsorship.
The team over at Lutra Consulting, along with the rest of the amazing developer community around QGIS, has been doing fantastic work building a true cross-platform application that stacks up against any other desktop GIS suite.
Weekend Reading: Calculator, SaaS Metrics, and System Shock
March 9, 2019 • 💻 Open Sourcing Windows Calculator
Seems silly, but this kind of thing is great for the open source movement. There's still an enormous amount of tech out there built at big companies that creates little competitive or legal risk by being open. Non-core tools and libraries (meaning not core to the business differentiation) are perfect candidates to be opened to the community. Check it out on GitHub.
📈 The Metrics Every SaaS Company Should Be Tracking
An Inside Intercom interview with investor David Skok, the king of SaaS metric measurement. His blog has some of the best reference material for measuring your SaaS performance on the things that matter. This deck goes through many of the most important figures and techniques like CAC:LTV, negative churn, and cohort analysis.
🎮 Shockolate — System Shock Open Source
A cross-platform port of one of the all-time great PC games, System Shock¹. I don't play many games anymore, but when I get the itch, I only seem to be attracted to the classics.
1. Astute readers and System Shock fans will recognize a certain AI computer in this website's favicon. ↩
Desktop GIS — The Evergreen Topic →
February 12, 2019 • A good post from my colleague Bill Dollins about the state of desktop GIS:
I pretty much watch Esri from a distance these days but, even from my perch, I can't see any workflow that doesn't involve desktop software at some point. It's simply still crucial to the editing and publishing of maps and other geospatial content.
That's even true outside of the Esri world. My team lives in QGIS. As of this writing, QGIS is at version 3.4.4 and it's hard for me to make a case that I need to use anything else, aside from convenient proprietary hooks to other systems and workflows. We do have a lone ArcMap instance for the production of MXDs, but we make heavy use of FME so we can produce data in even the most proprietary of formats if needed, without consuming a license of anything additional.
In our universe we work so much with creating data, QA, cleaning, packaging, and analysis that desktop GIS is (as Bill says) still critical to the workflow.
I would add that I've been very impressed with the state of the QGIS product in my recent usage of it. It's amazing that with community development, time, and step-by-step progress, an open source developer community can build such a powerful product. Of course QGIS is also underpinned by dozens of super-powerful libraries that are amazing utilities in and of themselves — GDAL, Proj4, GEOS, and more.
Weekend Reading: Fulcrum in Santa Barbara, Point Clouds, Building Footprints
February 2, 2019 • 👨🏽‍🚒 Santa Barbara County Evac with Fulcrum Community
Our friends over at the Santa Barbara County Sheriff have been using a deployment of Fulcrum Community over the last month to log and track evacuations for flooding and debris flow risk throughout the county. They've deployed over 100 volunteers so far to go door-to-door and help residents evacuate safely. In their initial pilot they visited 1,500 residents. With this platform the County can monitor progress in real time and direct their resources to the areas that need the most attention.
"This app not only tremendously increased the accountability of our door-to-door notifications but also gave us real-time tracking on the progress of our teams. We believe it also reduced the time it has historically taken to complete such evacuation notices."
This is exactly what we're building Community to do: help groups collaborate and share field information rapidly for coordination, publish information to the public, and gather quantities of data through citizens and volunteers they couldn't get on their own.
☁️ USGS 3DEP LiDAR Point Clouds Dataset
From Howard Butler is this amazing public dataset of LiDAR data from the USGS 3D Elevation Program. There's an interactive version here where you can browse what's available. Using the WebGL-based viewer you can even pan and zoom around in the point clouds. More info is here in the open on GitHub.
🏢 US Building Footprints
Microsoft published this dataset of computer-generated building footprints, 125 million in all. Pretty incredible considering how much labor it'd take to produce with manual digitizing.
Weekend Reading: Shanghai, Basecamp, and DocuSaurus
January 26, 2019 • 🇨🇳 195-Gigapixel Photo of Shanghai
Shot from the Oriental Pearl Tower, the picture shows enormous levels of detail, composited from 8,700 source photos. Imagine this capability available commercially from microsatellite platforms. It seems like an inevitability.
📈 How Basecamp Runs its Business
I, like many, have long admired how Basecamp runs things, particularly Ryan Singer's work on product design. This talk largely covers how they build product and work as an organized team.
📘 Docusaurus
This is an open source framework for building documentation sites, built with React. We're currently looking at it for revamping some of our docs, and it looks great. We'll be able to build the docs locally and deploy with GitHub Pages like always, but it'll replace the cumbersome stuff we've currently got in Jekyll (which is also great, but requires a lot of legwork for documentation sites).
Fulcrum Desktop
January 4, 2019 ⢠#A frequent desire for Fulcrum customers is to maintain locally a version of the data they collect with our platform, in their database system of choice. With our export tool, itâs simple to pull out extracts in formats like CSV, shapefile, SQLite, and even PostGIS or GeoPackage. What this doesnât allow, though, is an automatable way to keep a local version of data on your own server. Youâd have to extract data manually on some schedule and append new stuff to existing tables youâve already got.
A while back we built and released a tool called Fulcrum Desktop, with the goal of alleviating this problem. It's an open source command line utility that harnesses our API to synchronize content from your Fulcrum account into a local database. It supports PostgreSQL (with PostGIS), Microsoft SQL Server, and even GeoPackage.
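The core sync idea (pull records from the API, then insert or update them in a local table keyed by record ID) can be pictured in a few lines. This is a minimal illustration using SQLite, not Fulcrum Desktop's actual code; the table schema and record fields here are hypothetical:

```python
import sqlite3

def upsert_records(conn, records):
    """Insert new records or update changed ones, keyed by record id.

    `records` is a list of dicts with hypothetical fields: id,
    updated_at, and payload (the record's attributes, serialized).
    """
    conn.execute(
        """CREATE TABLE IF NOT EXISTS records (
               id TEXT PRIMARY KEY,
               updated_at TEXT,
               payload TEXT)"""
    )
    for rec in records:
        # SQLite upsert: insert, or overwrite the row on id conflict.
        conn.execute(
            """INSERT INTO records (id, updated_at, payload)
               VALUES (:id, :updated_at, :payload)
               ON CONFLICT(id) DO UPDATE SET
                   updated_at = excluded.updated_at,
                   payload = excluded.payload""",
            rec,
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
upsert_records(conn, [{"id": "a", "updated_at": "2019-01-01", "payload": "v1"}])
# A later sync pass updates record "a" and adds record "b".
upsert_records(conn, [
    {"id": "a", "updated_at": "2019-01-02", "payload": "v2"},
    {"id": "b", "updated_at": "2019-01-02", "payload": "v1"},
])
```

The point of the upsert is that re-running a sync is harmless, which is what makes a scheduled, automated pull safe.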
Other than the primary advantage of providing a way to clone your data to your own system, one of the cool things you can do with Desktop is easily make your data available to your GIS users in a tool like QGIS. It also has a plugin architecture to support other cool things like:
- Media management: syncing photos, videos, audio, signatures
- S3: storing media files in your own Amazon S3 bucket
- Reports: generating PDF reports
If you have the Fulcrum Developer Pack with your account, you have access to all of the APIs, so you need that to get Desktop set up (though it is available on the free trial tier).
We've also built another utility called fulcrum-sync that makes it easy to set up Desktop using Docker. This is great for version management, syncing data for multiple organizations, and overall simplifying dependencies and local library management. With Docker "containerizing" the installation, you don't have to worry about conflicting libraries or fiddling with your local setup. The whole Fulcrum Desktop installation is segmented into its own container. This utility also makes it easier to install and manage Fulcrum Desktop plugins.
Kindle Highlights
December 14, 2018 • I started making this tool a long time back to extract highlighted excerpts from Kindle books. This predated the cool support for this that Goodreads has now, but I'd still like to spend some time getting back to this little side project.
Eric Farkas has another tool that looks like it does this as well, so that's worth checking out as a possible replacement. What I really want is my own private archive of the data, not necessarily my own custom extraction tool. The gem I was using for mine might've been the same one, or one that does something similar reading from Amazon's API. It's nice because it outputs the data in JSON, which can then be easily parsed apart into YAML or Markdown to use elsewhere. Each excerpt looks like this:
{
"asin": "B005H0O8KQ",
"customerId": "A28I9D90ISXNT6",
"embeddedId": "CR!CJ3JV6W1D918FDT8WZTVP0GG6CNN:86C04A71",
"endLocation": 72905,
"highlight": "Springs like these are the source of vein-type ore deposits. It's the same story that I told you about the hydrothermal transport of gold. When rainwater gets down into hot rock, it brings up what it happens to find thereâsilver, tungsten, copper, gold. An ore-deposit map and a hot-springs map will look much the same. Seismic waves move slowly through hot rock.",
"howLongAgo": "2 months ago",
"startLocation": 72539,
"timestamp": 1446421339000
}
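Since each excerpt is plain JSON, turning a record like the one above into Markdown takes only a few lines. A hedged sketch: the field names follow the sample excerpt, but the output format is my own invention, not what my tool produced:

```python
import json
from datetime import datetime, timezone

def excerpt_to_markdown(raw):
    """Render one Kindle highlight record (a JSON string) as a Markdown blockquote."""
    rec = json.loads(raw)
    # Kindle timestamps are milliseconds since the Unix epoch.
    when = datetime.fromtimestamp(rec["timestamp"] / 1000, tz=timezone.utc)
    return (
        f"> {rec['highlight']}\n>\n"
        f"> locations {rec['startLocation']}-{rec['endLocation']}, "
        f"{when:%Y-%m-%d}"
    )
```

Feeding the excerpt shown above through this yields a blockquote footed with "locations 72539-72905, 2015-11-01", ready to paste into a notes file.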
If I can, I'll soon spend some time tinkering and see if I can pull highlights from other books I've read since.
OpenDroneMap
October 24, 2018 • Since I got the Mavic last year, I haven't had many opportunities to do mapping with it. I've put together a few experimental flights to play with DroneDeploy and our Fulcrum extension, but outside of that I've mostly done photography and video stuff.
OpenDroneMap came on the scene a couple years ago as a toolkit for processing drone imagery, and I've been following it loosely through the Twittersphere since. Most of my image processing has been done with DroneDeploy, since we'd been working with them on some integration between our platforms, but I was curious to take a look once I saw the progress on ODM. What specifically caught my attention was WebODM, a web-based interface to the ODM processing backend, intriguing because it'd reduce friction in generating mosaics and point clouds, with sensible defaults and a clean, simple map interface for browsing the resulting datasets.

The WebODM setup process was remarkably smooth, using Docker to stand up the stack automatically. The only prerequisites are git, Python, and pip, which I already had. With only these three commands, I had the whole stack set up and ready to process:
git clone https://github.com/OpenDroneMap/WebODM --config core.autocrlf=input --depth 1
cd WebODM
./webodm.sh start
Pretty slick for such a complex web of dependencies under the hood, and a great web interface in front of it all.
Using a set of 94 images from a test flight over a job site in Manatee County, I experimented first with the defaults to see what it'd output on its own. I did have a bit of overlap on the images, maybe 40% or so (which you need to generate quality 3D). I had to up the RAM available to Docker and reboot everything to get it to process properly, I think because my image set was pushing 100 files.

That project with the default settings took about 30 minutes. It generates the mosaicked orthophoto (TIF, PNG, and even MBTiles), the surface model, and the point cloud. Here's a short clip of what the results look like:
This is why open source is so interesting. The team behind the project has put together great documentation and resources to help users get it running on all platforms, including running everything on your own cloud server infrastructure with extended processing resources. I see OpenAerialMap integration was just added, so I'll have to check that out next.
Jupyter, Mathematica, and the Future of the Research Paper →
October 19, 2018 • Paul Romer has an interesting take on this piece from the Atlantic:
Jupyter rewards transparency; Mathematica rationalizes secrecy. Jupyter encourages individual integrity; Mathematica lets individuals hide behind corporate evasion. Jupyter exemplifies the social systems that emerged from the Scientific Revolution and the Enlightenment, systems that make it possible for people to cooperate by committing to objective truth; Mathematica exemplifies the horde of new Vandals whose pursuit of private gain threatens a far greater public loss—the collapse of social systems that took centuries to build.
The original article is a great read, touching on the work of people like Steven Strogatz and Bret Victor to use more plain-language narrative to communicate complex mathematical ideas. Just as GitHub has become "the commons" for software and a bottomless resource for learning programming, Jupyter notebooks are taking over the world of data science and research publication.
mapquery →
April 20, 2016 • A neat tool for generating graphics from map data for visualizations.
JekyllConf →
March 31, 2016 • JekyllConf is coming up in May, a conference to talk about the latest with Jekyll, the static site generator that lets you publish websites without heavy needs for hosting and databases.
This isn't a typical conference, either. It's all online, streaming live via Google Hangouts on Air. Last year there were great presentations from people worldwide on things they're doing with the framework.
We converted our website some time back to use Jekyll for the entire thing. We started on WordPress, which at the time made anything that wasn't a blog post annoying to deal with. Then we moved to WordPress for just the blog piece, with a Bootstrap-based custom site for the other static pages. For the last 3 years or so we've been 100% converted to a static site. Our Jekyll site is now thousands of files that generate the resulting pages, all hosted on GitHub, with no web servers where we have to manage Apache or nginx configurations. We even have scripts that generate templates that then generate pages. Needless to say, it's powerful and has changed the way we publish.
I also use it for this site. It's the only way I enjoy making webpages anymore.
Awesome AWS →
October 16, 2015 • A collection of excellent resources, documents, and tools for Amazon Web Services.
speedtest-cli →
September 8, 2015 • If you're a nerd about checking your download and upload bandwidth when things seem to be getting slow, you've certainly used Speedtest.net to find out your raw internet speed. If you're an even bigger nerd, you'll love this command line tool for running quick tests from the shell. The Speedtest website is so covered with nonsense these days, I wonder how much of my bandwidth is being taken to display Flash ads and run Javascript trackers.
It's not a 100% accurate tool, but it gets me what I need.
AlaSQL →
July 30, 2015 • A neat find for doing SQL queries in the browser. Open source, with easy compatibility with tons of data formats.
fulcrum-node →
July 19, 2014 • Jason Sanford's node.js library for working with the Fulcrum API.
Bringing Geographic Data Into the Open with OpenStreetMap
September 9, 2013 • This is an essay I wrote that was published in the OpenForum Academy's "Thoughts on Open Innovation" book in early summer 2013. Shane Coughlan invited me to contribute on open innovation in geographic data, so I wrote this piece on OpenStreetMap and its implications for community-building, citizen engagement, and transparency in mapping. Enjoy.
With the growth of the open data movement, governments and data publishers are looking to enhance citizen participation. OpenStreetMap, the wiki of world maps, is an exemplary model for how to build community and engagement around map data. Lessons can be learned from the OSM model, and in many cases OpenStreetMap itself may be the place for geodata to take on a life of its own.
The open data movement has grown in leaps and bounds over the last decade. With the expansion of the Internet, and spurred on by things like Wikipedia, SourceForge, and Creative Commons licenses, there's an ever-growing expectation that information be free. Some governments are rushing to meet this demand, and have become accustomed to making data open to citizens: policy documents, tax records, parcel databases, and the like. Granted, the prevalence of open information policies is far from universal, but the rate of growth of government open data is only increasing. In the world of commercial business, the encyclopedia industry has been obliterated by the success of Wikipedia, thanks to the world's subject matter experts having an open knowledge platform. And GitHub's meteoric growth over the last couple of years is challenging how software companies view open source, convincing many to open source their code to leverage the power of software communities. Openness and collaborative technologies are on an unceasing forward march.
In the context of geographic data, producers struggle to understand the benefits of openness, and how to achieve the same successes enjoyed by other open source initiatives within the geospatial realm. When assessing the risk-reward of making data open, it's easy to identify reasons to keep it private (How is security handled? What about updates? Liability issues?), and difficult to quantify potential gains. As with open sourcing software, it takes a mental shift on the part of the owner to redefine the notion of "ownership" of the data. In the open source software world, proprietors of a project can often be thought of more as "stewards" than owners. They aren't looking to secure the exclusive rights to the access and usage of a piece of code for themselves, but merely to guide the direction of its development in a way that suits project objectives. Map data published through online portals is great, and is the first step to openness. But this still leaves an air gap between the data provider and the community. Closing this engagement loop is key to bringing open geodata to the same level of genuine growth and engagement that's been achieved by Wikipedia.
An innovative new approach to open geographic data is taking place today with the OpenStreetMap project. OpenStreetMap is an effort to build a free and open map of the entire world, created from user contributions, aiming to do for maps what Wikipedia has done for the encyclopedia. Anyone can log in and edit the map: everything from business locations and street names to bus networks, address data, and routing information. It began with the simple notion that if I map my street and you map your street, and we then share data, both of us have a better map. Since its founding in 2004 by Steve Coast, the project has reached over 1 million registered users (nearly doubling in the last year), with tens of thousands of edits every day. Hundreds of gigabytes of data now reside in the OpenStreetMap database, all open and freely available. Commercial companies like MapQuest, Foursquare, MapBox, Flickr, and others are using OpenStreetMap data as the mapping provider for their platforms and services. Wikipedia is even using OpenStreetMap as the map source in their mobile app, as well as for many maps within wiki articles.
What OpenStreetMap brings to the table that other open data initiatives have struggled with is the ability to incorporate user contribution, and even more importantly, to invite engagement and a sense of co-ownership on the part of the contributor. With OpenStreetMap, no individual party is responsible for the data; everyone is. In the Wikipedia ecosystem, active editors tend to act as shepherds or monitors of articles to which they've heavily contributed. OpenStreetMap creates this same sense of responsibility for editors based on geography. If an active user maps his or her entire neighborhood, the feeling of ownership is greater, and the user is more likely to keep it up to date and accurate.
Open sources of map data are not new. Government departments from countries around the world have made their maps available for free for years, dating back to paper maps in libraries. It's certainly a great thing from a policy perspective that these organizations place value on transparency and availability of information. The US Census Bureau publishes a dataset of boundaries, roads, and address info in the public domain (TIGER). The UK's Ordnance Survey has published a catalog of open geospatial data through their website. GeoNames.org houses a database of almost ten million geolocated place names. There are countless others, ranging from small, city-scale databases to entire country map layers. Many of these open datasets have even made their way into OpenStreetMap in the form of imports, in which the OSM community occasionally imports baseline data for large areas based on pre-existing data available under a compatible license. In fact, much of the street data in the United States was imported several years ago from the aforementioned US Census TIGER dataset.
Open geodata sources are phenomenal for transparency and communication, but still lack the living, breathing nature of Wikipedia articles and GitHub repositories. "Crowdsourcing" has become the buzzword with public agencies looking to invite this type of engagement in mapping projects, to widely varying degrees of success. Feedback loops with providers of open datasets typically consist of "report an issue" style funnels, lacking the ability for direct interaction from the end user. Allowing the end user to become the creator instills a sense of ownership and responsibility for quality. As a contributor to one of those funnels, I'm left to wonder about my change request: "Did they even see my report that the data is out of date in this location? When will it be updated or fixed?" The arduous task of building a free map of the entire globe wouldn't even be possible without inviting the consumer back in to create and modify the data themselves.
Enabling this combination of contribution and engagement for OpenStreetMap is an impressive stack of technology that powers the system, all driven by a mesh of interacting open source software projects under the hood. This suite of tools that drives the database, makes it editable, tracks changes, and publishes extracted datasets for easy consumption is produced by a small army of volunteer software developers collaborating to power the OpenStreetMap engine. While building this software stack is not the primary objective of OSM, it's what makes becoming a "mapper" possible. There are numerous editing tools available to contributors, ranging from the very simple, for making small corrections, to the power tools for mass editing by experts. This narrowing of the technical gap between data and user allows the novice to make meaningful contributions and feel rewarded for taking part. Wikipedia would not be much today without the simplicity of clicking a single "edit" button. There's room for much improvement here for OpenStreetMap, as with most collaboration-driven projects, and month by month the developer community narrows this technical gap with improvements to contributor tools.
In many ways, the roadblocks to adoption of open models for creating and distributing geodata aren't ones of policy, but of technology and implementation. Even with ostensibly "open data" available through a government website, data portals are historically bad at giving citizens the tools to get their hands around that data. In the geodata publishing space, the variety of themes, file sizes, and data formats combine to complicate the process of making the data conveniently available to users. What good is a database I'm theoretically allowed to have a copy of when it's in hundreds of pieces scattered over a dozen servers? "Permission" and "accessibility" are different things, and both are critical aspects of successful open initiatives. A logical extension of opening data is opening access to that data. If transparency, accountability, and usability are primary drivers for opening up maps and data, lowering the bar for access is critical to making those a reality.
A great example of the power of the engagement feedback loop with OpenStreetMap is the Humanitarian OpenStreetMap Team's (HOT) work over the past few years. HOT kicked off in 2009 to coordinate the resources resident in the OpenStreetMap community and apply them to assist with humanitarian aid projects. Working both remotely and on the ground, the first large scale effort undertaken by HOT was mapping in response to the Haiti earthquake in early 2010. Since then, HOT has grown its contributor base into the hundreds, and has connected with dozens of governments and NGOs worldwide, such as UNOCHA, UNOSAT, and the World Bank, to promote open data, sharing, transparency, and collaboration to assist in the response to humanitarian crises. To see the value of their work, you need look no further than the many examples showing OpenStreetMap data for the city of Port-au-Prince, Haiti before and after the earthquake. In recent months, HOT has activated to help with open mapping initiatives in Indonesia, Senegal, Congo, Somalia, Pakistan, Mali, Syria, and others.
One of the most exciting things about HOT, aside from the fantastic work they've facilitated in the last few years, is that it provides a tangible example of why engagement is such a critical component of organic growth in open data initiatives. The OpenStreetMap contributor base, which now numbers in the hundreds of thousands, can be mobilized for volunteer contribution to map places where that information is lacking, and where it has a direct effect on the capabilities of aid organizations working in the field. With a traditional, top-down managed open data effort, the response time would be too long to make immediate use of the data in a crisis.
Another unspoken benefit of the OpenStreetMap model for accepting contributions from a crowd is that hyperlocal map data benefits most from local knowledge. There's a strong desire for this sort of local reporting on facts and features on the ground all over the world, and the structure of OpenStreetMap and its user community suits this quite naturally. Mappers tend to map things nearby, things they know. Whether it's a mapper in a rural part of the western United States, a resort town in Mexico, or a flood-prone region in Northern India, there's always a consumer for local information, often from those for whom it's prohibitively expensive to acquire. In addition to the expertise of local residents contributing to the quality of available data, we also gain local perspective, which can be valuable in itself. This can be particularly important in humanitarian crises, as there's a tendency for users to map the things they perceive as most important to the local community.
Of course, OpenStreetMap isn't a panacea for all geospatial data needs. There are many requirements for mapping, data issue reporting, and opening of information where the data is best suited to more centralized control. Data for things like electric utilities, telecommunications, and traffic routing, while sometimes publishable to a wide audience, still has service dependencies that require centralized, authoritative management. Even with data that requires consolidated control by a government agency or department, though, the principles of engagement and short feedback loops present in the OpenStreetMap model could still be applied, at least in part. Regardless of the model, getting the most out of an open access data project requires an ability for contributors to see the effect of their contribution, whether it's an edit to a Wikipedia page or correcting a one-way street on a map.
With geodata, openness and accessibility enable a level of conversation and direct interaction between publishers and contributors that has never been possible with traditional unilateral data sharing methods. OpenStreetMap provides a mature and real-world example of why engagement is often that missing link in the success of open initiatives.
The complete book is available as a free PDF download, or you can buy a print copy here.
Terra
September 7, 2013 • Inspired by a couple of others, I released a micro project of mine called Terra, to provide a fast way to run several geospatial tools on your computer.

Because I work with a variety of GIS datasets, I end up writing lots of scripts and small automation utilities to manipulate, convert, and merge data, in tons of different formats. Working with geo data at scale like this challenges the non-software developer to get comfortable with basic scripting and programming. I've learned a ton in the last couple years about Unix environments, and the community of open source geo tools for working with data in ways that can be pipelined or automated. Fundamental knowledge of bash, Python, or Ruby quickly becomes critical to saving yourself countless hours of repetitive, slow data processing.
The renaissance tool of choice for all sorts of data munging is GDAL, with its resident command line suites of GDAL and OGR. The GDAL and OGR programs (for raster and vector data, respectively) are super powerful out of the box, once you understand the somewhat obtuse and involved syntax for sending data between datasources, and the myriad translation parameters. But they become extra powerful as multitools for all types of data when you can read from, and sometimes write to, proprietary data formats like Esri geodatabases, ECW files, MrSID raster images, GeoPDFs, SpatiaLite, and others. Many of these formats, though, require you to build the toolchain from source on your own, including the associated client libraries, and this process can be a giant pain, particularly for anyone who doesn't want to learn the nuances of make and binary building.¹ The primary driver for building Terra was to have a simple, clean, consistent environment with a working base set of geotools. It gives you a prebuilt configuration that you can have up and running in minutes.
Terra uses Vagrant for provisioning virtual machines, and Chef, an automation tool, for batching up the setup and maintaining its configuration. Vagrant is really a wrapper around VirtualBox VMs, and uses base Linux images to give you a clean starting point for each deployment. It's amazing for running dev environments. It supports both Chef and Puppet, two configuration management tools for automating installation of software. I used Chef since I like writing Ruby, and created recipes to bootstrap the installs.
This all started because I got sick of setting up custom GDAL builds on desktop systems. Next on the list for this mini project is to provision installs of some other open geo apps, like TileMill and CartoDB, to run locally. Try it out on your computer: all you need is VirtualBox and Vagrant installed, and installation is a few simple commands. Check it out on GitHub, follow the README to get yourself set up, and post an issue if you'd like to see other functionality included.
¹ Don't mean to bag on Makefiles, they're fantastic. ↩
Creating New Contributors to OpenStreetMap
January 15, 2013 • I wrote a blog post last week about the first few months of usage of Pushpin, the mobile app we built for editing OpenStreetMap data.
As I mentioned in the post, I'm fascinated and excited by how many brand new OpenStreetMap users we're creating, and how many who never edited before are taking an interest in making contributions. This has been a historic problem for the OpenStreetMap project for years now: how do you convince a casually interested person to invest the time to learn how to contribute themselves?
There are two primary hurdles I've always seen for why "interested users" don't make contributions; one technical, and one more philosophical:
- Editing map data is somewhat complicated, and the documentation and tools don't help many users climb over this hump.
- It's hard to answer the question: "Why should I edit this map? What am I editing, and who benefits from the information?"
To the first point, this is an issue largely of time and effort on the part of the volunteer-led developer community behind OpenStreetMap. GIS data is fundamentally complex, much more so than Wikipedia's content, the primary analog to which OpenStreetMap is often compared ("Wikipedia for maps"). It's an apt comparison only on a conceptual level, but when it comes time to build an editor for the information within each system, the demands of OpenStreetMap data take the complexity to another level. As I said, the community is constantly chewing on this issue, and making amazing progress on a new web-based editor. In building Pushpin, we spent a long time making sure that the user didn't need to know anything about the complex OpenStreetMap tagging system in order to make edits. We picked apart the wiki and taginfo to abstract the common tags into simple picklists, which prevents both the need to type lots of info and the need to know that amenity=place_of_worship is the proper tag for a church or mosque.
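That abstraction boils down to a lookup from friendly picklist labels to OSM key/value tags. The mapping below is a hand-picked sketch, not Pushpin's actual list, though the tags themselves are standard OpenStreetMap tags:

```python
# Friendly picklist label -> the OSM key/value pair the editor writes.
# Hand-picked illustration; a real editor derives this from the OSM
# wiki and taginfo, as described above.
PICKLIST = {
    "Place of Worship": ("amenity", "place_of_worship"),
    "Restaurant": ("amenity", "restaurant"),
    "School": ("amenity", "school"),
    "Supermarket": ("shop", "supermarket"),
    "Park": ("leisure", "park"),
}

def tags_for(label):
    """Return the OSM tag dict for a picklist choice."""
    key, value = PICKLIST[label]
    return {key: value}
```

The user only ever sees the labels on the left; the editor writes the tags on the right, so no knowledge of the tagging schema is needed to make a valid edit.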
As for answering the "why," that's a little more complicated. People contribute to community projects for a host of reasons, so it's a challenge to nail down how this should be communicated about OSM. There are stray bits around that tell the story pretty succinctly, but the problem lies in centralizing that core message. The LearnOSM site does a good job of explaining to a non-expert the benefits of becoming part of the contributor community, but it feels like the story needs to be told somewhere closer to the main homepage. Alex Barth recently proposed an excellent idea to the OpenStreetMap mailing list: a "contributors mark" that can be used within OSM-based services to convey the value of free and open map data. This is an excellent idea that addresses a couple of needs. For one, it communicates what the project actually is, rather than just sending the unsuspecting user to a page about the ODbL, and it also gives a general sense of how the data is used by real people.
In order for those one million user accounts to turn into one million contributors, we need to do a better job of conveying the meaning of the project and the value it provides to OpenStreetMap's thousands of data consumers.
Symbola
July 13, 2012 • If you've ever done much involving symbol sets in mapping (especially web mapping), you know about the nightmare of managing 700 separate PNG files, with different, duplicate versions for slightly different size variations and colors. Even with a small set of a dozen symbols, if you want 3 sizes and 5 colors for each, you're looking at 180 distinct PNG marker symbols to keep track of. Ugh. The SVG format simplifies this in certain ways, but isn't as universally supported or as easy to work with as simple GIFs or PNGs.
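The combinatorial explosion is easy to see with a quick sketch (the filenames here are hypothetical):

```python
from itertools import product

# A dozen hypothetical base symbols, each needed in 3 sizes and 5 colors.
symbols = [f"icon{i:02d}" for i in range(1, 13)]
sizes = ["small", "medium", "large"]
colors = ["red", "blue", "green", "black", "white"]

# One PNG file per (symbol, size, color) combination.
filenames = [
    f"{sym}-{size}-{color}.png"
    for sym, size, color in product(symbols, sizes, colors)
]
assert len(filenames) == 12 * 3 * 5  # 180 files for only 12 symbols
```

Every new size or color multiplies the whole set again, which is exactly the duplication an icon font sidesteps.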
With TileMill, I've frequently wanted to use marker symbolizers in maps, but I often avoid it because it involves a bit of tweaking and configuration to get all the files in the right place and build out all the styles marker by marker, and changing colors isn't that easy.

To address this issue and make it easier to get dynamic markers on your maps, Zac built Symbola, an icon font constructed by embedding SVG graphics into TTF and OTF files. It uses the open source FontForge library and a Python script to grab the set of SVG graphics, convert them to glyphs, and assign them to Unicode characters. There is a process involved to get the Unicode data attributes into your data, but if you use PostGIS as much as I do, this is well worth doing. There are instructions in the README on how to insert specific characters into your PostGIS data tables using SQL. It's a clever way to manage symbology at the database level, rather than creating duplication all over your style files with hundreds of iconography-specific style definitions. Check it out on GitHub; you can even customize it and add your own markers.
WhereCampTB
February 22, 2012 • My talk from Ignite Spatial at WhereCampTB, about the OSM Tampa Bay meetup group. Check out the slides in better detail here.
It was a fun event a couple weeks ago, with great participation from folks in all sorts of industries involved in mapping or using GIS tools.
WhereCampDC
June 23, 2011 • We just returned from a fantastic weekend up in DC: first at the Ignite Spatial event on Friday night, then the WhereCampDC unconference on Saturday. Since this was the first event of its kind that I've attended (with the "barcamp" unconference session format), I thought I'd write up some thoughts and impressions from an amazing two-day trip.
Ignite Spatial
This was also my first experience hearing talks in the ignite format: 20 slides, 15 seconds each, 5 minutes. A fantastic format for breaking people out of the habit of simply reading their slides off a screen. Held at Grosvenor Auditorium at National Geographic Society headquarters, the series was well-run, prompted and emceed by Nathaniel Kelso, who did a bang-up job pulling together logistics for both days. Our own Tony Quartararo gave his first talk in the ignite format, Homophily and the "geoherd," where he posited that if the theory of homophily (love of being alike) applies to the spread of human attitudes and behaviors, then it can also spread our community's interest in geography and technology to other social circles of people who haven't yet been addicted to using Foursquare, editing OpenStreetMap, or contributing to open source projects. Being aware of our "three degrees of influence" can help us spread our collective interest in geospatial technology to those who may not even be aware of such things. Vizzuality's Javier de la Torre presented his work on the OldWeather project, a social and community-driven effort to derive century-old historical weather data by having members transcribe Royal Navy captains' logbooks, a clever solution to acquiring loads of data about wind, water temperature, sea conditions, and shipboard events from 100 years ago. He even showed off some stunning visualizations of the data. Definitely a crowd favorite. Sophia Parafina declared that "WMS is Dead" (I agree!), Mapbox founder Eric Gundersen showed off making gorgeous maps with their TileMill map design studio, and GeoIQ's Andrew Turner demonstrated the many, many ways he bent geodata to his will to find the perfect DC house.
WhereCamp unconference
The unconference was held at the Washington Post office, which has a nice setup for the format and the attendance that showed up (200 people!). This was my first experience with the user-generated conference format, and I got far more out of it than other formal conferences. It starts with the attendees proposing talks and discussions and scheduling them out in separate rooms throughout the day; then everyone breaks up into groups to drill down on whatever topics they find useful. I attend a lot of conferences with high-level discussion about GIS and the mapping community, so in this particular crowd I was more interested in deep diving into technical discussions of the open source stuff we've been using a lot of lately.
After meeting most of the guys from Development Seed, I knew I wanted to sit in on Tom MacWright's talk about Wax, their Javascript toolkit that extends functionality for Modest Maps and makes it super easy to publish maps on the web. What they're doing with Wax will be the future of web mapping for a lot of people. Really the only open source alternative to the commercial Google Maps API at this stage is OpenLayers, which can be overly featureful, heavy, and slow for most developers who just want some simple maps on the web. Dane Springmeyer proposed a discussion around "Mapnik Visioning," wherein we went around the room discussing the future of our favorite renderer of beautiful map tiles. Mapnik is a critical low-level platform component for generating tiles from custom data, a foundational piece of the open source web mapping puzzle, and it was refreshing to see such technical, in-depth discussion of where to go next with the Mapnik project. Takeaway: node.js and node-mapnik bindings are going to be the future of the platform. AJ Ashton spun up a discussion about TileMill, the map tile design studio that Mapbox has constructed to help cartographers make beautiful maps easily with open standards and their own custom data. TileMill has definitely added a huge capability for us to style up and distribute maps of our own data. The stack of tools that TileMill provides allows designers to create great cartography for map data quickly, and to export a tileset for viewing on the web or mobile. TileMill has firmly planted itself in our arsenal as something we'll continue to use for a long time, a fantastic tool for designers.