Coleman McCormick

Archive of posts with tag 'Fulcrum'

Fulcrum Field Day

November 9, 2020 • #

A fun aspect of working on a business product with a product-led growth strategy is that you get to use your own product in your personal life. I’ve used Fulcrum for creating personal tracking databases, collecting video for OpenStreetMap, and even documenting my map collection.

There’s no better way to build an empathetic perspective of your customer’s life than to go and be one as often as you can.

Last week our team did an afternoon field day where the entire company went out on a scavenger hunt of sorts, using Fulcrum to log some basic neighborhood sightings. 42 people scattered across the US collected 1,230 records in about an hour, which is an impressive pace even if the use case was a simple one!

Data across the nation, and my own fieldwork in St. Pete

It’s unfortunate how easy it is to stray away from the realities of what customers deal with day in and day out. Any respectable product person has a deep appreciation for how their product works for customers on the ground, at least academically. What exercises like this help us do is to get out of the realm of academics and try to do a real job. With B2B software, especially the kind built for particular industrial or domain applications, it’s hard to do this frequently since you aren’t your canonical user; you have to contrive your own mock scenarios to tease out the pain points in workflow.

The problem is that manufactured tests can’t be representative of all the messy realities in utilities, construction, engineering, or the myriad other cases we serve.

There’s no silver bullet for this. Acknowledging imperfect data and remaining aware of the gaps in your knowledge is the foundation. Then fitting your solution to the right problem, at the right altitude, is the way to go.

Exercises like ours last week are always energizing, though. Anytime you can rally attention around what your customers go through every day it’s a worthy cause. The list of observations and feedback is a mile long, and all high value stuff to investigate.

✦

Workflows in Fulcrum

August 25, 2020 • #

Fulcrum’s been the best tool out there for quite a few years for building your own apps and collecting data with mobile forms (we were doing low-code before it was cool). Our product focus for a long time was on making it as simple and as fast as possible to go from idea to a working data collection process. For any sort of work you would’ve previously done with pen and paper, or a spreadsheet on a tablet, you can rapidly build and deploy a Fulcrum app to your team for things like inspections, audits, and inventory applications.

Workflows in Fulcrum

For the last 8 months or so we’ve been focused on improving what you can do with data after collection. We’re great at speed to build and collect, but hadn’t yet focused on the rest of the customer workflow. Since the beginning we’ve had an open API (even for SQL, what we call the Query API), code libraries, and other tools. In July we launched our Report Builder, which was a big step in the direction of self-service reporting and process improvement tools.
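The Query API in particular is handy for scripting against your data. Here’s a minimal sketch in TypeScript of what a call can look like; the endpoint shape, header name, and table name here are illustrative, so check the API documentation for the real details:

```typescript
// Sketch: run a SQL query against the Fulcrum Query API.
// The endpoint, header name, and table name are assumptions -- verify in the docs.
const API_TOKEN = process.env.FULCRUM_TOKEN ?? "";

async function queryFulcrum(sql: string): Promise<unknown> {
  const url =
    "https://api.fulcrumapp.com/api/v2/query" +
    `?q=${encodeURIComponent(sql)}&format=json`;
  const res = await fetch(url, { headers: { "X-ApiToken": API_TOKEN } });
  if (!res.ok) throw new Error(`Query failed with status ${res.status}`);
  return res.json();
}

// Example: count records per status in a hypothetical "inspections" table.
queryFulcrum('SELECT status, COUNT(*) AS n FROM "inspections" GROUP BY status')
  .then((rows) => console.log(rows));
```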

This week we’ve just launched Workflows, which is all about providing users an extensible framework for adding their own business logic: the events and actions that need to happen on their data.

If you’re familiar with tools like Zapier or Integromat, you’ll recognize the concept. Workflows is similar in design, but focused on events within the scope of Fulcrum. Here’s how it works:

A workflow listens for an event (a “trigger”) based on data coming through the system. Currently you can trigger when data is created or updated.

When a trigger fires, the relevant record data gets passed to the next step, a “filter”, where you can set criteria to funnel it through: say, cases where I want to trigger on new data, but only where “Status” = Critical.

Any record making it through is passed to an “action”, and at launch we have actions to:

  • Send an email
  • Send an SMS message
  • Send a webhook (man is this one powerful)
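To make the trigger, filter, and action pipeline concrete, here’s a rough sketch of what a workflow definition could look like. This is purely illustrative; the field names and structure are invented for the example, not Fulcrum’s actual schema:

```typescript
// Hypothetical shape of a workflow definition (illustrative only,
// not Fulcrum's actual schema).
interface Workflow {
  trigger: "record_created" | "record_updated"; // event to listen for
  filter?: { field: string; operator: "eq" | "neq"; value: string };
  action:
    | { type: "email"; to: string; subject: string }
    | { type: "sms"; to: string; message: string }
    | { type: "webhook"; url: string };
}

// Example: when a new record arrives with Status = Critical,
// POST the record to an external endpoint.
const criticalAlert: Workflow = {
  trigger: "record_created",
  filter: { field: "status", operator: "eq", value: "Critical" },
  action: { type: "webhook", url: "https://example.com/hooks/critical" },
};
```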

We’re excited to see what users build with this initial set of options. There are plans in the works for a lot of interesting things like custom SMTP (for high volume email needs), geofencing, push notifications, and much more.

This is just the beginning of what will become a pillar product. Our Workflow engine will continue to evolve with new actions, filters, and triggers over time as we extend it to be more flexible for designing your business data decision steps and data flows.

✦

Fulcrum's Report Builder

July 5, 2020 • #

After about 6-8 months of forging, shaping, research, design, and engineering, we’ve launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports from inspections.

Fulcrum Report Builder

For years we’ve offered a basic report template along with an ability to customize the reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.

With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters about the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.

Under the hood, we overhauled the generator engine using a library called Puppeteer, a Node.js API for headless Chrome that can do many things, including converting web pages to documents or screenshots. It’s lightning fast and allows for a live preview of your reports as you’re working on your template customization.
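For the curious, the core of an HTML-to-PDF pipeline with Puppeteer is only a few lines. This is a generic sketch of the technique, not our actual generator code:

```typescript
// Generic HTML-to-PDF conversion with Puppeteer (not the production generator).
import puppeteer from "puppeteer";

async function renderReport(html: string, outputPath: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Load the rendered report template and wait for assets to finish loading.
    await page.setContent(html, { waitUntil: "networkidle0" });
    await page.pdf({ path: outputPath, format: "a4", printBackground: true });
  } finally {
    await browser.close();
  }
}

// Usage: renderReport("<h1>Inspection Report</h1>", "report.pdf");
```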

Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can’t wait to see all of the ways people end up using it.

We’ve got a lot more in store for the future. Stay tuned to see what else we add to it.

✦

2020 Ready: Field Data Collection with Fulcrum

February 4, 2020 • #

Today we hosted a webinar in conjunction with our friends at NetHope and Team Rubicon to give an overview of Fulcrum and what we’re collectively doing in disaster relief exercises.

Both organizations deployed to support recent disaster events, Cyclone Idai and Hurricane Dorian (in the Bahamas), and used Fulcrum as a critical piece of their workflow.

Always enjoyable to get to show more about what we’re doing to support impactful efforts like this.

✦

Balancing Power and Usability

November 18, 2019 • #

This is another one from the archives, written for the Fulcrum blog back in 2016.

Engineering is the art of building things within constraints. If you have no constraints, you aren’t really doing engineering. Whether it’s cost, time, attention, tools, or materials, you’ve always got constraints to work within when building things. Here’s an excerpt describing the challenge facing the engineer:

The crucial and unique task of the engineer is to identify, understand, and interpret the constraints on a design in order to produce a successful result. It is usually not enough to build a technically successful product; it must also meet further requirements.

In the development of Fulcrum, we’re always working within tight boundaries. We try to balance power and flexibility with practicality and usability. Working within constraints produces a better finished product — if (by force) you can’t have everything, you think harder about what your product won’t do to fit within the constraints.

Microsoft Office, exemplifying 'feature creep'

The practice of balancing is also relevant to our customers. Fulcrum is used by hundreds of organizations in the context of their own business rules and processes. Instead of engineering a software product, our users are engineering a solution to their problem using the Fulcrum app builder, custom workflow rules, reporting, and analysis, all customizable to fit the goals of the business. When given a box of tools to build yourself a solution to a problem, the temptation is high to try to make it do and solve everything. But with each increase in power or complexity, usability of your system takes a hit in the form of added burden on your end users to understand the complex system — they’re there to use your tool for a task, finish the job, and go home.

This balance between power and usability is related to my last post on treating causes rather than symptoms of pain. Trying too hard to make a tool solve every potential problem in one step can (and almost always does) lead to overcomplicating the result, to the detriment of everyone.

In our case as a product development and design team, a powerful suite of options without extremely tight attention on implementation runs the risk of becoming so complex that the lion’s share of users can’t even figure it out. GitHub’s Ben Balter recently wrote a great piece on the risks of optimizing your product for edge cases1:

No product is going to satisfy 100% of user needs, although it’s sure tempting to try. If a 20%-er requests a feature that isn’t going to be used by the other 80%, there’s no harm in just making it a non-default option, right?

We have a motto at GitHub, part of the GitHub Zen, that “anything added dilutes everything else”. In reality, there is always a non-zero cost to adding that extra option. Most immediately, it’s the time you spend building feature A, instead of building feature B. A bit beyond that, it’s the cognitive burden you’ve just added to each user’s onboarding experience as they try to grok how to use the thing you’ve added (and if they should). In the long run, it’s much more than maintenance. Complexity begets complexity, meaning each edge case you account for today, creates many more edge cases down the line.

This is relevant to anyone building something to solve a problem, not just software products. Put this in the context of a Fulcrum data collection workflow. The steps might look something like this:

  1. Analyze your requirements to figure out what data is required at what stage in the process.
  2. Build an app in Fulcrum around those needs.
  3. Deploy to field teams.
  4. Collect data.
  5. Run reports or analysis.

What we notice a surprising amount of the time is an enormous investment in step 2, sometimes to the exclusion of much effort on the other stages of the workflow. With each added field on a survey, requirement for data entry, or overly-specific validation, you add potential hang-ups for end users responsible for actually collecting data. With each new requirement, usability suffers. People do this for good reason — they’re trying to accommodate those edge cases, the occasions where you do need to collect this one additional piece of info, or validate something against a specific requirement. Do this enough times, however, and your implementation is all about addressing the edge problems, not the core problem.

When you’re building a tool to solve a problem, think about how you may be impacting the core solution when you add knobs and settings for the edge cases. Best-fit solutions require testing your product against the complete ideal life cycle of usage. Start with something simple and gradually add complexity as needed, rather than the reverse.

  1. Ben’s blog is an excellent read if you’re into software and the relationship to government and enterprise. ↩

✦

Fall All Hands 2019

November 9, 2019 • #

We just wrapped up our Fall “all hands” week at the office. Another good week to see everyone from out of town, and an uncommonly productive one at that. We got a good amount of planning discussion done for future product roadmap additions, did some testing on new stuff in the lab, fixed some bugs, shared some knowledge, and ate (a lot).

Looking forward to the next one!

✦

San Juan

October 21, 2019 • #

We’re in San Juan this week for the NetHope Global Summit. Through our partnership with NetHope, a non-profit devoted to bringing technology to disaster relief and humanitarian projects, we’re hosting a hands-on workshop on Fulcrum on Thursday.

NetHope Summit

We’ve already connected with several of the other tech companies in NetHope’s network — Okta, Box, Twilio, and others — leading to some interesting conversations on working together more closely on integrated deployments for humanitarian work.

Fortin San Geronimo de Boqueron

Looking forward to an exciting week, and maybe some exploring of Old San Juan. Took a walk last night out to dinner along the north shore overlooking the Atlantic.

✦

Data as a Living Asset

September 20, 2019 • #

This is a post from the Fulcrum archives I wrote 3 years back. I like this idea and there’s more to be written on the topic of how companies treat their archives of data. Especially in data-centric companies like those we work with, it’s remarkable to see how quickly data is thrown on a shelf, atrophies, and is never used again.

In the days of pen and paper collection, data was something to collect, transcribe, and stuff into a file cabinet to be stored for a minimum of 5 years (lest those auditors come knocking). With advances in digital data capture — through all methods including forms software, spreadsheets, or sensors — many organizations aren’t rethinking their processes and thus haven’t come much further. The only difference is that the file cabinet’s been replaced with an Access database (or gasp a 10-year-old spreadsheet!).

Many organizations collect troves of legacy data in their operations, or at least as much as they can justify the cost of collecting. But because data management is a complicated domain in and of itself, often the same data is re-collected over and over, with all cost and no benefit. Once data makes its way into corporate systems somewhere after its initial use, it’s forgotten and left on the virtual shelf.

Data as a Living Asset

Data is your company’s memory. It’s the living, institutional knowledge you’ve invested in over years or decades of doing business, full of latent value.

But there are a number of challenges that stand in the way when trying to make use of historical data:

  • Compatibility — File formats and versions. Can I read my old data with current tools?
  • Access — Data silos and where your data is published. Can my staff get to archives they need access to without heartburn?
  • Identification — A process for knowing what pieces are valuable down the road. Within these gigabytes of data, what is useful?

If you give consideration to these issues up-front as you’re designing a data collection workflow, you’ll make your life much simpler down the road when your future colleagues are trying to leverage historical data assets.

Let’s dive deeper on each of these issues.

Formats and Compatibility

I call this the “Lotus 1-2-3” problem, which happens whenever data is stored in a format that dies off and loses tool compatibility1. Imagine the staggering amount of historical corporate data locked up in formats that no one can open anymore. This is one area where paper can be an advantage: if stored properly, you can always open the file.

Of course there’s no way to know the future potential of a data format on the day you select it as your format of choice. We don’t have the luxury of that kind of hindsight. I’m sure no one would’ve selected Lotus’s .123 format back in ‘93 had they known that Excel would come to dominate the world of spreadsheets. Look for well-supported open standards like CSV or JSON for long term archival. Another good habit is to revisit your data archives for general “hygiene” every few years. Are your old files still usable? The faster you can convert dead formats into something more future-proof, the better.

Accessibility

This is one of the most important issues when it comes to using archives of historical data. Presuming a user can open files of 10-year-old data because you’ve stored it effectively in open formats — is the data somewhere that staff can get it? Is it published somewhere in a shared workspace for easy access? Most often data isn’t squirreled away in a hard-to-reach place intentionally. It’s done for the sake of organization, cleanliness, or savings on storage.

Anyone that works frequently with data has heard of “data silos”, which arise when data is holed up in a place where it doesn’t get shared, only accessible by individual departments or groups. Avoiding this issue can also involve internal corporate policy shifts or revisiting your data security policies. In larger organizations I’ve worked in, however, the tendency is toward over-securing data to the point of uselessness. In some cases it might as well be deleted since it’s effectively invisible to the entire company. This is a mistake and a waste of large past investments in collecting that data in the first place.

Look for publishing tools that make your data easy to get to without sacrificing controls over access and security. But resist the urge to continuously wall off past data from your team.

Identifying the Useful Things

Now, assuming your data is in a useful format and it’s easily accessible, you’re almost there. When working with years of historical records it can be difficult to extract the valuable bits of information, but that’s often because the first two challenges (compatibility and accessibility) have already been standing in your way. If your data collection process is built around your data as an evergreen asset rather than a single-purpose resource, it becomes much easier to think of areas where a dataset could be useful 5 or 6 years down the road.

For instance, if your data collection process includes documenting inspections with thorough before-and-after photographs, those could be indispensable in the event of a dispute or a future issue in years’ time. With ease of access and an open format, it could take two clicks to resolve a potentially thorny issue with a past client. That is, if you’ve planned your process around your data becoming a valuable corporate resource.

A quick story to demonstrate these practices:

I’m currently working with a construction company on re-roofing my house, and they’ve been in business for 50+ years. Over that time span, they’ve performed site visits and accurately measured so many roofs in the area that when they get calls for quotes, they often can pull a file from 35 years ago when they went out and measured a property. That simple case is an excellent example of realizing latent value in a prior investment in data: if they didn’t organize, archive, and store that information effectively, they’d be redoing field visits every week. Though they aren’t digital with most of their process, they’ve nailed a workflow that works for them. They use formats that work, make that data accessible to their people, and know exactly what information they’ll find useful over the long term.

Data has value beyond its immediate use case, but you have to consider this up front. Design sustainable workflows that allow you to continuously update data, and make use of archival data over time. You’ve spent a lot to create it, you should be leveraging it to its fullest extent.

  1. Lotus 1-2-3 was a spreadsheet application popular in the 80s and 90s. It succumbed to the boom of Microsoft Office and Excel in the 1990s. ↩

✦

Shipping the Right Product

August 14, 2019 • #

This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I’d resurface it here since I’ve been thinking more about continual evolution of our product process. I liked it back when I wrote it; still very relevant and true. It’s good to look back in time to get a sense for my thought process from a couple years ago.

In the software business, a lot of attention gets paid to “shipping” as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it’s best used as a general rule than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD — they’re all excellent practices to keep teams focused on results. After all, the longer you’re polishing your work and not putting it in the hands of users, the less you know about how they’ll be using it once you ship it!

These systems followed as gospel (particularly with larger projects or products) can lead to attention on the how rather than the what — thinking about the process as shipping “lines of code” or what text editor you’re using rather than useful results for users. Loops of user feedback are essential to building the right solution for the problem you’re addressing with your product.

Shipping the right product

Thinking more deeply about how to ship _something_ rapidly while ensuring it aligns with product goals brings to mind a few questions to reflect on:

  • What are you shipping?
  • Is what you’re shipping actually useful to your user?
  • How does the structure of your team impact your resulting product?

How can a team iterate and ship fast, while also delivering the product they’re promising to customers, that solves the expressed problem?

Defining product goals

In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the “jobs” it does for the consumer helps clarify problems you should be focused on solving.

Most of us that create products have an idea of what we’re trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we’re trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what’s being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won’t guarantee that your end result solves a problem2. Your product could become an immaculately built ship that’s sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management’s excellent interview with its co-creator, Karen Dillon.

Understanding users

On a similar thread as jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.

This quote from the article gets to the heart of why it matters to understand, with empathy, what a user is trying to accomplish. It’s not always about our engineering-minded technical features or bells and whistles:

Jobs are never simply about function — they have powerful social and emotional dimensions.

The only way to unroll what’s driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as “understanding pain”. People “hire” a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look at how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.

On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn’t consult patients and make assumptions about the underlying causes of a patient’s symptoms! They’d be in for some serious legal trouble.

Organize the team to reflect goals

One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:

“Don’t ship the org chart.”

Org chart

The salient point being that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn’t care at all about the dividing lines within your company, only the resulting solutions you deliver.

A corollary to this idea is that over time companies naturally begin to look like their customers. It’s clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.

With our product, we see three primary points to make sure our product fits the set of problems we’re solving for customers:

  • For some, a toolbox — For small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
  • For others, a total solution — For large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer’s management team to administer. Our team of in-house experts consults with the customer for training and on-boarding, and the customer ends up with a full solution and the toolbox.
  • Integrations as the “glue” — Customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But always the final outcome is a unique capability that can’t be had another way without building their own software from scratch.

Though we’re still a small team, we’ve tried to build up the functional areas around these objectives. As we advance the product and grow the team, it’s important to keep this in mind so that we’re still able to match our solution to customer problems.

For more on this topic, Sinofsky’s post on “Functional vs. Unit Organizations” analyzes the pros, cons, and trade-offs of different org structures and the impacts on product. A great read.

Continued reflection, onward and upward 📈

In order to stay ahead of the curve and Always Be Shipping (the Right Product), it’s important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence how we can make those improvements. With this sort of continuous reflection on the process, hopefully we’ll keep shipping the Right Product to our users.

  1. Christensen is most well known for his work on disruption theory. ↩

  2. Not to discount the value of team planning. It’s a crucial component of efficiency. My point is the clean Gantt chart on its own isn’t solving a customer problem! ↩

  3. Of course this problem is only minor in small companies. It’s of much greater concern to the Amazons and Microsofts of the world. ↩

✦

Fulcrum as a Personal Database

July 29, 2019 • #

I use Fulcrum all the time for collecting data around hobbies of mine. Sometimes it’s for fun or interests, sometimes for mapping side projects, or even just for testing the product as we develop new features.

Here are a few of my key everyday apps I use for personal tracking. I’m always tinkering around with other things as we expand the product, but each of these I’ve been using for years pretty consistently.

Gas Mileage

Of course there are apps out there devoted to this task, but I like the idea of having my own raw data input for this. Piping this to a spreadsheet lets me run some calculations on it to see MPG, total spend, and total miles driven over time.

Gas mileage tracker
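For a sense of what those spreadsheet calculations look like, here’s a small sketch; the field names are illustrative, not the actual app’s schema:

```typescript
// Sketch: compute MPG and totals from fill-up records.
// Field names are assumptions, not the actual Fulcrum app schema.
interface FillUp {
  odometer: number; // odometer reading at fill-up, in miles
  gallons: number; // fuel added
  totalCost: number; // dollars spent
}

function summarize(fillUps: FillUp[]) {
  if (fillUps.length < 2) throw new Error("Need at least two fill-ups");
  const sorted = [...fillUps].sort((a, b) => a.odometer - b.odometer);
  const miles = sorted[sorted.length - 1].odometer - sorted[0].odometer;
  // Exclude the first fill-up's gallons: it establishes the baseline tank.
  const gallons = sorted.slice(1).reduce((sum, f) => sum + f.gallons, 0);
  const spend = sorted.reduce((sum, f) => sum + f.totalCost, 0);
  return { totalMiles: miles, avgMpg: miles / gallons, totalSpend: spend };
}
```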

Maps Collection

I’m a collector of paper maps, and some time back I built out a tracker in Fulcrum to inventory my collection. One day I plan to add other details like year and publisher, but it works for now as a basic inventory of what I’ve got.

Maps database

Workouts

I’ve been lax this year with the routine, but I’d built out a log for tracking my workout sessions at the gym — mostly for the “Runner 360” workout. It works great and provides a way to build some charts with progress on efforts over time.

Home Inventory

In order to have a reliable log of all of the expensive stuff in my house, I created this so that there’s some prayer of having a tight evidence log of what I own if there’s ever a flood, hurricane, or fire (or even theft) that requires a homeowners insurance claim. I figured it can’t hurt to have photographic evidence of what’s in the house if it came to needing to prove it.

Home inventory

Football Clubs

This one is more of an experiment in using Fulcrum (and its API) as a cloud-based PostGIS database. I created a simple schema for each team, league, and stadium location. I had this idea to use these coordinates for generating a poster of stadiums from satellite images. One day I might have time for that, but there’s also an open database you can download of all the locations as geojson.

Football clubs map
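For a sense of the shape of that data, a single stadium record in GeoJSON form might look like the sketch below (the properties are an approximation of the simple schema described above):

```typescript
// Sketch of one stadium feature in GeoJSON form (properties approximate).
const stadium = {
  type: "Feature" as const,
  geometry: {
    type: "Point" as const,
    coordinates: [-2.2913, 53.4631], // [longitude, latitude]: Old Trafford
  },
  properties: {
    team: "Manchester United",
    league: "Premier League",
    stadium: "Old Trafford",
  },
};
```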

There are a few others I’ve got in “R&D” mode right now testing out. Always on the hunt for new and interesting things I can make Fulcrum do. It’s a true power tool for data entry and data management.

✦

Weekend Reading: Rhythmic Breathing, Drowned Lands, and Fulcrum SSO

July 20, 2019 • #

🏃🏻‍♂️ Everything You Need to Know About Rhythmic Breathing

I tried this out the other night on a run. The technique makes some intuitive sense that it’d reduce impact (or level it out side to side anyway). To notice any result you’d surely have to keep it up consistently over distance. But I’ve had some right knee soreness that I don’t totally know the origin of, so I thought I’d start trying this out. I found it takes a lot of concentration to keep it up consistently. I’ll keep testing it out.

🏞 Terrestrial Warfare, Drowned Lands

A neat historical, geographical story from BLDGBLOG:

Briefly, anyone interested in liminal landscapes should find Snell’s description of the Drowned Lands, prior to their drainage, fascinating. The Wallkill itself had no real path or bed, Snell explains, the meadows it flowed through were naturally dammed at one end by glacial boulders from the Ice Age, the whole place was clogged with “rank vegetation,” malarial pestilence, and tens of thousands of eels, and, what’s more, during flood season “the entire valley from Denton to Hamburg became a lake from eight to twenty feet deep.”

Turns out there was local disagreement on flood control:

A half-century of “war” broke out among local supporters of the dams and their foes: “The dam-builders were called the ‘beavers’; the dam destroyers were known as ‘muskrats.’ The muskrat and beaver war was carried on for years,” with skirmishes always breaking out over new attempts to dam the floods.

Here’s one example, like a scene written by Victor Hugo transplanted to New York State: “A hundred farmers, on the 20th of August, 1869, marched upon the dam to destroy it. A large force of armed men guarded the dam. The farmers routed them and began the work of destruction. The ‘beavers’ then had recourse to the law; warrants were issued for the arrest of the farmers. A number of their leaders were arrested, but not before the offending dam had been demolished. The owner of the dam began to rebuild it; the farmers applied for an injunction. Judge Barnard granted it, and cited the owner of the dam to appear and show cause why the injunction should not be made perpetual. Pending a final hearing, high water came and carried away all vestige of the dam.”

🔐 Fulcrum SAML SSO with Azure and Okta

This is something we launched a few months back. There’s nothing terribly exciting about building SSO features in a SaaS product — it’s table stakes to move up in the world with customers. But for me personally it’s a signal of success. Back in 2011, imagining that we’d ever have customers large enough to need SAML seemed so far in the future. Now we’re there and rolling it out for enterprise customers.

✦

The Second Phase: allinspections

June 3, 2019 • #

This post is part 3 in a series about my history in product development. Check out the intro in part 1 and all about our first product, Geodexy, in part 2.

Back in 2010 we decided to halt our development of Geodexy and regroup to focus on a narrower segment of the marketplace. With what we’d learned in our go-to-market attempt on Geodexy, we wanted to isolate a specific industry we could focus our technology around. Our tech platform was strong, we were confident in that. But at the peak of our efforts with taking Geodexy to market, we were never able to reach a state of maturity to create traction and growth in any of the markets we were targeting. Actually, targeting is the wrong word — truthfully that was the issue: we weren’t “targeting” anything because we had too many targets to shoot at.

We needed to take our learnings, regroup on what was working and what wasn’t, and create a single focal point we could center all of our effort around, not just the core technology, but also our go-to-market approach, marketing strategy, sales, and customer development.

I don’t remember the specific genesis of the idea (I think it was part internal idea generation, part serendipity), but we connected on the notion of field data collection for the property inspection market. So we launched allinspections.

allinspections

That industry had the hallmarks of one ripe for us to show up with disruptive technology:

  • Low current investment in technology — Most folks were doing things on paper with lots of transcribing and printing.
  • Lots of regulatory basis in the workflow — Many inspections are done as a requirement by a regulatory body. This meant consistent, widespread needs that crossed geographic boundaries, and an “always-on” use case for a technology solution.
  • Phased workflow with repetitive process and “decision tree” problems — a perfect candidate for digitizing the process.
  • Very few incumbent technologies to replace — if there were competitors at all, they were Excel and Acrobat.
  • Smartphones ready to amplify a mobile-heavy workflow — Inspections of all sorts happen in-situ somewhere in the field.

While the market for facility and property inspections is immense, we opted to start on the retail end of the space: home inspections for residential real estate. There was a lot to like about this strategy for a technology company looking to build something new. We could identify individual early adopters, gradually understand what made their business tick, and index on capability that empowered them. There was no need immediately to worry about selling to massive enterprise organizations, which would’ve put a heavy burden on us to build “box-checking” features like hosting customization, access controls, single sign-on, and the like. We used a freemium model which helped attract early usage, then shifted to a free trial one later on after some early traction.

Overall the biggest driver that attracted us to residential was the consistency of the work. Anyone who’s bought property is familiar with the process of getting a house inspected before closing, but that sort of inspection is low volume compared to those associated with insurance underwriting. Our first mission was this: to build the industry-standard tool for performing these regulated inspections in Florida — wind mitigation, 4-point, and roof certification. These were (and still are) done by the thousands every day. They were perfect candidates for us for the reasons listed above: simple, standard, ubiquitous, and required1. There was a built-in market for automating the workflow around them and improving the data collected, which we could use as a beachhead to get folks used to using an app to conduct their inspections.

Our hypothesis was that we could apply the technology for mobile data collection we’d built in Geodexy and “verticalize” it around the specialty of property inspection with features oriented around that problem set. Once we could spin up enough technology adoption for home inspection use cases at the individual level, we could then bridge into the franchise operations and institutions (even the insurance companies themselves) to standardize on allinspections for all of their work.

We had good traction in the early days with inspectors. It didn’t take us long before we connected with a half-dozen tech-savvy inspectors in the area to work with as guinea pigs to help us advance the technology. Using their domain expertise in exchange for usage of the product, we were able to fast-forward on our understanding of the inspection workflow — from original request handling and scheduling, to inspecting on-site, then report delivery to customer. Within a year we had a pretty slick solution and 100 or so customers that swore by the tool for getting their work done.

But it didn’t take us long to run into friction. Once we’d exhausted the low-hanging fruit of the early adopter community, it became harder and harder to find more of the tech-savvy crowd willing to splash some money on something new and different. As you might expect, the community of inspectors we were targeting were not technologists. Many of these folks were perfectly content with their paperwork process and enjoyed working solo. Many had no interest in building a true business around their operation, nor in growing into a company with multiple inspectors covering wider geographies. Others were general contractors doing inspections as a side gig, so it wasn’t even their core day-to-day job. With that kind of fragmentation, it was difficult to reach the economies of scale we were looking for to be able to sell something at the price point where we needed to be. We had some modest success pursuing the larger nationwide franchise organizations, but our sales and onboarding strategy wasn’t conducive to getting those deals beyond the small pilot stage. It was still too early for that. We wanted to get to B2B customer sizes and margins, but were ultimately still selling a B2C application. Yes, a home inspector has a business that we were selling to, but the fundamentals of the relationship share far more in common with a consumer product relationship than a corporate one.

By early 2012 we’d stalled out on growth at the individual level. A couple opportunities to partner with inspection companies on a comprehensive solution for carriers failed, partially for technical reasons, but also because of the immaturity of our existing market. We didn’t have a reference base sizable enough to jump all the way up to selling 10,000 seats without enormous burden and too much overpromising on what we could do.

We shut down operations on allinspections in early 2012. We had suspected this would have to happen for a while, so it wasn’t a sudden decision. But it always hurts to have to walk away from something you poured so much time and energy into.

I think the biggest takeaway for me at the time, and in the early couple years of success on Fulcrum, was how relatively little the specifics of your technology matter if you mess up the product-market fit and go-to-market steps in the process. The silver lining in the whole affair was (like many things in product companies) that there was plenty to salvage and carry on to our next effort. We learned an enormous amount about what goes into building a SaaS offering and marketing it to customers. Coming from Geodexy where we never even reached the stage of having a real “customer success” process to deal with, allinspections gave us a jolt in appreciation for things like identifying the “aha moment” in the product, increasing usage of a product, tracking usage of features to diagnose engagement gaps, and ultimately, getting on the same page as the customer when it comes to the final deliverable. It takes working with customers and learning the deep corners of the workflow to identify where the pressure points are in the value chain, the things that keep the customer up at night when they don’t have a solution.

And naturally there was plenty of technology to bring forward with us to our next adventure. The launch of Fulcrum actually pre-dates the end of allinspections, which tells you something about where our heads were. At the time we weren’t thinking of Fulcrum as the “next evolution” of allinspections necessarily, but we were thinking about going bigger while fixing some of the mistakes made a year or two prior. While most of Fulcrum was built ground-up, we brought forward some code, and a whole boatload of lessons learned on systems, methods, and architecture that helped us launch and grow Fulcrum as quickly as we did.

Retrospectives like this help me to think back on past decisions and process some of what we did right and wrong with some separation. That separation can be a blessing in being able to remove personal emotion or opinion from what happened and look at it objectively, so it can serve as a valuable learning experience. Sometime down the road I’ll write about this next evolution that led to where we are today.

  1. Since the mid-2000s, all three of these inspection types have been required for insurance policies in Florida. ↩

✦

Weekend Reading: Real Time Analytics, Georeferencing, and Fulcrum Code

June 1, 2019 • #

📉 Whom the Gods Would Destroy, They First Give Real-time Analytics

I thought this was a great post on how unnecessary “real-time” analytics can be when misused. As the author points out, it’s almost never necessary to have data that current. With current software it’s possible to have infinite analytics on everything, and as a result it’s irresistible to many people to think of those metrics as essential for decision making.

This line of thinking is a trap. It’s important to divorce the concepts of operational metrics and product analytics. Confusing how we do things with how we decide which things to do is a fatal mistake.

🗺 Georeferencing Vermont’s Historic Aerial Imagery in QGIS

This is a great step-by-step guide to how to georeference data. I spent time years ago figuring this out but still never was able to do it very well. This guide is all you need to be able to georeference old maps.

🔺 Fulcrum Code Editor

We rebuilt the code editing environment in the Fulcrum App Designer, which is part of both the Data Events and Calculation Expression editing views. The team (led by Emily) did some great work on this using TypeScript and Microsoft’s Monaco project, with IntelliSense code completion. It’s a great addition for our many power users to write better automations on top of Fulcrum.
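Monaco does a lot of heavy lifting out of the box; a minimal setup looks something like this sketch (a generic example, not Fulcrum’s actual integration):

```typescript
// Minimal Monaco editor setup (generic example, not Fulcrum's integration).
import * as monaco from "monaco-editor";

const container = document.getElementById("editor");
if (container) {
  monaco.editor.create(container, {
    value: "// write a data event handler here\n",
    language: "javascript", // enables syntax highlighting and IntelliSense
    minimap: { enabled: false },
  });
}
```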

✦

Weekend Reading: Hurricanes, Long Games, and AirPods

March 30, 2019 • #

⛈ Hurricane Season 2017: A Coordinated Reconnaissance Effort

The NSF StEER program has been using Fulcrum Community for a couple of years now, ever since Hurricane Harvey landed on the Texas coast, followed by Irma and Maria later that fall. They’ve built a neat program on top of our platform that lets them respond quickly with volunteers on the ground conducting structure assessments post-disaster:

The large, geographically distributed effort required the development of unified data standards and digital workflows to enable the swift collection and curation of perishable data in DesignSafe. Auburn’s David Roueche, the team’s Data Standards Lead, was especially enthusiastic about the team’s customized Fulcrum mobile smartphone applications to support standardized assessments of continental U.S. and Caribbean construction typologies, as well as observations of hazard intensity and geotechnical impacts.

It worked so well that the team transitioned their efforts into a pro-bono Fulcrum Community site that supports crowdsourced damage assessments from the public at large with web-based geospatial visualization in real time. This feature enabled coordination with teams from NIST, FEMA, and ASCE/SEI. Dedicated data librarians at each regional node executed a rigorous QA/QC process on the backside of the Fulcrum database, led by Roueche.

🧘🏻‍♂️ The Surprising Power of the Long Game

Ever since my health issues in 2017, the value of the little things has become much more apparent. I came out of that with a renewed interest in investing in mental and physical health for the future. Reading about, thinking about, and practicing meditation have really helped to put the things that matter in perspective when I consider consciously how I spend my time. This piece is a simple reminder of the comparative value of the “long game”.

🎧 AiriPods

In this piece analyst Horace Dediu calls AirPods Apple’s “new iPod”, drawing similarities to the cultural adoption patterns.

The Apple Watch is now bigger than the iPod ever was. As the most popular watch of all time, it’s clear that the watch is a new market success story. However it isn’t a cultural success. It has the ability to signal its presence and to give the wearer a degree of individuality through material and band choice but it is too discreet. It conforms to norms of watch wearing and it is too easy to miss under a sleeve or in a pocket.

Not so for AirPods. These things look extremely different. Always white, always in view, pointed and sharp. You can’t miss someone wearing AirPods. They practically scream their presence.

I still maintain this is their best product in years. I hope it becomes a new platform for voice interfaces, once they’re reliable enough.

✦

Entering Product Development: Geodexy

March 27, 2019 • #

I started with the first post in this series back in January, describing my own entrance into product development and management.

When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack everything was completely different. A few years back, Tony wrote up a retrospective post detailing the history of what led us down the path we took, and how Geodexy came to be:

After this experience, I realized there was a niche to carve out for Spatial Networks but I’d need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.

Tony’s post was an excellent look back at the business origin of the product — the “why” we decided to do it piece. What I wanted to cover here was more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course without real market feedback (as in, will someone actually buy what you’ve built, versus saying they would buy it one day), it’s hard to truly gauge the success potential.

Geodexy

Back then, some of the modern web frameworks in use today, like Rails and its peers, were around, but they were few and not yet mature. It’s astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and back then that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation a long time ago. We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked the native integration of richer data sources — primarily for photos and GPS data. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn’t make for the smoothest user experience or prevent problems. We still needed to come up with a better way to make it happen, for ourselves and absolutely before we went to market touting the workflow advantages to other customers.

Geodexy Windows Mobile

In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and able to short-circuit a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which looking back from 10 years on sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.

What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. That was still the early days of the App Store, and I don’t recall how mature the Android Market (predecessor to Google Play) was back then, but we didn’t have the resources to start off with 2 mobile clients anyway. We actually had a functioning Blackberry app first, which tells you how different the mobile platform landscape looked a decade ago2.

Geodexy’s mobile app for iOS was, on the other hand, an excellent window into the potential iOS development unlocked for us as a platform going forward. In a couple of months one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked — offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.

We didn’t get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that’d tie all of the technology stack we’d built around a single, albeit large, market: the property inspection business. That’s what led us to launch allinspections, which I’ll continue the story on later.

In an odd way, it’s pleasing to think back on the challenges (or things we considered challenges) at the time and think about how they contrast with today. We focused so much attention on things that, in the long run, aren’t terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that I think stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.

  1. As painful as this sounds we actually had a decent tool built on WM. But the usability of it was terrible, which if you can recall the time period was par for the course for mobile applications of all stripes. ↩

  2. That was a decade ago. Man. ↩

✦

Getting to 1,000

February 22, 2019 • #

I saw this tweet a couple of days back that I thought was interesting:

The topic of “how we got to 1000 users” is an interesting one I thought I could take a stab at…

Fulcrum’s first lines of code were written in the summer of 2011. Initially we put together a basic drag-and-drop form builder interface, the simplest possible authentication system, and a simple iPhone app that let you collect records. There was no concept of multiuser membership within accounts, and we only had a free version. The idea (with little to no planning at all) was to cut loose a free app for basic data collection and see what the traction looked like. We did have in our heads the idea that when we had “Group” account capability, that would be the time to monetize. “Fulcrum Pro”, as we called it then. That launched in around March of 2012.

I don’t recall exactly when we hit the 1,000 user mark, but from some brief investigation of the data, early 2013 seems to be when we crossed that milestone. About a year and a half from 0 to 1,000.

So what techniques did we use to get there?

At the beginning, the team working on Fulcrum was tiny — maybe 2 doing all the dev work, and 3 (including me) putting part-time effort into all other fronts like customer support, product planning, design, marketing, etc. There wasn’t much in terms of resources to go around, so we had to do the bare minimum to make something of some minimal utility that customers could self-serve on their own.

The only driver for all of our users in those early days, probably the first entire year and a half, was inbound marketing, and really only of two types. Since each of us had a decent-sized footprint in the geo Twitterverse back then, we had at least a captive audience of like-minded folks that would kick the tires, help promote, and give us feedback. I’d count that user-base in the dozens, though, so not a huge contributor on its own to the first 1,000.

I would attribute reaching the first 1,000 to a hybrid of content marketing through a blog, word of mouth, and (often forgotten) an actually useful product filling a void left by the other, more mature “competitors” in the space. With a high volume of blog posts, some passable SEO-friendly web content, and a consistent feed of useful material, we attracted early adopters in engineering firms, GIS shops, humanitarian organizations, and some electric utilities.

Fast forward to 2019 and things have changed quite a bit! Not only have we eclipsed well over 100,000 individual users, more importantly we’re approaching the 2,000 paid customer mark. Spanning anywhere from 1 to over 1,000 individual users per customer, it’s safe to call it a repeatable, successful thing at this point. Back when we crossed 1,000 users, we were only hitting the very beginning of true product-market fit.

Building something that catches on and keeping after it are hard. A key learning of mine over the course of this process is to never think you’ve got it all figured out, that you’ve cracked the code. There’s always more to be done to break past inflection points and reach the next level on the step function of successful SaaS business.

✦

Weekend Reading: Fulcrum in Santa Barbara, Point Clouds, Building Footprints

February 2, 2019 • #

👨🏽‍🚒 Santa Barbara County Evac with Fulcrum Community

Our friends over at the Santa Barbara County Sheriff have been using a deployment of Fulcrum Community over the last month to log and track evacuations for flooding and debris flow risk throughout the county. They’ve deployed over 100 volunteers so far to go door-to-door and help residents evacuate safely. In their initial pilot they visited 1,500 residents. With this platform the County can monitor progress in real-time and maximize their resources to the areas that need the most attention.

“This app not only tremendously increase the accountability of our door-to-door notifications but also gave us a real time tracking on the progress of our teams. We believe it also reduced the time it has historically taken to complete such evacuation notices.”

This is exactly what we’re building Community to do: to help enable groups to collaborate and share field information rapidly for coordination, publish information to the public, and gather quantities of data through citizens and volunteers they couldn’t get on their own.

☁️ USGS 3DEP LiDAR Point Clouds Dataset

From Howard Butler comes this amazing public dataset of LiDAR data from the USGS 3D Elevation Program. There’s an interactive version here where you can browse what’s available; using the WebGL-based viewer you can even pan and zoom around in the point clouds. More info is available in the open on GitHub.

🏢 US Building Footprints

Microsoft published this dataset of computer-generated building footprints, 125 million in all. Pretty incredible considering how much labor it’d take to produce with manual digitizing.

✦

Fulcrum Desktop

January 4, 2019 • #

A frequent desire among Fulcrum customers is to maintain a local copy of the data they collect with our platform, in their database system of choice. With our export tool, it’s simple to pull out extracts in formats like CSV, shapefile, SQLite, and even PostGIS or GeoPackage. What this doesn’t offer, though, is an automatable way to keep a local version of your data current on your own server. You’d have to extract data manually on some schedule and append the new records to the tables you’ve already got.

A while back we built and released a tool called Fulcrum Desktop, with the goal of alleviating this problem. It’s an open source command line utility that harnesses our API to synchronize content from your Fulcrum account into a local database. It supports PostgreSQL (with PostGIS), Microsoft SQL Server, and even GeoPackage.

Beyond the primary advantage of providing a way to clone your data to your own system, one of the cool things you can do with Desktop is easily make your data available to your GIS users in a tool like QGIS. It also has a plugin architecture to support other cool things like:

  • Media management — syncing photos, videos, audio, signatures
  • S3 — storing media files in your own Amazon S3 bucket
  • Reports — generating PDF reports
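
Once Desktop has synced your data into PostGIS, consuming it is plain SQL from any client. Here’s a minimal sketch of that, assuming the psycopg2 driver; the connection string, table name (inspections), and column names are hypothetical stand-ins for whatever your own app’s schema produces:

```python
import psycopg2

# Connect to the local PostGIS database that Fulcrum Desktop syncs into.
# The connection string, table, and column names below are hypothetical
# placeholders for your own apps' schemas.
conn = psycopg2.connect("dbname=fulcrumapp user=gis")

with conn.cursor() as cur:
    # Each synced Fulcrum app becomes a queryable table with a geometry
    # column, so you can ask spatial questions, e.g. records within ~1 km
    # of a point of interest.
    cur.execute(
        """
        SELECT record_id, status, created_at
        FROM inspections
        WHERE ST_DWithin(
            geometry::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            1000
        )
        """,
        (-82.64, 27.77),
    )
    for row in cur.fetchall():
        print(row)

conn.close()
```

The same tables are what a tool like QGIS would read directly as a PostGIS layer.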

Desktop requires the Fulcrum Developer Pack, which gives your account access to all of the APIs (it’s also available on the free trial tier).

We’ve also built another utility called fulcrum-sync that makes it easy to set up Desktop using Docker. This is great for version management, syncing data for multiple organizations, and generally simplifying dependencies and local library management. With Docker “containerizing” the installation, you don’t have to worry about conflicting libraries or fiddle with your local setup; the entire Fulcrum Desktop installation is segmented into its own container. This utility also makes it easier to install and manage Desktop plugins.

✦

Video Mapping in OpenStreetMap with Fulcrum

December 16, 2018 • #

With tools like Mapillary and OpenStreetCam, it’s pretty easy now to collect street-level images with a smartphone for OpenStreetMap editing. Point of interest data is now the biggest quality gap for OSM as compared to commercial map data providers. It’s hard to compete with the multi-billion dollar investments in street mapping and the bespoke equipment of Google or Apple. There’s promise for OSM to be a deep, current source of this level of detail, but it requires true mass-market crowdsourcing to get there.

The businesses behind platforms like Mapillary and OpenStreetCam aren’t primarily based on improving OSM. Though Telenav does build OSC as a means to contribute, their business is in automotive mapping powered by OSM, not the collection tool itself. Mapillary, on the other hand, is a computer vision technology company. They want data, so opening their content for OSM mapping attracts contributors.

I’ve been collecting street-level imagery for years using windshield mounts in my car, typically for my own purposes of adding detail in OSM. Since we launched our SpatialVideo feature in Fulcrum (over 4 years ago now!), I’ve used that for most of my data collection. While the goals of that feature are wider than just vehicle-based capture, the GPS tracking data that comes with SpatialVideo makes it easier to scrub through spatially and find what’s missing from the map. My personal workflow usually centers on adding points of interest, but street furniture, power infrastructure, and signage are also present everywhere and typically unmapped. You can often see addresses on buildings, and I rarely find a new area where the point of interest data is already rich. There’s so much to be filled in or updated.

This is a quick sample of what the video looks like from my dash mount (shown in the SpatialVideo player in the Fulcrum Editor review tool). It’s fairly stable, and the mounts are low-cost.

One of the cool things about the Fulcrum format is that it’s video, so that smoothness can help make sure you’ve got every frame you need, particularly on high-speed thoroughfares. We built in a feature to control the frame rate and resolution of the video recording, so what I do is maximize the resolution but drop the frame rate well below 30 fps. This helps tremendously to minimize the amount of data that has to get back to the server. Even 3 or 5 fps can be plenty for mapping purposes; I usually go with 10 or so just to smooth it out a little bit. The size doesn’t get too bad until you go past 15 or so.
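
As a back-of-the-envelope illustration of why the frame rate matters so much (the 2 MB-per-minute-per-fps baseline below is an invented placeholder, not a measured figure, and real codecs don’t scale perfectly linearly with frame rate):

```python
# Rough estimate of how much video a one-hour collection drive generates
# at different frame rates. Assumes size scales ~linearly with fps and a
# made-up baseline of 2 MB per minute of video per fps at full resolution.
MB_PER_MINUTE_PER_FPS = 2.0  # hypothetical figure, not measured

def estimated_size_gb(minutes: float, fps: float) -> float:
    """Estimated recording size in gigabytes for a given duration and fps."""
    return minutes * fps * MB_PER_MINUTE_PER_FPS / 1024

for fps in (3, 5, 10, 15, 30):
    print(f"{fps:>2} fps -> ~{estimated_size_gb(60, fps):.1f} GB per hour")
```

Under those assumptions, dropping from 30 fps to 10 fps cuts the upload by two-thirds with no practical loss for mapping.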

Of course the downside is that this content isn’t easily available to the public for others to map from. Not a huge deal to me, but with Fulcrum Community we’re looking at some ways to open this system up for contribution, a la Mapillary or OSC.

✦

A Product Origin Story

September 11, 2018 • #

Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We’ve come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I’d step back and recall its origins from a product management perspective.

We created Fulcrum to address a need we had in our own business, and quickly realized it applied to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew a data collection platform wasn’t a brand new invention, we knew we could bring a novel solution combining our strengths, and that the existing tools on the market had fundamental holes in areas we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn’t see elsewhere:

  1. Use a mobile-first design approach — Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
  2. Make disconnected, offline use seamless to a mobile user — They shouldn’t have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
  3. Put location data at the core — Everything geolocated. (requires #1)
  4. Enable business analysis with spatial relationships — Even though we’re geographers, most people don’t see the world through a geo lens, but should. (requires #3)
  5. Make it cloud-centric — In 2011 desktop software was well on its way out, so we wanted a platform we could host in the cloud, with APIs for everything. Building from primitive building blocks let us scale horizontally on the infrastructure.

Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to justify spending the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we felt these gaps were worth closing for ourselves, the fusion of these ideas would create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a body of logistical and information management challenges similar to ours.

Fulcrum wasn’t our first foray into software development, or even our first attempt at creating our own toolset for mobile mapping. Previously we’d built a couple of applications: one that never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we’d learned to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we’d learned over the previous years and build something new, with the foundational approach above as our guidebook.

We started building in early spring, and launched in September 2011. It had free accounts only and no multi-user support; there was only a simple iOS client and no web UI for data management. Suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show the early adopter crowd what we were working on. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I’d call “idea fusion”: the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can’t become great without doing both in concert.

I think it’s natural for creators to favor one path over the other — either falling into the trap of only building exactly what customers ask for, or creating based solely on their own vision in a vacuum, with little guidance from customers on what the pains actually look like. The key I’ve learned is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging, and it assumes your vision is perfectly clear.

On the other hand, waiting at the beck and call of your users to dictate exactly what to build works well in the early days when you’re looking for traction, but without an opinion about how the world should be, you likely won’t do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they’re uninventive, but because designing tools isn’t their mission or expertise. They’re on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphal) attributed to Henry Ford: “If I’d asked customers what they wanted, they would’ve asked for a faster horse.” In order to invent the car, you have to envision a new product completely unlike the one your customer is asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.

We’ve tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers with our own vision for the future of work gets much harder. What constitutes a “big idea” gets even bigger, and the compulsion to treat near-term customer pains becomes ever more attractive (because, if you’re doing things right, you have more of them, holding larger checks).

When I look back to the early ‘10s at the genesis of Fulcrum, it’s amazing to think about how far we’ve carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we’re solving has matured over 7 years, but hasn’t fundamentally changed in its roots.

✦

Weekly Links: OSM on AWS, Fulcrum Editor, & Real-time Drone Maps

April 21, 2017 • #

Querying OpenStreetMap with Amazon Athena 🗺

Using Amazon’s Athena service, you can now query OpenStreetMap data right from an interactive console. No need to use the complicated OSM API; this is pure SQL. I’ve taken a stab at building out a replica OSM database before, and it’s a beast. The dataset now clocks in at 56 GB zipped. This post from Seth Fitzsimmons gives a great overview of what you can do with it:

Working with “the planet” (as the data archives are referred to) can be unwieldy. Because it contains data spanning the entire world, the size of a single archive is on the order of 50 GB. The format is bespoke and extremely specific to OSM. The data is incredibly rich, interesting, and useful, but the size, format, and tooling can often make it very difficult to even start the process of asking complex questions.

Heavy users of OSM data typically download the raw data and import it into their own systems, tailored for their individual use cases, such as map rendering, driving directions, or general analysis. Now that OSM data is available in the Apache ORC format on Amazon S3, it’s possible to query the data using Athena without even downloading it.
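
To make the “pure SQL” point concrete, here’s a hypothetical sketch of running one of these queries through boto3, AWS’s Python SDK. It assumes you’ve already created a planet table in an Athena database as the post describes (with a tags map column on each element); the database name and results bucket are placeholders:

```python
import boto3

# Count cafe nodes in OSM via Athena's SQL interface. Assumes a `planet`
# table created from the public ORC extracts per the post, with columns
# like type, id, tags (map<string,string>), lat, and lon.
athena = boto3.client("athena", region_name="us-east-1")

query = """
    SELECT count(*) AS cafes
    FROM planet
    WHERE type = 'node'
      AND tags['amenity'] = 'cafe'
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "osm"},  # hypothetical database name
    ResultConfiguration={"OutputLocation": "s3://your-results-bucket/athena/"},
)

# Athena runs queries asynchronously; poll get_query_execution and then
# get_query_results with this ID to retrieve the output.
print(response["QueryExecutionId"])
```

Since Athena scans the ORC files on S3 directly, a query like this runs against the whole planet without downloading the 50+ GB archive.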

Introducing the New Fulcrum Editor 🔺

Personal plug here: this is something that’s been in the works for months. We just launched Editor, the completely overhauled data editing toolset in Fulcrum. I can’t wait for the follow-up post explaining the nuts and bolts of how it’s put together. The power and flexibility are truly amazing.

Real-time Drone Mapping with FieldScanner 🚁

The team at DroneDeploy just launched the first live aerial imagery product for drones. Pilots can now fly imagery and get a live, processed, mosaicked result right on a tablet as soon as their mission is completed. This is truly next-level stuff for the burgeoning drone market:

The poor connectivity and slow internet speeds that have long posed a challenge for mapping in remote areas don’t hamper Fieldscanner. Designed for use in the fields, Fieldscanner can operate entirely offline, with no need for cellular or data coverage. Fieldscanner uses DroneDeploy’s existing automatic flight planning for DJI drones and adds local processing on the drone and mobile device to create a low-resolution Fieldscan as the drone is flying, instead of requiring you to process imagery into a map at a computer after the flight.

✦