There's no better way to build an empathetic perspective of your customer's life than to go and be one as often as you can.
Last week our team did an afternoon field day where the entire company went out on a scavenger hunt of sorts, using Fulcrum to log some basic neighborhood sightings. 42 people scattered across the US collected 1,230 records in about an hour, which is an impressive pace even if the use case was a simple one!
Data across the nation, and my own fieldwork in St. Pete
It's unfortunate how easy it is to stray away from the realities of what customers deal with day in and day out. Any respectable product person has a deep appreciation for how their product works for customers on the ground, at least academically. What exercises like this help us do is to get out of the realm of academics and try to do a real job. With B2B software, especially the kind built for particular industrial or domain applications, it's hard to do this frequently since you aren't your canonical user; you have to contrive your own mock scenarios to tease out the pain points in the workflow.
The problem is that manufactured tests can't be representative of all the messy realities in utilities, construction, engineering, or the myriad other cases we serve.
There's no silver bullet for this. Acknowledging imperfect data and remaining aware of the gaps in your knowledge is the foundation. Then fitting your solution to the right problem, at the right altitude, is the way to go.
Exercises like ours last week are always energizing, though. Anytime you can rally attention around what your customers go through every day it's a worthy cause. The list of observations and feedback is a mile long, and all high-value stuff to investigate.
Fulcrum has been the best tool out there for quite a few years for building your own apps and collecting data with mobile forms (we were doing low-code before it was cool). Our product focus for a long time was on making it as simple and fast as possible to go from an idea to a working data collection process. For any sort of work you would've previously done with pen and paper, or a spreadsheet on a tablet, you can rapidly build and deploy a Fulcrum app to your team for things like inspections, audits, and inventory applications.
For the last 8 months or so we've been focused on improving what you can do with data after collection. We're great at speed to build and collect, but hadn't yet focused on the rest of a customer's workflow. Since the beginning we've had an open API (even for SQL, what we call the Query API), code libraries, and other tools. In July we launched our Report Builder, which was a big step in the direction of self-service reporting and process improvement tools.
This week we've just launched Workflows, which is all about providing users an extensible framework for adding their own business logic: the events and actions that need to happen on your data.
If you're familiar with tools like Zapier or Integromat, you'll recognize the concept. Workflows is similar in design, but focused on events within the scope of Fulcrum. Here's how it works:
A workflow listens for an event (a "trigger") based on data coming through the system. Currently you can trigger on new data created or updated.
When a trigger happens, the relevant record data gets passed to the next step, a "filter", where you can set criteria to funnel it through. Like cases where I want to trigger on new data, but only where "Status" = Critical.
Any record making it through is passed to an "action", and at launch we have actions to:
Send an email
Send an SMS message
Send a webhook (man is this one powerful)
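To make the webhook action more concrete, here's a minimal sketch of what a receiving endpoint could look like. The payload fields used below (record_id, status) are assumptions for illustration only, not the actual shape Fulcrum sends, so treat it as a pattern rather than a reference.

```typescript
// Minimal webhook receiver sketch (TypeScript / Node).
// The payload shape below is an assumption for illustration; check the
// Fulcrum webhook documentation for the actual structure your account sends.
import { createServer } from "node:http";

interface WorkflowPayload {
  record_id?: string;                    // hypothetical identifier field
  status?: string;                       // e.g. "Critical"
  form_values?: Record<string, unknown>; // hypothetical field map
}

const server = createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    try {
      const payload = JSON.parse(body) as WorkflowPayload;
      // Downstream business logic goes here: open a ticket, notify a crew, etc.
      console.log(`Received record ${payload.record_id ?? "unknown"} (status: ${payload.status ?? "n/a"})`);
      res.writeHead(204).end();
    } catch {
      res.writeHead(400).end("invalid JSON");
    }
  });
});

server.listen(8080, () => console.log("Listening for workflow webhooks on :8080"));
```

The point is that the webhook action hands your record data off to whatever system you already run, which is what makes it the most open-ended of the three.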
We're excited to see what users build with this initial set of options. There are plans in the works for a lot of interesting things like custom SMTP (for high-volume email needs), geofencing, push notifications, and much more.
This is just the beginning of what will become a pillar product. Our Workflow engine will continue to evolve with new actions, filters, and triggers over time as we extend it to be more flexible for designing your business data decision steps and data flows.
After about 6-8 months of forging, shaping, research, design, and engineering, we've launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports generated from inspections.
For years we've offered a basic report template along with the ability to customize reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.
With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters about the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.
Under the hood, we overhauled the generator engine using a library called Puppeteer, a headless Chrome Node.js API for doing many things, including converting web pages to documents or screenshots. It's lightning fast and allows for a live preview of your reports as you're working on your template customization.
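For a rough sense of the pattern (not our actual Report Builder code), here's a small sketch of Puppeteer rendering an HTML template into a PDF; the template string and output path are placeholders.

```typescript
// Sketch: render an HTML report template to PDF with Puppeteer.
// Illustrative only; the real generator merges record data into templates upstream.
import puppeteer from "puppeteer";
import { writeFile } from "node:fs/promises";

async function renderReport(html: string, outputPath: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Load the already-rendered template into the headless browser.
    await page.setContent(html, { waitUntil: "networkidle0" });
    const pdf = await page.pdf({ format: "letter", printBackground: true });
    await writeFile(outputPath, pdf);
  } finally {
    await browser.close();
  }
}

renderReport("<h1>Inspection Report</h1><p>Example body</p>", "report.pdf")
  .then(() => console.log("wrote report.pdf"));
```

Because the renderer is just a browser, the same template that drives the live preview can drive the final document.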
Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can't wait to see all of the ways people end up using it.
We've got a lot more in store for the future. Stay tuned to see what else we add to it.
As if the COVID-19 mayhem wasn't enough, the Nashville area is dealing with a series of tornadoes that ripped through the area, with a death toll of 26 so far.
The StEER network, who have been long-time users deploying in past disasters, have been active on the ground assessing structural damage.
Today we hosted a webinar in conjunction with our friends at NetHope and Team Rubicon to give an overview of Fulcrum and what we're collectively doing in disaster relief exercises.
Both organizations deployed to support recent disaster events for Cyclone Idai and Hurricane Dorian (the Bahamas) and used Fulcrum as a critical piece of their workflow.
Always enjoyable to get to show more about what we're doing to support impactful efforts like this.
Today we announced this investment from Kayne and Kennet in Spatial Networks, to help us keep scaling Fulcrum in 2020 and beyond. This effort has been one of my main missions for the better part of 2019, so it's rewarding to get to this milestone to build from. Our new partners at Kayne and Kennet each bring unique perspectives and experience to help us move faster and expand.
Spatial Networks, the creator of Fulcrum, the leading geospatial data collection and analysis platform for field operations, today announced that it has closed an investment of $42.5 million led by Kayne Partners, the growth equity group of Kayne Anderson Capital Advisors, L.P., and Kennet Partners, Ltd. The funding will primarily be used to scale the company's sales and marketing capabilities, accelerate its product development roadmap, and further expand the Fulcrum data collection platform into international markets. The company has appointed Jim Grady CEO to oversee all aspects of the company's strategy and execution globally.
I've spent the majority of the last few months working on our expansion strategy for next year and beyond. We've got some big ideas and plans for the product that I'm excited about in 2020.
Donayle put together this summary of what we've accomplished this year through our Fulcrum Community initiative. Some great stuff here:
During Cyclones Idai and Kenneth, Team Rubicon's Medic team went to Mozambique with Fulcrum in hand and served over 1,000 injured during those cyclonic episodes, using our tools to document and communicate those injuries to the World Health Organization.
Our NetHope partners responded to the Colombia-Venezuela border crisis with internet connectivity and communications support, restoring communication for thousands of displaced families while using Fulcrum to share installation information.
Instead of fireworks, an earthquake shook Searles Valley, California on the Fourth of July, and Earthquake Engineering Research Institute (EERI) was there, conducting damage assessments and reporting dangerous conditions to first responders using Fulcrum.
This is another one from the archives, written for the Fulcrum blog back in 2016.
Engineering is the art of building things within constraints. If you have no constraints, you aren't really doing engineering. Whether it's cost, time, attention, tools, or materials, you've always got constraints to work within when building things. Here's an excerpt describing the challenge facing the engineer:
The crucial and unique task of the engineer is to identify, understand, and interpret the constraints on a design in order to produce a successful result. It is usually not enough to build a technically successful product; it must also meet further requirements.
In the development of Fulcrum, we're always working within tight boundaries. We try to balance power and flexibility with practicality and usability. Working within constraints produces a better finished product: if (by force) you can't have everything, you think harder about what your product won't do to fit within the constraints.
Microsoft Office, exemplifying 'feature creep'
The practice of balancing is also relevant to our customers. Fulcrum is used by hundreds of organizations in the context of their own business rules and processes. Instead of engineering a software product, our users are engineering a solution to their problem using the Fulcrum app builder, custom workflow rules, reporting, and analysis, all customizable to fit the goals of the business. When given a box of tools to build yourself a solution to a problem, the temptation is high to try to make it do and solve everything. But with each increase in power or complexity, the usability of your system takes a hit in the form of added burden on your end users to understand the complex system; they're there to use your tool for a task, finish the job, and go home.
This balance between power and usability is related to my last post on treating causes rather than symptoms of pain. Trying too hard to make a tool solve every potential problem in one step can (and almost always does) lead to overcomplicating the result, to the detriment of everyone.
In our case as a product development and design team, a powerful suite of options without extremely tight attention on implementation runs the risk of becoming so complex that the lion's share of users can't even figure it out. GitHub's Ben Balter recently wrote a great piece on the risks of optimizing your product for edge cases1:
No product is going to satisfy 100% of user needs, although it's sure tempting to try. If a 20%-er requests a feature that isn't going to be used by the other 80%, there's no harm in just making it a non-default option, right?
We have a motto at GitHub, part of the GitHub Zen, that "anything added dilutes everything else". In reality, there is always a non-zero cost to adding that extra option. Most immediately, it's the time you spend building feature A, instead of building feature B. A bit beyond that, it's the cognitive burden you've just added to each user's onboarding experience as they try to grok how to use the thing you've added (and if they should). In the long run, it's much more than maintenance. Complexity begets complexity, meaning each edge case you account for today, creates many more edge cases down the line.
This is relevant to anyone building something to solve a problem, not just software products. Put this in the context of a Fulcrum data collection workflow. The steps might look something like this:
Analyze your requirements to figure out what data is required at what stage in the process.
Build an app in Fulcrum around those needs.
Deploy to field teams.
Collect data.
Run reports or analysis.
What we notice a surprising amount of the time is an enormous investment in step 2, sometimes to the exclusion of much effort on the other stages of the workflow. With each added field on a survey, requirement for data entry, or overly specific validation, you add potential hang-ups for the end users responsible for actually collecting data. With each new requirement, usability suffers. People do this for good reason: they're trying to accommodate those edge cases, the occasions where you do need to collect this one additional piece of info, or validate something against a specific requirement. Do this enough times, however, and your implementation is all about addressing the edge problems, not the core problem.
When you're building a tool to solve a problem, think about how you may be impacting the core solution when you add knobs and settings for the edge cases. Best-fit solutions require testing your product against the complete, ideal life cycle of usage. Start with something simple and gradually add complexity as needed, rather than the reverse.
Ben's blog is an excellent read if you're into software and its relationship to government and enterprise.
We just wrapped up our Fall "all hands" week at the office. Another good week to see everyone from out of town, and an uncommonly productive one at that. We got a good amount of planning discussion done for future product roadmap additions, did some testing on new stuff in the lab, fixed some bugs, shared some knowledge, and ate (a lot).
I wrote this wrap-up summary of the hands-on workshop we did at the NetHope Summit in San Juan. It was a great joint session with Mikel from Mapbox and John from NetHope. I'd love to do more of these in the future. Hands-on sessions where we can get outside and see our stuff in action always teach you a lot about how your UX works in practice.
I got to see more of what Kepler can do, too: the open source GIS toolkit built by the Uber team. Pretty slick stuff.
We're in San Juan this week for the NetHope Global Summit. Through our partnership with NetHope, a non-profit devoted to bringing technology to disaster relief and humanitarian projects, we're hosting a hands-on workshop on Fulcrum on Thursday.
We've already connected with several of the other tech companies in NetHope's network (Okta, Box, Twilio, and others), leading to some interesting conversations on working together more closely on integrated deployments for humanitarian work.
Fortin San Geronimo de Boqueron
Looking forward to an exciting week, and maybe some exploring of Old San Juan. Took a walk last night out to dinner along the north shore overlooking the Atlantic.
Bryan wrote this up about the latest major release of Fulcrum, which added Views to the Editor tool. This is a cool feature that allows users doing QA and data analysis to save sets of columns and filters, akin to how views work in databases like PostgreSQL. We have some plans next to let users share or publish Views, and also to expose them via our Query API, with the underlying data functioning just like a database view does.
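As a sketch of how a view-like query could eventually be consumed programmatically, here's what a SQL call against the Query API might look like. The endpoint path, auth header, and table name below are assumptions for illustration, so check the current API documentation rather than treating this as a reference.

```typescript
// Sketch: querying Fulcrum data with SQL, the way a saved View's underlying
// data could be consumed. URL, header name, and table name are assumptions.
const API_TOKEN = process.env.FULCRUM_TOKEN ?? "";

async function statusSummary(): Promise<unknown> {
  const sql = `SELECT status, COUNT(*) AS total FROM "Field Inspections" GROUP BY status`;
  const url =
    "https://api.fulcrumapp.com/api/v2/query?" +
    new URLSearchParams({ q: sql, format: "json" });

  const res = await fetch(url, { headers: { "X-ApiToken": API_TOKEN } });
  if (!res.ok) throw new Error(`Query failed: ${res.status}`);
  return res.json();
}

statusSummary().then((rows) => console.log(rows));
```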
This'll be a foundational feature for a lot of upcoming neat stuff.
This is a post from the Fulcrum archives I wrote 3 years back. I like this idea, and there's more to be written on the topic of how companies treat their archives of data. Especially in data-centric companies like those we work with, it's remarkable to see how quickly data is often thrown on a shelf, atrophies, and is never used again.
In the days of pen and paper collection, data was something to collect, transcribe, and stuff into a file cabinet to be stored for a minimum of 5 years (lest those auditors come knocking). With advances in digital data capture, through all methods including forms software, spreadsheets, or sensors, many organizations aren't rethinking their processes and thus haven't come much further. The only difference is that the file cabinet's been replaced with an Access database (or gasp a 10-year-old spreadsheet!).
Many organizations collect troves of legacy data in their operations, or at least as much as they can justify the cost of collecting. But because data management is a complicated domain in and of itself, oftentimes the same data is re-collected over and over, with all cost and no benefit. Once data makes its way into corporate systems somewhere after its initial use, it's forgotten and left on the virtual shelf.
Data is your company's memory. It's the living, institutional knowledge you've invested in over years or decades of doing business, full of latent value.
But there are a number of challenges that stand in the way when trying to make use of historical data:
Compatibility: File formats and versions. Can I read my old data with current tools?
Access: Data silos and where your data is published. Can my staff get to archives they need access to without heartburn?
Identification: A process for knowing what pieces are valuable down the road. Within these gigabytes of data, what is useful?
If you give consideration to these issues up front as you're designing a data collection workflow, you'll make your life much simpler down the road when your future colleagues are trying to leverage historical data assets.
Let's dive deeper on each of these issues.
Formats and Compatibility
I call this the "Lotus 1-2-3" problem, which happens whenever data is stored in a format that dies off and loses tool compatibility1. Imagine the staggering amount of historical corporate data locked up in formats that no one can open anymore. This is one area where paper can be an advantage: if stored properly, you can always open the file.
Of course there's no way to know the future potential of a data format on the day you select it as your format of choice. We don't have the luxury of that kind of hindsight. I'm sure no one would've selected Lotus's .123 format back in '93 had they known that Excel would come to dominate the world of spreadsheets. Look for well-supported open standards like CSV or JSON for long-term archival. Another good practice is to revisit your data archives as a general "hygiene" practice every few years. Are your old files still usable? The faster you can convert dead formats into something more future-proof, the better.
Accessibility
This is one of the most important issues when it comes to using archives of historical data. Presuming a user can open files of 10-year-old data because you've stored it effectively in open formats, is the data somewhere that staff can get to it? Is it published somewhere in a shared workspace for easy access? Most often data isn't squirreled away in a hard-to-reach place intentionally. It's often done for the sake of organization, cleanliness, or savings on storage.
Anyone that works frequently with data has heard of "data silos", which arise when data is holed up in a place where it doesn't get shared, only accessible by individual departments or groups. Avoiding this issue can also involve internal corporate policy shifts or revisiting your data security policies. In larger organizations I've worked in, however, the tendency is toward over-securing data to the point of uselessness. In some cases it might as well be deleted since it's effectively invisible to the entire company. This is a mistake and a waste of large past investments in collecting that data in the first place.
Look for publishing tools that make your data easy to get to without sacrificing controls over access and security. But resist the urge to continuously wall off past data from your team.
Identifying the Useful Things
Now, assuming your data is in a useful format and it's easily accessible, you're almost there. When working with years of historical records it can be difficult to extract the valuable bits of information, but that's often because the first two challenges (compatibility and accessibility) have already been standing in your way. If your data collection process is built around your data as an evergreen asset rather than a single-purpose resource, it becomes much easier to think of areas where a dataset could be useful 5 or 6 years down the road.
For instance, if your data collection process includes documenting inspections with thorough before-and-after photographs, those could be indispensable in the event of a dispute or a future issue in years' time. With ease of access and an open format, it could take two clicks to resolve a potentially thorny issue with a past client. That is, if you've planned your process around your data becoming a valuable corporate resource.
A quick story to demonstrate these practices:
I'm currently working with a construction company on re-roofing my house, and they've been in business for 50+ years. Over that time span, they've performed site visits and accurately measured so many roofs in the area that when they get calls for quotes, they can often pull a file from 35 years ago when they went out and measured a property. That simple case is an excellent example of realizing latent value in a prior investment in data: if they didn't organize, archive, and store that information effectively, they'd be redoing field visits every week. Though they aren't digital with most of their process, they've nailed a workflow that works for them. They use formats that work, make that data accessible to their people, and know exactly what information they'll find useful over the long term.
Data has value beyond its immediate use case, but you have to consider this up front. Design sustainable workflows that allow you to continuously update data and make use of archival data over time. You've spent a lot to create it; you should be leveraging it to its fullest extent.
Lotus 1-2-3 was a spreadsheet application popular in the 80s and 90s. It succumbed to the boom of Microsoft Office and Excel in the 1990s.
This is a cool post on a study done by a research team in the City of Saskatoon, looking at the perceptions of safety in a downtown area. They used Fulcrum to collect survey data using a safety audit developed to capture the on-the-ground intelligence from residents:
Because we were interested in perceptions and fear at a very micro-level, the study area was confined to the blocks and laneways within a four block area. We used our new app to collect information from 108 micro-spatial locations within a radius of 30 meters (100 feet) of each location, and then we also collected 596 additional intercept surveys with members of the public on the street at the time.
The urban design strategy known as Crime Prevention Through Environmental Design (CPTED) is about creating safer neighborhoods through specifically constructing the built environment. From their takeaways in the study:
Interestingly, the respondents' night-time perceptions did not appear as negative as we expected. Some parts were so inactive at night that we obtained very few interview responses. While CPTED surveys conducted by one team concluded these underactive areas were anxiety-provoking, when late-night social events and festivals activated the area, it positively influenced the perceptions in our surveys with the public.
Through Fulcrum Community, we've been working with the team from NetHope to support their needs in responding to disasters around the world. In their work, they help first responders in humanitarian crises with connectivity and communications when it's knocked out: cellular coverage, phone communications, and internet access.
This week they're hosting an event in the hills of central California, mocking up a disaster scenario to experiment with how relief organizations can embrace technology and collaborate with one another.
The DRT event is conducted over a five-day period with trainers from CiscoTacOps, emergency.lu, Ericsson Response, Facebook, and Redline Communications. The first three days of each module focus on theory and practical hands-on training on NH deployed network, P2P, power, VSAT, mobile SatCom and TVWS solutions.
The participants from Google, Facebook, AWS, and Team Rubicon all go through the classroom training and are then deployed to operate in a 48-hour emergency Simulated Exercise (SIMEX). Participants employ NetHope's mobilization procedures as in a "real-life" emergency to determine how well they can apply the recently learned technical skills in the field while submerged in austere living conditions.
Joe is out there from our team to give the rundown on how Fulcrum can be deployed in disaster environments, as we've helped with dozens of times around the world. It's cool to see this engagement with our tech for such positive work.
This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I'd resurface it here since I've been thinking more about continual evolution of our product process. I liked it back when I wrote it; still very relevant and true. It's good to look back in time to get a sense for my thought process from a couple years ago.
In the software business, a lot of attention gets paid to "shipping" as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it's best used as a general rule rather than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD: they're all excellent practices to keep teams focused on results. After all, the longer you're polishing your work and not putting it in the hands of users, the less you know about how they'll be using it once you ship it!
These systems, followed as gospel (particularly with larger projects or products), can lead to attention on the how rather than the what: thinking about the process as shipping "lines of code" or what text editor you're using rather than useful results for users. Loops of user feedback are essential to building the right solution for the problem you're addressing with your product.
Thinking more deeply about how to ship something rapidly while also ensuring it aligns with product goals brings to mind a few questions to reflect on:
What are you shipping?
Is what you're shipping actually useful to your user?
How does the structure of your team impact your resulting product?
How can a team iterate and ship fast, while also delivering the product they're promising to customers, that solves the expressed problem?
Defining product goals
In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the "jobs" it does for the consumer helps clarify the problems you should be focused on solving.
Most of us that create products have an idea of what we're trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we're trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what's being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won't guarantee that your end result solves a problem2. Your product could become an immaculately built ship that's sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management's excellent interview with its co-creator, Karen Dillon.
Understanding users
On a similar thread as jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.
This quote from the article gets to the heart of why it matters to understand with empathy what a user is trying to accomplish; it's not always about our engineering-minded technical features or bells and whistles:
Jobs are never simply about function; they have powerful social and emotional dimensions.
The only way to unroll what's driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as "understanding pain". People "hire" a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.
On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn't consult patients and make assumptions about the underlying causes of a patient's symptoms! They'd be in for some serious legal trouble.
Organize the team to reflect goals
One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:
"Don't ship the org chart."
The salient point being that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn't care at all about the dividing lines within your company, only the resulting solutions you deliver.
A corollary to this idea is that over time companies naturally begin to look like their customers. It's clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.
With our product, we see three primary points to make sure our product fits the set of problems we're solving for customers:
For some, a toolbox: for small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
For others, a total solution: for large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer's management team to administer. Our team of in-house experts consults with the customer for training and onboarding, and the customer ends up with a full solution and the toolbox.
Integrations as the "glue": customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But always the final outcome is a unique capability that can't be had another way without building their own software from scratch.
Though we're still a small team, we've tried to build up the functional areas around these objectives. As we advance the product and grow the team, it's important to keep this in mind so that we're still able to match our solution to customer problems.
For more on this topic, Sinofsky's post on "Functional vs. Unit Organizations" analyzes the pros, cons, and trade-offs of different org structures and their impacts on product. A great read.
Continued reflection, onward and upward
In order to stay ahead of the curve and Always Be Shipping (the Right Product), it's important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence how we can make those improvements. With this sort of continuous reflection on the process, hopefully we'll keep shipping the Right Product to our users.
Not to discount the value of team planning. It's a crucial component of efficiency. My point is the clean Gantt chart on its own isn't solving a customer problem!
Of course this problem is only minor in small companies. It's of much greater concern to the Amazons and Microsofts of the world.
This is the kind of stuff that gets you out of bed in the morning and really gets the motivation up to do things like Fulcrum Community to support disaster relief efforts.
When Cyclones Idai and Kenneth steamrolled into East Africa beginning in March, the crew from Team Rubicon was deployed to help with EMT response and recovery in Beira and Matarara, Mozambique. They used Fulcrum to record patient data after prior experience with another partner of ours, NetHope:
Earlier in 2019, Team Rubicon deployed with NetHope to install wireless access points on the Colombia/Venezuela border to provide Venezuelan refugees with access to news information and offers of assistance. NetHope utilized Fulcrum to track access-point install requests as well as record successful installs. Team Rubicon received training on Fulcrum during the deployment and grew to like the offline form entry capability, ability to change forms on the fly, and manipulate and analyze the collected information via the console. When faced with the need to collect medical information in an internet-disconnected environment and transmit reports later when within range of internet, Team Rubicon reached out to Fulcrum for support. Fulcrum provided a community disaster grant to facilitate Team Rubiconâs Cyclone Idai response, including developer assistance to rapidly publish medical forms.
Fulcrum did exactly what we designed it for, providing a robust data platform in an austere, low-comms environment easy for field EMTs to use:
Fulcrum allowed Team Rubicon to quickly enter patient data as well as the multi-sector needs assessment data in a completely disconnected environment. And since the Fulcrum app works on a cellular phone, there was no longer a need to pack bulky and heavy paper forms. Once back at base camp, and with access to internet, Team Rubicon could transmit the daily collected information to its National Operations Center so they could generate the EMT/CC daily reports in the format the EMT/CC preferred.
We're already looking at other ways we can help the amazing folks from Team Rubicon on future missions with data, reporting, and impact analysis.
I use Fulcrum all the time for collecting data around hobbies of mine. Sometimes it's for fun or interests, sometimes for mapping side projects, or even just for testing the product as we develop new features.
Here are a few of my key everyday apps I use for personal tracking. I'm always tinkering around with other things as we expand the product, but each of these I've been using for years pretty consistently.
Gas Mileage
Of course there are apps out there devoted to this task, but I like the idea of having my own raw data input for this. Piping this to a spreadsheet lets me run some calculations on it to see MPG, total spend, and total miles driven over time.
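For illustration, here's the kind of math that spreadsheet does, sketched in code with made-up field names mirroring what a fill-up record might capture.

```typescript
// Sketch: mileage math over raw fill-up records. Field names are made up
// for illustration; they just mirror what my Fulcrum app might capture.
interface FillUp {
  odometer: number; // odometer reading at fill-up, in miles
  gallons: number;  // gallons purchased
  total: number;    // dollars spent
}

function summarize(fillUps: FillUp[]) {
  const sorted = [...fillUps].sort((a, b) => a.odometer - b.odometer);
  const miles = sorted[sorted.length - 1].odometer - sorted[0].odometer;
  // The first fill-up primes the tank before the tracked miles begin,
  // so only count gallons from subsequent fill-ups toward MPG.
  const gallonsUsed = sorted.slice(1).reduce((sum, f) => sum + f.gallons, 0);
  const spend = sorted.reduce((sum, f) => sum + f.total, 0);
  return { miles, mpg: miles / gallonsUsed, spend };
}

console.log(summarize([
  { odometer: 50_000, gallons: 11.2, total: 38.5 },
  { odometer: 50_340, gallons: 10.8, total: 36.9 },
  { odometer: 50_690, gallons: 11.5, total: 40.1 },
]));
```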
Maps Collection
I'm a collector of paper maps, and some time back I built out a tracker in Fulcrum to inventory my collection. One day I plan to add some other details to this for year, publisher, and the like, but it works for now as a basic inventory of what I've got.
Workouts
I've been lax this year with the routine, but I'd built out a log for tracking my workout sessions at the gym, mostly to track doing the "Runner 360" workout. It works great and provides a way to build some charts with progress on efforts over time.
Home Inventory
In order to have a reliable log of all of the expensive stuff in my house, I created this so that there's some prayer of having a tight evidence log of what I own if there's ever a flood, hurricane, or fire (or even theft) that requires a homeowners insurance claim. I figured it can't hurt to have photographic evidence of what's in the house if it came to needing to prove it.
Football Clubs
This one is more of an experiment in using Fulcrum (and its API) as a cloud-based PostGIS database. I created a simple schema for each team, league, and stadium location. I had this idea to use these coordinates for generating a poster of stadiums from satellite images. One day I might have time for that, but there's also an open database you can download of all the locations as GeoJSON.
There are a few others I've got in "R&D" mode right now, testing them out. Always on the hunt for new and interesting things I can make Fulcrum do. It's a true power tool for data entry and data management.
I tried this out the other night on a run. The technique makes some intuitive sense that it'd reduce impact (or level it out side to side, anyway). Surely to notice any result you'd have to do it consistently over distance. But I've had some right knee soreness that I don't totally know the origin of, so I thought I'd start trying this out. I found it takes a lot of concentration to keep it up consistently. I'll keep testing it out.
A neat historical, geographical story from BLDGBLOG:
Briefly, anyone interested in liminal landscapes should find Snell's description of the Drowned Lands, prior to their drainage, fascinating. The Wallkill itself had no real path or bed, Snell explains, the meadows it flowed through were naturally dammed at one end by glacial boulders from the Ice Age, the whole place was clogged with "rank vegetation," malarial pestilence, and tens of thousands of eels, and, what's more, during flood season "the entire valley from Denton to Hamburg became a lake from eight to twenty feet deep."
Turns out there was local disagreement on flood control:
A half-century of "war" broke out among local supporters of the dams and their foes: "The dam-builders were called the 'beavers'; the dam destroyers were known as 'muskrats.' The muskrat and beaver war was carried on for years," with skirmishes always breaking out over new attempts to dam the floods.
Here's one example, like a scene written by Victor Hugo transplanted to New York State: "A hundred farmers, on the 20th of August, 1869, marched upon the dam to destroy it. A large force of armed men guarded the dam. The farmers routed them and began the work of destruction. The 'beavers' then had recourse to the law; warrants were issued for the arrest of the farmers. A number of their leaders were arrested, but not before the offending dam had been demolished. The owner of the dam began to rebuild it; the farmers applied for an injunction. Judge Barnard granted it, and cited the owner of the dam to appear and show cause why the injunction should not be made perpetual. Pending a final hearing, high water came and carried away all vestige of the dam."
This is something we launched a few months back. There's nothing terribly exciting about building SSO features in a SaaS product; it's table stakes to move up in the world with customers. But for me personally it's a signal of success. Back in 2011, imagining that we'd ever have customers large enough to need SAML seemed so far in the future. Now we're there and rolling it out for enterprise customers.
Our friend and colleague Kurt Menke of Bird's Eye View GIS recently conducted a workshop in Hawaii working with folks from the Pacific Islands (Samoa, Marianas, Palau, and others) to teach Fulcrum data collection and QGIS for mapping. Seeing our tech have these kinds of impacts is always enjoyable to read about:
The week was a reminder of how those of us working with technology day-to-day sometimes take it for granted. Everyone was super excited to have this training. It was also a lesson in how resource rich we are on the continent. One of my goals with Bird's Eye View is to use technology to help make the world a better place. (Thus my focus on conservation, public health and education.) One of the goals of the Community Health Maps program is to empower people with technology. This week fulfilled both and was very gratifying.
Most of the trainees had little to no GIS training yet instantly knew how mapping could apply to their work and lives. They want to map everything related to hurricane relief, salt water resistant taro farms, infrastructure related to mosquito outbreaks etc. A benefit of having the community do this is that they can be in charge of their own data and it helps build community relationships.
This post is part 3 in a series about my history in product development. Check out the intro in part 1 and all about our first product, Geodexy, in part 2.
Back in 2010 we decided to halt our development of Geodexy and regroup to focus on a narrower segment of the marketplace. With what we'd learned in our go-to-market attempt on Geodexy, we wanted to isolate a specific industry we could focus our technology around. Our tech platform was strong, we were confident in that. But at the peak of our efforts with taking Geodexy to market, we were never able to reach a state of maturity to create traction and growth in any of the markets we were targeting. Actually, targeting is the wrong word; truthfully that was the issue: we weren't "targeting" anything because we had too many targets to shoot at.
We needed to take our learnings, regroup on what was working and what wasnât, and create a single focal point we could center all of our effort around, not just the core technology, but also our go-to-market approach, marketing strategy, sales, and customer development.
I don't remember the specific genesis of the idea (I think it was part internal idea generation, part serendipity), but we connected on the notion of field data collection for the property inspection market. So we launched allinspections.
That industry had the hallmarks of one ripe for us to show up with disruptive technology:
Low current investment in technology: most folks were doing things on paper with lots of transcribing and printing.
Lots of regulatory basis in the workflow: many inspections are done as a requirement by a regulatory body. This meant consistent, widespread needs that crossed geographic boundaries, and an "always-on" use case for a technology solution.
Phased workflow with repetitive process and "decision tree" problems: a perfect candidate for digitizing the process.
Very few incumbent technologies to replace: if there were competitors at all, they were Excel and Acrobat.
Smartphones ready to amplify a mobile-heavy workflow: inspections of all sorts happen in-situ somewhere in the field.
While the market for facility and property inspections is immense, we opted to start on the retail end of the space: home inspections for residential real estate. There was a lot to like about this strategy for a technology company looking to build something new. We could identify individual early adopters, gradually understand what made their business tick, and index on capability that empowered them. There was no need immediately to worry about selling to massive enterprise organizations, which would've put a heavy burden on us to build "box-checking" features like hosting customization, access controls, single sign-on, and the like. We used a freemium model which helped attract early usage, then shifted to a free trial one later on after some early traction.
Overall, the biggest driver that attracted us to residential was the consistency of the work. While anyone who's bought property is familiar with the process of getting a house inspected before closing, that sort of inspection is low volume compared to those associated with insurance underwriting. Our first mission was this: to build the industry-standard tool for performing these regulated inspections in Florida (wind mitigation, 4-point, and roof certification). These were (and still are) done by the thousands every day. They were perfect candidates for us for the reasons listed above: simple, standard, ubiquitous, and required1. There was a built-in market for automating the workflow around them and improving the data collected, which we could use as a beachhead to get folks used to using an app to conduct their inspections.
Our hypothesis was that we could apply the technology for mobile data collection we'd built in Geodexy and "verticalize" it around the specialty of property inspection, with features oriented around that problem set. Once we could spin up enough technology adoption for home inspection use cases at the individual level, we could then bridge into the franchise operations and institutions (even the insurance companies themselves) to standardize on allinspections for all of their work.
We had good traction in the early days with inspectors. It didn't take us long before we connected with a half-dozen tech-savvy inspectors in the area to work with as guinea pigs to help us advance the technology. Using their domain expertise in exchange for usage of the product, we were able to fast-forward on our understanding of the inspection workflow: from original request handling and scheduling, to inspecting on-site, then report delivery to the customer. Within a year we had a pretty slick solution and 100 or so customers that swore by the tool for getting their work done.
But it didn't take us long to run into friction. Once we'd exhausted the low-hanging fruit of the early adopter community, it became harder and harder to find more of the tech-savvy crowd willing to splash some money on something new and different. As you might expect, the community of inspectors we were targeting were not technologists. Many of these folks were perfectly content with their paperwork process and enjoyed working solo. Many had no interest in building a true business around their operation, not interested in growing into a company with multiple inspectors covering wider geographies. Others were general contractors doing inspections as a side gig, so it wasn't even their core day-to-day job. With that kind of fragmentation, it was difficult to reach the economies of scale we were looking for to be able to sell something at the price point where we needed to be. We had some modest success pursuing the larger nationwide franchise organizations, but our sales and onboarding strategy wasn't conducive to getting those deals beyond the small pilot stage. It was still too early for that. We wanted to get to B2B customer sizes and margins, but were ultimately still selling a B2C application. Yes, a home inspector has a business that we were selling to, but the fundamentals of the relationship share far more in common with a consumer product relationship than a corporate one.
By early 2012 we'd stalled out on growth at the individual level. A couple of opportunities to partner with inspection companies on a comprehensive solution for carriers failed, partially for technical reasons, but also immaturity of our existing market. We didn't have a reference base sizable enough to jump all the way up to selling 10,000 seats without enormous burden and too much overpromising on what we could do.
We shut down operations on allinspections in early 2012. We had suspected this would have to happen for a while, so it wasn't a sudden decision. But it always hurts to have to walk away from something you poured so much time and energy into.
I think the biggest takeaway for me at the time, and in the early couple years of success on Fulcrum, was how relatively little the specifics of your technology matter if you mess up the product-market fit and go-to-market steps in the process. The silver lining in the whole affair was (like many things in product companies) that there was plenty to salvage and carry on to our next effort. We learned an enormous amount about what goes into building a SaaS offering and marketing it to customers. Coming from Geodexy, where we never even reached the stage of having a real "customer success" process to deal with, allinspections gave us a jolt in appreciation for things like identifying the "aha moment" in the product, increasing usage of a product, tracking usage of features to diagnose engagement gaps, and ultimately, getting on the same page as the customer when it comes to the final deliverable. It takes working with customers and learning the deep corners of the workflow to identify where the pressure points are in the value chain, the things that keep the customer up at night when they don't have a solution.
And naturally there was plenty of technology to bring forward with us to our next adventure. The launch of Fulcrum actually pre-dates the end of allinspections, which tells you something about how we were thinking at the time. We weren't thinking of Fulcrum as the "next evolution" of allinspections necessarily, but we were thinking about going bigger while fixing some of the mistakes made a year or two prior. While most of Fulcrum was built ground-up, we brought some code, but a whole boatload of lessons learned on systems, methods, and architecture that helped us launch and grow Fulcrum as quickly as we did.
Retrospectives like this help me think back on past decisions and process some of what we did right and wrong with some separation. That separation can be a blessing in being able to remove personal emotion or opinion from what happened and look at it objectively, so it can serve as a valuable learning experience. Sometime down the road I'll write about this next evolution that led to where we are today.
Since the mid-2000s, all three of these inspection types have been required for insurance policies in Florida.
I thought this was a great post on how unnecessary "real-time" analytics can be when misused. As the author points out, it's almost never necessary to have data that current. With current software it's possible to have infinite analytics on everything, and as a result it's irresistible for many people to think of those metrics as essential for decision making.
This line of thinking is a trap. It's important to divorce the concepts of operational metrics and product analytics. Confusing how we do things with how we decide which things to do is a fatal mistake.
This is a great step-by-step guide to how to georeference data. I spent time years ago figuring this out but still never was able to do it very well. This guide is all you need to be able to georeference old maps.
We rebuilt the code editing environment in the Fulcrum App Designer, which is part of both the Data Events and Calculation Expression editing views. The team (led by Emily) did some great work on this using TypeScript and Microsoft's Monaco project, with IntelliSense code completion. It's a great addition for our many power users to write better automations on top of Fulcrum.
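For anyone curious what the building block looks like, here's a bare-bones sketch of embedding Monaco with JavaScript completions. It's illustrative only, not our actual App Designer integration (a real setup also needs bundler and web worker configuration), and the container id is made up.

```typescript
// Sketch: embedding the Monaco editor with JavaScript language support,
// roughly the building block behind a code-editing pane like this.
import * as monaco from "monaco-editor";

const container = document.getElementById("expression-editor"); // hypothetical element
if (container) {
  const editor = monaco.editor.create(container, {
    value: "// calculation expression goes here\n",
    language: "javascript", // completions come from the built-in JS language service
    minimap: { enabled: false },
    automaticLayout: true,
  });

  editor.onDidChangeModelContent(() => {
    // Persist or validate the expression as the user types.
    console.log(editor.getValue());
  });
}
```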
We've been supporting the Santa Barbara County Sheriff through Fulcrum Community this year for evacuation reporting during emergency preparation and response. It feels great to have technology that can have real-world, immediate impact like this. The gist of their workflow (right now) is using the app to log where evacuation orders were posted, where they haven't notified yet, and tracking that with the slim resources available even in a time of need. Centralizing the reporting has made a big difference:
All of this information is uploaded in real time and is accessible to incident commanders who can follow the progress as an evacuation order is implemented.
"It's really sped up the process, and given us more accurate information," said Nelson Trichler, an incident commander for the sheriff's Search and Rescue Team. "It's a tool we can go back to statistically to see who is responding to these evacuations."
The NSF StEER program has been using Fulcrum Community for a couple of years now, ever since Hurricane Harvey landed on the Texas coast, followed by Irma and Maria later that fall. They've built a neat program on top of our platform that lets them respond quickly with volunteers on the ground conducting structure assessments post-disaster:
The large, geographically distributed effort required the development of unified data standards and digital workflows to enable the swift collection and curation of perishable data in DesignSafe. Auburn's David Roueche, the team's Data Standards Lead, was especially enthusiastic about the team's customized Fulcrum mobile smartphone applications to support standardized assessments of continental U.S. and Caribbean construction typologies, as well as observations of hazard intensity and geotechnical impacts.
It worked so well that the team transitioned their efforts into a pro-bono Fulcrum Community site that supports crowdsourced damage assessments from the public at large with web-based geospatial visualization in real time. This feature enabled coordination with teams from NIST, FEMA, and ASCE/SEI. Dedicated data librarians at each regional node executed a rigorous QA/QC process on the backside of the Fulcrum database, led by Roueche.
Ever since my health issues in 2017, the value of the little things has become much more apparent. I came out of that with a renewed interest in investing in mental and physical health for the future. Reading about, thinking about, and practicing meditation have really helped to put the things that matter in perspective when I consider consciously how I spend my time. This piece is a simple reminder of the comparative value of the "long game".
In this piece analyst Horace Dediu calls AirPods Apple's "new iPod", drawing similarities to the cultural adoption patterns.
The Apple Watch is now bigger than the iPod ever was. As the most popular watch of all time, it's clear that the watch is a new market success story. However it isn't a cultural success. It has the ability to signal its presence and to give the wearer a degree of individuality through material and band choice but it is too discreet. It conforms to norms of watch wearing and it is too easy to miss under a sleeve or in a pocket.
Not so for AirPods. These things look extremely different. Always white, always in view, pointed and sharp. You can't miss someone wearing AirPods. They practically scream their presence.
I still maintain this is their best product in years. I hope it becomes a new platform for voice interfaces, once they're reliable enough.
We just finished up a several-months-long effort updating the design and branding of Fulcrum, from the logo to typefaces to web design and all. As happens with these things, it took longer than we wanted it to when we started, but I'm very pleased with the results.
Tim's post here covers the background and approach we took to doing this refresh:
Sometimes it seems companies change their logos like people change their socks. Maybe they got a new marketing director who wanted to shake things up or a designer came up with something cool while experimenting after hours. We, on the other hand, have never changed our logo. The brief came down the pipeline in 2011 to create a logo for a new initiative called Fulcrum. Many pages of sketches and a few Adobe Illustrator iterations later, the only logo Fulcrum would know for 8 years was born.
We don't take projects for rebranding lightly. Changing this kind of thing too often doesn't impact the bottom-line value to your users, can be a confusing moving target for brand recognition in the marketplace, and just plain takes time away from more valuable things. But in our case the need was two-fold: bring the look and feel in line with our family of other brands, and clean it all up after 8 years with our old look.
I started with the first post in this series back in January, describing my own entrance into product development and management.
When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack, everything was completely different. A few years back, Tony wrote up a retrospective post detailing out the history of what led us down the path we took, and how Geodexy came to be:
After this experience, I realized there was a niche to carve out for Spatial Networks, but I'd need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.
Tony's post was an excellent look back at the business origin of the product, the "why" behind what we decided to build. What I wanted to cover here was more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course without real market feedback (as in, will someone actually buy what you've built, versus say they would buy it one day), it's hard to truly gauge the success potential.
Back then, some of the modern web frameworks in use today, like Rails and its peers, were around, but they were few and not yet mature. It's astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and back then that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation a long time ago. We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked native integration with richer data sources, primarily photos and GPS. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn't make for a smooth user experience, and it doesn't prevent problems. We still needed to come up with a better way to make it happen, for ourselves, and absolutely before we went to market touting the workflow advantages to other customers.
In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and able to short-circuit a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which looking back from 10 years on sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.
What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. That was still the early days of the App Store, and I don't recall how mature the Android Market (predecessor to Google Play) was back then, but we didn't have the resources to start off with 2 mobile clients anyway. We actually had a functioning BlackBerry app first, which tells you how different the mobile platform landscape looked a decade ago2.
Geodexy's mobile app for iOS was, on the other hand, an excellent window into the potential iOS development unlocked for us as a platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked: offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.
We didn't get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that'd tie all of the technology stack we'd built to a single, albeit large, market: the property inspection business. That's what led us to launch allinspections, which I'll continue the story on later.
In an odd way, it's pleasing to think back on the challenges (or things we considered challenges) at the time and think about how they contrast with today. We focused so much attention on things that, in the long run, aren't terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that I think stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.
As painful as this sounds, we actually had a decent tool built on WM. But the usability of it was terrible, which, if you can recall the time period, was par for the course for mobile applications of all stripes. …
We've spent the last 6 months or so working with the team at the US Census Bureau on something called The Opportunity Project, a recurring initiative quarterbacked by the Census to bring together creators, government, and local communities to collaboratively build tools to tackle various large issues in the nation. Specifically we've been testing out the ability for communities in need to deploy Fulcrum Community for collecting address data. While to an outsider it may seem like address data is a "solved problem," that's far from the case in certain locales and rural communities. Also, even in urban areas, the need for an address mapping toolkit and quality data rises up when disasters strike and throw municipalities into disarray.
This post from Bryan comprehensively covers what we've been doing and what the potential impacts look like for appropriate application of technology to specific problems:
Each year, The Opportunity Project (TOP) connects government agencies, technologists, and communities to collaboratively work through 12-week technology development sprints to tackle a variety of problem statements, using open data, and create digital tools that help strengthen American economic opportunity. Technologists work with stakeholders, end users, and product advisors to build tools and apps, which are unveiled at the end of the sprint during Demo Day, at the Census headquarters.
This past year, Spatial Networks had the privilege to work on a sprint team tasked with working closely with the U.S. Census Bureau and U.S. Department of Transportation to help Tribal, State, and Local Governments with local address data collection. More specifically, our challenge was to develop resources to help stakeholders create and maintain open address point data.
Folks from FEMA, the City of New Orleans, several tribal councils, and other federal agencies are keenly interested in crowdsourcing tools like what we've been building, to apply to dozens of different problems where better data can play a role. We're proud of what we've accomplished so far and are doubling down on Community in the coming months.
The topic of "how we got to 1,000 users" is an interesting one I thought I could take a stab at…
Fulcrum's first lines of code were written in the summer of 2011. Initially we put together a basic drag-and-drop form builder interface, the simplest possible authentication system, and a simple iPhone app that let you collect records. There was no concept of multiuser membership within accounts, and we only had a free version. The idea (with little to no planning at all) was to cut loose a free app for basic data collection and see what the traction looked like. We did have in our heads the idea that when we had "Group" account capability, that would be the time to monetize. "Fulcrum Pro", as we called it then. That launched around March of 2012.
I don't recall exactly when we hit the 1,000 user mark, but from some brief investigation of the data, early 2013 seems to be where we crossed that milestone. About a year and a half from 0 to 1,000.
So what techniques did we use to get there?
At the beginning, the team working on Fulcrum was tiny: maybe 2 doing all the dev work, and 3 (including me) putting in part-time effort on all the other fronts, like customer support, product planning, design, and marketing. There wasn't much in terms of resources to go around, so we had to do the bare minimum to make something customers could self-serve on their own, something of at least basic utility.
The only driver for all of our users in those early days, probably the first entire year and a half, was inbound marketing, and really only of two types. Since each of us had a decent sized footprint in the geo Twitterverse back then, we had at least a captive audience of like-minded folks that would kick the tires, help promote, and give us feedback. I'd count that user base in the dozens, though, so not a huge contributor on its own to the first 1,000.
I would attribute reaching the first 1,000 to a hybrid of content marketing through a blog, word of mouth, and (often forgotten) an actually useful product that was filling a void left by the other more mature "competitors" in the space. With a high volume of blog posts, some passable SEO-friendly web content, and a consistent feed of useful material, we attracted early adopters in engineering firms, GIS shops, humanitarian organizations, and some electric utilities.
Fast forward to 2019 and things have changed quite a bit! Not only have we eclipsed well over 100,000 individual users, but more importantly we're approaching the 2,000 paid customer mark. Spanning anywhere from 1 to over 1,000 individual users per customer, it's safe to call it a repeatable, successful thing at this point. Back when we crossed 1,000 users, we were only hitting the very beginning of true product-market fit.
Building something that catches on and keeping after it are hard. A key learning of mine over the course of this process is to never think you've got it all figured out, that you've cracked the code. There's always more to be done to break past inflection points and reach the next level on the step function of a successful SaaS business.
We've been doing prototyping over the last 6 months using Figma, a tool for building mockups and making them interactive for testing UX designs. This post from Caleb covers some basics of how it works, with some great examples of what you can do with it.
Design is an iterative process that involves a continuous cycle of researching, designing, prototyping, and testing as well as communicating with stakeholders along the way.
I wrote up a post for the Fulcrum blog on using OpenAerialMap with Fulcrum Community. As we invest more in building out our Community platform this year, I'm excited to do more with integrating OAM into our tools.
For response deployments using Fulcrum Community, you can also add OpenAerialMap datasets to your Fulcrum account as layers to use in the Fulcrum mobile app.
Using Fulcrum's map layer feature, you can add OAM datasets using the "Tile XYZ" format. These layers also become available on the mobile app, so contributors on the ground doing damage assessment, for example, can have access to current, high-resolution data for reference. OAM also makes data available in WMTS, and integrates directly with the iD editor and JOSM OpenStreetMap editing tools to simplify tracing buildings, roads, and water bodies.
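For context on what the "Tile XYZ" format means in practice: a layer is just a URL template with {z}/{x}/{y} placeholders that the client fills in per tile. The snippet below is a minimal sketch of the standard slippy-map tile math behind those placeholders; the OAM URL shown is a made-up placeholder, not a real endpoint, so substitute the template from the actual dataset page.

```python
import math

# Standard slippy-map (XYZ) tile math: convert a lat/lon and zoom level
# into the {z}/{x}/{y} values an XYZ tile URL template expects.
def lat_lon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Hypothetical XYZ template for an OAM mosaic; illustrative only.
TEMPLATE = "https://tiles.example.org/my-oam-dataset/{z}/{x}/{y}.png"

x, y = lat_lon_to_tile(27.77, -82.64, 16)  # St. Petersburg, FL
print(TEMPLATE.format(z=16, x=x, y=y))
```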
Our friends over at the Santa Barbara County Sheriff have been using a deployment of Fulcrum Community over the last month to log and track evacuations for flooding and debris flow risk throughout the county. They've deployed over 100 volunteers so far to go door-to-door and help residents evacuate safely. In their initial pilot they visited 1,500 residents. With this platform the County can monitor progress in real-time and focus their resources on the areas that need the most attention.
"This app not only tremendously increased the accountability of our door-to-door notifications, but also gave us real-time tracking on the progress of our teams. We believe it also reduced the time it has historically taken to complete such evacuation notices."
This is exactly what we're building Community to do: to help enable groups to collaborate and share field information rapidly for coordination, publish information to the public, and gather quantities of data through citizens and volunteers they couldn't get on their own.
From Howard Butler is this amazing public dataset of LiDAR data from the USGS 3D Elevation Program. There's an interactive version here where you can browse what's available. Using this WebGL-based viewer you can even pan and zoom around in the point clouds. More info here in the open on GitHub.
Microsoft published this dataset of computer-generated building footprints, 125 million in all. Pretty incredible considering how much labor it'd take to produce with manual digitizing.
I'm headed out to San Jose, CA next week for the SaaStr Annual conference. It'll be my third in a row; definitely one of the events I most look forward to nowadays. It always brings a great combo of interesting content, energy, diverse attendees, and fun side events to enjoy.
I wrote up this preview of sessions I'm looking forward to this time around. They do a great job touching on some of the same things year over year (helpful for tracking industry trends) but also mixing in plenty of new voices each year.
The last several months I've been spending quite a bit of time working on this: our geospatial data and analytical product line called Foresight. We've been in this business in various forms dating back to 2000, using the technologies of each era, but empowered by today's technology, decision support tools, and the open source geo stack, it's evolved into something novel and unmatched for our customers.
At its core it's "data-as-a-service," designed to give customers the insights they need to do more, spend less, decide faster, and reduce their uncertainty, with a focus on international geospatial markets.
As Tony put it succinctly in his post:
The ability to know before you go or even, in some cases, eliminate the need to go at all is a unique hallmark to our Foresight products.
We're working on some example products right now that'll tell a concrete, compelling story about how Foresight works in practice. I'll be interested to share more about that down the road once we get them out there.
A frequent desire for Fulcrum customers is to maintain a local copy of the data they collect with our platform, in their database system of choice. With our export tool, it's simple to pull out extracts in formats like CSV, shapefile, SQLite, and even PostGIS or GeoPackage. What this doesn't allow, though, is an automatable way to keep a local version of data on your own server. You'd have to extract data manually on some schedule and append new stuff to existing tables you've already got.
A while back we built and released a tool called Fulcrum Desktop, with the goal of alleviating this problem. It's an open source command line utility that harnesses our API to synchronize content from your Fulcrum account into a local database. It supports PostgreSQL (with PostGIS), Microsoft SQL Server, and even GeoPackage.
Other than the primary advantage of providing a way to clone your data to your own system, one of the cool things you can do with Desktop is easily make your data available to your GIS users in a tool like QGIS. It also has a plugin architecture to support other cool things like:
If you have the Fulcrum Developer Pack with your account, you have access to all of the APIs, so you need that to get Desktop set up (though it is available on the free trial tier).
We've also built another utility called fulcrum-sync that makes it easy to set up Desktop using Docker. This is great for version management, syncing data for multiple organizations, and overall simplifying dependencies and local library management. With Docker "containerizing" the installation, you don't have to worry about conflicting libraries or fiddling with your local setup. All of the Fulcrum Desktop installation is segmented to its own container. This utility also makes it easier to install and manage Desktop plugins.
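To make the "automatable local copy" idea concrete, here's a minimal sketch of the kind of fetch-and-store loop that a tool like Desktop automates for you. The endpoint, the X-ApiToken header, the response shape, and the pagination parameters are assumptions to verify against the API docs, and the local table is deliberately simplistic; this is an illustration of the pattern, not how Desktop itself is implemented.

```python
import requests
import sqlite3

API_TOKEN = "your-api-token"   # assumed: a Fulcrum API token
FORM_ID = "your-form-id"       # assumed: the form whose records you want locally
BASE_URL = "https://api.fulcrumapp.com/api/v2/records.json"

def fetch_records(form_id: str) -> list[dict]:
    """Page through the records endpoint for one form (pagination params assumed)."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers={"X-ApiToken": API_TOKEN},
            params={"form_id": form_id, "page": page, "per_page": 1000},
        )
        resp.raise_for_status()
        batch = resp.json().get("records", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

def upsert(records: list[dict]) -> None:
    """Keep a simple local copy keyed by record id (SQLite here for brevity)."""
    db = sqlite3.connect("fulcrum_local.db")
    db.execute("CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, lat REAL, lon REAL)")
    for r in records:
        db.execute(
            "INSERT OR REPLACE INTO records VALUES (?, ?, ?)",
            (r["id"], r.get("latitude"), r.get("longitude")),
        )
    db.commit()

if __name__ == "__main__":
    upsert(fetch_records(FORM_ID))
```

The point is just that the API makes a cron-able sync like this possible; Desktop wraps the same idea up with proper schema management, media handling via plugins, and support for real database backends.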
With tools like Mapillary and OpenStreetCam, it's pretty easy now to collect street-level images with a smartphone for OpenStreetMap editing. Point of interest data is now the biggest quality gap for OSM as compared to other commercial map data providers. It's hard to compete with the multi-billion dollar investments in street mapping and the bespoke equipment of Google or Apple. There's promise for OSM to be a deep, current source of this level of detail, but it requires true mass-market crowdsourcing to get there.
The businesses behind platforms like Mapillary and OpenStreetCam aren't primarily based on improving OSM. Though Telenav does build OSC as a means to contribute, their business is in automotive mapping powered by OSM, not the collection tool itself. Mapillary, on the other hand, is a computer vision technology company. They want data, so opening the content for OSM mapping attracts contributors.
I've been collecting street-level imagery for years using windshield mounts in my car, typically for my own purposes to add detail in OSM. Since we launched our SpatialVideo feature in Fulcrum (over 4 years ago now!), I've used that for most of my data collection. While the goals of that feature in Fulcrum are wider than just vehicle-based data capture, the GPS tracking data with SpatialVideo makes it easier to scrub through spatially to find what's missing from the map. My personal workflow is usually centered on adding points of interest, but street furniture, power infrastructure, and signage are also present everywhere and typically unmapped. You can often see addresses on buildings, and I rarely find a new area where the point of interest data is already rich. There's so much to be filled in or updated.
This is a quick sample of what video looks like from my dash mount. It's fairly stable, and the mounts are low-cost. This is the SV player in the Fulcrum Editor review tool:
One of the cool things about the Fulcrum format is that it's video, so that smoothness can help make sure you've got each frame needed, particularly on high-speed thoroughfares. We built in a feature to control the frame rate and resolution of the video recording, so what I do is maximize the resolution but drop the frame rate well below 30 fps. This helps tremendously to minimize the amount of data that has to get back to the server. Even 3 or 5 fps can be plenty for mapping purposes. I usually go with 10 or so just to smooth it out a little bit; the size doesn't get too bad until you go past 15 or so.
Of course the downside is that this content isn't easily available to the public for others to map from. Not a huge deal to me, but with Fulcrum Community we're looking at some ways to open this system up for contribution, a la Mapillary or OSC.
This was a long time in the making. We've launched our latest big feature in Fulcrum: photo annotations.
This feature was an interesting thing to take on. Rather than doing it the quick and dirty way, we did it right and built a customized framework we could use across platforms. Because the primary interfaces for annotating are iOS and Android, the library is built in JavaScript and cross-compiled to each native mobile environment, which allows us to lean on a single centralized codebase to support both of our mobile platforms. We even have plans to eventually build annotation support into our web-based Editor using the same core.
This was an exciting effort to watch come together: from architecting how it'd all work, to building the core, to winnowing down the list of edge cases and quirks, and finally shipping the shiny new release.
Our entire engineering team, from the core web dev team to mobile, should be commended for the collaborative effort that brought this together. There's nothing like the feeling of shipping new features that are accretive and valuable to our platform.
Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We've come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I'd step back and recall its origins from a product management perspective.
We created Fulcrum to address a need we had in our business, and quickly realized its application to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew it wasn't a brand new invention to create a data collection platform, we knew we could bring a novel solution combining our strengths, and that existing tools on the market had fundamental gaps in areas we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn't see elsewhere:
Use a mobile-first design approach: too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
Make disconnected, offline use seamless to a mobile user: they shouldn't have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
Put location data at the core: everything geolocated. (requires #1)
Enable business analysis with spatial relationships: even though we're geographers, most people don't see the world through a geo lens, but they should. (requires #3)
Make it cloud-centric: in 2011 desktop software was well on the way out, so we wanted a platform we could host in the cloud, with APIs for everything. Creating from building-block primitives let us scale horizontally on the infrastructure.
Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we feel these gaps are worth closing for ourselves, the fusion of these ideas will create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a similar body of logistical and information management challenges we did.
Fulcrum wasn't our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we'd built a couple of applications: one that never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we'd done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we'd learned the previous years and build something new, with the foundational approach above as our guidebook.
We started building in early spring and launched in September 2011. It was free accounts only, didn't have multi-user support, and there was only a simple iOS client with no web UI for data management; suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I'd call "idea fusion": the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can't become great without doing these things in concert.
I think it's natural for creators to favor one path over the other: either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum, with little guidance from customers on what pains actually look like. The key I've learned is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging to do, and it assumes your vision is perfectly clear.
On the other hand, waiting at the beck and call of your users to dictate exactly what to build works well in the early days when you're looking for traction, but without an opinion about how the world should be, you likely won't do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they're uninventive, but because designing tools isn't their mission or expertise. They're on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphal) attributed to Henry Ford: "If I'd asked customers what they wanted, they would've asked for a faster horse." In order to invent the car, you have to envision a new product completely unlike the one your customer is asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.
We've tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers and our own vision for the future of work gets much harder. What constitutes a "big idea" gets even bigger, and the compulsion to treat near-term customer pains becomes ever more attractive (because, if you're doing things right, you have more of them, holding larger checks).
When I look back to the early 2010s at the genesis of Fulcrum, it's amazing to think about how far we've carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we're solving has matured over 7 years, but it hasn't fundamentally changed in its roots.
Using Amazon's Athena service, you can now query OpenStreetMap data interactively, right from a console. No need to use the complicated OSM API; this is pure SQL. I've taken a stab at building out a replica OSM database before and it's a beast. The dataset now clocks in at 56 GB zipped. This post from Seth Fitzsimmons gives a great overview of what you can do with it:
Working with "the planet" (as the data archives are referred to) can be unwieldy. Because it contains data spanning the entire world, the size of a single archive is on the order of 50 GB. The format is bespoke and extremely specific to OSM. The data is incredibly rich, interesting, and useful, but the size, format, and tooling can often make it very difficult to even start the process of asking complex questions.
Heavy users of OSM data typically download the raw data and import it into their own systems, tailored for their individual use cases, such as map rendering, driving directions, or general analysis. Now that OSM data is available in the Apache ORC format on Amazon S3, it's possible to query the data using Athena without even downloading it.
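To give a flavor of what that looks like in practice, here's a minimal sketch of kicking off an Athena query from Python with boto3. The database name, the `planet` table with its `type` column and `tags` map, and the results bucket are assumptions you'd adapt from the setup in the linked post, not a ready-made configuration.

```python
import boto3

# Hypothetical query against an OSM "planet" table registered in Athena,
# counting cafe nodes; verify the table and column names against your own DDL.
QUERY = """
SELECT count(*) AS cafes
FROM planet
WHERE type = 'node'
  AND tags['amenity'] = 'cafe'
"""

athena = boto3.client("athena", region_name="us-east-1")

# start_query_execution is asynchronous; it returns an execution id you can
# poll with get_query_execution, then read results from the output location.
response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "osm"},                         # assumed database name
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},  # your own S3 bucket
)
print(response["QueryExecutionId"])
```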
Personal plug here: this is something that's been in the works for months. We just launched Editor, the completely overhauled data editing toolset in Fulcrum. I can't wait for the follow-up post to explain the nuts and bolts of how this is put together. The power and flexibility is truly amazing.
The team at DroneDeploy just launched the first live aerial imagery product for drones. Pilots can now fly imagery and get a live, processed, mosaicked result right on a tablet immediately when their mission is completed. This is truly next level stuff for the burgeoning drone market:
The poor connectivity and slow internet speeds that have long posed a challenge for mapping in remote areas don't hamper Fieldscanner. Designed for use in the fields, Fieldscanner can operate entirely offline, with no need for cellular or data coverage. Fieldscanner uses DroneDeploy's existing automatic flight planning for DJI drones and adds local processing on the drone and mobile device to create a low-resolution Fieldscan as the drone is flying, instead of requiring you to process imagery into a map at a computer after the flight.