Image Data Processing

To minimize the number of steps users need to accomplish operational goals, minimize data processing errors, and improve the application's design aesthetic.

SaaS, B2B, Product Design

Product

Measure Ground Control Web Application

Project

Product redesign of the image data processing workflow for drone software.

Role

Product Management
UX/UI Research & Design
Business Development

Background

Measure Ground Control is an industry-leading drone software solution that offers a complete suite of tools for businesses that operate and manage a drone program, and for drone service providers hired by companies that do not have their own internal drone program.

One of the many functions made available to Measure’s enterprise account and pro subscription users is an image data processing pipeline, which allows a user to process large image data sets captured by their drones, and transform the aerial imagery into 2D maps or 3D models. This image processing backbone is powered by an API integration with Pix4D, one of the industry’s top photogrammetry solutions.

Ground Control’s price point is very competitive partly because it leverages integrations with partners like Pix4D, which frees up development bandwidth to focus on features that our main competitors don’t otherwise offer. However, reliance on a third-party integration can come at a cost to the customer experience.

The Problem

Scrutiny of Google Analytics, image processing failures, quality reports, and customer support tickets revealed issues with the existing photogrammetry processing pipeline. Too often the raw image data uploads failed, got stuck in the queue with no further information, or the final image data products yielded poor results. While only a subset of users was affected, primarily those processing much larger data sets, those users were not happy.

User Research

Once the problem was identified, I conducted several rounds of user interviews with the enterprise accounts that were experiencing the most errors. Secondary rounds of interviews were conducted with enterprise users who were more reliant on photogrammetry, despite not having reported any issues with their processed image data sets.
 
I made sure to get an even distribution across industries, including our primary business verticals: construction, agriculture, utilities, and forestry. While we cater to other business sectors, I focused on these primary industries because they tend to use our data processing pipeline to its fullest potential.

These interviews were conducted over video chat, which gave users the opportunity to screen share while they were in the act of processing large data sets. This also let us examine errors in situ, as well as processed data sets that had returned less than acceptable results, from the user’s precise vantage point.

Research Results

Through these interviews and observations I was able to draw several conclusions. Beyond supporting users and giving them the focused attention they deserved during a frustrating time, I identified weak points in the design and user experience that, with modest changes, could eliminate a class of user errors. This would increase customer satisfaction and reduce the time spent troubleshooting with customer support.

The first thing I noticed was that users had to go back and forth between several screens in the processing flow. This seemed to occur for two reasons. First, they misunderstood the outcome of specific selections they were making and had to go back and reset an option on a previous screen. Second, they were missing sub-selections buried in long-scrolling forms, where the lower selections on the list were unnecessarily hidden from view (possibly implemented as a way to save screen real estate).

I captured these notes and memorialized the information in Confluence to discuss with the relevant internal stakeholders. Agreeing that the issue had become too large and too common to ignore, we made the internal decision to reimagine the image data processing workflow.

Competitor Analysis

There are not a lot of competitors in the photogrammetry space, but the two major ones are DroneDeploy and Agisoft Metashape. Both options are more expensive, and each caters to a different audience.

DroneDeploy undoubtedly has a fantastic user experience, and its photogrammetry algorithm produces images of the highest quality compared to the other solutions. However, it comes at such a premium that most drone programs or drone service providers simply cannot afford it. The biggest users of this software are industries like construction, where larger budgets exist to offset the cost.

Metashape leaves something to be desired from a user experience and design point of view, and its final maps are not of the same quality as DroneDeploy’s. While it is significantly cheaper than DroneDeploy, it lacks many of the features that make a tool compelling for people who manage a drone program and need more than just photogrammetry.

I was able to glean insights from our users who had experience with one or both of our competitors’ products, and those insights validated the reviews I was reading online, as well as my own conclusions from interacting with the tools myself.

Research Conclusions

While we were not going to be able to fix or prevent every error, there were several steps we could take to seriously mitigate user errors: streamline the forms and user input options, reduce the steps it takes to complete the data processing setup, and provide a better overall experience through a more aesthetically pleasing interface. The other remedy we put in place was to work directly with our integration partner, Pix4D: alert them to the issues and see if they could improve the stability of their cloud computing API for the larger data sets that were causing our users’ processing errors. They were very receptive to the feedback, as it also benefited their own customer base.

Design Solution

I spent the next few weeks iterating on designs guided by the principles gleaned from our user interviews. I kept the three takeaways close to heart: reduce the steps it takes to complete tasks, streamline the forms to reduce the opportunity for mistakes, and punch up the design aesthetic to make the overall experience more enjoyable.

Keeping internal stakeholders closely in the loop for early feedback, I came up with several iterations of an improved data processing workflow. Each iteration was refined through internal design reviews until a prototype was ready to share with some of the enterprise users I had initially interviewed.

The feedback was overwhelmingly positive, and the design was a dramatic improvement over the version that had caused users so much grief. I had completely eliminated two screens from the workflow, shaving valuable time from a daily routine conducted by our enterprise users.

Additionally, I rearranged form elements using design psychology principles that inform best practices for layout and content grouping. This removed the need to go back and forth between screens, as users could now see the effect one selection would have on another, such as a child form element and help text whose content changes based on the way its parent is set.
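The parent-driven child element pattern can be sketched in a few lines of TypeScript: derive the child field's options and help text from the parent selection, so both render on the same screen and stay in sync. The option names and help text below are hypothetical illustrations, not Measure Ground Control's actual processing settings.

```typescript
// Hypothetical parent selection for a processing job's output type.
type OutputType = "2d-map" | "3d-model";

// The child form element's state is fully determined by the parent.
interface ChildField {
  options: string[];
  helpText: string;
}

// Mapping the parent selection to child content means the UI can
// re-render the child field immediately when the parent changes,
// instead of sending the user back to a previous screen.
function childFieldFor(parent: OutputType): ChildField {
  switch (parent) {
    case "2d-map":
      return {
        options: ["orthomosaic", "elevation-map"],
        helpText: "2D outputs are stitched from nadir (top-down) imagery.",
      };
    case "3d-model":
      return {
        options: ["point-cloud", "textured-mesh"],
        helpText: "3D outputs benefit from oblique imagery and higher overlap.",
      };
  }
}
```

Because the union type is exhaustive, the compiler guarantees every parent choice produces valid child content, which is the design property that eliminated the back-and-forth navigation.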

This dramatically improved the data processing selection options, which in turn yielded better overall processing results. And by leaning on the aesthetic-usability effect, we saw 100% of the users we polled remark on how much they preferred the new design over the old one. By improving the aesthetics, we increased the perceived enjoyment of using the app.

Lastly, our integration partners found areas in their code base that allowed them to improve the performance of their cloud computing API. These improvements were overwhelmingly well received, and any additional feedback consisted of suggestions for new features, rather than ways to improve the new version or errors users were encountering.

Things I wish I knew then...

Initially, I comped designs that were too aggressive a change. They would have added a lot of extra development time and cannibalized many other parts of the site. Given our development capacity, while we needed to improve the design, we had to keep the changes to a practical minimum that added value while respecting internal bandwidth and capabilities. There were also technical limitations on how inputs had to be entered before the next screen could load, so there were only so many screens I could remove from the flow while staying compliant with the API requirements. In a perfect world I would have blown up the whole flow and reimagined it from the ground up, but the world is full of healthy compromise.