
Software development

Adding more insight with Spot robot’s Autowalk Mission Evaluator (AME)

Robin Kurtz
Jul 11, 2022 ∙ 4 mins
[Image: Spot robot camera]

If you haven't read our blog post about the workflow service tool we developed for the Spot robot, which offers a simple UI for walking through a mission’s data and giving feedback to other team members, I suggest reading that first and then coming back to this article. If you're already familiar with the Autowalk Mission Evaluator (AME), keep reading!

Once automated inspections have replaced manual ones in a company's workflow, it becomes essential to have an efficient way to parse the captured data and reliably record historical feedback on it. However, this base functionality is just that: a base that can be expanded upon.

We’ve updated the AME platform with powerful new features that help users understand and compare audio data, and annotate captured photos.

Introducing audio support

In a factory or manufacturing plant, ensuring the health of the machines that run its processes is vital. With this in mind, we added audio support to AME so that clients can capture snippets of audio just as they would capture images. Engineers can then play the captured audio in the browser and listen for abnormalities (hissing, changing rhythms, clunking, or anything else that isn’t supposed to be there). The captured audio is compared against a “control” snippet, so even employees who are less familiar with the equipment being inspected can benefit from the tool.
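To make the playback side concrete, here is a minimal sketch of how a captured snippet and its control could be presented side by side in the browser using standard HTMLAudioElement controls. The function name, labels, and URLs are illustrative, not AME internals.

```typescript
// Minimal sketch: mount a captured audio snippet next to its "control" reference
// so an engineer can play both in the browser. Names/URLs are illustrative only.
function mountAudioPair(container: HTMLElement, capturedUrl: string, controlUrl: string): void {
  const make = (label: string, src: string): HTMLElement => {
    const wrapper = document.createElement('figure');
    const caption = document.createElement('figcaption');
    caption.textContent = label;

    const audio = document.createElement('audio');
    audio.controls = true;      // let the engineer scrub, pause, and replay
    audio.preload = 'metadata'; // avoid downloading full snippets up front
    audio.src = src;

    wrapper.append(caption, audio);
    return wrapper;
  };

  container.append(
    make('Captured during mission', capturedUrl),
    make('Control (known-good)', controlUrl),
  );
}
```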

This feature has a real impact. A maintenance team could catch potentially harmful issues early, such as a bearing giving out or parts coming loose and causing unexpected noises. Combined with Spot’s ability to capture thermal images (and AME’s ability to view them), this has the potential to prevent extended downtime.

Because audio data requires the user to listen to a snippet in full before gaining any insight, we wanted a more efficient way to parse it. For this reason, we added a visual waveform for each audio snippet. These waveforms can provide quick insights based on visible patterns or extreme values.
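As an illustration of the idea (not AME's actual rendering pipeline), the sketch below decodes a snippet with the Web Audio API and draws a simple min/max "peaks" waveform onto a canvas, which is enough to make spikes and rhythm changes visible at a glance.

```typescript
// Sketch: render a quick min/max peaks waveform of an audio snippet on a canvas.
async function drawWaveform(canvas: HTMLCanvasElement, audioUrl: string): Promise<void> {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;

  // Fetch and decode the snippet into raw PCM samples.
  const response = await fetch(audioUrl);
  const audioCtx = new AudioContext();
  const buffer = await audioCtx.decodeAudioData(await response.arrayBuffer());
  const samples = buffer.getChannelData(0); // the first channel is enough for a preview

  const { width, height } = canvas;
  const samplesPerPixel = Math.max(1, Math.floor(samples.length / width));
  const mid = height / 2;

  ctx.clearRect(0, 0, width, height);
  ctx.beginPath();
  for (let x = 0; x < width; x++) {
    // Min/max over each bucket keeps short spikes (the "extreme values") visible.
    let min = 1, max = -1;
    const end = Math.min((x + 1) * samplesPerPixel, samples.length);
    for (let i = x * samplesPerPixel; i < end; i++) {
      const s = samples[i];
      if (s < min) min = s;
      if (s > max) max = s;
    }
    ctx.moveTo(x, mid + min * mid);
    ctx.lineTo(x, mid + max * mid);
  }
  ctx.stroke();
  await audioCtx.close();
}
```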

[Image: Side-by-side audio comparison]

Furthermore, users can compare an audio waveform with its control just as they can with images, allowing them to quickly spot subtle differences that might not be obvious when looking at the current data alone.

Because audio snippets can be initiated at any point in a machine’s cycle, it’s important to consider that there can be variation between the actual and the control data (especially when comparing the waveforms, as their patterns will likely be offset by a varying amount). In the future, we plan to introduce a parser to analyze the two audio snippets, find patterns, and align them when possible.
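One common way to estimate such an offset is cross-correlation of the two sample streams. The sketch below illustrates that idea with a naive correlation over raw (ideally downsampled) samples; it is just an illustration of the alignment concept, not the parser we plan to build.

```typescript
// Sketch: estimate the lag (in samples) between a captured snippet and its control
// by brute-force cross-correlation. Downsample both signals first for speed.
function estimateOffset(captured: Float32Array, control: Float32Array, maxLagSamples: number): number {
  let bestLag = 0;
  let bestScore = -Infinity;

  for (let lag = -maxLagSamples; lag <= maxLagSamples; lag++) {
    let score = 0;
    for (let i = 0; i < control.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < captured.length) {
        score += control[i] * captured[j]; // high when the two patterns line up
      }
    }
    if (score > bestScore) {
      bestScore = score;
      bestLag = lag;
    }
  }
  // A positive lag means the captured snippet starts later than the control.
  return bestLag;
}
```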

Moving forward…

With this new feature in place, the AME tool can be fitted with client-specific logic that analyzes the collected audio data and provides deeper insight into the current state of whatever environment you are inspecting.

Let’s annotate

When reporting issues detected in visual media, it’s all well and good to record a simple description of the problem; however, this isn’t very extensible. If we ask the user to properly annotate the data as they reject it, we can collect more usable information. These annotations can then be used to train a machine learning model to eventually detect the issues itself, making the AME tool more powerful while requiring less user input.

To collect such data, the AME platform is now equipped with an annotation tool that is presented to users when they “reject” an image datapoint. Here, the user can click and drag a box over an area of the image and enter a short text description of the issue, object, etc.
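For illustration, here is a rough sketch of what the drag-to-draw interaction and the resulting annotation record could look like. The field names, event handling, and prompt-based text input are simplified assumptions, not AME's actual implementation.

```typescript
// Sketch of the annotation flow: drag a box over the image, then attach a short label.
interface Annotation {
  x: number;      // top-left corner, normalized to 0..1 so it survives image resizing
  y: number;
  width: number;  // normalized box size
  height: number;
  label: string;  // short free-text description of the issue or object
}

function enableBoxDrawing(image: HTMLImageElement, onDone: (a: Annotation) => void): void {
  let start: { x: number; y: number } | null = null;

  image.addEventListener('mousedown', (e) => {
    e.preventDefault(); // suppress the browser's native image drag
    const rect = image.getBoundingClientRect();
    start = { x: (e.clientX - rect.left) / rect.width, y: (e.clientY - rect.top) / rect.height };
  });

  image.addEventListener('mouseup', (e) => {
    if (!start) return;
    const rect = image.getBoundingClientRect();
    const end = { x: (e.clientX - rect.left) / rect.width, y: (e.clientY - rect.top) / rect.height };
    const label = window.prompt('Describe the issue') ?? '';
    onDone({
      x: Math.min(start.x, end.x),
      y: Math.min(start.y, end.y),
      width: Math.abs(end.x - start.x),
      height: Math.abs(end.y - start.y),
      label,
    });
    start = null;
  });
}
```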

[Image: Annotation 1]

Once a user has submitted an annotation, the anomaly is highlighted on the image and the corresponding annotation text is displayed in a list below it. The user can add more annotations if needed and submit the rejection when ready.

[Image: Annotation 2]

Once submitted, the annotation is tied to the image at the marked location and can therefore be seen when the datapoint is first viewed. This means users will see any annotations already made on a given image, both on current data and in the historical record.
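Because the boxes are stored in normalized coordinates, drawing them back over the image on later viewings is straightforward. A small sketch, reusing the hypothetical Annotation shape from the previous example:

```typescript
// Sketch: draw previously saved annotations over the image on initial viewing.
// Assumes a transparent <canvas> overlay sized to match the displayed image,
// and the hypothetical Annotation interface from the sketch above.
function renderAnnotations(overlay: HTMLCanvasElement, annotations: Annotation[]): void {
  const ctx = overlay.getContext('2d');
  if (!ctx) return;

  ctx.clearRect(0, 0, overlay.width, overlay.height);
  for (const a of annotations) {
    // Scale normalized coordinates back to canvas pixels.
    const x = a.x * overlay.width;
    const y = a.y * overlay.height;
    ctx.strokeRect(x, y, a.width * overlay.width, a.height * overlay.height);
    ctx.fillText(a.label, x, y - 4); // label just above the box
  }
}
```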

For stricter contexts, the AME can be fitted with a pre-approved list of “expected” annotations, leaving the user to simply select one rather than write it themselves. This approach works well when the environment is well controlled and only a known set of anomalies needs to be detected.
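A simplified sketch of what such a constrained label picker could look like; the label values below are made up for illustration.

```typescript
// Sketch: constrain annotation labels to a pre-approved, client-specific list
// instead of free text. The example labels are invented for illustration.
const EXPECTED_ANNOTATIONS = ['corrosion', 'leak', 'loose bolt', 'missing guard'] as const;
type ExpectedAnnotation = (typeof EXPECTED_ANNOTATIONS)[number];

function buildLabelPicker(onPick: (label: ExpectedAnnotation) => void): HTMLSelectElement {
  const select = document.createElement('select');
  for (const label of EXPECTED_ANNOTATIONS) {
    const option = document.createElement('option');
    option.value = label;
    option.textContent = label;
    select.append(option);
  }
  select.addEventListener('change', () => onPick(select.value as ExpectedAnnotation));
  return select;
}
```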

Moving forward…

Now that the AME stores this information in its database, machine learning tools can leverage it to provide further insights before users even see the data.

Your turn!

Spot is pretty cool in and of itself, but one of its true strengths is how it can be extended. We built AME to help free you from routine evaluation tasks so you can focus on your business.

If you’re interested in hearing more about how Spot’s autowalk feature can help you, or how our AME tool can be integrated into your workflow to automate data collection, contact us!