October 17, 2022

3 User Insights that Helped Us Redesign Bolt


When we started building BOLT a year ago, we had a clear mission: give machine learning (ML) teams a platform to get their data labeled quickly and easily.

Since then, the app has grown in complexity and our users' needs have evolved. We no longer just label data; we also provide quality insights that help data and computer vision engineers get trusted, usable data. BOLT wasn't designed with this expanded focus in mind, so a redesign became a clear need.

Here's what we've learned from our users along the way:

1. Our MVP approach of having everything in one place was confusing.

2. Our users’ most memorable experience on the platform is seeing annotations being completed live.

3. Our users want access to services beyond just getting data labeled.


So, here’s how we redesigned the end-to-end flow on BOLT:

1. Only present info relevant to the user journey

When we first launched in March this year, we built the end-to-end flow – from uploading images to exporting data – all in one place for simplicity. However, new users couldn't tell which steps were required to start a project, and the problem only got worse as we added more features to the platform.

In the new redesign, you'll only see the three required steps to get your project started – Data, Labels, and Instructions.

Before:
After:
2. Show annotations right after the user starts a project

Our users told us that what they found most memorable was seeing the work done live after starting a project. As their images are being annotated, they can review the work in real time and update their instructions with any edge cases they find, reducing annotation errors.

However, our users could only access this experience through a button hidden in one of the tabs, resulting in a sub-optimal user experience. 

With the new update, once you've started your project, you'll be brought straight to the View Tasks page. This eliminates the need to jump through hoops to see your completed tasks.

Before:
After:
3. Provide quality insights for our users

When most people think of BOLT today, they think of pushing a button and getting data labeled. Since launch, however, we've seen growing demand from users and the wider ML market to solve data labeling's last-mile problem: measuring, verifying, and trusting annotation quality. That has inspired us to build new ways of generating quality insights to better fit those needs.

Skipped tasks is a great example: we built this feature after seeing that some users had no way to gauge our annotators' understanding of the project. This not only created communication gaps, but also increased annotation errors. Skipped tasks fixes that – it empowers our annotators to proactively provide feedback to users.

Our recent changes to BOLT make it easier for us to build new features for reviewing annotation quality and surfacing useful insights. In the next few months, we will release a feedback loop that lets users give annotators feedback and have tasks relabeled.

Redesigning Bolt: The Early Chapters of Accelerating AI Model Development

Redesigning the platform's flow is just the beginning of learning how ML teams want to use BOLT to accelerate their model development. It's a big step forward, and we're excited to see how the rest of the story unfolds as our team continues to solve data labeling's last-mile problem.

Check out our new flow here!

Bryce Wilson
Data Engineer at Black.ai

Consistent support

If there's one thing that makes SUPA stand out, it's their commitment to providing consistent support throughout the data labeling process. The team actively and efficiently engaged with us to ensure any ambiguity in the dataset was cleared up.

Jonas Olausson
Data Engineer at Black.ai

The best interface for self-service labeling

Everything from uploading data to seeing it labeled in real time was really cool. This is just way simpler to use compared to Amazon Sagemaker and LabelBox. I was also very impressed with how the platform delivered exactly what we needed in terms of label quality.

Sravan Bhagavatula
Director of Computer Vision at Greyscale AI

Launch a revised batch within hours

I was also able to view the labels as they were being generated, which gave me quick feedback about the label quality, rather than waiting for the whole batch. This replaced my standard manual QA process using external tools like Voxel51's FiftyOne, as the labels were clear and easy to parse through in real time.

Sparsh Shankar
Associate ML Engineer at Sprinklr

Really quick

The annotators were really quick. I would upload, and 5 minutes later – 10 images done. I checked 5 minutes later – 100 images done.

Puneet Garg
Head of Data Science at Carousell

Good quality judgments

The team at [SUPA] has been very professional & easy to work with since we started our collaboration in 2019. They've provided us with good quality judgments to train, tune, and validate our Search & Recommendations models.
