When we started building BOLT a year ago, we had a clear mission: to build a platform where machine learning (ML) teams could get data labeled quickly and easily.
Since then, our app has grown in complexity and our users' needs have evolved. We no longer just label data; we also provide quality insights that help data and computer vision engineers get trusted, usable data. BOLT wasn't initially designed for this expanded focus, and a redesign became a clear need. Three lessons shaped it:
1. Our MVP approach of having everything in one place was confusing.
2. Our users’ most memorable experience on the platform is seeing annotations being completed live.
3. Our users want access to services beyond just getting data labeled.
When we first launched in March this year, we built the end-to-end flow, from uploading images to exporting data, all in one place for simplicity. However, it wasn't clear to new users which steps were required to start a project, and the problem got worse as we added more features to the platform.
In the new redesign, you will now see only the three required steps to get your project started: Data, Labels, and Instructions.
Our users told us that what they found most memorable was seeing the work done live after starting a project. As their images are being annotated, they can review the work in real time and update instructions with any edge cases they find, reducing annotation errors.
However, users could only reach this experience through a button hidden in one of the tabs, making it easy to miss.
With the new update, once you've started your project, you will be brought straight to the View Tasks page. This eliminates the need to jump through hoops to see your completed tasks.
When most people think of BOLT today, they think of pushing a button and getting data labeled. However, since launch we've seen growing demand from our users and the wider ML market for solving data labeling's last mile problem: measuring, verifying, and trusting annotation quality. That has inspired us to create new ways to generate quality insights that better fit their needs.
Skipped tasks is a great example: we built this feature after seeing that some users had no way to gauge our annotators' understanding of a project. This not only created communication gaps, but also increased the number of annotation errors. Skipped tasks fixes that by empowering our annotators to proactively give users feedback.
Our recent changes to BOLT make it easier for us to build new features for reviewing annotation quality and surfacing useful insights. In the next few months, we will be releasing a feedback loop that lets users give annotators feedback and have tasks relabeled.
Redesigning our platform's flow is just the beginning of learning how ML teams want to use BOLT to accelerate their model development. This is a big step forward, and we're excited to see how the rest of the story unfolds as our team continues to solve data labeling's last mile problem.
Check out our new flow here!
Level up your data-centric AI journey with quality insights.
Contact Us