August 31, 2022

SUPABOLT vs Amazon MTurk/SageMaker

Curious how BOLT stacks up against Amazon SageMaker? Isaac, our Head of Product, decided to try SageMaker for himself and share his experience with us.

1. Set up

SageMaker's project setup process isn't straightforward, and Isaac found it tedious and frustrating. He also had to set up supporting AWS services like S3 for uploads and figure out the right IAM permissions, which was painful.
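To give a sense of the setup involved, here's a rough, hypothetical sketch of launching a Ground Truth bounding-box job with boto3. Every identifier in it (the S3 bucket, the IAM role, the workteam and Lambda ARNs) is a placeholder you would have to create and look up yourself, which is exactly where the time goes.

```python
# Hypothetical sketch of launching a SageMaker Ground Truth labeling job with boto3.
# All bucket names, role ARNs, workteam ARNs and Lambda ARNs below are placeholders;
# creating and wiring these up is the setup overhead described above.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

sagemaker.create_labeling_job(
    LabelingJobName="demo-bounding-box-job",
    LabelAttributeName="label",
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                # A manifest listing every image, uploaded to S3 ahead of time
                "ManifestS3Uri": "s3://YOUR-BUCKET/input/dataset.manifest"
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://YOUR-BUCKET/output/"},
    # An IAM role with S3 + SageMaker permissions that you must create yourself
    RoleArn="arn:aws:iam::123456789012:role/YourGroundTruthRole",
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/your-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://YOUR-BUCKET/templates/bbox.liquid.html"},
        # Region-specific, AWS-managed Lambdas for pre-processing and consolidation
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:PRE-BoundingBox",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:ACS-BoundingBox"
        },
        "TaskTitle": "Draw bounding boxes",
        "TaskDescription": "Draw a box around every object of interest",
        "NumberOfHumanWorkersPerDataObject": 2,
        "TaskTimeLimitInSeconds": 300,
    },
)
```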

In comparison, BOLT's interface is intuitive and easy to use. With BOLT, we were able to set up the same project within just a few minutes. We designed and built BOLT for simplicity so our users can focus on what they need: annotated data.

2. Feedback loop

Based on our experience, SageMaker doesn't provide a feedback loop between you and the annotation process. Once SageMaker Ground Truth completes your annotation tasks, the only action available to you is to export the data. It's up to you to analyze the quality of your tasks; the platform doesn't provide insights or information that help you accelerate the learning and iteration process.

Insights and learnings are a collaborative process on BOLT. The feedback between our users and annotators helps to progressively improve annotation quality and reduce label noise with every iteration.

Sneak peek of a new update to BOLT's feedback loop

BOLT is designed specifically to improve the efficiency of iterating based on insights. Our annotators don’t just annotate; they add value to your data by highlighting issues such as taxonomy conflicts and edge cases. When they find an issue with a task, they put it on hold and provide feedback to avoid inconsistent data quality.

BOLT provides you with actionable insights from our professional annotators. We help you surface discrepancies and anomalies in your data so each batch on BOLT just gets better.

3. Quality insights

In SageMaker, there aren't any tools for analyzing and evaluating annotations. You have to export your data to a separate platform to do this, which makes understanding your data tedious and painful. A practical workaround is to pair SageMaker with a platform like Voxel51 to help you visualize and explore your annotated data.
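If you go down that route, a minimal FiftyOne (Voxel51's open-source tool) sketch for browsing exported annotations might look like the following. The export path, dataset format, and label field here are assumptions about how you pulled the data out of Ground Truth.

```python
# Hypothetical sketch: loading exported annotations into FiftyOne to browse visually.
# The directory, dataset format, and label field are assumptions about your export.
import fiftyone as fo
from fiftyone import ViewField as F

dataset = fo.Dataset.from_dir(
    dataset_dir="/path/to/exported-annotations",
    dataset_type=fo.types.COCODetectionDataset,  # assuming a COCO-style export
    label_field="detections",
    name="ground-truth-export",
)

# Narrow the view to a single class to spot-check its annotations
view = dataset.filter_labels("detections", F("label") == "car")

# Open the interactive viewer in the browser
session = fo.launch_app(view)
session.wait()
```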

BOLT provides a better data exploration system. Use our built-in task viewer to interactively sort, filter, and view your data. Tag any task or collection of tasks at any time to record interesting observations, share them with your team, or improve your quality later.

How you view your tasks on BOLT

When you want to perform detailed data exploration, just create sample datasets by applying tags or filters on attributes and metadata. BOLT can also create sample datasets for you by randomly extracting tasks with our smart sampling algorithm. If you would like a quantitative understanding of a dataset's quality, perform quality evaluation (QE) in our interface and get a detailed breakdown of your dataset's accuracy.

4. Cost certainty

SageMaker Pricing. The struggle is real.

Pricing on SageMaker is confusing. SageMaker Ground Truth charges $0.08 per image just to use the workflow. While this might seem trivial, the $0.08 charge is often more than the labor cost for easier tasks like bounding box annotation or image classification, so your overall costs can quickly double once you're handling tens of thousands of images.
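As a back-of-envelope illustration (the $0.08 labor price below is our assumption for a simple task; the $0.08 workflow fee is Ground Truth's per-image charge):

```python
# Rough cost illustration: Ground Truth's $0.08/image workflow fee vs an assumed
# $0.08/image labor price for a simple bounding-box or classification task.
num_images = 50_000
labor_price = 0.08      # assumed payment to workers per image
workflow_fee = 0.08     # Ground Truth's per-image workflow charge

labor_only = num_images * labor_price                   # $4,000
with_fee = num_images * (labor_price + workflow_fee)    # $8,000, roughly double

print(f"Labor only:           ${labor_only:,.0f}")
print(f"Labor + workflow fee: ${with_fee:,.0f}")
```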

On top of that, there is some variance in the price of a HIT (Human Intelligence Task). You must experiment to find the optimal amount to pay workers.

Compared to SageMaker, BOLT’s annotation costs are transparent. Using our platform is free and our annotation fees are fixed – pay as you go, with no minimum spend or commitment.

5. Trained annotators vs unskilled crowdworkers

Two workers are assigned to each task. As you can see, the quality we received on SageMaker/MTurk was pretty bad.

SageMaker's work is performed by untrained crowdworkers on MTurk. Their platform relies heavily on getting quality through consensus – i.e. assigning the same tasks to multiple workers. When we tried this out, the results were inconsistent.

As we continued to experiment by increasing the number of workers on the same task, the results improved. We had to increase the number of annotators per task to 4 before getting any usable data.
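For context, consensus in its simplest form is just a majority vote across workers. Here's a toy sketch of that idea (this is not SageMaker's actual consolidation algorithm, which is task-specific) showing why two workers often isn't enough:

```python
# Toy illustration of consensus labeling for a classification task: the majority
# answer across workers wins. This is NOT SageMaker's actual consolidation logic.
from collections import Counter

def consensus_label(worker_answers: list[str]) -> tuple[str, float]:
    """Return the majority label and the fraction of workers who agreed with it."""
    counts = Counter(worker_answers)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(worker_answers)

# With 2 workers, any disagreement leaves you with a 50/50 split...
print(consensus_label(["cat", "dog"]))                 # ('cat', 0.5), no real consensus
# ...which is why we had to go up to 4 annotators per task to get usable agreement.
print(consensus_label(["cat", "cat", "cat", "dog"]))   # ('cat', 0.75)
```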

Here’s how much we paid:

  • 4 workers x 0.84 USD per task = 3.36 USD per dataset object
  • 4 dataset objects x 3.36 USD per dataset object = 13.44 USD total MTurk task cost

The common misconception that you can label large datasets on MTurk for pennies is most definitely false. The same project above would cost $2.88 USD on our platform, assuming clear instructions on how to annotate the work.

BOLT pricing: 4 annotations per task x 4 tasks x $0.18 USD per polygon = $2.88 🚀

TL;DR

Use AWS SageMaker if

  1. You are an existing AWS customer who's used to the AWS user interface.
  2. You don't mind spending engineering time setting up workflows and integrations. 
  3. You plan to use AWS solutions for model deployment and hosting.
  4. You want a platform that is fully integrated with other AWS services.
  5. You want different services for the entire ML lifecycle.

Use BOLT if 

  1. You want to set up a data labeling project in under ten minutes.
  2. You don’t want to spend weeks dealing with MLOps and setting up AWS. 
  3. You want quality insights surfaced for you to understand and trust your data.   
  4. You want a no-code, easy-to-use platform.
  5. You want an easy way to visualize your data labeling results.