API Benchmark


Introduction

The OncoKB Development team has conducted API performance tests to identify optimization opportunities and collect key metrics for evaluating overall API capabilities. You’ll find key performance indicators such as response times, throughput, and resource utilization across different endpoints.

Annotate Mutation by HGVS

10/24/2024

This test measures the performance of the annotate/mutations/byHGVSg endpoint when the variants have already been annotated and cached by Genome Nexus.
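For reference, the endpoint takes a POST body that is a JSON array of objects with an hgvsg field, the same payload shape built by the test scripts below. A minimal sketch against the public OncoKB host follows; the token and variant are placeholders only.

import requests

# Minimal example of a single byHGVSg request; the host, token, and variant
# below are placeholders. The benchmarks on this page batch 100 variants per request.
url = "https://www.oncokb.org/api/v1/annotate/mutations/byHGVSg"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <token>",
}
body = [{"hgvsg": "7:g.140453136A>T"}]  # illustrative HGVSg string only

response = requests.post(url, headers=headers, json=body)
print(response.status_code)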

Datasets

We have chosen the following studies for benchmarking:

  1. Whole Exome Sequencing Dataset (UCSC Xena: Simple Somatic Mutation (SNVs and indels) - Consensus Coding Mutations):

    • 441,309 variants across 2,756 samples

  2. Whole Genome Sequencing Dataset (UCSC Xena: Simple Somatic Mutation (SNVs and indels) - Whole Genome Mutations (Non-US Specimens)):

    • 23,159,591 variants across 1,950 samples

Services Setup

These tests were conducted by replicating the production setup. All configurations can be found here.

Test Setup

We will be using Locust.io to write our performance tests.

As a prerequisite, all variants from the WES and WGS datasets were annotated (and cached in Genome Nexus) prior to benchmarking the OncoKB HGVSg endpoint.
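A minimal sketch of this pre-annotation step is shown below. It assumes wgs.txt contains one HGVSg string per line and uses the public Genome Nexus POST /annotation endpoint; the host, batch size, and example variant are assumptions rather than the exact setup used for these benchmarks.

import requests

# Read one HGVSg string per line, e.g. "7:g.140453136A>T" (illustrative only)
with open("wgs.txt", "r") as data_file:
    variants = [row.strip() for row in data_file if row.strip()]

# POST batches to Genome Nexus so the variants are annotated and cached
# before the OncoKB benchmark runs; the host and batch size are assumptions.
GENOME_NEXUS_URL = "https://www.genomenexus.org/annotation"
BATCH_SIZE = 100

for i in range(0, len(variants), BATCH_SIZE):
    batch = variants[i:i + BATCH_SIZE]
    response = requests.post(GENOME_NEXUS_URL, json=batch)
    response.raise_for_status()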

Running Tests
single_thread_test.py
from locust import HttpUser, task, events
import json
import time

# Batch the WES/WGS dataset into chunks of 100 variants
def chunk_variants(variants, chunk_size=100):
    for i in range(0, len(variants), chunk_size):
        yield variants[i:i + chunk_size]

# Data to annotate for test
variants_list = []
with open("wgs.txt", "r") as data_file:
    variants_list = [{"hgvsg": row.strip()} for row in data_file]
variant_batches = list(chunk_variants(variants_list))

# Global variables to track total time and batches
start_time = None
total_batches = len(variant_batches)
completed_batches = 0

# Locust test on start listener
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global start_time
    start_time = time.time()

# When test stops, then print out how long it took
@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    end_time = time.time()
    total_time = end_time - start_time
    print(f"Total time to process {total_batches * 100} variants: {total_time:.2f} seconds")


class OncoKBUser(HttpUser):
    @task
    def sendRequest(self):
        global completed_batches
        if completed_batches < total_batches:
            batch = variant_batches[completed_batches]
            # Send POST request with Authorization header
            response = self.client.post(
                "/api/v1/annotate/mutations/byHGVSg",
                headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
                data=json.dumps(batch), timeout=600000
            )

            # Track batch completion
            completed_batches += 1

            # Check response time for each request
            print(f"Request {completed_batches}/{total_batches} completed in {response.elapsed.total_seconds()} seconds")
        else:
            self.environment.runner.quit()
multi_thread_test.py
from locust import HttpUser, task, events
from threading import Lock
import json
import signal
import time

# Batch the WES/WGS dataset into chunks of 100 variants
def chunk_variants(variants, chunk_size=100):
    for i in range(0, len(variants), chunk_size):
        yield variants[i:i + chunk_size]

# Data to annotate for test
variants_list = []
with open("wgs.txt", "r") as data_file:
    variants_list = [{"hgvsg": row.strip()} for row in data_file]
variant_batches = list(chunk_variants(variants_list))

# Global variables to track total time and batches
start_time = None
total_batches = len(variant_batches)
completed_batches = 0
lock = Lock()
active_users = 0

# Locust test on start listener
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global start_time
    global active_users
    # Initialize active user count with the number of users in the test
    active_users = environment.runner.user_count
    start_time = time.time()

# When test stops, then print out how long it took
@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    end_time = time.time()
    total_time = end_time - start_time
    print(f"Total time to process {total_batches * 100} variants: {total_time:.2f} seconds")


class OncoKBUser(HttpUser):
    def get_next_batch(self):
        global completed_batches
        # Hand out the next batch under a lock so concurrent users never process the same batch
        with lock:
            if completed_batches < total_batches:
                batch = variant_batches[completed_batches]
                completed_batches += 1
                return batch
            else:
                return None  # No more batches available

    @task
    def send5Threads(self):
        batch = self.get_next_batch()
        if batch is None:
            global active_users
            with lock:
                active_users -= 1  # Mark the user as finished

                # If all users are done processing, stop the test
                if active_users <= 0:
                    print("All users finished processing. Stopping test.")
                    self.environment.runner.quit()  # Stop the entire test
                    signal.raise_signal(signal.SIGTERM)
            return

        # Send POST request with Authorization header
        response = self.client.post(
            "/api/v1/annotate/mutations/byHGVSg",
            headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
            data=json.dumps(batch)
        )

        print(f"Request {completed_batches}/{total_batches} completed in {response.elapsed.total_seconds()} seconds")
$ locust -f <file_name>.py --run-time 24h --host=https://www.oncokb.dev.aws.mskcc.org
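The concurrency flags are not listed above; for the 5-thread run, an invocation along the following lines (using Locust's standard --headless, -u, and -r options) would launch five concurrent users, though the exact flags used are not stated in the source:

$ locust -f multi_thread_test.py --headless -u 5 -r 5 --run-time 24h --host=https://www.oncokb.dev.aws.mskcc.org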

Performance Benchmark Results

Test 1: How long does it take to annotate each study using a single thread?

Redis caching was disabled for this test. Requests were run sequentially on a single thread until the entire dataset was annotated.

A single thread annotates the dataset, one POST request of 100 variants at a time.

WES Dataset:

441,309 variants: 835 seconds or 14 minutes (528 variants/second)

WGS Dataset:

23,159,591 variants: 17,508 seconds or 4 hrs 52 mins (1,322 variants/second)


Test 2: Do we gain a performance boost using 5 threads instead of 1?

Redis caching was disabled for this test.

Up to 5 threads annotate concurrently, each executing POST requests containing 100 variants.

WES Dataset:

441,309 variants: 151 seconds or 2.51 minutes (2,922 variants/second)

WGS Dataset:

23,159,591 variants: 3,482 seconds or 58 mins (6,652 variants/second)

Increasing the number of threads to five yielded roughly a fivefold increase in the number of variants annotated per second.


Genome Nexus VEP Benchmark

When annotating mutations by genomic change or HGVSg, OncoKB uses Genome Nexus to convert these formats to HGVSp for annotation, which leverages Ensembl's Variant Effect Predictor (VEP). Genome Nexus recommends 2 main configuration options for running VEP:

  1. Genome Nexus VEP (Local, Recommended)

  2. Ensembl REST API (Public)

Both provide a REST API wrapper around the VEP command line interface and can be leveraged depending on the user's performance, security, and convenience needs. Below you can find the results of each tool when annotating POST requests of varying sizes using the vep/human/hgvs endpoint:
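For context, requests to this endpoint follow the Ensembl REST convention of POSTing a JSON object with an hgvs_notations array. The sketch below targets the public Ensembl REST API with an illustrative variant; a local Genome Nexus VEP deployment is assumed to accept the same request shape at its own host.

import requests

# Minimal sketch of a batched VEP request against the public Ensembl REST API;
# the HGVS notation is an illustrative example only.
url = "https://rest.ensembl.org/vep/human/hgvs"
headers = {"Content-Type": "application/json", "Accept": "application/json"}
body = {"hgvs_notations": ["7:g.140453136A>T"]}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(f"{len(response.json())} annotations returned")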

Genome Nexus VEP

| # Variants | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
| --- | --- | --- | --- | --- | --- |
| 1 | 772ms | 755ms | 748ms | 708ms | 723ms |
| 5 | 1.25s | 1.28s | 1.14s | 1.21s | 1.29s |
| 10 | 1.90s | 1.73s | 1.90s | 1.93s | 1.80s |
| 50 | 4.21s | 4.40s | 4.36s | 4.05s | 4.05s |
| 100 | 6.02s | 6.71s | 6.33s | 6.46s | 6.62s |
| 1000 | 62.82s | 62.10s | 62.16s | 62.10s | 62.52s |

Ensembl REST API

| # Variants | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
| --- | --- | --- | --- | --- | --- |
| 1 | 1.79s | 1.27s | 1.79s | 1.31s | 1.82s |
| 5 | 4.54s | 3.40s | 3.81s | 3.82s | 3.62s |
| 10 | 8.01s | 6.95s | 7.08s | 7.60s | 6.75s |
| 50 | 39.52s | 37.79s | 41.06s | 39.93s | 38.82s |
| 100 | 83.28s | 87.60s | 87.78s | 68.82s | 72.78s |
| 1000 (Limit is 300) | NA | NA | NA | NA | NA |
