Adding Custom Algorithms and Metrics to the Benchmark Pipeline

In this tutorial, we dive into the advanced capabilities of LunaBench and show how to extend its functionality with custom algorithms and metrics. It is aimed at advanced users who want to take full advantage of LunaBench's customization options, from integrating your own algorithms to defining unique metrics for evaluating results.

Before we begin, ensure you have installed the LunaBench Python SDK along with additional dependencies:

pip install numpy

Please note that LunaBench is a separate SDK from the luna-quantum SDK and is currently available only with academic and commercial plans.

Upon completing this tutorial, you will be able to:

  1. Integrate and utilize your own algorithms within the LunaBench framework.
  2. Apply custom metrics for a personalized evaluation of your results.
  3. Navigate through the benchmarking process, adjusting it to fit your unique project needs.

1. Creating the Dataset

For this tutorial, we'll work with a scenario from our use case library: the Binary Paint Shop Problem (BPSP). In the BPSP, a sequence of cars, each appearing exactly twice, must be painted such that each car receives both colors once, while minimizing the number of color changes along the sequence. If you're interested in a deeper dive into this and other use cases, we encourage you to explore our dedicated tutorial on the use case library.

# Define the problem type
problem_name = "bpsp"

# Define the dataset
dataset = {
    "id_00": {"sequence": [1, 2, 0, 1, 2, 0]},
    "id_01": {"sequence": [1, 0, 0, 1, 2, 2]},
    "id_02": {"sequence": [0, 1, 0, 1, 2, 2]},
}
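
Each instance is a sequence of cars in which every car appears exactly twice. A quick standalone check (plain Python, independent of LunaBench) confirms this for the toy data:

# Sanity check: in a BPSP instance, every car appears exactly twice
for instance_id, instance in dataset.items():
    sequence = instance["sequence"]
    n_distinct = len(set(sequence))
    assert len(sequence) == 2 * n_distinct
    print(f"{instance_id}: {n_distinct} cars, sequence length {len(sequence)}")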

2. Integrating Custom Algorithms

LunaBench simplifies the addition of custom algorithms with the @solver decorator. Attaching this decorator to your solving function enables LunaBench to process your dataset with the specified custom algorithm. For LunaBench to correctly track and allocate results, it's crucial to provide an identifier for your algorithm, like so: @solver(algorithm_id="YOUR_ALGORITHM").

For seamless integration and processing of your dataset, your solving or minimizing function must accept specific input arguments, depending on the problem format:

  • qubo: for QUBO problems, formatted as a list of lists or a 2D numpy array.
  • circuit: for quantum circuits, either as a string or a QuantumCircuit object.
  • lp: for linear programming problems, as a string.
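
If it helps to see the three variants side by side, here is a minimal sketch; the algorithm IDs and function names are placeholders, and the bodies are intentionally left empty (a real solver must also return the result dictionary described next):

from lunabench import solver

# Illustrative stubs only: the argument name determines the input format
@solver(algorithm_id="my_qubo_solver")
def solve_qubo(qubo): ...  # list of lists or 2D numpy array

@solver(algorithm_id="my_circuit_solver")
def solve_circuit(circuit): ...  # string or QuantumCircuit object

@solver(algorithm_id="my_lp_solver")
def solve_lp(lp): ...  # LP problem as a string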

The output from your custom algorithm should be a well-structured dictionary containing at least the following information:

  • "solution": This should map to the list-formatted binary solution vector.
  • "energy": This value represents the solution value, usually in the form of (f(x) = xᵀQx), although it can be None if not applicable.
  • "runtime": This indicated the time it took for your algorithm to arrive at the solution.
import time
import numpy as np

from lunabench import solver

# Use the @solver decorator to define your custom algorithm
@solver(algorithm_id="random")
def random_solve(qubo):
    # Start the runtime measurement
    start_time = time.perf_counter()

    # Get a random solution
    np_solution = np.random.randint(2, size=len(qubo))

    # Stop the runtime measurement
    end_time = time.perf_counter()

    # Calculate the overall time in seconds that the algorithm took
    solve_time = end_time - start_time

    # Calculate the energy f(x) = xᵀQx (coerce qubo in case it is a list of lists)
    energy = float(np_solution @ np.asarray(qubo) @ np_solution)
    solution = np_solution.tolist()

    # Return a dict containing "solution", "energy" and "runtime"
    result = {"solution": solution, "energy": energy, "runtime": solve_time}

    return result
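
You can sanity-check the solver on a small input before handing it to the pipeline. The 3×3 matrix below is purely illustrative, and this assumes the @solver decorator leaves the function directly callable:

# Try the solver on a toy 3x3 QUBO (illustrative values only)
toy_qubo = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])
print(random_solve(toy_qubo))
# e.g. {'solution': [1, 0, 1], 'energy': -2.0, 'runtime': 1.2e-05}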

3. Solving the Dataset

Once your algorithm matches the input and output requirements of your problem instances, you're ready to put it to the test. With LunaBench, running your custom algorithm against your dataset is straightforward: simply add its algorithm_id to the solve_algorithms list (or dictionary) passed to the solve_dataset function. This tells LunaBench which algorithms to apply to your dataset, integrating your custom solver seamlessly into the benchmarking process.

from lunabench import solve_dataset

# Specify the algorithms
algorithms = ["sa", "random"]

# Define the number of runs for each algorithm
n_runs = 2

# Solve the complete dataset
solved, result_id = solve_dataset(
    problem_name=problem_name,
    dataset=dataset,
    solve_algorithms=algorithms,
    n_runs=n_runs,
)
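
solve_dataset returns a status flag and a result_id. Hold on to the result_id, since the evaluation step below uses it to look up the stored solutions:

# Keep result_id around; evaluate_results needs it to locate the stored runs
print(solved, result_id)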

4. Adding Custom Metrics

Just as you can integrate custom algorithms, LunaBench also supports custom metrics through the @metric decorator. This feature lets you tailor the evaluation phase to your specific needs. By defining a custom metric with @metric(metric_id="YOUR_METRIC"), you ensure that LunaBench correctly records and categorizes your benchmark outcomes according to your criteria.

For your custom metric to function correctly, it's essential that its input parameters mirror the corresponding column names found in the solutions.csv file.

from lunabench import metric

# Use the @metric decorator to define your custom metric
@metric(metric_id="n_cars")
def count_cars(sequence, best_solution):
    # Compute the total number of cars in the sequence; in the BPSP each car
    # appears twice, so this equals twice the number of solution variables
    if 2 * len(best_solution) == len(sequence):
        return len(sequence)
    return 2 * len(best_solution)
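
Here, sequence is filled from the problem instance and best_solution from the solver's output, both of which appear as columns in solutions.csv. You can also call the metric directly; with the first instance's sequence and an arbitrary 3-bit solution (purely for illustration), it returns 6:

# Direct call for illustration: 3 distinct cars, each appearing twice
print(count_cars(sequence=[1, 2, 0, 1, 2, 0], best_solution=[0, 1, 1]))  # 6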

5. Evaluating the Results

After integrating your custom metrics, you can incorporate them into the evaluation phase by including their metric IDs in the metrics_config. This directs LunaBench to apply your custom metrics when analyzing the solutions.

from lunabench import evaluate_results

# Define the metrics
metrics = ["n_cars"]

# Run the evaluation
eval_data, plot_outs = evaluate_results(
    result_id=result_id,
    dataset=dataset,
    metrics_config=metrics,
)

eval_data
   id     solver  n_cars
0  id_00  sa      6
1  id_00  random  6
2  id_01  sa      6
3  id_01  random  6
4  id_02  sa      6
5  id_02  random  6
