classes package

classes.LRUCacheUnhashable module

class classes.LRUCacheUnhashable.LRUCacheUnhashable(orig_func: Callable | None = None, maxsize: int = 50000000)[source]

Bases: object

A decorator class that caches results for functions with unhashable arguments. Uses an OrderedDict to implement a simple LRU eviction policy.

Parameters:
  • orig_func (Callable | None)

  • maxsize (int)

clear_cache()[source]

Clear the cache

static list_to_bit_string(bit_string_input: list[int]) → str[source]

Convert a list in the format [0, 1] to a bit string, e.g. '01'

Parameters:

bit_string_input (list[int])

Return type:

str

print_cache()[source]

Print cache

print_cache_stats()[source]

Print cache stats

report_cache_stats()[source]

Report cache stats

classes.MyDataLogger module

class classes.MyDataLogger.MyDataLogger(runid: str = '20260216-20-06-46', graph_sub_path: Path = None, results_sub_path: Path = None, summary_results_filename: Path = None)[source]

Bases: object

Parent class: header information shared by a group of data runs

Parameters:
  • runid (str)

  • graph_sub_path (Path)

  • results_sub_path (Path)

  • summary_results_filename (Path)

create_sub_graph_path()[source]

Create a folder for graphs

create_sub_results_path()[source]

Create a folder for results

find_summary_results_filename()[source]

Create the filepath for the summary results

graph_sub_path: Path = None
results_sub_path: Path = None
runid: str = '20260216-20-06-46'
summary_results_filename: Path = None
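The parent logger's path helpers can be sketched like this. The folder layout (`<base>/<runid>/` and `<runid>_summary.csv`) is an assumption for illustration; the real class reads its layout from configuration.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class MyDataLogger:
    """Sketch of the parent logger: one folder per run id."""
    runid: str = '20260216-20-06-46'
    graph_sub_path: Path = None
    results_sub_path: Path = None
    summary_results_filename: Path = None

    def create_sub_graph_path(self, base: Path = Path('graphs')) -> Path:
        # hypothetical layout: <base>/<runid>/
        self.graph_sub_path = base / self.runid
        self.graph_sub_path.mkdir(parents=True, exist_ok=True)
        return self.graph_sub_path

    def find_summary_results_filename(self, base: Path = Path('results')) -> Path:
        # hypothetical layout: <base>/<runid>_summary.csv
        self.summary_results_filename = base / f'{self.runid}_summary.csv'
        return self.summary_results_filename
```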
class classes.MyDataLogger.MySubDataLogger(runid: str = '20260216-20-06-46', graph_sub_path: Path = None, results_sub_path: Path = None, summary_results_filename: Path = None, subid: str = None, detailed_results_filename: Path = None, graph_filename: Path = None, quantum: bool = None, locations: int = None, slice: float = 1.0, shots: int = None, mode: str = None, iterations: int = None, gray: bool = None, hot_start: bool = None, gradient_type: str = None, formulation: str = None, layers: int = None, std_dev: float = None, lr: float = None, weight_decay: float = None, momentum: float = None, alpha: float = None, big_a: float = None, c: float = None, eta: float = None, gamma: float = None, s: float = None, qubits: int = None, elapsed: float = None, hot_start_dist: float = None, best_dist_found: float = None, best_dist: float = None, iteration_found: int = None, cache_max_size: int = None, cache_items: int = None, cache_hits: int = None, cache_misses: int = None, index_list: list = <factory>, average_list: list = <factory>, lowest_list: list = <factory>, sliced_list: list = <factory>, average_list_all: list = <factory>, lowest_list_all: list = <factory>, sliced_cost_list_all: list = <factory>, noise: bool = None, monte_carlo: bool = False, mps: bool = None)[source]

Bases: MyDataLogger

Child class: details of each individual data run

Parameters:
  • runid (str)

  • graph_sub_path (Path)

  • results_sub_path (Path)

  • summary_results_filename (Path)

  • subid (str)

  • detailed_results_filename (Path)

  • graph_filename (Path)

  • quantum (bool)

  • locations (int)

  • slice (float)

  • shots (int)

  • mode (str)

  • iterations (int)

  • gray (bool)

  • hot_start (bool)

  • gradient_type (str)

  • formulation (str)

  • layers (int)

  • std_dev (float)

  • lr (float)

  • weight_decay (float)

  • momentum (float)

  • alpha (float)

  • big_a (float)

  • c (float)

  • eta (float)

  • gamma (float)

  • s (float)

  • qubits (int)

  • elapsed (float)

  • hot_start_dist (float)

  • best_dist_found (float)

  • best_dist (float)

  • iteration_found (int)

  • cache_max_size (int)

  • cache_items (int)

  • cache_hits (int)

  • cache_misses (int)

  • index_list (list)

  • average_list (list)

  • lowest_list (list)

  • sliced_list (list)

  • average_list_all (list)

  • lowest_list_all (list)

  • sliced_cost_list_all (list)

  • noise (bool)

  • monte_carlo (bool)

  • mps (bool)

alpha: float = None
average_list: list
average_list_all: list
best_dist: float = None
best_dist_found: float = None
big_a: float = None
c: float = None
cache_hits: int = None
cache_items: int = None
cache_max_size: int = None
cache_misses: int = None
calculate_parameter_numbers() → int[source]

Calculate the number of parameters in a variational quantum circuit

Return type:

int

detailed_results_filename: Path = None
elapsed: float = None
eta: float = None
find_detailed_results_filename()[source]

Create the filepath for the detailed results

find_graph_filename()[source]

Create the filepath for the graph results

formulation: str = None
gamma: float = None
gradient_type: str = None
graph_filename: Path = None
gray: bool = None
hot_start: bool = None
hot_start_dist: float = None
index_list: list
iteration_found: int = None
iterations: int = None
layers: int = None
locations: int = None
lowest_list: list
lowest_list_all: list
lr: float = None
mode: str = None
momentum: float = None
monte_carlo: bool = False
mps: bool = None
noise: bool = None
quantum: bool = None
qubits: int = None
s: float = None
save_detailed_results()[source]

Save detailed data

save_plot()[source]

Plot results

save_results_to_csv()[source]

Save the results to a CSV file

shots: int = None
slice: float = 1.0
sliced_cost_list_all: list
sliced_list: list
std_dev: float = None
subid: str = None
update_cache_statistics(cost_fn: Callable)[source]

Update cache statistics

Parameters:

cost_fn (Callable)

update_constants_from_dict(data_dict: dict)[source]

Update the constants from a dictionary

Parameters:

data_dict (dict)

update_general_constants_from_config()[source]

Update general constants from the config file

update_ml_constants_from_config()[source]

Update constants needed for ML from config file

update_quantum_constants_from_config()[source]

Update constants needed for quantum from config file

validate_input()[source]

Validate the input fields

weight_decay: float = None
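A method such as update_constants_from_dict can be sketched with the standard dataclass machinery. This is an illustrative stand-in, not the actual implementation: it assumes the method copies only keys that match declared fields and silently ignores the rest.

```python
from dataclasses import dataclass, fields


@dataclass
class SubLoggerSketch:
    """Illustrative stand-in for MySubDataLogger's dict-update behaviour."""
    shots: int = None
    iterations: int = None
    lr: float = None

    def update_constants_from_dict(self, data_dict: dict):
        # copy only keys that match declared dataclass fields
        names = {f.name for f in fields(self)}
        for key, value in data_dict.items():
            if key in names:
                setattr(self, key, value)
```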

classes.MyModel module

class classes.MyModel.BinaryToCost(*args: Any, **kwargs: Any)[source]

Bases: Module

Convert a bit string to a cost in the forward pass; estimate the gradient in the backward pass

Parameters:

cost_fn (Callable[[list], int])

forward(x: torch.Tensor) → torch.Tensor[source]

Calculate the cost in the forward pass

Parameters:

x (torch.Tensor)

Return type:

torch.Tensor

class classes.MyModel.CostFunction(*args: Any, **kwargs: Any)[source]

Bases: Function

A custom autograd function to calculate the cost function and estimate the gradient

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

Any

static backward(ctx, grad_output)[source]
static forward(ctx, input, cost_fn)[source]
class classes.MyModel.MyModel(*args: Any, **kwargs: Any)[source]

Bases: Module

A simple feedforward neural network model for TSP

Parameters:

cost_fn (Callable[[list], int])

forward(x)[source]

Define the forward pass

class classes.MyModel.MySine(*args: Any, **kwargs: Any)[source]

Bases: Module

Returns a sine function symmetric about 0.5

forward(x)[source]
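An activation symmetric about 0.5, as described above, can be sketched with a plain function. The choice of sin(πx) is an assumption: it satisfies f(x) = f(1 − x), so it is symmetric about x = 0.5, but the module's actual formula may include scaling or offsets.

```python
import math


def my_sine(x: float) -> float:
    # sin(pi * x) is symmetric about x = 0.5: f(x) == f(1 - x)
    return math.sin(math.pi * x)
```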
class classes.MyModel.Sample_Binary(*args: Any, **kwargs: Any)[source]

Bases: Module

Return a probability in the forward pass; pass the gradient through linearly in the backward pass

forward(x: torch.Tensor) → torch.Tensor[source]
Parameters:

x (torch.Tensor)

Return type:

torch.Tensor

classes.MyModel.estimate_cost_fn_gradient(my_input: torch.Tensor, output: torch.Tensor, cost_fn: Callable[[list], int]) → torch.Tensor[source]

Estimate the gradient of the cost function by flipping each bit in turn and calculating the resulting difference in the cost function.

Parameters:
  • my_input (torch.Tensor) – The input tensor

  • output (torch.Tensor) – Contains a precalculated run of the cost function for performance reasons

  • cost_fn (Callable[[list], int]) – The cost function to be used

Returns:

The estimated gradient.

Return type:

torch.Tensor
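The one-bit-flip estimate described above can be sketched without torch. This is a plain-Python illustration, assuming a gradient sign convention of cost(flipped) − cost(original) per position and a precomputed base cost, as the parameter description suggests; the real function operates on tensors.

```python
from typing import Callable


def estimate_gradient_by_bit_flips(bits: list[int],
                                   base_cost: float,
                                   cost_fn: Callable[[list], float]) -> list[float]:
    """For each position, flip that bit, re-evaluate the cost, and take
    the difference from the precomputed base cost as the gradient estimate."""
    grad = []
    for i in range(len(bits)):
        flipped = bits.copy()
        flipped[i] = 1 - flipped[i]          # toggle one bit
        grad.append(cost_fn(flipped) - base_cost)
    return grad
```

Passing in the precomputed base cost avoids re-evaluating the (potentially expensive) cost function for the unmodified bit string, matching the performance note on the output parameter.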

Module contents