classes package
classes.LRUCacheUnhashable module
- class classes.LRUCacheUnhashable.LRUCacheUnhashable(orig_func: Callable | None = None, maxsize: int = 50000000)[source]
Bases: object

A decorator class that caches results for functions with unhashable arguments. Uses an OrderedDict to implement a simple LRU eviction policy.
- Parameters:
orig_func (Callable | None)
maxsize (int)
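The technique the class describes can be sketched as a small decorator: serialize the (possibly unhashable) arguments into a hashable key, and use an OrderedDict's insertion order to evict the least recently used entry. This is a minimal illustration of the idea, not the package's implementation; the decorator name and the `repr()`-based key are assumptions.

```python
from collections import OrderedDict
from typing import Any, Callable


def lru_cache_unhashable(maxsize: int = 128) -> Callable:
    """Cache results of a function whose arguments may be unhashable
    (e.g. lists), by serializing them with repr() into a hashable key."""
    def decorator(func: Callable) -> Callable:
        cache = OrderedDict()

        def wrapper(*args: Any, **kwargs: Any) -> Any:
            key = repr((args, kwargs))      # hashable stand-in for unhashable args
            if key in cache:
                cache.move_to_end(key)      # mark as most recently used
                return cache[key]
            result = func(*args, **kwargs)
            cache[key] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)   # evict least recently used entry
            return result
        return wrapper
    return decorator


@lru_cache_unhashable(maxsize=2)
def total(values: list) -> int:
    return sum(values)
```

A second call with the same list is then served from the cache even though a list can never be a dict key directly.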
classes.MyDataLogger module
- class classes.MyDataLogger.MyDataLogger(runid: str = '20260216-20-06-46', graph_sub_path: Path = None, results_sub_path: Path = None, summary_results_filename: Path = None)[source]
Bases: object

Parent class: header information for a group of data runs.
- Parameters:
runid (str)
graph_sub_path (Path)
results_sub_path (Path)
summary_results_filename (Path)
- graph_sub_path: Path = None
- results_sub_path: Path = None
- runid: str = '20260216-20-06-46'
- summary_results_filename: Path = None
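The `<factory>` defaults in the signatures above suggest these loggers are dataclasses. A minimal analogue (the class name below is hypothetical, used only for illustration) shows the pattern: a timestamp-derived run id plus optional path fields, all overridable at construction.

```python
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path


@dataclass
class DataLoggerSketch:
    """Minimal analogue of MyDataLogger: header info shared by a group of runs."""
    # Default run id is generated from the current timestamp.
    runid: str = field(default_factory=lambda: datetime.now().strftime("%Y%m%d-%H-%M-%S"))
    graph_sub_path: Path = None
    results_sub_path: Path = None
    summary_results_filename: Path = None


# Only the fields you care about need to be supplied.
logger = DataLoggerSketch(results_sub_path=Path("results"))
```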
- class classes.MyDataLogger.MySubDataLogger(runid: str = '20260216-20-06-46', graph_sub_path: Path = None, results_sub_path: Path = None, summary_results_filename: Path = None, subid: str = None, detailed_results_filename: Path = None, graph_filename: Path = None, quantum: bool = None, locations: int = None, slice: float = 1.0, shots: int = None, mode: str = None, iterations: int = None, gray: bool = None, hot_start: bool = None, gradient_type: str = None, formulation: str = None, layers: int = None, std_dev: float = None, lr: float = None, weight_decay: float = None, momentum: float = None, alpha: float = None, big_a: float = None, c: float = None, eta: float = None, gamma: float = None, s: float = None, qubits: int = None, elapsed: float = None, hot_start_dist: float = None, best_dist_found: float = None, best_dist: float = None, iteration_found: int = None, cache_max_size: int = None, cache_items: int = None, cache_hits: int = None, cache_misses: int = None, index_list: list = <factory>, average_list: list = <factory>, lowest_list: list = <factory>, sliced_list: list = <factory>, average_list_all: list = <factory>, lowest_list_all: list = <factory>, sliced_cost_list_all: list = <factory>, noise: bool = None, monte_carlo: bool = False, mps: bool = None)[source]
Bases: MyDataLogger

Child class: details of each individual data run.
- Parameters:
runid (str)
graph_sub_path (Path)
results_sub_path (Path)
summary_results_filename (Path)
subid (str)
detailed_results_filename (Path)
graph_filename (Path)
quantum (bool)
locations (int)
slice (float)
shots (int)
mode (str)
iterations (int)
gray (bool)
hot_start (bool)
gradient_type (str)
formulation (str)
layers (int)
std_dev (float)
lr (float)
weight_decay (float)
momentum (float)
alpha (float)
big_a (float)
c (float)
eta (float)
gamma (float)
s (float)
qubits (int)
elapsed (float)
hot_start_dist (float)
best_dist_found (float)
best_dist (float)
iteration_found (int)
cache_max_size (int)
cache_items (int)
cache_hits (int)
cache_misses (int)
index_list (list)
average_list (list)
lowest_list (list)
sliced_list (list)
average_list_all (list)
lowest_list_all (list)
sliced_cost_list_all (list)
noise (bool)
monte_carlo (bool)
mps (bool)
- alpha: float = None
- average_list: list
- average_list_all: list
- best_dist: float = None
- best_dist_found: float = None
- big_a: float = None
- c: float = None
- cache_hits: int = None
- cache_items: int = None
- cache_max_size: int = None
- cache_misses: int = None
- calculate_parameter_numbers() → int[source]
Calculate the number of parameters in a variational quantum circuit.
- Return type:
int
- detailed_results_filename: Path = None
- elapsed: float = None
- eta: float = None
- formulation: str = None
- gamma: float = None
- gradient_type: str = None
- graph_filename: Path = None
- gray: bool = None
- hot_start: bool = None
- hot_start_dist: float = None
- index_list: list
- iteration_found: int = None
- iterations: int = None
- layers: int = None
- locations: int = None
- lowest_list: list
- lowest_list_all: list
- lr: float = None
- mode: str = None
- momentum: float = None
- monte_carlo: bool = False
- mps: bool = None
- noise: bool = None
- quantum: bool = None
- qubits: int = None
- s: float = None
- shots: int = None
- slice: float = 1.0
- sliced_cost_list_all: list
- sliced_list: list
- std_dev: float = None
- subid: str = None
- update_cache_statistics(cost_fn: Callable)[source]
Update cache statistics
- Parameters:
cost_fn (Callable)
- update_constants_from_dict(data_dict: dict)[source]
Update the constants from a dictionary
- Parameters:
data_dict (dict)
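A method like this typically copies matching keys from the dictionary onto the dataclass's fields. The sketch below is an assumption about the behavior, not the package's code; the class and field names are hypothetical.

```python
from dataclasses import dataclass, fields


@dataclass
class RunSettings:
    iterations: int = None
    lr: float = None
    shots: int = None

    def update_constants_from_dict(self, data_dict: dict) -> None:
        """Copy values from data_dict onto matching dataclass fields,
        silently ignoring keys that are not fields."""
        names = {f.name for f in fields(self)}
        for key, value in data_dict.items():
            if key in names:
                setattr(self, key, value)


s = RunSettings()
s.update_constants_from_dict({"iterations": 100, "lr": 0.01, "unknown": True})
```

Only `iterations` and `lr` are updated; the unrecognized key is dropped rather than becoming a stray attribute.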
- update_quantum_constants_from_config()[source]
Update constants needed for quantum from config file
- weight_decay: float = None
classes.MyModel module
- class classes.MyModel.BinaryToCost(*args: Any, **kwargs: Any)[source]
Bases: Module

Converts a bit string to a cost in the forward pass and estimates the gradient in the backward pass.
- Parameters:
cost_fn (Callable[[list], int])
- class classes.MyModel.CostFunction(*args: Any, **kwargs: Any)[source]
Bases: Function

A custom autograd function that calculates the cost function and estimates its gradient.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
Any
- class classes.MyModel.MyModel(*args: Any, **kwargs: Any)[source]
Bases: Module

A simple feedforward neural network model for the TSP.
- Parameters:
cost_fn (Callable[[list], int])
- class classes.MyModel.MySine(*args: Any, **kwargs: Any)[source]
Bases: Module

Returns a sine function symmetric about 0.5.
- class classes.MyModel.Sample_Binary(*args: Any, **kwargs: Any)[source]
Bases: Module

Returns a sampled probability in the forward pass; gradients pass through linearly in the backward pass.
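"Probability forward, linear backward" is the classic straight-through estimator. A minimal sketch of that technique, assuming Bernoulli sampling (the function name is hypothetical, not the module's API):

```python
import torch


def sample_binary_straight_through(probs: torch.Tensor) -> torch.Tensor:
    """Sample bits in the forward pass, but let gradients flow through
    linearly, as if the layer were the identity (straight-through estimator)."""
    bits = torch.bernoulli(probs.detach())
    # detach() removes the non-differentiable sampling step from the graph;
    # adding probs back gives d(output)/d(probs) = 1 everywhere.
    return (bits - probs).detach() + probs


p = torch.tensor([0.0, 1.0, 0.5], requires_grad=True)
out = sample_binary_straight_through(p)
out.sum().backward()
```

The forward value is the hard sample, yet `p.grad` is all ones, so upstream layers receive a usable gradient despite the discrete step.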
- classes.MyModel.estimate_cost_fn_gradient(my_input: torch.Tensor, output: torch.Tensor, cost_fn: Callable[[list], int]) → torch.Tensor[source]
Estimate the gradient of the cost function by flipping each bit in turn and measuring the resulting change in the cost.
- Parameters:
my_input (torch.Tensor) – The input tensor
output (torch.Tensor) – A precalculated cost-function result, passed in to avoid recomputation
cost_fn (Callable[[list], int]) – The cost function to be used
- Returns:
The estimated gradient.
- Return type:
torch.Tensor
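The bit-flip scheme described above can be sketched in plain Python (the function name and list-based signature are illustrative; the documented function operates on torch tensors):

```python
from typing import Callable, List


def estimate_gradient_bit_flip(bits: List[int],
                               cost_fn: Callable[[List[int]], int],
                               base_cost: int) -> List[int]:
    """Estimate d(cost)/d(bit_i) by flipping each bit in turn and taking
    the difference from the precomputed base cost."""
    grad = []
    for i in range(len(bits)):
        flipped = bits.copy()
        flipped[i] = 1 - flipped[i]          # flip one bit at a time
        grad.append(cost_fn(flipped) - base_cost)
    return grad


bits = [1, 0, 1]
cost_fn = sum                                # toy cost: number of set bits
grad = estimate_gradient_bit_flip(bits, cost_fn, cost_fn(bits))
```

Passing in the precomputed `base_cost` mirrors the `output` parameter above: it halves the number of cost-function evaluations, which matters when each evaluation is expensive.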