agnapprox.utils
Approximate model helper functions
Submodules
- agnapprox.utils.model
- agnapprox.utils.select_multipliers
Package Contents

Classes
- EnhancedJSONEncoder – Workaround to make dataclasses JSON-serializable
- IntermediateLayerResults – Container that holds the results of running an inference pass
- LayerInfo – Multiplier matching result for a single layer
- MatchingInfo – Multiplier matching result for the entire model

Functions
- single_dist_mc – Generate error mean and standard deviation using a Monte Carlo approach
- error_prediction – Generate error mean and standard deviation using the global distribution of activations and weights
- get_sample_population – Randomly select samples from a tensor that cover the receptive field of one neuron
- population_prediction – Generate a prediction of mean and standard deviation from several sampled local distributions
- to_distribution – Turn a tensor of weights/activations into a frequency distribution (i.e. build a histogram)
- error_calculation – Calculate mean and standard deviation of the observed error between accurate and approximate computation
- dump_results – Write multiplier matching results to an MLFlow tracking instance
- set_all – Utility function to set an attribute for all modules in a model
- get_feature_maps – Capture intermediate feature maps of a model's layers
- topk_accuracy – Compute the accuracy over the k top predictions for the specified values of k
- select_layer_multiplier – Select a matching approximate multiplier for a single layer
- deploy_multipliers – Deploy selected approximate multipliers to the network
- select_multipliers – Select matching approximate multipliers for all layers in a model

Attributes
- logger
- agnapprox.utils.single_dist_mc(emap: numpy.ndarray, x_dist: numpy.ndarray, w_dist: numpy.ndarray, fan_in: float, num_samples: int = int(1e5)) -> Tuple[float, float]
Generate error mean and standard deviation using Monte Carlo approach as described in: https://arxiv.org/abs/1912.00700
- Parameters:
emap – The multiplier’s error map
x_dist – Operand distribution of activations
w_dist – Operand distribution of weights
fan_in – Incoming connections for layer
num_samples – Number of Monte Carlo simulation runs. Defaults to int(1e5).
- Returns:
Mean and standard deviation for a single operation
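A minimal Monte Carlo sketch of this estimate, assuming emap is indexed as emap[x, w] and that x_dist/w_dist are probability vectors over the operand range; the fan-in scaling shown (mean scales with fan_in, standard deviation with sqrt(fan_in) for a sum of independent per-multiplication errors) is an assumption:

    import numpy as np

    def single_dist_mc_sketch(emap, x_dist, w_dist, fan_in, num_samples=int(1e5)):
        rng = np.random.default_rng()
        # Draw operand indices according to their observed frequencies
        xs = rng.choice(len(x_dist), size=num_samples, p=x_dist)
        ws = rng.choice(len(w_dist), size=num_samples, p=w_dist)
        errors = emap[xs, ws]  # look up the multiplication error for each operand pair
        return errors.mean() * fan_in, errors.std() * np.sqrt(fan_in)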
- agnapprox.utils.error_prediction(emap: numpy.ndarray, x_dist: numpy.ndarray, w_dist: numpy.ndarray, fan_in: float) -> Tuple[float, float]
Generate error mean and standard deviation using the global distribution of activations and weights
- Parameters:
emap – The multiplier’s error map
x_dist – Operand distribution of activations
w_dist – Operand distribution of weights
fan_in – Incoming connections for layer
- Returns:
Mean and standard deviation for a single operation
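A closed-form counterpart can weight the error map by the joint operand distribution; this sketch assumes the operands are independent and uses the same hypothetical fan-in scaling as above:

    import numpy as np

    def error_prediction_sketch(emap, x_dist, w_dist, fan_in):
        joint = np.outer(x_dist, w_dist)          # P(x, w) under independence
        mean = np.sum(joint * emap)               # E[error] per multiplication
        var = np.sum(joint * (emap - mean) ** 2)  # Var[error] per multiplication
        return mean * fan_in, np.sqrt(var * fan_in)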
- agnapprox.utils.get_sample_population(tensor: numpy.ndarray, num_samples: int = 512) -> numpy.ndarray
Randomly select samples from a tensor that cover the receptive field of one neuron
- Parameters:
tensor – Tensor to draw samples from
num_samples – Number of samples to draw. Defaults to 512.
- Returns:
Sampled 2D Tensor of shape [num_samples, tensor.shape[-1]]
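For illustration, a sketch of how such sampling might look: flatten all leading dimensions so each row covers one neuron's receptive field (the last tensor dimension), then draw rows at random. It assumes the tensor has at least num_samples rows:

    import numpy as np

    def get_sample_population_sketch(tensor, num_samples=512):
        flat = tensor.reshape(-1, tensor.shape[-1])
        idx = np.random.default_rng().choice(len(flat), size=num_samples, replace=False)
        return flat[idx]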
- agnapprox.utils.population_prediction(emap: numpy.ndarray, x_multidist: numpy.ndarray, w_dist: numpy.ndarray, fan_in: float) -> Tuple[float, float]
Generate prediction of mean and standard deviation using several sampled local distributions
- Parameters:
emap – The multiplier’s error map
x_multidist – Array of several operand distributions for activations
w_dist – Operand distribution of weights
fan_in – Incoming connections for layer
- Returns:
Mean and standard deviation for a single operation
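One plausible reading, sketched below: run the global prediction once per sampled local activation distribution and aggregate across populations. The averaging step is an assumption:

    import numpy as np

    def population_prediction_sketch(emap, x_multidist, w_dist, fan_in):
        means, stds = [], []
        for x_dist in x_multidist:
            joint = np.outer(x_dist, w_dist)
            mean = np.sum(joint * emap)
            var = np.sum(joint * (emap - mean) ** 2)
            means.append(mean * fan_in)
            stds.append(np.sqrt(var * fan_in))
        # Aggregation across the sampled populations is an assumption
        return float(np.mean(means)), float(np.mean(stds))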
- agnapprox.utils.to_distribution(tensor: Optional[numpy.ndarray], min_val: int, max_val: int) -> Tuple[numpy.ndarray, numpy.ndarray]
Turn tensor of weights/activations into a frequency distribution (i.e. build a histogram)
- Parameters:
tensor – Tensor to build histogram from
min_val – Lowest possible operand value in tensor
max_val – Highest possible operand value in tensor
- Returns:
Tuple of arrays, where the first contains the full numerical range between min_val and max_val (inclusive) and the second contains the relative frequency of each operand
- Raises:
ValueError – If run before feature maps have been populated by a call to utils.model.get_feature_maps
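A minimal numpy sketch of the histogram construction over the closed integer range [min_val, max_val]:

    import numpy as np

    def to_distribution_sketch(tensor, min_val, max_val):
        values = np.arange(min_val, max_val + 1)
        # Integer-centered bins so each bin counts exactly one operand value
        edges = np.arange(min_val - 0.5, max_val + 1.5)
        counts, _ = np.histogram(tensor, bins=edges)
        return values, counts / counts.sum()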
- agnapprox.utils.error_calculation(accurate: numpy.ndarray, approximate: numpy.ndarray, fan_in: float) -> Tuple[float, float]
Calculate mean and standard deviation of the observed error between accurate computation and approximate computation
- Parameters:
accurate – Accurate computation results
approximate – Approximate computation results
fan_in – Number of incoming neuron connections
- Returns:
Mean and standard deviation for a single operation
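A sketch of how this could be computed; normalizing the accumulated error by the fan-in to obtain per-operation statistics is an assumption:

    import numpy as np

    def error_calculation_sketch(accurate, approximate, fan_in):
        error = (approximate - accurate) / fan_in  # per-operation error
        return float(error.mean()), float(error.std())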
- class agnapprox.utils.EnhancedJSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)
Bases: json.JSONEncoder
Workaround to make dataclasses JSON-serializable https://stackoverflow.com/questions/51286748/make-the-python-json-encoder-support-pythons-new-dataclasses/51286749#51286749
- default(o)
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError). For example, to support arbitrary iterators, you could implement default like this:

    def default(self, o):
        try:
            iterable = iter(o)
        except TypeError:
            pass
        else:
            return list(iterable)
        # Let the base class default method raise the TypeError
        return JSONEncoder.default(self, o)
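Typical usage passes the encoder to the standard json API; the dataclass below is only an illustration:

    import dataclasses
    import json

    @dataclasses.dataclass
    class Point:
        x: int
        y: int

    json.dumps(Point(1, 2), cls=EnhancedJSONEncoder)  # '{"x": 1, "y": 2}'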
- agnapprox.utils.dump_results(result: agnapprox.utils.select_multipliers.MatchingInfo, lmbd: float)
Write multiplier matching results to MLFlow tracking instance
- Parameters:
result – Multiplier Matching Results
lmbd – Lambda value
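A hedged sketch of what the logging might look like, with result and lmbd mirroring the parameters above; the parameter and metric keys are assumptions, while mlflow.start_run, log_param and log_metric are standard MLFlow API:

    import mlflow

    with mlflow.start_run():
        mlflow.log_param("lambda", lmbd)
        mlflow.log_metric("relative_energy_consumption",
                          result.relative_energy_consumption)
        for layer in result.layers:
            mlflow.log_param(f"{layer.name}_multiplier", layer.multiplier_name)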
- agnapprox.utils.set_all(model: Union[pytorch_lightning.LightningDataModule, torch.nn.Module], attr: str, value: Any)
Utility function to set an attribute for all modules in a model
- Parameters:
model – The model to set the value on
attr – Attribute name
value – Attribute value to set
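A minimal sketch of what such a helper can look like, assuming only that the model exposes PyTorch's Module.modules() iterator:

    import torch

    def set_all_sketch(model: torch.nn.Module, attr: str, value):
        for module in model.modules():
            if hasattr(module, attr):
                setattr(module, attr, value)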
- class agnapprox.utils.IntermediateLayerResults
Container that holds the results of running an inference pass on sample data with accurate multiplication, as well as layer metadata. For each target layer, we track:
- fan_in: Number of incoming connections
- features: Input activations into the layer for the sample run, squashed to a single tensor
- outputs: Accurate results of the layer for the sample run, squashed to a single tensor
- weights: The layer's weights tensor
- fan_in: int
- features: Union[List[numpy.ndarray], numpy.ndarray]
- outputs: Union[List[numpy.ndarray], numpy.ndarray]
- weights: Optional[numpy.ndarray]
- agnapprox.utils.get_feature_maps(model: pytorch_lightning.LightningModule, target_modules: List[Tuple[str, torch.nn.Module]], trainer: pytorch_lightning.Trainer, datamodule: pytorch_lightning.LightningDataModule) -> Dict[str, IntermediateLayerResults]
Capture intermediate feature maps of a model's layers by attaching hooks and running sample data
- Parameters:
model – The neural network model to gather IFMs from
target_modules – List of modules in the network for which IFMs should be gathered
trainer – A PyTorch Lightning Trainer instance that is used to run the inference
datamodule – PyTorch Lightning DataModule instance that is used to generate input sample data
- Returns:
Dictionary with Input IFM, Output IFM, Weights Tensor and Fan-In for each target layer
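A hedged sketch of the forward-hook mechanism such a capture typically uses, with the names taken from the signature above; the aggregation into IntermediateLayerResults is omitted, and running the sample data via trainer.predict is an assumption:

    captured = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Store the layer's input activations and accurate outputs
            captured[name] = (inputs[0].detach().cpu(), output.detach().cpu())
        return hook

    handles = [module.register_forward_hook(make_hook(name))
               for name, module in target_modules]
    trainer.predict(model, datamodule=datamodule)  # run the sample data
    for handle in handles:
        handle.remove()  # detach hooks so later runs are unaffected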
- agnapprox.utils.topk_accuracy(output: torch.Tensor, target: torch.Tensor, topk=(1,)) -> List[float]
Compute the accuracy over the k top predictions for the specified values of k. In top-5 accuracy, the model is given credit whenever the right answer appears among its top five guesses.
References:
- https://pytorch.org/docs/stable/generated/torch.topk.html
- https://discuss.pytorch.org/t/imagenet-example-accuracy-calculation/7840
- https://gist.github.com/weiaicunzai/2a5ae6eac6712c70bde0630f3e76b77b
- https://discuss.pytorch.org/t/top-k-error-calculation/48815/2
- https://stackoverflow.com/questions/59474987/how-to-get-top-k-accuracy-in-semantic-segmentation-using-pytorch
- Parameters:
output – the model's predictions, e.g. scores or logits (raw y_pred before normalization or class selection)
target – the ground-truth labels
topk – tuple of k values to compute, e.g. (1, 2, 5) computes top-1, top-2 and top-5 accuracy. In top-2, an example counts as correct if the true label is among the model's two highest-ranked predictions: if the model predicts cat and dog (0, 1) and the true label is bird (3), the example scores zero; if the true label were either cat or dog, it would count.
- Returns:
List of top-k accuracies [top-1, top-2, …], one entry per value in the topk input
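A compact sketch of this computation using torch.topk, assuming output has shape [batch, num_classes] and target holds integer class labels of shape [batch]:

    import torch

    def topk_accuracy_sketch(output, target, topk=(1,)):
        maxk = max(topk)
        _, pred = output.topk(maxk, dim=1)     # [batch, maxk] predicted classes
        correct = pred.eq(target.view(-1, 1))  # broadcast compare against labels
        # For each k: fraction of examples whose label is among the top k
        return [correct[:, :k].any(dim=1).float().mean().item() for k in topk]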
- agnapprox.utils.logger
- agnapprox.utils.select_layer_multiplier(intermediate_results: agnapprox.utils.model.IntermediateLayerResults, multipliers: List[evoapproxlib.ApproximateMultiplier], max_noise: float, num_samples: int = 512) -> Tuple[str, float]
Select a matching approximate multiplier for a single layer
- Parameters:
intermediate_results – Reference input/output data generated from a model run with accurate multiplication. This is used to calibrate the layer standard deviation for the error estimate and to determine the distribution of numerical values in weights and activations.
multipliers – Approximate multiplier error maps, performance metrics and names
max_noise – Learned allowable noise parameter (sigma_l)
num_samples – Number of samples to draw from features for multi-population prediction. Defaults to 512.
- Returns:
Name and performance metric of the selected multiplier
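Conceptually, the selection can be read as picking the cheapest feasible candidate; a sketch under that assumption, where the attribute names on the candidates are illustrative:

    def select_layer_multiplier_sketch(candidates, max_noise):
        # Keep only multipliers whose predicted error stays within the noise budget
        feasible = [c for c in candidates if c.predicted_error_std <= max_noise]
        best = min(feasible, key=lambda c: c.performance_metric)
        return best.name, best.performance_metric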
- class agnapprox.utils.LayerInfo
Multiplier Matching result for a single layer
- name: str
- multiplier_name: str
- multiplier_performance_metric: float
- opcount: float
- relative_opcount(total_opcount: float)
Calculate the relative contribution of this layer to the network's total operations
- Parameters:
total_opcount – Number of operations in the entire network
- Returns:
float between 0 and 1, where 0 means the layer contributes no operations to the network's opcount and 1 means it contributes all of them
- relative_energy_consumption(metric_max: float)
Relative energy consumption of the selected approximate multiplier
- Parameters:
metric_max – Highest possible value for the performance metric (typically that of the respective accurate multiplier)
- Returns:
float between 0 and 1, where 0 means the selected multiplier consumes no energy and 1 means it consumes the maximum amount of energy
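Both ratios reduce to simple fractions of the fields above; a sketch of the method bodies under that assumption:

    def relative_opcount(self, total_opcount: float) -> float:
        # Fraction of the network's operations performed by this layer
        return self.opcount / total_opcount

    def relative_energy_consumption(self, metric_max: float) -> float:
        # Fraction of the accurate multiplier's energy cost
        return self.multiplier_performance_metric / metric_max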
- class agnapprox.utils.MatchingInfo
Multiplier Matching result for the entire model
- layers: List[LayerInfo]
- metric_max: float
- opcount: float
- property relative_energy_consumption
Relative energy consumption achieved by the current AM configuration, compared to the network without approximation
- Returns:
Sum of each layer's relative energy consumption, weighted by the layer's contribution to the overall operation count
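A sketch of the weighted sum described above, expressed with the LayerInfo helpers as the property body:

    @property
    def relative_energy_consumption(self):
        # Weight each layer's energy ratio by its share of total operations
        return sum(layer.relative_opcount(self.opcount)
                   * layer.relative_energy_consumption(self.metric_max)
                   for layer in self.layers)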
- agnapprox.utils.deploy_multipliers(model: agnapprox.nets.ApproxNet, matching_result: MatchingInfo, library)
Deploy selected approximate multipliers to network
- Parameters:
model – Model to deploy multipliers to
matching_result – Results of multiplier matching
library – Library to load Lookup tables from
- agnapprox.utils.select_multipliers(model: agnapprox.nets.ApproxNet, datamodule: pytorch_lightning.LightningDataModule, multipliers: List[evoapproxlib.ApproximateMultiplier], trainer: pytorch_lightning.Trainer) -> MatchingInfo
Select matching Approximate Multipliers for all layers in a model
- Parameters:
model – Approximate model with learned layer robustness parameters
datamodule – Data module to use for sampling runs
multipliers – Approximate multipliers to choose from (error maps, performance metrics and names)
trainer – PyTorch Lightning Trainer instance to use for sampling runs
- Returns:
Matching results for the entire model
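For orientation, a hedged end-to-end sketch of how these pieces fit together; model, datamodule, multipliers, library and the lambda value are placeholders supplied by the caller:

    import pytorch_lightning as pl
    from agnapprox.utils import select_multipliers, deploy_multipliers, dump_results

    trainer = pl.Trainer(accelerator="auto", devices=1)
    # Match a multiplier to every target layer, then apply and log the result
    result = select_multipliers(model, datamodule, multipliers, trainer)
    deploy_multipliers(model, result, library)
    dump_results(result, lmbd=0.2)  # lambda value is a placeholder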