agnapprox.utils.model#

Model-level utility functions

Module Contents#

Classes#

EnhancedJSONEncoder

Workaround to make dataclasses JSON-serializable

IntermediateLayerResults

Container that holds the results of running an inference pass

Functions#

dump_results(result, lmbd)

Write multiplier matching results to MLFlow tracking instance

set_all(model, attr, value)

Utility function to set an attribute for all modules in a model

get_feature_maps(→ Dict[str, IntermediateLayerResults])

Capture intermediate feature maps of a model's layers

topk_accuracy(→ List[float])

Computes the accuracy over the k top predictions for the specified values of k

class agnapprox.utils.model.EnhancedJSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]#

Bases: json.JSONEncoder

Workaround to make dataclasses JSON-serializable https://stackoverflow.com/questions/51286748/make-the-python-json-encoder-support-pythons-new-dataclasses/51286749#51286749

default(o)[source]#

Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
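
A minimal usage sketch: passing the encoder via json.dumps's cls argument lets dataclass instances serialize directly. The ExperimentResult dataclass below is hypothetical, for illustration only:

import json
from dataclasses import dataclass

from agnapprox.utils.model import EnhancedJSONEncoder

@dataclass
class ExperimentResult:  # hypothetical dataclass, for illustration
    name: str
    accuracy: float

print(json.dumps(ExperimentResult("lenet5", 0.98), cls=EnhancedJSONEncoder))
# {"name": "lenet5", "accuracy": 0.98}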
agnapprox.utils.model.dump_results(result: agnapprox.utils.select_multipliers.MatchingInfo, lmbd: float)[source]#

Write multiplier matching results to MLFlow tracking instance

Parameters:
  • result – Multiplier Matching Results

  • lmbd – Lambda value
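
A hedged usage sketch, assuming an active MLflow run and a MatchingInfo object produced elsewhere by agnapprox.utils.select_multipliers (its construction is not shown here):

import mlflow

from agnapprox.utils.model import dump_results

# result: agnapprox.utils.select_multipliers.MatchingInfo, obtained from
# a prior multiplier-selection step (assumed, not shown)
with mlflow.start_run():
    dump_results(result, lmbd=0.2)  # the lambda value is illustrative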

agnapprox.utils.model.set_all(model: Union[pytorch_lightning.LightningDataModule, torch.nn.Module], attr: str, value: Any)[source]#

Utility function to set an attribute for all modules in a model

Parameters:
  • model – The model to set the value on

  • attr – Attribute name

  • value – Attribute value to set
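
A minimal sketch of typical usage; the attribute name "inference_mode" is hypothetical:

import torch

from agnapprox.utils.model import set_all

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())

# Broadcast one attribute value to every module in the model.
set_all(model, "inference_mode", True)

# Presumably equivalent to iterating over all submodules:
for module in model.modules():
    setattr(module, "inference_mode", True)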

class agnapprox.utils.model.IntermediateLayerResults[source]#

Container that holds the results of running an inference pass on sample data with accurate multiplication, as well as layer metadata. For each target layer, we track:

  • fan_in – Number of incoming connections

  • features – Input activations into the layer for the sample run, squashed to a single tensor

  • outputs – Accurate results of the layer for the sample run, squashed to a single tensor

  • weights – The layer's weights tensor

fan_in: int#

features: Union[List[numpy.ndarray], numpy.ndarray]#

outputs: Union[List[numpy.ndarray], numpy.ndarray]#

weights: Optional[numpy.ndarray]#
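
A minimal sketch, assuming the class is a dataclass whose fields double as constructor arguments (shapes are illustrative only):

import numpy as np

from agnapprox.utils.model import IntermediateLayerResults

res = IntermediateLayerResults(
    fan_in=9,                    # e.g. a 3x3 kernel over one input channel
    features=np.zeros((16, 8)),  # illustrative captured input activations
    outputs=np.zeros((16, 4)),   # illustrative accurate layer outputs
    weights=None,                # weights may be absent
)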
agnapprox.utils.model.get_feature_maps(model: pytorch_lightning.LightningModule, target_modules: List[Tuple[str, torch.nn.Module]], trainer: pytorch_lightning.Trainer, datamodule: pytorch_lightning.LightningDataModule) → Dict[str, IntermediateLayerResults][source]#

Capture intermediate feature maps of a model's layers by attaching hooks and running sample data

Parameters:
  • model – The neural network model to gather IFMs from

  • target_modules – List of modules in the network for which IFMs should be gathered

  • trainer – A PyTorch Lightning Trainer instance that is used to run the inference

  • datamodule – PyTorch Lightning DataModule instance that is used to generate input sample data

Returns:

Dictionary with input IFMs, output IFMs, weights tensor, and fan-in for each target layer
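
A hedged end-to-end sketch; model and datamodule are assumed to come from an existing Lightning training setup, and the layer name in the final lookup is hypothetical:

import pytorch_lightning as pl
import torch

from agnapprox.utils.model import get_feature_maps

# model: pytorch_lightning.LightningModule and
# datamodule: pytorch_lightning.LightningDataModule are assumed to exist
# from your training setup (not shown here).
trainer = pl.Trainer(accelerator="auto", devices=1)
target_modules = [
    (name, mod)
    for name, mod in model.named_modules()
    if isinstance(mod, torch.nn.Conv2d)
]
feature_maps = get_feature_maps(model, target_modules, trainer, datamodule)

res = feature_maps["features.0"]  # hypothetical layer name
print(res.fan_in, res.features.shape, res.outputs.shape)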

agnapprox.utils.model.topk_accuracy(output: torch.Tensor, target: torch.Tensor, topk=(1,)) → List[float][source]#

Computes the accuracy over the k top predictions for the specified values of k. In top-5 accuracy, you give yourself credit for having the right answer if it appears among your top five guesses.

References:

  • https://pytorch.org/docs/stable/generated/torch.topk.html

  • https://discuss.pytorch.org/t/imagenet-example-accuracy-calculation/7840

  • https://gist.github.com/weiaicunzai/2a5ae6eac6712c70bde0630f3e76b77b

  • https://discuss.pytorch.org/t/top-k-error-calculation/48815/2

  • https://stackoverflow.com/questions/59474987/how-to-get-top-k-accuracy-in-semantic-segmentation-using-pytorch

Parameters:
  • output – The model's predictions, e.g. scores or logits (raw y_pred before normalization or class selection)

  • target – Ground-truth labels

  • topk – Tuple of k values to compute, e.g. (1, 2, 5) computes top-1, top-2, and top-5 accuracy. In top-2, a prediction counts as correct if the true label is among the model's two highest-scoring classes: if the model predicts cat and dog (0, 1) and the true label is bird (3), the example scores zero, but if the true label were cat or dog it would accumulate +1.

Returns:

List of top-k accuracies [top-1, top-2, …], one entry per value in topk
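
A self-contained example with random logits; whether the returned values are fractions or percentages depends on the implementation:

import torch

from agnapprox.utils.model import topk_accuracy

logits = torch.randn(32, 10)          # batch of 32 samples, 10 classes
labels = torch.randint(0, 10, (32,))  # ground-truth class indices
top1, top5 = topk_accuracy(logits, labels, topk=(1, 5))
print(top1, top5)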