Sparsifier
A sparse vector, as opposed to a dense one, is a vector that contains many zeroes. When we speak about making a neural network sparse, we thus mean that the network's weights are mostly zeroes.
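For instance, the sparsity level of a weight vector can be measured as the fraction of its entries that are zero. A small illustrative helper (plain Python, not part of fasterai):

```python
def sparsity(weights):
    """Fraction of zero entries in a flat list of weights, as a percentage."""
    return 100 * sum(w == 0 for w in weights) / len(weights)

dense  = [0.3, -1.2, 0.7, 0.05, -0.9, 1.1, 0.4, -0.2]
sparse = [0.0,  0.0, 0.7, 0.0,  -0.9, 0.0, 0.0,  0.0]

print(sparsity(dense))   # 0.0
print(sparsity(sparse))  # 75.0
```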
With fasterai, you can do that thanks to the `Sparsifier` class.
Sparsifier
Sparsifier (model:torch.nn.modules.module.Module, granularity:str, context:str, criteria:fasterai.core.criteria.Criteria, nm:bool=False, layer_type:Type[torch.nn.modules.module.Module]=<class 'torch.nn.modules.conv.Conv2d'>)
Class providing sparsifying capabilities
| | Type | Default | Details |
| --- | --- | --- | --- |
| model | Module | | The model to sparsify |
| granularity | str | | Granularity of sparsification (e.g., 'weight', 'filter') |
| context | str | | Context for sparsification ('global' or 'local') |
| criteria | Criteria | | Criteria to determine which weights to keep |
| nm | bool | False | Whether to use N:M sparsity pattern (forces 2:4 sparsity) |
| layer_type | Type | Conv2d | Type of layers to apply sparsification to |
The `Sparsifier` class allows us to remove weights that are considered less useful than others. This is done by first creating an instance of the class, specifying:

- The `granularity`, i.e. the structure of the parameters you want to remove. Typically, we remove individual weights, vectors, kernels or even complete filters.
- The `context`, i.e. whether you want to consider each layer independently (`local`) or compare the parameters to remove across the whole network (`global`).
- The `criteria`, i.e. the way to assess the usefulness of a parameter. Common methods compare parameters by their magnitude, the lowest-magnitude ones being considered less useful.
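The difference between the two contexts can be sketched in plain Python (this is a conceptual illustration of magnitude-based pruning, not fasterai's implementation): with a `local` context each layer loses the same fraction of its own weights, while with a `global` context the threshold is computed over all layers at once, so some layers may end up sparser than others.

```python
def local_mask(weights, sparsity):
    """Zero out the `sparsity`% smallest-magnitude weights of a single layer."""
    k = int(len(weights) * sparsity / 100)           # number of weights to remove
    threshold = sorted(abs(w) for w in weights)[k]   # k-th smallest magnitude
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def global_mask(layers, sparsity):
    """Same idea, but the threshold is computed across all layers at once."""
    magnitudes = sorted(abs(w) for layer in layers for w in layer)
    threshold = magnitudes[int(len(magnitudes) * sparsity / 100)]
    return [[w if abs(w) >= threshold else 0.0 for w in layer] for layer in layers]

layer1 = [0.9, -0.1, 0.4, -0.7]
layer2 = [0.05, 0.06, 0.07, 0.8]

# Local: each layer loses half of its own weights
print(local_mask(layer1, 50))   # [0.9, 0.0, 0.0, -0.7]
print(local_mask(layer2, 50))   # [0.0, 0.0, 0.07, 0.8]

# Global: the 50% smallest weights of the whole network are removed,
# so layer2 (mostly tiny weights) ends up much sparser than layer1
print(global_mask([layer1, layer2], 50))
# [[0.9, 0.0, 0.4, -0.7], [0.0, 0.0, 0.0, 0.8]]
```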
A single layer can be pruned by using the `Sparsifier.sparsify_layer` method.
Sparsifier.sparsify_layer
Sparsifier.sparsify_layer (m:torch.nn.modules.module.Module, sparsity:float, round_to:Optional[int]=None)
Apply sparsification to a single layer
| | Type | Default | Details |
| --- | --- | --- | --- |
| m | Module | | The layer to sparsify |
| sparsity | float | | Target sparsity level (percentage) |
| round_to | Optional | None | Round to a multiple of this value |
| Returns | None | | |
Most of the time, we want to prune the whole model at once, using the `Sparsifier.sparsify_model` method, indicating the percentage of sparsity you want to apply.
Sparsifier.sparsify_model
Sparsifier.sparsify_model (sparsity:Union[float,List[float]], round_to:Optional[int]=None)
Apply sparsification to all matching layers in the model
| | Type | Default | Details |
| --- | --- | --- | --- |
| sparsity | Union | | Target sparsity level(s) |
| round_to | Optional | None | Round to a multiple of this value |
| Returns | None | | |
In some cases, you may want to impose that the remaining number of parameters be a multiple of a given value (e.g. 8); this can be done by passing the `round_to` parameter.
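The effect can be sketched as follows (plain Python, and the rounding direction is an assumption for illustration, not fasterai's exact rule): the count of weights kept after pruning is rounded to a multiple of `round_to`.

```python
def kept_weights(n_weights, sparsity, round_to=None):
    """Number of weights kept after pruning; optionally rounded down
    to a multiple of `round_to` (illustrative, not fasterai's exact rule)."""
    keep = int(n_weights * (100 - sparsity) / 100)
    if round_to is not None:
        keep = (keep // round_to) * round_to
    return keep

print(kept_weights(100, 50))              # 50
print(kept_weights(100, 45, round_to=8))  # 55 kept, rounded down to 48
```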
Also, instead of passing a single value of sparsity, a list of sparsities can be provided. In that case, each value in the list is the sparsity that will be applied to the corresponding layer.
Example: if I have a 4-layer network and want to remove half of the parameters from layers 2 and 3, I can provide the list `sparsity = [0, 50, 50, 0]`.