# Regularize Callback

## Overview

The `RegularizeCallback` applies structured regularization during training to encourage weight sparsity at various granularities. This is useful as a pre-pruning step: by regularizing groups of weights toward zero during training, subsequent pruning can remove more parameters with less accuracy loss.

**Key Features:**

- Supports multiple granularity levels (`'weight'`, `'vector'`, `'kernel'`, `'filter'`)
- Compatible with any criteria from `fasterai.core.criteria`
- Optional scheduling to vary regularization strength over training

**Parameters:**

- `criteria`: Importance criteria to use for computing regularization (e.g., `large_final`)
- `granularity`: Level at which to group weights (`'weight'`, `'vector'`, `'kernel'`, or `'filter'`)
- `weight`: Regularization coefficient (higher = stronger regularization)
- `layer_types`: Module types to regularize (default: `nn.Conv2d`)
- `schedule`: Optional schedule to vary regularization strength over training
- `verbose`: Print the regularization weight after each epoch

## Usage Example

Apply filter-level L1 regularization to encourage entire filters to become unimportant (making them easier to prune later):

```python
from fasterai.regularize.regularize_callback import RegularizeCallback
from fasterai.core.criteria import large_final

# Apply L1 regularization at filter granularity;
# `learn` is an existing fastai Learner
cb = RegularizeCallback(
    criteria=large_final,
    granularity='filter',
    weight=0.01,
    verbose=True
)
learn.fit(10, cbs=[cb])
```
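To make the grouping concrete, here is a minimal plain-PyTorch sketch of what a filter-grouped L1 penalty looks like: at `'filter'` granularity, each Conv2d output filter forms one group, and the penalty sums the per-filter L1 norms. This is illustrative only; fasterai derives its actual penalty from the chosen criteria, and the `filter_l1_penalty` helper below is hypothetical.

```python
import torch
import torch.nn as nn

# Conceptual sketch only; not fasterai's internal code.
def filter_l1_penalty(model: nn.Module) -> torch.Tensor:
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW);
            # at 'filter' granularity each output filter is one group
            per_filter_l1 = m.weight.flatten(1).abs().sum(dim=1)
            penalty = penalty + per_filter_l1.sum()
    return penalty

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
task_loss = model(torch.randn(2, 3, 32, 32)).mean()   # stand-in task loss
loss = task_loss + 0.01 * filter_l1_penalty(model)    # weight = 0.01
loss.backward()
```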
**Typical Workflow:**

1. Train with `RegularizeCallback` to push unimportant filter groups toward zero
2. After training, use `PruneCallback` or `Pruner` to remove the zeroed-out structures
3. Fine-tune the pruned model to recover any lost accuracy
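A condensed sketch of these three steps, assuming an existing fastai `Learner` named `learn`; the pruning step is left schematic because the exact `PruneCallback`/`Pruner` arguments depend on your installed fasterai version:

```python
from fasterai.regularize.regularize_callback import RegularizeCallback
from fasterai.core.criteria import large_final

# 1. Regularize: push unimportant filter groups toward zero while training
reg_cb = RegularizeCallback(criteria=large_final, granularity='filter', weight=0.01)
learn.fit(10, cbs=[reg_cb])

# 2. Prune: remove the near-zero structures (placeholder call; see the
#    PruneCallback/Pruner documentation for the actual arguments)
# learn.fit(5, cbs=[PruneCallback(...)])

# 3. Fine-tune: recover any lost accuracy at a reduced learning rate
learn.fit(5, lr=1e-4)
```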
## See Also
- Sparsifier - Apply sparsification after regularization pushes weights to zero
- Criteria - Importance measures that can leverage regularized weights
- SparsifyCallback - Combine with sparsification for gradual pruning