
Model Compression

Quantization: Reducing numerical precision of the model's weights, e.g., from Float64 to Int8
Pruning: Removing unnecessary parts of the model, such as individual neurons in an ANN
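A minimal sketch of post-training quantization with NumPy, assuming a simple affine (scale + zero point) mapping of Float64 weights onto the Int8 range; the helper names `quantize_int8` and `dequantize` are illustrative, not from any particular library:

```python
import numpy as np

def quantize_int8(w):
    """Affine quantization: map float weights onto the 256 int8 levels."""
    scale = (w.max() - w.min()) / 255.0          # width of one int8 step
    zero_point = np.round(-w.min() / scale) - 128  # int8 code for float 0 region
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 codes."""
    return (q.astype(np.float64) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
q, scale, zero_point = quantize_int8(w)
w_hat = dequantize(q, scale, zero_point)
```

Each weight now occupies 1 byte instead of 8, and the reconstruction error of any weight is at most one quantization step (`scale`).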
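Pruning can be sketched in the same style; this assumes the common magnitude-based variant, which zeroes out the smallest-magnitude fraction of weights (the `magnitude_prune` helper and the target sparsity level are illustrative assumptions):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(sparsity * w.size)                    # number of weights to remove
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]  # k-th smallest magnitude
    mask = np.abs(w) > threshold                   # keep only larger weights
    return w * mask

w = np.array([0.1, -0.8, 0.05, 1.2, -0.02, 0.4])
pruned = magnitude_prune(w, sparsity=0.5)
```

The resulting zeros can be stored in a sparse format or skipped at inference time; pruning whole neurons (structured pruning) instead removes entire rows/columns of a weight matrix so standard dense kernels also get faster.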
Last Updated: 2024-05-12 ; Contributors: AhmedThahir
