This is an ongoing set of experiments, and blog posts, about removing activations. Pruning models by their mean absolute activation values (MAAV) can produce models with smaller weight matrices. That means faster models, but I think there are also some interesting things to be learned about how the models work.
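The posts below go into the details; as a rough sketch of the core idea (the network, shapes, and variable names here are my own illustration, not code from the posts): score each hidden unit by the mean absolute value of its activations over a batch, then drop the lowest-scoring units and shrink the surrounding weight matrices to match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer net: x -> W1 -> relu -> W2
W1 = rng.normal(size=(8, 4))   # 8 hidden units, 4 inputs
W2 = rng.normal(size=(3, 8))   # 3 outputs

X = rng.normal(size=(100, 4))  # a batch of inputs
H = np.maximum(X @ W1.T, 0.0)  # hidden activations, shape (100, 8)

# Score each hidden unit by its mean absolute activation
scores = np.abs(H).mean(axis=0)

# Keep the 4 highest-scoring units; drop the rest
keep = np.argsort(scores)[-4:]
W1_pruned = W1[keep, :]        # remove the rows that produce dropped units
W2_pruned = W2[:, keep]        # remove the matching downstream columns

print(W1_pruned.shape, W2_pruned.shape)  # (4, 4) (3, 4)
```

Both matrices shrink along the pruned dimension, which is where the speedup comes from: every matrix multiply that touches the hidden layer gets smaller.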
Removing Activations
An introduction to how and why I am removing activations from neural nets.
Removing Activations Part 2
Looking at what happens to the shape of models as they are reduced in size.
Removing Activations Longboy
Following on from some interesting results in part 2, this looks at deeper models and how the sizes of the different layers change as models shrink.
Next, removing activations, control
This is the one that is not yet finished. It compares removing activations with the MAAV method against other pruning methods.