In vis.py I noticed that we have changed the way we select latents for a specific class:
https://github.com/alan-turing-institute/affinity-vae/blame/17e65f90445e69b12fd81d50736f75783a120968/avae/vis.py#L1144
We seem to be selecting multiple encodings and then taking their mean. Wouldn't it be better to take a single encoding for the purpose of interpolation and visualisation?
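To make the two options concrete, here is a minimal sketch of the difference. All names here are my own illustrative ones, not the actual variables in vis.py:

```python
import numpy as np

# Hypothetical data standing in for the model's latent encodings:
# 100 encodings in an 8-dimensional latent space, with 5 class labels.
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 8))
labels = rng.integers(0, 5, size=100)

target_class = 2
class_mask = labels == target_class

# Option 1 (what the linked code appears to do):
# average all encodings belonging to the class.
mean_latent = latents[class_mask].mean(axis=0)

# Option 2 (the alternative I'm suggesting):
# pick a single encoding for that class, e.g. the first one.
single_latent = latents[class_mask][0]

print(mean_latent.shape, single_latent.shape)  # both are (8,)
```

Both produce a single latent vector to interpolate from, but the mean is a synthetic point that no real sample maps to, whereas the single encoding corresponds to an actual input.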