Neural networks are interesting because they are a black box. We know what goes in, and we have a general sense of what comes out, but the inside, the hundreds of thousands of numbers making up the matrices that GPUs all around the world bang and smash into submission, remains opaque to us. They might as well be magic for all I know.
I came across a friend talking about TSAC (bellard.org/tsac/), an audio compression utility that uses transformers (a kind of neural network, and the T in ChatGPT) to improve upon another NN-based audio codec, and about how it is... very much not resilient to corruption, in a way that yields some interesting results.
The two tracks here (I don't know if I have the guts to call them songs) are the result of feeding the compression utility raw WAVs of my music, corrupting the compressed output (by flipping bits and bytes, replacing chunks of data, and so on), and telling it to decompress the corrupted data.
I think the results are fascinating, but more importantly, they reveal how unsuited neural networks are for tasks like this, at least for now.
This is more of an experiment than an active listening experience.