Playing around with RNNs to generate music in the style of Bach's chorales. The Bach chorale dataset is challenging because it's quite small.
The current experiment generates each of the four voices with a separate RNN. All four models receive the same 4-voice input, so it's as if they're improvising together: the music is not scripted, and each model knows the statistics of the song, but not what the other models will do.
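The four-independent-models idea can be sketched roughly as follows. This is a minimal numpy toy, not the actual TensorFlow implementation: the cell type, hidden size, and pitch encoding are all hypothetical stand-ins, but it shows the key point that every model reads the same 4-voice frame while writing only its own voice.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOICES = 4   # soprano, alto, tenor, bass
HIDDEN = 16    # hypothetical hidden size
PITCH_DIM = 4  # input at each step: one pitch value per voice

class ToyVoiceRNN:
    """Minimal Elman RNN cell; stands in for one trained per-voice model."""
    def __init__(self):
        self.Wx = rng.normal(0, 0.1, (HIDDEN, PITCH_DIM))
        self.Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
        self.Wo = rng.normal(0, 0.1, (1, HIDDEN))  # predicts its own voice only
        self.h = np.zeros(HIDDEN)

    def step(self, frame):
        # Each model sees the full 4-voice frame as input...
        self.h = np.tanh(self.Wx @ frame + self.Wh @ self.h)
        # ...but emits just one value: the next pitch of its own voice.
        return float(self.Wo @ self.h)

# One independent model per voice.
models = [ToyVoiceRNN() for _ in range(N_VOICES)]

frame = np.zeros(PITCH_DIM)  # start from silence
generated = []
for t in range(8):           # generate 8 time steps
    # All four models read the SAME previous frame; none knows in advance
    # what the others will output for this step.
    frame = np.array([m.step(frame) for m in models])
    generated.append(frame)

print(np.array(generated).shape)  # (8, 4): 8 steps, 4 voices
```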
Run via `python main.py`.
The code aims to follow best practices and to keep the use of TensorFlow maximally customizable, so that it can serve as a template for future projects.
v13_16th_notes_seq_128_transposed_attention sounds quite good. The attention layer really seems to work well.
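For reference, the kind of attention that can sit on top of an RNN's hidden states can be sketched as simple dot-product attention. This is a generic numpy illustration under assumed shapes (32 past steps, hidden size 16), not the layer from that model version:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention(states, query):
    """Dot-product attention: weight past RNN states by similarity to the query,
    then return their weighted average as a context vector."""
    scores = states @ query               # similarity of each past state, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over time steps
    return weights @ states               # context vector, shape (H,)

T, H = 32, 16                             # hypothetical: 32 past steps, hidden size 16
states = rng.normal(size=(T, H))          # stand-in for RNN hidden states
context = attention(states, states[-1])   # attend from the most recent state
print(context.shape)                      # (16,)
```

The context vector would then be fed into the output layer alongside the current hidden state, letting the model look back at earlier bars instead of relying only on its recurrent memory.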