Sephora-M/chord2vec

Chord2vec

The main goal of this work is to introduce techniques that can be used for learning high-quality embedding chord vectors from sequences of polyphonic music. We aim to achieve this by finding chord representations that are useful for predicting the neighboring chords in a musical piece.

Please refer to the written report for information on notations, etc.

Linear model

This model assumes conditional independence between the notes of the context chord c given a chord d: p(\mathbf{c} =\mathbf{c}' | \mathbf{d}) = \prod_{i=1}^N p(c_i =c_i'| \mathbf{d})
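The factorization above can be sketched in a few lines of NumPy. This is not code from the repository: it assumes chords are binary vectors over N = 12 pitch classes and that each per-note probability p(c_i = 1 | d) comes from a logistic regression on d (the weights `W`, `b` and the sizes are illustrative placeholders).

```python
import numpy as np

# Illustrative sketch of the linear model's conditional-independence
# assumption. N, W, and b are hypothetical, not the repo's parameters.
N = 12  # number of pitch classes (assumption for illustration)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N, N))  # per-note weights (placeholder)
b = np.zeros(N)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_model_prob(c, d):
    """p(c | d) = prod_i p(c_i | d), with p(c_i = 1 | d) = sigmoid(w_i . d + b_i)."""
    p_on = sigmoid(W @ d + b)                      # per-note "note on" probabilities
    per_note = np.where(c == 1, p_on, 1.0 - p_on)  # Bernoulli likelihood per note
    return per_note.prod()

d = np.zeros(N); d[[0, 4, 7]] = 1.0  # C major triad as the input chord
c = np.zeros(N); c[[0, 4, 7]] = 1.0  # context chord to score
p = linear_model_prob(c, d)
```

Because the notes are modeled independently, the probabilities over all 2^N possible context chords sum to exactly 1, which makes the distribution cheap to evaluate but blind to interactions between context notes.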

Autoregressive model

This model decomposes the context chord probability distribution according to the chain rule: p(\mathbf{c} =\mathbf{c}' | \mathbf{d}) = \prod_{i=1}^N p(c_i =c_i'| \mathbf{d}, c_{<i})
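One common way to realize this chain-rule factorization is with masked (NADE-style) logits, where note i's logit may depend on d and only on the earlier context notes c_{<i}. The sketch below assumes that construction; the weight names and sizes are illustrative, not the repository's.

```python
import numpy as np

# Hypothetical NADE-style sketch of p(c | d) = prod_i p(c_i | d, c_{<i}).
# The strictly lower-triangular mask on W_c enforces that note i only
# sees c_0 .. c_{i-1}. All parameters here are placeholders.
N = 12
rng = np.random.default_rng(1)
W_d = rng.normal(scale=0.1, size=(N, N))                   # input-chord weights
W_c = np.tril(rng.normal(scale=0.1, size=(N, N)), k=-1)    # autoregressive weights
b = np.zeros(N)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def autoregressive_prob(c, d):
    """p(c | d) under the chain-rule factorization with masked logits."""
    logits = W_d @ d + W_c @ c + b       # row i uses only c_{<i} due to the mask
    p_on = sigmoid(logits)
    return np.where(c == 1, p_on, 1.0 - p_on).prod()

d = np.zeros(N); d[[0, 4, 7]] = 1.0
c = np.zeros(N); c[[0, 4, 7]] = 1.0
p = autoregressive_prob(c, d)
```

Unlike the linear model, each conditional here can shift depending on which earlier context notes are on, so the model can capture within-chord note dependencies while still defining a properly normalized distribution.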

Sequence to Sequence model

Sequence-to-sequence models learn a mapping from input sequences of varying lengths (a chord) to output sequences, also of varying lengths (a neighboring chord), using a neural network architecture known as an RNN encoder-decoder. The model estimates the conditional probability of a context chord c given an input chord d by first obtaining a fixed-length vector representation v of the input chord (given by the last state of the LSTM encoder) and then computing the probability of c with the LSTM decoder.
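The encode-then-decode computation can be sketched as follows. This is a simplified stand-in, not the repository's TensorFlow code: a plain tanh RNN cell replaces the LSTMs, chords are lists of note indices, and the start/end tokens (12 and 13) and all weights are hypothetical.

```python
import numpy as np

# Minimal encoder-decoder sketch: a plain RNN cell stands in for the
# paper's LSTMs. Vocabulary, tokens, and weights are all illustrative.
rng = np.random.default_rng(2)
V, H = 14, 8   # 12 pitch classes + assumed start (12) and end (13) tokens
Wxh = rng.normal(scale=0.1, size=(H, V))
Whh = rng.normal(scale=0.1, size=(H, H))
Why = rng.normal(scale=0.1, size=(V, H))

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0; return v

def rnn_step(h, x):
    return np.tanh(Wxh @ x + Whh @ h)

def softmax(z):
    e = np.exp(z - z.max()); return e / e.sum()

def seq2seq_log_prob(context, chord):
    """log p(context | chord): encode the input chord into a fixed-length
    state v, then decode the context notes one symbol at a time."""
    h = np.zeros(H)
    for note in chord:              # encoder: last state is the chord embedding v
        h = rnn_step(h, one_hot(note))
    log_p = 0.0
    x = one_hot(12)                 # hypothetical start-of-sequence token
    for note in context + [13]:     # 13 = hypothetical end-of-sequence token
        h = rnn_step(h, x)          # consume the previously emitted symbol
        probs = softmax(Why @ h)
        log_p += np.log(probs[note])
        x = one_hot(note)
    return log_p

lp = seq2seq_log_prob([0, 4, 7], [2, 7, 11])  # score one context given one chord
```

Because the input chord is compressed into the encoder's final state, that fixed-length vector v is exactly the chord embedding the model learns.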

About

From word2vec to chord2vec: a TensorFlow implementation
