
Request support for REMI and Compound Words representations #63

Open
15805383399 opened this issue Jul 28, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

@15805383399

Recently, I found two new representations for symbolic music generation.
But using their open-source code is not as easy as using muspy.
I hope these two representations could be added to muspy's built-in representations.

REMI

https://ailabs.tw/human-interaction/pop-music-transformer/
The MIDI-like representation is essentially the event-based representation in muspy.
And of course, the REMI representation achieved better performance in their paper.
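For illustration, here is a minimal sketch of REMI-style tokenization. This is an assumption-based toy, not muspy's (or the authors') actual implementation; the token names and the 16-positions-per-bar grid are taken from the REMI paper's description:

```python
# Illustrative REMI-style tokenizer (a sketch, NOT muspy's implementation).
# Assumes a fixed grid of 16 positions per bar, as described in the REMI paper.

POSITIONS_PER_BAR = 16

def to_remi_tokens(notes):
    """Convert (onset_step, pitch, duration_steps) tuples into REMI-like tokens.

    Onsets are in grid steps; a Bar token is emitted whenever the onset
    crosses into a new bar, so positions are always relative to the bar.
    """
    tokens = []
    current_bar = -1
    for onset, pitch, duration in sorted(notes):
        bar = onset // POSITIONS_PER_BAR
        while current_bar < bar:  # emit Bar tokens up to this note's bar
            current_bar += 1
            tokens.append("Bar")
        tokens.append(f"Position_{onset % POSITIONS_PER_BAR}")
        tokens.append(f"Pitch_{pitch}")
        tokens.append(f"Duration_{duration}")
    return tokens

# Three notes: two in bar 0, one starting at bar 1.
print(to_remi_tokens([(0, 60, 4), (4, 64, 4), (16, 67, 8)]))
```

The key difference from a MIDI-like/event-based encoding is that time is expressed with metrical Bar and Position tokens rather than time-shift events.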

Compound Words

https://ailabs.tw/human-interaction/compound-word-transformer-generate-pop-piano-music-of-full-song-length/
The Compound Words representation further improves on the REMI representation.
The most important point, I think, is that the Compound Words representation makes the sequence much shorter.
And this makes it possible for the attention window to cover the whole sequence.
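The shortening can be sketched as follows. This is an illustrative toy under my own assumptions (grouping the per-note tokens of a REMI-like sequence into one compound token per event), not the authors' implementation:

```python
# Toy sketch of the Compound Words idea: instead of one token per attribute,
# all tokens describing one note are grouped into a single compound token,
# so the sequence length shrinks by roughly the group size.
# Assumes REMI-like tokens of the form "Family_value".

def to_compound_words(remi_tokens):
    """Group REMI-like tokens into compound tokens, one per note event."""
    compound = []
    current = {}
    for tok in remi_tokens:
        family, _, value = tok.partition("_")
        if family == "Bar":
            compound.append(("Bar",))
            continue
        current[family] = value
        if family == "Duration":  # Duration is the last attribute of a note
            compound.append((current["Position"], current["Pitch"], current["Duration"]))
            current = {}
    return compound

remi = ["Bar", "Position_0", "Pitch_60", "Duration_4",
        "Position_4", "Pitch_64", "Duration_4"]
cw = to_compound_words(remi)
print(len(remi), "->", len(cw))  # 7 -> 3
```

In the actual model each compound token is a tuple of fields predicted by separate output heads, but the sequence-length effect is the same.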

@salu133445 salu133445 added the enhancement New feature or request label Aug 7, 2021
@salu133445
Owner

Added in 28afd97. The implementation of the REMI representation differs from what is described in the paper, though.

@15805383399
Author

15805383399 commented Aug 15, 2021

For chord events in REMI, there is a way to recognize chords from MIDI, offered in compound-word-transformer.
They use their open-source library chorder to recognize chord events, and they also offer an example of processing the dataset used in their experiments.
But there is one problem with their code: it can only recognize 60 kinds of chords, which is not enough, especially for classical music.
In my own experiments, I found that chord events really do help the model understand chords and generate music with more precise chords, but they limit the chords used in generation to those 60 kinds (in fact, only a part of them). I think adding more recognizable chord types may help.
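To make the idea concrete, here is a minimal sketch of template-based chord recognition. This is an assumption of mine, not the chorder library's actual algorithm: it matches the pitch-class set of sounding notes against interval templates, and supporting more chord qualities is just a matter of extending the template table:

```python
# Minimal template-based chord recognizer (a sketch; NOT chorder's algorithm).
# Each template is the set of intervals above the root, in semitones mod 12.
# Adding entries to CHORD_TEMPLATES extends the recognizable chord types.

CHORD_TEMPLATES = {
    "maj":  frozenset({0, 4, 7}),
    "min":  frozenset({0, 3, 7}),
    "dim":  frozenset({0, 3, 6}),
    "aug":  frozenset({0, 4, 8}),
    "dom7": frozenset({0, 4, 7, 10}),
    "maj7": frozenset({0, 4, 7, 11}),
}

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def recognize_chord(pitches):
    """Return 'Root:quality' for a set of MIDI pitches, or None if unknown."""
    pcs = {p % 12 for p in pitches}
    for root in pcs:  # try each sounding pitch class as the root
        intervals = frozenset((p - root) % 12 for p in pcs)
        for name, template in CHORD_TEMPLATES.items():
            if intervals == template:
                return f"{NOTE_NAMES[root]}:{name}"
    return None

print(recognize_chord({60, 64, 67}))  # C:maj
print(recognize_chord({62, 65, 69}))  # D:min
```

A real recognizer would also weight notes by duration and handle inversions and passing tones, but the template table is where the "60 chord kinds" limit lives, and where more qualities could be added.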
