
API: how to send audio (input) to VST? #9

Open
drscotthawley opened this issue Apr 28, 2018 · 7 comments

Comments


drscotthawley commented Apr 28, 2018

I downloaded a free compressor plugin and started modifying your example of the Dexed synth to use the compressor, and I could query all the parameters and their names, but then...

...I've been all through the code and docs, and I still can't figure it out: How does one send audio into the VST plugin?

I see several "get" routines in the source for RenderEngine... but for a plugin like an echo or compressor, how do I "put"?

Thanks!

(little screenshot of how far I got, LOL)
[screenshot: screen shot 2018-04-27 at 10 14 38 pm]


fedden commented Apr 28, 2018

I see the problem - so with the VST you are currently using, you want to send buffers of audio in and get altered FX audio out, rather than triggering a MIDI synth to subsequently generate audio frames?

This VST host was designed for my undergrad dissertation to host a synth and create one-shot sounds from various synthesiser patches. The aim was to 'learn' an automatic VST synthesiser programmer by training a neural network to map between the MFCC features (derived from the sound) and the parameters used to make that sound.

Although it's been a year since I last looked at the source, I suspect the code would need to be modified in RenderEngine::renderPatch.

Lines 121-122 of RenderEngine.cpp show the audioBuffer being passed to the plugin by reference.
In this case, we would want to fill the audioBuffer object with input data before it goes into the plugin. I could be wrong - it's been a while since I worked with JUCE - but that is certainly where I would start:

// Turn Midi to audio via the vst.
plugin->processBlock (audioBuffer, midiNoteBuffer);
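
If someone wants to experiment before then, here is a minimal sketch of how I imagine that fill step could look, just before the call above. To be clear, inputAudio and samplePosition are hypothetical names I'm making up here - the current code has no such variables - and this assumes the caller has already decoded the input file into a mono std::vector<float>:

// Hypothetical sketch - inputAudio (a mono std::vector<float>) and
// samplePosition are assumed to be supplied by the caller; neither
// exists in the current RenderEngine code.
const int numChannels = audioBuffer.getNumChannels();
const int numSamples  = audioBuffer.getNumSamples();

for (int channel = 0; channel < numChannels; ++channel)
{
    // Copy the next block of mono input into every channel, assuming
    // samplePosition + numSamples stays within inputAudio's bounds.
    audioBuffer.copyFrom (channel, 0,
                          inputAudio.data() + samplePosition,
                          numSamples);
}

// With the buffer pre-filled, the same call processes the audio in
// place; an empty midiNoteBuffer should be fine, since a typical FX
// plugin ignores MIDI.
plugin->processBlock (audioBuffer, midiNoteBuffer);

The processed block should then come back in audioBuffer, the same way the synth's output does now.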

I have about 20 days left on my thesis, and as I said, I will be reviving RenderMan for a creative ML course I'll be doing. Until then my hands are tied! Let me know if I can point you in the right direction; if not, I'll add this to the list of features to be implemented.

Thanks for your perseverance!


drscotthawley commented Apr 28, 2018

In that case, I have a terminology suggestion: change references to "VST host" to "VSTi host", because that's what you've got (https://www.quora.com/What-is-the-difference-between-VST-and-VSTi). It would help keep people like me from getting too excited.

Thanks for pointing out the place to start in the code. If I can convince a couple local JUCE experts to help, maybe we can add audio and send a PR. How 'bout we leave this issue open, and maybe someone else in the world will contribute!

Aside: Good luck with your thesis! Sounds interesting. I'm working on deep learning as well, only with audio. And I'll be in London in late June for a couple AI conferences. I'd love to visit Goldsmiths while I'm around. I took Rebecca Fiebrink's online course recently and loved it.


fedden commented Apr 28, 2018

Apologies if I wasted your time; the references to VST have been changed, so thanks for pointing that out. Rebecca is well worth meeting if you get the chance - one of the standout lecturers for me by far!

Contributions are very welcome, but the potential to train neural networks for automatic mixing / FX is an enticing one, so I'll see what can be done in the coming months. I should add that the VSTi programming project has already been accepted by IEEE as a paper. My dissertation this year is focused on neural audio synthesis at high sample rates and in real time! :)


igorgad commented Apr 28, 2018 via email

drscotthawley commented

@fedden No worries; I'd just been wanting an audio-to-audio Python VST host for a while. That's great about your paper being accepted!

@igorgad Great to hear about your project. I pulled your repo, built it, and will see if I can help. I'm currently getting an error that seems unrelated to SWIG - I'll open an issue...


faroit commented Mar 10, 2020

@drscotthawley did you find a way to handle audio->VST from Python?

drscotthawley commented

@faroit It's been a while, but yes, we had something working once - check out @igorgad's "dpm" repo, e.g.

https://github.com/igorgad/dpm/blob/master/contrib/run_plugin.py

I keep meaning to come back to this, but so many other things to work on! Let me know if this helps and/or if you make progress with it.
