
Possibility of controlling the final mix #14

Open
Sonidocreativo opened this issue Feb 12, 2020 · 6 comments
Labels
enhancement New feature or request

Comments

@Sonidocreativo

Thanks for your work, it works well. Have you thought about adding controls to regulate the percentage of the match to the target, for example for the EQ and the stereo field? Regards.

@sergree
Owner

sergree commented Feb 12, 2020

Hi! Thank you for your feedback.

I'm not sure that this will work, because our library does not process audio in real time, but the entire file at once. Therefore, when the parameters change, the application has to re-process the entire file.
I'll think about it in my spare time.

But Matchering basically has this concept: if you want to change the result, select a different reference (or edit it in some way with other software). No knobs or faders. This is a mental shift.
This is described here.

@sergree added the enhancement (New feature or request) label on Feb 12, 2020
@Sonidocreativo
Author

But it would be fine if I could decide to apply only 50% of the EQ and 70% of the stereo field and process it again. It takes more time, but you can get closer to the desired result.
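In the simplest form this could even be done outside the library today: render the full Matchering result once, then crossfade it with the untouched target at the desired ratio. A minimal sketch (a workaround idea, not a Matchering feature), assuming `target` and `matched` are equal-length float arrays and the file names are hypothetical; separate per-stage percentages like 50% EQ and 70% stereo width would need hooks inside the library itself:

```python
import numpy as np
import soundfile as sf

def blend(target: np.ndarray, matched: np.ndarray, amount: float) -> np.ndarray:
    """Mix between the untouched target (amount=0.0)
    and the full Matchering result (amount=1.0)."""
    return amount * matched + (1.0 - amount) * target

# Hypothetical file names; "result.wav" is a finished Matchering render.
target, sr = sf.read("target.wav")
matched, _ = sf.read("result.wav")
sf.write("result_70.wav", blend(target, matched, 0.7), sr)
```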

@yoyololicon
Contributor

yoyololicon commented Feb 13, 2020

A handy option for when the user wants more control over the final result; sounds good to me.
We could add some parameters (EQ, stereo width, loudness, etc.) in the preview stage.
Users could decide how close they want to be to the reference, or let Matchering decide for them (100% in the current version).

> I'm not sure that this will work, because our library does not process audio in real time, but the entire file at once. Therefore, when the parameters change, the application has to re-process the entire file.

Maybe not the entire file: only the preview section needs to be processed in the preview stage.
The full Matchering run can be done afterwards.
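For illustration, a minimal sketch of that idea (an assumption about how it could work, using the `soundfile` library for I/O; file names and offsets are hypothetical): cut a short excerpt from both files and feed only those to the matching code while the user tweaks parameters:

```python
import soundfile as sf

def extract_preview(path: str, out_path: str,
                    start_s: float = 30.0, length_s: float = 15.0) -> str:
    """Write a short excerpt of an audio file for fast preview processing."""
    sr = sf.info(path).samplerate
    data, _ = sf.read(path, start=int(start_s * sr), frames=int(length_s * sr))
    sf.write(out_path, data, sr)
    return out_path

# Only these excerpts get re-processed on every parameter change;
# the full-length render happens once, after the user is satisfied.
extract_preview("target.wav", "target_preview.wav")
extract_preview("reference.wav", "reference_preview.wav")
```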

@sergree
Owner

sergree commented Feb 13, 2020

@yoyololicon Hi 🙏

There is no such thing as a stereo width parameter in Matchering. The effect of expanding the stereo width is achieved by equalizing the mid and side channels separately:
https://github.com/sergree/matchering/blob/master/matchering/stages.py#L70
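For context, mid/side processing is just a change of basis on the stereo pair, so "width" falls out of how differently the two channels end up equalized. A toy sketch of the transform itself (not the actual stages.py code):

```python
import numpy as np

def to_mid_side(stereo: np.ndarray):
    """stereo: (n, 2) float array of left/right samples."""
    mid = (stereo[:, 0] + stereo[:, 1]) / 2.0   # what L and R share
    side = (stereo[:, 0] - stereo[:, 1]) / 2.0  # what makes them differ
    return mid, side

def from_mid_side(mid: np.ndarray, side: np.ndarray) -> np.ndarray:
    """Exact inverse: L = mid + side, R = mid - side."""
    return np.stack([mid + side, mid - side], axis=1)

# Equalizing `side` louder than `mid` in some band widens the image
# in that band; that is the only "width control" Matchering has.
```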

We would still need to figure out how to correctly put parameters into these formulas, because simple multiplication by 0.5 or 0.7 is unlikely to work. Also, the current version of the library creates the preview after creating the full version, and the library is not responsible for saving state between sessions... Most of the flow would have to be rethought. I need to think about it.
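One idea worth testing (an assumption, not existing Matchering behavior): rather than scaling the matched filter coefficients by 0.5, interpolate its magnitude response toward a flat response in the dB domain, so a strength of 0.5 turns a +6 dB boost into a +3 dB boost:

```python
import numpy as np

def soften_eq_curve(matched_mag: np.ndarray, strength: float) -> np.ndarray:
    """Pull an EQ magnitude response toward unity gain.
    strength=1.0 keeps the full matched curve, 0.0 gives a flat response.
    Raising to a power is linear interpolation in dB:
    20*log10(mag**s) == s * 20*log10(mag)."""
    return matched_mag ** strength

# A 2x boost (+6.02 dB) at strength 0.5 becomes +3.01 dB:
half = soften_eq_curve(np.array([2.0]), 0.5)
print(20 * np.log10(half))  # ~[3.0103]
```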

So to make such an update we would need to edit both the core library and the web app. I still believe that if we add this functionality, we should not impose these settings on users as soon as they open the page; they should be hidden behind a separate button for those who want such advanced settings. Or we could make a separate advanced fork of https://github.com/sergree/matchering-web.

What I can say for sure is that we need to collect more feature requests like this for a while before making such breaking changes. I hope you agree.

Smells like v2.1 😅

@nicokaniak

I think it would be useful to automate the tuning of the reference track to the key of the target track. I would propose to do this in the following way (a code sketch follows below):

  1. Detect the frequency of the lowest peak in the target audio (e.g. 43 Hz).
  2. Detect the frequency of the lowest peak in the reference audio (e.g. 52 Hz).
  3. Shift the reference audio by the difference so that it matches the target's low-frequency peak (e.g. 52 - 43 = 9 Hz, so the reference audio should be shifted 9 Hz down).
  4. Commence the matching process as usual.

I know it sounds obvious, but it gives me peace of mind to explain it step by step.
I think this would be useful so that both tracks share the same low end. Also, aligning the low-frequency peaks is better than aligning higher frequencies because of the logarithmic scale: shifting based on the lowest peak won't affect the higher end much, but it will respect the low end better.
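A rough sketch of steps 1-3 (my own interpretation, not Matchering code), using numpy/scipy. One caveat: shifting by a fixed number of Hz would detune the upper harmonics, so this version resamples by the ratio of the two peaks instead, which moves 52 Hz to 43 Hz while keeping the harmonic series intact (at the cost of also changing the duration):

```python
import numpy as np
from fractions import Fraction
from scipy.signal import find_peaks, resample_poly

def lowest_peak_hz(audio: np.ndarray, sr: int, fmin: float = 20.0) -> float:
    """Frequency of the lowest prominent spectral peak above fmin."""
    mag = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    band = freqs >= fmin
    peaks, _ = find_peaks(mag[band], prominence=mag[band].max() * 0.1)
    return float(freqs[band][peaks[0]])

def tune_reference(reference: np.ndarray, target: np.ndarray, sr: int) -> np.ndarray:
    """Resample the reference so its lowest peak lands on the target's."""
    ratio = lowest_peak_hz(target, sr) / lowest_peak_hz(reference, sr)  # e.g. 43/52
    frac = Fraction(ratio).limit_denominator(1000)
    # Played back at the original rate, the resampled signal has all
    # of its frequencies scaled by `ratio`.
    return resample_poly(reference, frac.denominator, frac.numerator)
```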

@sergree
Owner

sergree commented Mar 15, 2020

Hello @nicokaniak! Thank you for your feedback and suggestion. I like your idea 🙂

I remember this equalization method from SurferEQ. It might be overkill, but I'll try experimenting with it in a Jupyter Notebook before the next update.
