CoreAudio driver #200

Open · wants to merge 1 commit into master

Conversation

@v3n commented Dec 14, 2015

NOTE: this is not ready to merge; this pull request is for visibility right now.

This adds OS X CoreAudio support to Audiality2, allowing native audio playback on OS X without SDL or JACK.

OS X's CoreAudio works by multi-buffering AudioQueueBuffers via AudioQueues. The driver is currently set up to use CoreAudio's internal threads, though I may change this to use application-specific threads. It is enabled by default on OS X unless JACK or SDL is found. OS X requires interleaved PCM audio, so the driver converts Audiality's internal buffers to interleaved buffers during the copy. Currently, there is no support for multichannel audio output other than stereo; that will require a little extra work to map the audio channels (unless Audiality already conforms to something similar to Apple's multichannel layout specs).
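For reference, a rough sketch of the interleaving copy described above (not the PR's actual code; `engine_render()`, `CHANNELS` and `MAXFRAMES` are hypothetical stand-ins for the driver's real render call and configuration):

```c
#include <AudioToolbox/AudioToolbox.h>

#define CHANNELS  2
#define MAXFRAMES 1024

/* Hypothetical stand-in for the engine render call that fills one
 * planar (non-interleaved) float buffer per channel. */
extern void engine_render(void *userdata,
		float planar[CHANNELS][MAXFRAMES], unsigned frames);

/* AudioQueue output callback: render a block, interleave it into the
 * AudioQueueBuffer, and hand the buffer back to the queue. */
static void coreaudio_callback(void *userdata, AudioQueueRef queue,
		AudioQueueBufferRef buf)
{
	static float planar[CHANNELS][MAXFRAMES];
	UInt32 frames = buf->mAudioDataBytesCapacity /
			(CHANNELS * sizeof(float));
	float *out = (float *)buf->mAudioData;

	if(frames > MAXFRAMES)
		frames = MAXFRAMES;
	engine_render(userdata, planar, frames);

	/* CoreAudio wants interleaved PCM: L R L R ... */
	for(UInt32 f = 0; f < frames; ++f)
		for(UInt32 c = 0; c < CHANNELS; ++c)
			out[f * CHANNELS + c] = planar[c][f];

	buf->mAudioDataByteSize = frames * CHANNELS * sizeof(float);
	AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
}
```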

If there are issues with the implementation or style of this driver, please let me know and I'll fix them.

FIXME: Playback currently skips badly due to underflow; I'm not sure whether I'm using the Audiality API incorrectly or have initialized the OS X context improperly. Changing from double to triple buffering doesn't fix anything, so I suspect I may be invoking Audiality incorrectly or unintentionally blocking the other buffers in the queue.

Thanks!

@olofson (Owner) commented Dec 14, 2015

Awesome! I've been considering adding other APIs, but as it is, JACK and SDL cover my needs, so I'm probably not going to get around to doing it myself any time soon.

(That said, I need to add SDL2 support as well, as my current project actually ends up using both SDL 1.2 and 2...)

Not sure what the underflow could be about, but one thing that crossed my mind is that AudioQueueStart() actually seems to start the device - but at that point, there are no buffers allocated or queued. My gut feeling is that AudioQueueStart() shouldn't be called until everything's really ready for action.
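For what it's worth, the ordering being suggested might look roughly like this; a sketch only, with `NUM_BUFFERS`, `BUFFER_BYTES`, `prefill()` and the stream format chosen for illustration rather than taken from the PR:

```c
#include <AudioToolbox/AudioToolbox.h>

#define NUM_BUFFERS  3
#define BUFFER_BYTES (1024 * 2 * sizeof(float))

/* The output callback (e.g. the one sketched earlier in the thread). */
extern void coreaudio_callback(void *userdata, AudioQueueRef queue,
		AudioQueueBufferRef buf);
/* Hypothetical helper that renders or zero-fills a buffer's contents
 * and sets mAudioDataByteSize. */
extern void prefill(AudioQueueBufferRef buf);

static OSStatus open_and_prime(AudioQueueRef *out_queue)
{
	AudioStreamBasicDescription fmt = {
		.mSampleRate       = 44100,
		.mFormatID         = kAudioFormatLinearPCM,
		.mFormatFlags      = kLinearPCMFormatFlagIsFloat |
		                     kLinearPCMFormatFlagIsPacked,
		.mFramesPerPacket  = 1,
		.mChannelsPerFrame = 2,
		.mBitsPerChannel   = 32,
		.mBytesPerFrame    = 2 * sizeof(float),
		.mBytesPerPacket   = 2 * sizeof(float)
	};
	OSStatus err = AudioQueueNewOutput(&fmt, coreaudio_callback, NULL,
			NULL, NULL, 0, out_queue);
	if(err != noErr)
		return err;

	/* Allocate, fill and enqueue every buffer BEFORE starting */
	for(int i = 0; i < NUM_BUFFERS; ++i)
	{
		AudioQueueBufferRef buf;
		err = AudioQueueAllocateBuffer(*out_queue, BUFFER_BYTES, &buf);
		if(err != noErr)
			return err;
		prefill(buf);
		AudioQueueEnqueueBuffer(*out_queue, buf, 0, NULL);
	}

	/* Only now is the device actually started */
	return AudioQueueStart(*out_queue, NULL);
}
```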

Thanks!

@v3n (Author) commented Dec 14, 2015

I'm fairly sure it's not the AudioQueueStart() call; that merely kicks off the audio thread and starts the callbacks. What does the A2_REALTIME flag change in the processing logic? It may not actually be underflowing, just putting non-contiguous data in the buffer.

There's a fairly good chance you'll see an OpenAL driver from me after I've finished this one. While most of A2 is pretty far over my head in terms of music theory, it's still a lot of fun to hack on.

@olofson (Owner) commented Dec 14, 2015

That's exactly my point; I'd expect weird things to happen if the callback thread is started before the buffers are in place. That said, unless you're doing both input and output, an issue like that will typically sort itself out after one or two minor glitches right at the start, but I'm not sure how CoreAudio handles this situation.

The A2_REALTIME flag affects how a context deals with API/engine synchronization. If A2_REALTIME is specified, it's assumed that the engine runs in an asynchronous context, possibly on a different CPU. This is what you should be using with any normal audio I/O driver.

If A2_REALTIME is not specified, all processing is supposed to be done in the same context as the API thread, via a2_Run() or similar. test/renderwave.c demonstrates both modes of operation. This feature is also used internally when scripts use 'wave' for pre-rendering waves at load/compile time.
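If it helps, here's a minimal sketch of the two modes as I understand them from the description above and the test programs; the a2_OpenConfig()/a2_Open() signatures, return types and the include path are assumptions and may not match the current headers exactly:

```c
#include <audiality2.h>	/* include path assumed */

/* Realtime mode: the engine runs asynchronously in the audio driver's
 * context, which is what a normal audio I/O driver should use. */
static A2_state *open_realtime(void)
{
	A2_config *cfg = a2_OpenConfig(44100, 1024, 2,
			A2_REALTIME | A2_TIMESTAMP);
	if(!cfg)
		return NULL;
	return a2_Open(cfg);
}

/* Offline mode: no A2_REALTIME, so all processing happens in the API
 * thread and the application pumps the engine itself via a2_Run() or
 * similar (see test/renderwave.c for the real calls). */
static A2_state *open_offline(void)
{
	A2_config *cfg = a2_OpenConfig(44100, 1024, 2, 0);
	if(!cfg)
		return NULL;
	return a2_Open(cfg);
}
```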

OpenAL would be cool, as it's pretty popular in games. Haven't looked deeply into it, but I think some deeper level of integration would be useful; wiring A2 voices directly to OpenAL Sources and the like. (As an alternative to the "manual" positional audio I'm doing in my projects currently.)

@v3n (Author) commented Dec 18, 2015

It's possible I'm misunderstanding how the timing in Audiality works. The driver is currently triple-buffered: the callback for a buffer is called immediately after it finishes playing, and the buffer is then re-enqueued. Is there anything I need to know about to ensure that this works as intended?

@olofson (Owner) commented Jan 4, 2016

Whoa! I thought I responded to this, but apparently not.

Anyway, timing in Audiality is derived from the timing of the buffer callbacks. (Major update of the timestamping API recently, BTW!) Each call is timestamped internally, and the message API calculates timestamps based on the last registered callback timestamp, and elapsed time since that.

When not using A2_TIMESTAMP, or calling a2_TimestampReset() (formerly a2_Now()) before each batch of messages, timestamps are adjusted so that the messages are processed with a constant delay corresponding to one audio buffer, so that you have one audio buffer's worth of "jitter tolerance" when sending messages. (That delay is configurable in the new API. If you set it to zero, you have to send messages "instantly", or they risk arriving late when you're close to the buffer boundaries. That's intended for applications that calculate their own "jitter margin" to avoid adding more latency than necessary.)

This allows you to do things like triggering each individual round on a 9000 rpm Gatling gun from the game logic, instead of implementing it as a looping sound effect, as it's typically done. (The latest timestamping API update adds a test/demo that generates a high pitched tone this way.)
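As a rough illustration of that pattern (only a2_TimestampReset() is taken from the description above; the A2_state argument type, play_round() and the way the per-round interval is applied are hypothetical):

```c
#include <audiality2.h>	/* include path assumed */

/* Hypothetical stand-in for whatever actually fires one round, e.g. an
 * a2_Play()-style message starting a gunshot program. */
extern void play_round(A2_state *st);

/* Trigger each individual round from game logic, once per logic frame. */
static void fire_gatling(A2_state *st, int rounds)
{
	/* Re-anchor the batch timestamp to "now", then send one message
	 * per round, so each round carries its own timestamp instead of
	 * all rounds landing at the next buffer boundary. */
	a2_TimestampReset(st);
	for(int i = 0; i < rounds; ++i)
	{
		play_round(st);
		/* ...advance the batch timestamp by the per-round interval
		 * here (the exact call depends on the timestamping API). */
	}
}
```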

Note that messages delivered late will only be just that; late! They'll be bumped to the start of the buffer that's about to be processed as the messages arrive, which will mess up timing a bit, but that's all. No dropouts or anything. (Unless we're talking about audio stream messages, obviously! But only the offending stream will drop out in that case; not the engine as a whole.)

BTW, it's actually quite important that the callbacks are done with steady timing, if you want highly accurate timing. Non-power-of-two buffer sizes will cause trouble on some systems, as the backends still use power-of-two buffer sizes, so the API will periodically skip or insert extra buffer callbacks to achieve the correct average stream rate.

Anyway, none of this should cause the issues you're seeing. Since there's no audio input (yet), there's only one possible blocking point, so it shouldn't be possible to get stuttering by setting the queue up incorrectly.

(Sorry about the long message. It might have been shorter, if I had more time, and wasn't so incredibly tired right now. ;-) )
