Writing audio sources
All audio sources in jReality are subclasses of AudioSource. In practice, virtually all audio sources will be subclasses of RingBufferSource, a subclass of AudioSource that maintains a circular sample buffer and takes care of everything except the actual writing of samples to that buffer.
The responsibilities of an audio source are limited to maintaining a buffer for mono samples, writing samples upon request, and handing out readers for its sample buffer to consumers of samples. A reader is an instance of SampleReader. A reader first fulfills sample requests with samples that are already in the buffer; when the reader reaches the current end of the buffer, it requests more samples from the audio source. Several readers must be able to work concurrently in a thread-safe manner. If you derive your audio source from RingBufferSource, then all of this is already taken care of and you can focus on the main job of an audio source, i.e., computing audio samples.
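This division of labor can be sketched in plain Java. The class below is a self-contained analogue of the pattern, not jReality's actual API: the names `writeSamples`, `ringBuffer`, and `SineSource` are illustrative assumptions, and a real subclass of RingBufferSource would inherit the buffer and reader machinery rather than declare it itself.

```java
// Schematic analogue of the RingBufferSource pattern. All names here are
// illustrative assumptions, not jReality's actual API.
public class SineSource {
    private final float[] ringBuffer;   // circular buffer of mono samples
    private final float frequency;
    private final float sampleRate;     // the source's own sample rate
    private long sampleCount = 0;       // time, measured in samples written so far
    private int writeIndex = 0;

    public SineSource(float frequency, float sampleRate, int bufferSize) {
        this.frequency = frequency;
        this.sampleRate = sampleRate;
        this.ringBuffer = new float[bufferSize];
    }

    // The one job left to a concrete audio source: write n mono samples
    // into the circular buffer on request.
    public synchronized void writeSamples(int n) {
        for (int i = 0; i < n; i++) {
            ringBuffer[writeIndex] =
                (float) Math.sin(2 * Math.PI * frequency * sampleCount / sampleRate);
            writeIndex = (writeIndex + 1) % ringBuffer.length;
            sampleCount++;  // the source's clock advances one sample per write
        }
    }

    public long getSampleCount() { return sampleCount; }
    public float[] getRingBuffer() { return ringBuffer; }
}
```

In the real library, readers hand out samples from the shared buffer and call back into the source when they run out; the `synchronized` keyword above stands in for whatever locking the base class uses to keep concurrent readers thread-safe.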
When asked to render n samples, an audio source may choose to render more than n samples. For instance, if an audio source wraps a software synthesizer, it may be convenient to render a multiple of the buffer size of the synthesizer. Our approach implicitly reconciles buffer sizes across audio sources and output devices.
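For the synthesizer case, rendering a multiple of the synth's internal block size amounts to rounding each request up. A minimal sketch, assuming a hypothetical block size of 64 samples:

```java
// Round a sample request up to the next multiple of the synthesizer's
// internal block size, so the synth always renders whole blocks.
// BLOCK_SIZE is an arbitrary illustrative value.
public class BlockRounding {
    static final int BLOCK_SIZE = 64;

    static int samplesToRender(int requested) {
        // integer ceiling division, then back to a sample count
        return ((requested + BLOCK_SIZE - 1) / BLOCK_SIZE) * BLOCK_SIZE;
    }
}
```

A request for 100 samples would thus render 128; the 28 extra samples simply sit in the ring buffer until the next reader request, which is how differing buffer sizes get reconciled implicitly.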
The audio source keeps track of time in terms of the number of samples requested so far. An audio source generates a mono signal at the sample rate of its choosing; sample rate conversion, spatialization, and multi-channel processing are the responsibility of the backend (see The jReality audio rendering pipeline).
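Keeping time as a sample count means the source never deals in seconds internally; wall-clock time is derived only when needed. A small sketch of this bookkeeping (the class and method names are hypothetical):

```java
// An audio source's notion of time: the number of samples rendered so far,
// converted to seconds only on demand, using the source's own sample rate.
public class SampleClock {
    private long samplesRendered = 0;
    private final float sampleRate;

    public SampleClock(float sampleRate) {
        this.sampleRate = sampleRate;
    }

    public void advance(int n) {
        samplesRendered += n;
    }

    // Seconds since the source started rendering.
    public double currentTime() {
        return samplesRendered / (double) sampleRate;
    }
}
```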
The deliberately simple nature of audio sources is intended to encourage developers to implement audio sources that draw their samples from a wide variety of inputs.
In order to get started, you may want to look at example sources such as SynthSource and see how they work. They are quite short, just a few dozen lines, with all the boilerplate taken care of by RingBufferSource.