Odds and ends
The jReality audio rendering pipeline lets you patch in two types of objects: audio processors and distance cues. Processors handle an entire buffer's worth of samples at once, so they are used for the initial preprocessing as well as the final directionless processing of samples. Distance cues are intended for distance-dependent processing and handle one sample at a time, along with individual location information for that sample.
An audio processor is initialized with an instance of SampleReader from which it reads its input. The AudioProcessor interface extends the SampleReader interface, so that audio processors can be chained; the convenience class AudioProcessorChain simplifies this.
In order to get an idea of how to write an audio processor, take a look at
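As a rough sketch of the pattern, the following uses hypothetical stand-in types (not the real jReality API; the SampleReader signature and the class names are assumptions for illustration) to show how a processor wraps an upstream reader and how processors chain:

```java
// Hypothetical stand-in mirroring the idea of jReality's SampleReader
// (the real interface may differ):
interface SampleReader {
    // Fill buffer[initialIndex .. initialIndex+samples-1]; return samples read.
    int read(float[] buffer, int initialIndex, int samples);
}

// A processor both consumes and provides samples, so processors can be chained.
class GainProcessor implements SampleReader {
    private final SampleReader in; // upstream reader this processor wraps
    private final float gain;

    GainProcessor(SampleReader in, float gain) {
        this.in = in;
        this.gain = gain;
    }

    public int read(float[] buf, int i0, int n) {
        int nRead = in.read(buf, i0, n);        // pull a buffer from upstream
        for (int i = i0; i < i0 + nRead; i++) { // then process it in place
            buf[i] *= gain;
        }
        return nRead;
    }
}

public class ProcessorDemo {
    public static void main(String[] args) {
        // Constant test signal as the initial reader.
        SampleReader source = (buf, i0, n) -> {
            for (int i = i0; i < i0 + n; i++) buf[i] = 1.0f;
            return n;
        };
        // Chain two processors; each simply wraps the previous reader.
        SampleReader chain = new GainProcessor(new GainProcessor(source, 0.5f), 0.5f);
        float[] buf = new float[4];
        chain.read(buf, 0, 4);
        System.out.println(buf[0]); // 0.25
    }
}
```

In the real library, AudioProcessorChain plays the role of the hand-built nesting shown here, composing a list of processors into a single reader.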
Implementations of the DistanceCue interface are intended to handle individual samples one at a time, along with location information. Like audio processors, distance cues are intended to be chained, and the convenience class DistanceCueChain facilitates this.
The parameters of the nextValue method may seem redundant: they include the distance r between microphone and audio source as well as x, y, and z coordinates indicating the position of the microphone relative to the source. They are not redundant, however, because r is given in microphone coordinates, while x, y, and z are given in source coordinates. Depending on the transformation matrices between source and microphone, r may not be a simple function of x, y, and z.
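A minimal sketch of a distance cue, using a hypothetical stand-in interface (the nextValue signature shown is an assumption based on the parameters described above, not the verified jReality declaration):

```java
// Hypothetical stand-in for the DistanceCue interface; the exact
// parameter list of nextValue is assumed for illustration.
interface DistanceCue {
    float nextValue(float v, float r, float x, float y, float z);
}

// Simple inverse-distance attenuation; this cue only needs r and
// ignores the source-coordinate position x, y, z.
class AttenuationCue implements DistanceCue {
    public float nextValue(float v, float r, float x, float y, float z) {
        return v / Math.max(1f, r); // clamp so nearby sources are not boosted
    }
}

public class CueDemo {
    public static void main(String[] args) {
        DistanceCue cue = new AttenuationCue();
        System.out.println(cue.nextValue(1f, 4f, 0f, 0f, 4f)); // 0.25
    }
}
```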
The DistanceCue interface itself includes a number of sample implementations. The LowPassFilter class is a slightly more complex example.
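To illustrate how chaining might work, here is a self-contained sketch of a cue chain; the interface and the chain class are hypothetical stand-ins written for this example, not the real DistanceCueChain implementation:

```java
import java.util.List;

// Hypothetical stand-in; the nextValue signature is assumed for illustration.
interface DistanceCue {
    float nextValue(float v, float r, float x, float y, float z);
}

// Sketch of chaining: each cue processes the previous cue's output sample.
class DistanceCueChain implements DistanceCue {
    private final List<DistanceCue> cues;

    DistanceCueChain(List<DistanceCue> cues) {
        this.cues = cues;
    }

    public float nextValue(float v, float r, float x, float y, float z) {
        for (DistanceCue cue : cues) {
            v = cue.nextValue(v, r, x, y, z); // feed result to the next cue
        }
        return v;
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        DistanceCue attenuation = (v, r, x, y, z) -> v / Math.max(1f, r);
        DistanceCue halve = (v, r, x, y, z) -> 0.5f * v;
        DistanceCue chain = new DistanceCueChain(List.of(attenuation, halve));
        System.out.println(chain.nextValue(1f, 2f, 0f, 0f, 2f)); // 0.25
    }
}
```

Because the chain itself implements DistanceCue, chains can be nested or passed anywhere a single cue is expected, mirroring how AudioProcessorChain composes processors.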