Odds and ends


The jReality audio rendering pipeline lets you patch in two types of objects: processors and distance cues. Processors are intended to handle an entire buffer's worth of samples at once, and so they are used for the initial preprocessing as well as the final directionless processing of samples. Distance cues are intended for distance-dependent processing and handle one sample at a time, along with per-sample location information.


Audio processors

An audio processor is initialized with an instance of SampleReader from which it reads its input. AudioProcessor itself extends the SampleReader interface, so that audio processors can be chained; the convenience class AudioProcessorChain simplifies building such chains.
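As a rough sketch of the pattern, a minimal gain processor might look as follows. This is only an illustration: it assumes that SampleReader declares getSampleRate(), read(float[] buffer, int offset, int nSamples), and clear(), and the GainProcessor class with its gain parameter is made up for this example.

    import de.jreality.audio.SampleReader;

    // Hypothetical processor that scales every sample by a constant gain.
    // It reads a whole buffer from its upstream SampleReader and processes
    // the samples in place, so it can sit anywhere in a processor chain.
    public class GainProcessor implements SampleReader {

        private final SampleReader in;  // upstream source of samples
        private final float gain;       // constant scale factor

        public GainProcessor(SampleReader in, float gain) {
            this.in = in;
            this.gain = gain;
        }

        public int getSampleRate() {
            return in.getSampleRate();  // pass the sample rate through
        }

        public int read(float[] buffer, int offset, int nSamples) {
            int nRead = in.read(buffer, offset, nSamples);  // fetch input
            for (int i = offset; i < offset + nRead; i++) {
                buffer[i] *= gain;                          // process in place
            }
            return nRead;
        }

        public void clear() {
            in.clear();  // reset any upstream state
        }
    }

Since such a processor is itself a SampleReader, several of them can be combined with AudioProcessorChain, so that the output of one serves as the input of the next.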

To get an idea of how to write an audio processor, take a look at LowPassProcessor.


Distance cues

Implementations of the DistanceCue interface are intended to handle individual samples one at a time, along with location information. Like audio processors, distance cues are intended to be chained, and the convenience class DistanceCueChain facilitates this.

The parameters of the nextValue method may seem redundant: they include the distance r between microphone and audio source, as well as x, y, and z coordinates giving the position of the microphone relative to the source. They are not redundant, however, because r is given in microphone coordinates, while x, y, and z are given in source coordinates. Depending on the transformation matrices between source and microphone, r may not be a simple function of x, y, and z.
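As an illustration, a cue implementing simple inverse-distance attenuation only needs the microphone-space distance r and can ignore the source-space coordinates. The sketch below assumes that nextValue(float v, float r, float x, float y, float z) takes the current sample v together with the location information described above and returns the processed sample; the class name InverseDistanceCue is made up, and any further methods the DistanceCue interface may declare are omitted.

    import de.jreality.audio.DistanceCue;

    // Hypothetical distance cue: attenuates each sample by 1/(1+r), where
    // r is the source-microphone distance in microphone coordinates.  The
    // source-space coordinates x, y, z are ignored here; a directional cue
    // would use them instead.  (Any other methods declared by DistanceCue,
    // e.g. for setting the sample rate, are omitted from this sketch.)
    public class InverseDistanceCue implements DistanceCue {

        public float nextValue(float v, float r, float x, float y, float z) {
            return v / (1f + r);  // unit gain at r = 0, decaying with distance
        }
    }

Several cues of this kind could then be combined with DistanceCueChain, which applies them to each sample in turn.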

The DistanceCue interface itself includes a number of sample implementations. The LowPassFilter class is a slightly more complex example.