Writing applications with audio


Executive summary

In order to write an audio application with jReality, create one or more audio sources, attach them to the scene graph, and launch a viewer with audio support. Don't forget to invoke the start() method of each audio source, or else they'll remain silent (like well-behaved children, jReality audio sources don't speak unless spoken to).


The following code snippet is a fully functional audio application:

...  // omitted package info and imports
 
public class AudioExample {
 
  public static SceneGraphComponent getAudioComponent() throws Exception {
    // create scene graph node and attach a geometry to it
    SceneGraphComponent audioComponent = new SceneGraphComponent("monolith");
    audioComponent.setGeometry(Primitives.cube());
    MatrixBuilder.euclidean().translate(0, 5, 0).scale(2, 4.5, .5).assignTo(audioComponent);
 
    // create an audio source that reads from a wav file
    InputStream audioStream = AudioExample.class.getResourceAsStream("zarathustra.wav");
    Input audioInput = Input.getInput("zarathustra", audioStream);
    final AudioSource source = new CachedAudioInputStreamSource("zarathustra", audioInput, true);
 
    // start the audio source and attach it to the scene graph
    source.start();
    audioComponent.setAudioSource(source);
 
    return audioComponent;
  }
 
  
  public static void main(String[] args) throws Exception {
    JRViewer v = new JRViewer();
    v.addBasicUI();
    v.addAudioSupport();
    v.addVRSupport();
    v.setPropertiesFile("AudioExample.jrw");
    v.addContentSupport(ContentType.TerrainAligned);
    v.setContent(getAudioComponent());
    v.startup();
  } 
}


Available audio sources

  • SampleBufferAudioSource: The simplest possible audio source; it takes a float array of samples and plays it once or repeatedly.
  • SynthSource: An abstract class that provides the basic functionality for implementing software synthesizers within jReality; simply subclass it and implement the nextSample() method. A protected field called index, of type long, keeps track of the total number of samples computed so far (see the sketch after this list).
  • AudioInputStreamSource: An audio source that reads samples from a URL.
  • CachedAudioInputStreamSource: Like AudioInputStreamSource, except that it reads all samples up front and caches them; only recommended for reasonably short samples.
  • JackSource: An audio source that reads its input from a JACK client; will only work if JACK is installed and running.
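
To give an idea of the SynthSource approach, here is a minimal sine-wave synthesizer. Treat it as a sketch: the import path, the constructor signature (a name plus a sample rate), and the float return type of nextSample() are assumptions based on the description above, so check the SynthSource class itself before copying.

import de.jreality.audio.SynthSource;
 
// A minimal sine-wave synthesizer; constructor signature and the float
// return type of nextSample() are assumptions, see SynthSource itself.
public class SineSource extends SynthSource {
  private static final double FREQUENCY = 440.0;  // concert A
  private final double sampleRate;
 
  public SineSource(String name, int sampleRate) {
    super(name, sampleRate);
    this.sampleRate = sampleRate;  // keep our own copy of the sample rate
  }
 
  public float nextSample() {
    // 'index' is the protected long field counting the samples computed so far
    return (float) Math.sin(2 * Math.PI * FREQUENCY * index / sampleRate);
  }
}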


In addition to these audio sources, there is another source, CsoundSource, which runs an instance of Csound internally. It's a great way to create amazing sounds, but sadly the Java bindings for Csound cause a lot of segfaults on Linux systems, so the Csound source is not currently an official part of jReality. If you are using a Mac, however, you can install the Java bindings for Csound (you need to compile them for double precision) and add the Csound components of jReality to your build path.


Common methods of all audio sources

All audio sources are equipped with basic transport control (start(), stop(), and pause()). One can also register audio listeners that will be notified when the transport status of an audio source changes. Upon creation, an audio source is stopped, i.e., you will need to call the start() method so that it begins rendering samples. If you have an audio application that mysteriously remains silent, you should first make sure that you've started your audio sources.
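
As a minimal illustration of the transport methods (the import path for AudioSource is an assumption; the class appears in the example at the top of this page):

import de.jreality.scene.AudioSource;
 
// Transport control on an arbitrary audio source. Remember that sources
// are created in the stopped state, so nothing plays until start().
public class TransportDemo {
  public static void demo(AudioSource source) {
    source.start();  // begin rendering samples
    source.pause();  // suspend rendering
    source.start();  // resume
    source.stop();   // back to the stopped state
  }
}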

The remaining public methods of the AudioSource base class are intended for use in audio backends and are of little concern for application developers.


Csound sources

jReality supports Csound, a powerful software synthesis package. In order to enable the Csound components of jReality, download and install the Java bindings for Csound (Ubuntu has a package called libcsnd-java) and add them to your build path (don't forget to set the path to the native library accordingly). Then add the package src-audio/de/jreality/audio/csound to your build path. To check whether this works, add src-tutorial/de/jreality/audio/MinimalExample.java to your build path and run it.

Warning: Csound support in jReality is very experimental right now. It seems to work fine on my MacBook Pro, but under Linux the Java bindings for Csound don't seem to like the massive threading that's going on in jReality, causing frequent crashes.


Configuring audio through appearances

The jReality audio rendering pipeline can be configured through appearances; it has several places (akin to aux send/return on a mixing console) where users can plug in gain control, reverberation, various filters, distance-dependent attenuation, etc. See AudioAttributes for the full list of attributes that can be set through appearances. The AudioOptions plugin should give you an idea of how to use most of these attributes.
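
For example, a gain setting for a subgraph might be attached as follows. This is only a sketch: the constant name VOLUME_GAIN_KEY and the expected value type are assumptions, so look up the actual keys in AudioAttributes before use.

import de.jreality.audio.AudioAttributes;
import de.jreality.scene.Appearance;
import de.jreality.scene.SceneGraphComponent;
 
// Sketch: attaching an audio-related attribute to a subgraph through its
// appearance. The key name VOLUME_GAIN_KEY is an assumption; see
// AudioAttributes for the actual constants and their value types.
public class AudioAppearanceDemo {
  public static void setGain(SceneGraphComponent cmp, float gain) {
    Appearance app = new Appearance();  // note: replaces any existing appearance
    app.setAttribute(AudioAttributes.VOLUME_GAIN_KEY, gain);
    cmp.setAppearance(app);
  }
}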


AudioAttributes includes two parameters whose meaning may not be immediately obvious, earshot and update cutoff. In most cases, you can safely stick to the default values, but here's their meaning for the sake of completeness:

  • Since jReality models sound propagation with a growable variable delay line, memory use may get out of control if a sound source moves far away from the listener. The earshot property determines when a sound source is considered out of earshot (the position of the source is still tracked in case it returns within earshot, but no more samples from this source are added to the delay line). earshot is measured in samples and defaults to 96000, i.e., 2 seconds at a sample rate of 48kHz.
  • The update cutoff determines the lowpass filter cutoff that is used for smoothing the relative positions of source and microphone. The default is 6Hz, which seems to work well in most situations. If your video frame rate is much higher than 25Hz, you may increase the cutoff for snappier performance (although I doubt that it'll be noticeable). If your audio gets choppy at low video frame rates, you may want to decrease the cutoff. Both parameters can be set through appearances, as shown in the sketch after this list.
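
If you do need to change the defaults, both parameters are set like any other appearance attribute. The sketch below assumes constant names EARSHOT_KEY and UPDATE_CUTOFF_KEY as well as the value types shown; check AudioAttributes for the real names before use.

import de.jreality.audio.AudioAttributes;
import de.jreality.scene.Appearance;
import de.jreality.scene.SceneGraphComponent;
 
// Sketch: overriding the two defaults discussed above. The constant names
// EARSHOT_KEY and UPDATE_CUTOFF_KEY are assumptions; look them up in
// AudioAttributes before relying on them.
public class AudioTuningDemo {
  public static void tune(SceneGraphComponent root) {
    Appearance app = new Appearance();
    // halve the earshot to 48000 samples (1 second at 48kHz)
    app.setAttribute(AudioAttributes.EARSHOT_KEY, 48000);
    // double the smoothing cutoff for snappier position updates
    app.setAttribute(AudioAttributes.UPDATE_CUTOFF_KEY, 12f);
    root.setAppearance(app);
  }
}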