The jReality scene graph

This document is designed to be used in conjunction with the jReality Javadoc pages, where the available Javadoc documentation for the classes mentioned here can (and should) be found.

The jReality package provides viewers which can render 3D scene descriptions given in the form of scene graphs (directed acyclic graphs). The viewer capability is encapsulated in the Viewer interface. The most important methods here are

  • setSceneRoot(), which sets the scene graph root to be rendered,
  • setCameraPath(), which specifies the position in the scene graph of the camera used to render it, and
  • render(), which renders an image.
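
Here is a minimal sketch of this workflow, using the scene classes described below. The concrete backend class (de.jreality.jogl.Viewer) and the constructor signatures are assumptions to be checked against the Javadoc:

    import de.jreality.scene.*;

    // Build a trivial scene: a root with a camera hanging beneath it.
    SceneGraphComponent root = new SceneGraphComponent();
    root.setName("root");
    SceneGraphComponent cameraNode = new SceneGraphComponent();
    cameraNode.setName("cameraNode");
    Camera camera = new Camera();
    cameraNode.setCamera(camera);
    root.addChild(cameraNode);

    // The camera path records where in the graph the camera sits
    // (see SceneGraphPath below).
    SceneGraphPath cameraPath = new SceneGraphPath();
    cameraPath.push(root);
    cameraPath.push(cameraNode);
    cameraPath.push(camera);

    Viewer viewer = new de.jreality.jogl.Viewer(); // assumed backend class
    viewer.setSceneRoot(root);
    viewer.setCameraPath(cameraPath);
    viewer.render();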

A class which implements the Viewer interface is generically called a backend. Depending on the rendering requirements, one can attach several different backends to the same scene graph to produce different sorts of images of the same scene. There are currently two interactive backends: one based on JOGL, and a software backend written completely in Java. Additionally, one can choose from a variety of non-realtime backends for higher-resolution rendering.

Since working directly with backends can be time-consuming, jReality comes with a plugin system for assembling applications from reusable components. See this tutorial for an introduction to jReality plugins.

SceneGraphNode

The jReality scene graph is based on a relatively small number of classes, most of which inherit from the abstract superclass SceneGraphNode. The only subclass which can have children is SceneGraphComponent. The leaves of the scene graph include the following classes:

  • Transformation: describes where
  • Appearance: describes how
  • Geometry: describes what (graphics)
  • Audio source: describes what (sound)
  • Light
  • Camera

The first four classes are the most common; the final two are more specialized. An instance of SceneGraphComponent can have one each of the above nodes, in addition to a list of children (called proper children) which are also SceneGraphComponents. All these proper children plus any of the six node fields present in a SceneGraphComponent are collectively known as the children of the component. When we mean only the proper children, we say so!

In order to allow backends to optimize their performance by caching state, all the subclasses of SceneGraphNode broadcast change events to registered listeners.

A SceneGraphComponent also contains a possibly empty list of tools (instances of classes implementing the Tool interface). Tools are discussed in more detail below.

Transformation

The transformation is essentially given as a real 4×4 matrix. This matrix describes the projective transformation which converts from the local coordinate system into the parent coordinate system. This description includes euclidean as well as non-euclidean isometries, all of which are supported as equal citizens in the jReality package (see below, “Metric neutrality”). Of course other sorts of matrices can also be applied, such as skews, scales, or perspectivities. Each backend maintains some sort of matrix stack where these transformations are appropriately concatenated during traversal. jReality includes a wide variety of classes to assist in the generation of 4×4 matrices; see the Matrix, MatrixBuilder, and P3 classes.
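
For example, a sketch of the MatrixBuilder idiom (the method names below exist in de.jreality.math.MatrixBuilder, but treat exact signatures as assumptions):

    // Attach a euclidean motion to a component: translate along z,
    // then rotate 45 degrees about the z-axis. MatrixBuilder composes
    // the 4x4 matrix and writes it into the component's Transformation.
    MatrixBuilder.euclidean()
        .translate(0, 0, 3)
        .rotate(Math.PI / 4, 0, 0, 1)   // angle, then axis
        .assignTo(component);           // some SceneGraphComponent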

Appearance

The Appearance class itself is simple: essentially nothing more than a list of attributes, each consisting of a key-value pair, where the key is a String instance, and the value is some Object. But the rendering of appearances is the most complicated part of the jReality rendering process. It’s described in more detail below in the section on shading. Note: The class CommonAttributes contains definitions of the most commonly used appearance attribute keys, arranged according to their usage pattern.
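
A small sketch of setting appearance attributes; the CommonAttributes constant names are quoted from memory and should be checked against that class:

    import java.awt.Color;

    Appearance ap = new Appearance();
    // Keys are plain strings; CommonAttributes provides constants for them.
    ap.setAttribute(CommonAttributes.DIFFUSE_COLOR, Color.RED);
    ap.setAttribute(CommonAttributes.LINE_WIDTH, 2.0);
    component.setAppearance(ap);   // component: a SceneGraphComponent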

Geometry

The abstract class Geometry includes two sorts of subclasses. First are surface-based primitives such as Sphere and Cylinder, which have no points or lines, so only the polygon shader is invoked (see below). Second are point-based primitives: PointSet, its subclass IndexedLineSet, and its subclass IndexedFaceSet. These point-based classes allow users to construct arbitrary simplicial 2-complexes.

Point, line, and face sets

These classes are based, like appearances, on attribute sets organized as a list of key/value pairs. Again the keys are strings, such as “coordinates”, “normals”, “colors”, “textureCoordinates”, etc. (see the Attribute class for all the predefined attributes). Conceptually, everything can be explained using the IndexedFaceSet class. It has three sets of attributes, one each for vertices, edges, and faces (don’t ask me how we decide when to use these terms instead of points, lines, and polygons!). If it has no face attributes, then it’s really an IndexedLineSet; if it additionally has no edge attributes, then it’s really a PointSet.

The values of the attributes are instances of the class DataList. This is a very clever (but opaque) class which allows a variety of array formats to be used to represent the data. For example, a list of 20 points, each of which is a 3-vector, can be represented as double[20][3] or as double[60]; the DataList handles the access in either case.

Factories

Partly because DataLists are scary, jReality also provides a wide set of geometry factories which can be used without having to refer to the DataList class. These factories include PointSetFactory, IndexedLineSetFactory, IndexedFaceSetFactory, QuadMeshFactory, ParametricSurfaceFactory, BallAndStickFactory, and TubeFactory. All these factories function according to the factory pattern: many set…() methods, a single update() method called after a series of set calls, and finally a getGeometry() method to obtain the geometry instance which the factory produces. This geometry may subsequently be updated/changed using the same factory.
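
A sketch of this pattern with IndexedFaceSetFactory, building a single square face (the setter names are from memory; consult the factory Javadoc):

    IndexedFaceSetFactory ifsf = new IndexedFaceSetFactory();
    double[][] verts = {{0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}};
    int[][] faces = {{0, 1, 2, 3}};
    ifsf.setVertexCount(verts.length);
    ifsf.setFaceCount(faces.length);
    ifsf.setVertexCoordinates(verts);
    ifsf.setFaceIndices(faces);
    ifsf.setGenerateFaceNormals(true); // let the factory compute normals
    ifsf.update();                     // after the set...() calls
    Geometry geom = ifsf.getGeometry();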

A final subclass of Geometry is ClippingPlane. A clipping plane does not cause any geometry to be rendered. It can be either global, or local to the part of the scene graph where it occurs.

There are many static methods available in jReality for producing standard types of geometric primitives without having to explicitly use factories. See the classes Primitives, PointSetUtility, IndexedLineSetUtility, IndexedFaceSetUtility, and QuadSetUtility.

Audio Sources

These paragraphs are merely an overview of the basic audio architecture. Please consult the Developer Tutorial for more detailed information. The user tutorial contains detailed information on how to run jReality applications with audio.

The core element of the audio architecture is a scene graph node called an audio source, which can be attached to a scene graph component just like a geometry or a light.

The responsibilities of an audio source are limited to maintaining a circular sample buffer, writing samples upon request, and handing out readers for its circular buffer to consumers of samples. A reader first fulfills sample requests with samples that are already in the circular buffer; when the reader reaches the current end of the buffer, it requests more samples from the audio source.

The audio source keeps track of time in terms of the number of samples requested so far. An audio source generates a mono signal at the sample rate of its choosing; sample rate conversion, spatialization, and multi-channel processing are the responsibility of the backend.

The deliberately simple nature of audio sources is intended to encourage developers to implement audio sources that draw their samples from a wide variety of inputs.

An audio backend traverses the scene graph, picking up a sample reader and transformation matrices for each occurrence of an audio source in the graph. For each occurrence of an audio source, it creates a sound path that is responsible for modeling the propagation of sound emitted by the source. It also keeps track of a microphone in the scene.

SceneGraphPath

Any instance of SceneGraphNode can appear at multiple places in the scene graph. In this way, geometry, appearances, and transformations can be repeatedly applied to different parts of the scene graph. Of course SceneGraphComponent instances cannot appear as descendants of themselves! But otherwise reuse is allowed everywhere.

To distinguish between different appearances of the same node in the scene graph, the class SceneGraphPath describes a path in the scene graph from the root to a final element. For the purposes of this discussion, all the elements of this list but the last must be SceneGraphComponents (such that each successive one is a child of the preceding one), while the final element need only be an instance of a subclass of SceneGraphNode.

For example, the setCameraPath() method on Viewer takes such a SceneGraphPath as its argument, whose final element is a Camera instance; otherwise if it took a Camera instance alone it would not know which of possibly many occurrences of the Camera is intended. Similar considerations apply to many situations when dealing with the scene graph, for example, picking of geometric objects specifies its results using such paths.
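
A path can also be asked for the composed transformation from the root to its final element, which is how a particular occurrence gets its world coordinates; getMatrix() exists on SceneGraphPath, though treat the exact signature as an assumption:

    // Reusing root, cameraNode, and camera from the earlier sketch.
    SceneGraphPath path = new SceneGraphPath();
    path.push(root);        // a path starts at the scene root
    path.push(cameraNode);
    path.push(camera);      // final element: any SceneGraphNode

    // Product of the Transformations along the path, i.e. the world
    // coordinates of the camera; null requests a freshly allocated matrix.
    double[] world = path.getMatrix(null);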

Metric neutrality

The jReality scene graph is metric neutral. That is, it supports not only euclidean geometry but also hyperbolic and elliptic/spherical geometry, since all three of these geometries can be modeled on three-dimensional projective geometry. The group of projective transformations is generated by the euclidean transformations (a subgroup) plus the perspective transformation, so modern rendering software and hardware are all based on the projective group, and the non-euclidean geometries can then be supported with little or no change in the basic code.

The choice of metric is sometimes called the signature, since it has to do with the signature of a certain quadratic form. The signature is propagated through the scene graph using appearance attributes (see the section below on appearances). There are several jReality classes which are particularly useful for the care and feeding of metric neutrality, for example P2, P3, and Pn.
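
A sketch of selecting a metric through an appearance attribute; the attribute key and the constant names here are assumptions to be checked against CommonAttributes and Pn:

    // Declare everything below this component to live in hyperbolic space.
    Appearance ap = new Appearance();
    ap.setAttribute(CommonAttributes.SIGNATURE, Pn.HYPERBOLIC); // assumed names
    component.setAppearance(ap);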

Homogeneous coordinates

All classes and methods in jReality (knock on wood) are expected to handle coordinate data in either 3- or 4-dimensional form. It is generally well known that euclidean motions can be written as 4×4 matrices which act on 3-dimensional points (x,y,z) by extending these to (x,y,z,1) and then performing a standard matrix-vector multiplication. These 4D coordinates are sometimes called homogeneous coordinates. For more on homogeneous coordinates, consult references on projective geometry. While not strictly necessary in the euclidean setting, they are very useful there (for example in rational B-splines); they are more necessary for the support of non-euclidean geometry (especially elliptic geometry, which cannot be represented with a single non-homogeneous atlas).
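
A small self-contained illustration in plain Java (no jReality classes): a euclidean translation by (2,3,4), written as a 4×4 matrix acting on the homogeneous point (1,1,1,1):

    double[][] T = {
        {1, 0, 0, 2},
        {0, 1, 0, 3},
        {0, 0, 1, 4},
        {0, 0, 0, 1}};
    double[] p = {1, 1, 1, 1};           // the point (1,1,1), homogenized
    double[] q = new double[4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            q[i] += T[i][j] * p[j];      // q = (3,4,5,1): the translated point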

Rendering process

When rendering the scene, a backend will first typically handle the camera which belongs to its camera path. Next, the scene graph is traversed and only the global lights are collected. jReality supports directional, point, and spot lights. A similar process occurs for global clipping planes. Then, the actual rendering traversal takes place. Leaf nodes are processed before the proper children are rendered. Among the leaf nodes, the geometry is rendered last, after the appearance and transformations. Here’s how the leaf nodes are rendered.

Shading

Appearance attributes are typically consumed by the shading process. Shading occurs when a geometry node is encountered in a scene graph component. In the simplest case, if there is only one appearance in the scene graph and it occurs in the same component as the geometry, then the values of the shaders applied to this geometry are determined completely by the values of this appearance’s attributes, plus the default values specified by the shader interfaces themselves.

Inheritance

Suppose the geometry is specified by a scene graph path, and suppose that on this path there are several appearances. Then the values of the shaders are determined by the appearance attributes defined in the scene graph components appearing in this path. A shader is provided with a list of these appearances, sorted with the closest (along the path) appearances first. In the simplest case, the shader queries this list for the value of a given key, for example “diffuseColor”. The list of appearances is searched for the first occurrence of this key. If an occurrence is found, then its value is returned to the shader; otherwise a default value assigned to this key is returned.

Things are a little more complicated if the key contains the ‘.’ character (period). This serves as a delimiter. Suppose the key is such a string: “polygonShader.diffuseColor”. First the list of appearances is searched for any occurrence of the complete string. If none is found, the next-to-last substring (defined by occurrences of ‘.’) is removed, along with one ‘.’, and the search is repeated for this shorter string, in this case “diffuseColor”. The default value is returned only if all such searches find no match. This allows for a powerful (and sometimes confusing!) form of inheritance of attributes.

See also the class EffectiveAppearance, which encapsulates the inheritance mechanism described above.
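
To illustrate the mechanism, a sketch using the real Appearance calls, with the resolution behavior as described above:

    // java.awt.Color assumed imported.
    Appearance rootAp = new Appearance();            // near the scene root
    rootAp.setAttribute("diffuseColor", Color.BLUE); // generic fallback

    Appearance leafAp = new Appearance();            // in the geometry's component
    leafAp.setAttribute("polygonShader.diffuseColor", Color.RED);

    // A polygon shader below leafAp queries "polygonShader.diffuseColor":
    // the full key matches in leafAp, so RED wins. Below rootAp only, the
    // full key matches nowhere; the shortened query "diffuseColor" matches
    // in rootAp, so BLUE is used.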

Shaders

The shaders which appear in the jReality rendering process are defined as follows. First of all, all shaders are defined by interfaces. These interfaces are then used to automatically construct classes which fill their fields by querying the appearance lists as described above.

At the top level, every shading operation begins with an instance of GeometryShader, which in turn contains three sub-shaders: point, line, and polygon shaders, plus flags indicating whether points, lines, and polygons, respectively, are to be rendered. This allows for differentiated rendering of a single geometric primitive without having to repeat the geometry in order to render it as polygons, as lines, and as points. The jReality rendering process renders all three aspects of the same geometry, as desired.

Which shader interfaces provide the point, line, and polygon shaders can themselves be specified by appearance attributes. In general, for most generic rendering tasks, the default shaders are adequate. The attribute pair (“polygonShadername”, “default”) specifies this default polygon shader; other supported values include “constant”, “twoSide”, and “implode”.

Default shaders — notes

The default point shader allows the choice of rendering a 2D disk whose size is specified by the “pointShader.pointSize” attribute, or rendering spheres of a radius given by “pointShader.pointRadius” around each point. Typically the spheres look better, but the disks can be considerably smaller. In the JOGL backend, the 2D disks look like spheres (since they are represented by texture-mapped sprites) but they show artifacts when they overlap with real 3D geometry. If spheres are drawn, use the attribute prefix “pointShader.polygonShader.” to specify attributes for rendering them. Warning: since the attributes “pointSize” and “pointRadius” live in different coordinate systems (screen vs. object), switching between these two options can result in wide variation in the apparent size of the disks/spheres. If you think of a good solution, let us know. (This problem is limited to the JOGL backend.)

The default line shader, in a similar vein, allows the choice of rendering a flat Bresenham representation of the line segment with a width specified by the “lineShader.lineWidth” attribute, or rendering tubes of a radius specified by the “lineShader.tubeRadius” attribute. If tubes are drawn, use the attribute prefix “lineShader.polygonShader.” to specify attributes for rendering them.
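
A sketch of these choices as appearance attributes; the pointRadius/tubeRadius keys are quoted from the text above, while the spheresDraw/tubesDraw switches are assumptions from CommonAttributes:

    Appearance ap = new Appearance();
    // Points as 3D spheres of object-space radius 0.05 ...
    ap.setAttribute("pointShader.spheresDraw", true);   // assumed key
    ap.setAttribute("pointShader.pointRadius", 0.05);
    // ... colored via the nested polygon shader prefix:
    ap.setAttribute("pointShader.polygonShader.diffuseColor", Color.RED);

    // Lines as tubes rather than flat Bresenham segments:
    ap.setAttribute("lineShader.tubesDraw", true);      // assumed key
    ap.setAttribute("lineShader.tubeRadius", 0.02);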

The default polygon shader implements a standard shader with ambient, diffuse, and specular contributions, plus an optional 2D texture (see the Texture2D interface) which is active only if the geometry comes with texture coordinates. Play around with the ViewerApp navigator to see the full set of shader attributes available. Additionally one can specify “smoothShading” on or off; the default is on. In this case, shading values at the vertices are smoothly interpolated across faces. If vertex normals are present they are used, as are vertex colors. If smooth shading is off, then face normals and face colors are used, and the values are not interpolated across the face. For a nice effect for polyhedra (using the JOGL backend), provide only face normals and face colors but turn on smooth shading; then specular highlights look much closer to Phong shading than with traditional flat shading. Disclaimer: your mileage with this trick may vary using other backends.

Warning: there may be difficulties with some backends when non-default (or default!) shaders are used, since there is no enforcement mechanism to make sure a given backend implements a given shader interface. If it doesn’t, it will typically use the default one instead.

Text shaders

jReality supports 3D text. Each of the standard point, line and polygon shaders contains a text shader. If the corresponding attribute named “labels” is present, then this shader is used to render 3D labels at the corresponding point, edge midpoint, or face center. See DefaultTextShader for possible parameters. The geometry factories can be used to generate automatic labels.
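
For instance, a hedged sketch of attaching vertex labels through a factory; setVertexLabels() is quoted from memory and should be checked against the factory Javadoc:

    PointSetFactory psf = new PointSetFactory();
    double[][] pts = {{0,0,0}, {1,0,0}, {0,1,0}};
    psf.setVertexCount(pts.length);
    psf.setVertexCoordinates(pts);
    psf.setVertexLabels(new String[]{"A", "B", "C"}); // drawn by the text shader
    psf.update();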

Tools

Just as the Viewer interface provides an abstract description for the different backends, the Tool interface and related classes provide an abstract description of the different environments in which interaction can occur. This allows the same tool code to be used with a mouse device on a desktop or a wand device in a virtual reality environment, like the PORTAL installation of our group at TU-Berlin. There is a single configuration file describing the actual hardware devices in the environment; these devices are mapped onto common virtual devices, which are then the only objects needed to drive the actual tools. For example, the inputs of the mouse device and the wand device are both processed to create a virtual device called a “PointerTransformation”, which is then accessed by a concrete jReality tool class.

The tool system inner loop

Tools are added to the tool list in a scene graph component. A tool has two kinds of slots, activation slots and current slots. If a tool has no activation slots, then it is always active. A typical activation might be a left mouse down event. Upon activation, the tool’s activate() method is called; upon deactivation, deactivate() is called. While active, perform() is called on each execution of the tool system queue. [I have to consult with the expert before I write any more.]
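
A skeleton of a concrete tool, assuming the AbstractTool convenience class and the standard virtual device names (“PointerTransformation” appears above; “PrimaryAction” is an assumption):

    import de.jreality.scene.tool.*;

    public class MyDragTool extends AbstractTool {
        public MyDragTool() {
            super(InputSlot.getDevice("PrimaryAction")); // activation slot, e.g. left mouse
            addCurrentSlot(InputSlot.getDevice("PointerTransformation"));
        }
        public void activate(ToolContext tc)   { /* e.g. left mouse down */ }
        public void perform(ToolContext tc)    { /* called while active */ }
        public void deactivate(ToolContext tc) { /* e.g. left mouse up */ }
    }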

Picking

jReality provides picking infrastructure in the form of the PickSystem interface. There is an implementation called BruteForcePicking which does what its name implies; there is also support for something called AABBPickTrees. Brute force picking has recently been upgraded to support non-euclidean picking. Essentially picking is a projective task, but certain details about the ordering of the pick results along the pick line can be quite sensitive to non-euclidean isometries. As a result, the pick results returned from the computePick() method are sorted according to an affine coordinate along the pick line which runs from 0 to infinity through the positive values. This also avoids problems arising when geometries of different signature are present in the same scene, and the different distance functions are incompatible for sorting.
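
A sketch of driving the pick system directly; BruteForcePicking is named above, but the argument convention of computePick() (two homogeneous points spanning the pick line) is an assumption:

    PickSystem picker = new BruteForcePicking();
    picker.setSceneRoot(root);
    double[] from = {0, 0, 10, 1};    // homogeneous points on the pick line
    double[] to   = {0, 0, -10, 1};
    List<PickResult> hits = picker.computePick(from, to);
    if (!hits.isEmpty()) {
        PickResult nearest = hits.get(0); // results sorted along the pick line
    }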

As with rendering, there is separate control over picking of points, lines, and polygons. This is done by setting the attributes “point.pickable”, “line.pickable”, and “polygon.pickable” in the appearance. These are inherited as described above for shader attributes. Or, to completely skip over a component, set the attribute “pickable” in its appearance to false. If points or lines are to be picked, then the sphere, resp. tube, representation is used to perform the picking (though the rendering may be done with the other representation — a not entirely correct result). Furthermore the picked point of the sphere or tube is returned, rather than the point, resp. the closest point on the edge. This should be fixed.
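
The pickability switches from this paragraph as a sketch, with the key strings taken verbatim from the text:

    Appearance ap = new Appearance();
    ap.setAttribute("point.pickable", false); // ignore vertices when picking
    ap.setAttribute("line.pickable", false);  // ignore edges as well
    // or exclude the whole component from picking:
    ap.setAttribute("pickable", false);
    component.setAppearance(ap);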

The PickResult class provides the information about the picked point. When the pick is on a face, barycentric coordinates of the pick point in a triangle are calculated and used to interpolate the attributes, such as object coordinates, color, and texture coordinates.
