Use a camera listener

From JReality Wiki

Source file: CameraListenerExample

JavaDoc: Camera

JavaDoc: SceneGraphPath

JavaDoc: CameraListener


Run as Java webstart


In jReality, rendering is controlled by an instance of Camera. Users who wish to be informed of changes in the camera's state can implement the CameraListener interface. This example shows how to use a camera listener to control the level of detail at which a geometric object is rendered.
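The listener mechanism is the standard observer pattern. The following is a rough, self-contained sketch of that pattern in plain Java; the names SimpleCamera, CameraChangeListener, and setFieldOfView are invented stand-ins for illustration, not jReality's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a camera-change listener interface.
interface CameraChangeListener {
    void cameraChanged(double fieldOfView);   // fired whenever the camera state changes
}

// Illustrative stand-in for a camera that notifies registered listeners.
class SimpleCamera {
    private final List<CameraChangeListener> listeners = new ArrayList<>();
    private double fieldOfView = 60.0;

    void addCameraListener(CameraChangeListener l) { listeners.add(l); }

    void setFieldOfView(double fov) {
        fieldOfView = fov;
        // Notify every registered listener of the state change.
        for (CameraChangeListener l : listeners) l.cameraChanged(fov);
    }
}

public class ListenerSketch {
    public static void main(String[] args) {
        SimpleCamera cam = new SimpleCamera();
        cam.addCameraListener(fov -> System.out.println("camera changed, fov = " + fov));
        cam.setFieldOfView(45.0);   // triggers the listener
    }
}
```

In the real example, the listener's callback invokes update(), shown below, each time the camera changes.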


To run: execute the main method, and in the graphics window zoom the camera using the mouse scroll wheel. The camera listener calls update(), which calculates the expected screen extent of the sphere and chooses a tessellation of the sphere that keeps the size of the triangles roughly constant.


How it works: the example implements a simple level-of-detail algorithm, displaying different resolutions of a tessellated icosahedron based on the apparent size of the object.

  • This apparent size is calculated from the transformation which converts from object coordinates into normalized device coordinates.
  • Notice how spherePath and camPath, instances of SceneGraphPath, along with the utility method CameraUtility.getCameraToNDC(), are used to generate the full transformation from sphere space to normalized device coordinates (represented by the variable s2ndc).
  • The method P3.getNDCExtent() returns the maximal length of the images of a unit frame in object (sphere) space, when projected into normalized device coordinates.
    • The lowest resolution corresponds to an extent less than 2^-4 ≈ .06; each doubling of this value results in a further refinement of the tessellation, replacing each triangle with four smaller ones, thus keeping the screen size of the displayed triangles roughly constant.
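The level-selection arithmetic described above can be isolated from the rest of update() as a small pure function. This is a self-contained sketch, not the tutorial's own code; levelForExtent is an invented name, and numLevels is assumed to be the number of precomputed tessellations (e.g. 4):

```java
public class LevelOfDetail {
    // Maps an NDC extent to a tessellation level: level 0 at an extent
    // of 2^-4 = 0.0625; each doubling of the extent raises the level by one,
    // clamped to the range [0, numLevels - 1].
    static int levelForExtent(double size, int numLevels) {
        double logs = Math.log(size) / Math.log(2.0) + 4;  // log2(size) + 4
        if (logs < 0) logs = 0;
        if (logs > numLevels - 1) logs = numLevels - 1;
        return (int) logs;
    }

    public static void main(String[] args) {
        for (double s : new double[] {0.03, 0.0625, 0.125, 0.25, 0.5, 1.0}) {
            System.out.println("extent " + s + " -> level " + levelForExtent(s, 4));
        }
    }
}
```

Note how the clamping at both ends guarantees a valid index into the array of precomputed sphere tessellations, no matter how near or far the camera is.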


	public void update()	{
		double[] s2w = spherePath.getMatrix(null);
		double[] w2c = camPath.getInverseMatrix(null);
		double[] c2ndc = CameraUtility.getCameraToNDC(viewer);
		// the net transformation object to normalized device coordinates is the 
		// product of the three matrices above.
		double[] s2ndc = Rn.times(null, Rn.times(null, c2ndc, w2c), s2w);
		double size = P3.getNDCExtent(s2ndc);
		// lowest resolution (0) corresponds to an ndc extent of 2^-4 ~= .06;
		// each doubling of this extent moves up to the next tessellation level,
		// so the approximate screen size of triangles remains roughly constant.
		double logs = Math.log(size)/Math.log(2.0)+4;
		if (logs < 0) logs = 0;
		if (logs > (numLevels-1)) logs = (numLevels-1);
		int which = (int) logs;
		System.err.println("size = "+size+" which = "+which);
		if (which == lastLevel) return;
		sphereSGC.setGeometry(levelsOfDetailSpheres[which]);
		dls.setTubeRadius(.033*Math.pow(.5, which));
		dps.setDiffuseColor(colors[which]);
		lastLevel = which;
	}
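Note the order of the matrix product above: s2ndc = c2ndc · w2c · s2w applies the sphere-to-world transform first, because the matrices act on column vectors from the right. A self-contained sketch of that composition, using length-16 row-major arrays for 4×4 matrices; timesMatrix here is a hand-rolled illustration and not jReality's Rn.times:

```java
public class ComposeSketch {
    // Multiply two 4x4 matrices stored as length-16 row-major arrays.
    static double[] timesMatrix(double[] a, double[] b) {
        double[] c = new double[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                double sum = 0;
                for (int k = 0; k < 4; k++) sum += a[4*i + k] * b[4*k + j];
                c[4*i + j] = sum;
            }
        return c;
    }

    public static void main(String[] args) {
        // Uniform scale by 2, then a translation by (1, 0, 0).
        double[] scale     = {2,0,0,0,  0,2,0,0,  0,0,2,0,  0,0,0,1};
        double[] translate = {1,0,0,1,  0,1,0,0,  0,0,1,0,  0,0,0,1};
        // translate ∘ scale: the scale is applied first, just as s2w is
        // applied first in the s2ndc product.
        double[] combined = timesMatrix(translate, scale);
        // Apply the combined transform to the point (1, 1, 1, 1).
        double[] p = {1, 1, 1, 1};
        double[] q = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++) q[i] += combined[4*i + k] * p[k];
        System.out.println(q[0] + ", " + q[1] + ", " + q[2]);  // prints 3.0, 2.0, 2.0
    }
}
```

Composing the matrices once and then measuring the result with P3.getNDCExtent() avoids transforming every vertex just to estimate the object's screen size.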



The following images show the sphere at two distances, one further away and one closer up. Notice that the former occupies less area of the screen and is accordingly not subdivided as finely as the latter.

Far away, fewer triangles. Zoomed in, more triangles.


Previous: Use a camera path Developer Tutorial: Contents Next: Use selection capabilities