CAVE/large powerwall scenario

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

CAVE/large powerwall scenario

Post by Simon » Wed 29. Apr 2009, 11:32

Hi,

I was thinking about creating a proof of concept/feasibility study on Java in CAVE/large powerwall scenarios, where distributed/clustered scene graphs with stereo capabilities are required.
According to the feature page and my investigations, jReality seems to be the only Java scene graph claiming to aim at/support (some of) these features.

I would be happy if you could give me some hints/starting points on creating a very basic application capable of displaying a scene distributed across multiple screens/viewports.

Kind regards,

Simon

// EDIT: So I just discovered src-portal, which seems to be a good starting point ... I probably missed it the first time reading through ...

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Post by steffen » Wed 29. Apr 2009, 17:20

Hi, glad to hear someone else is giving our portal backend a try! It runs successfully on our Portal (3 walls, A.R.T. tracking system) at TU Berlin, math department.

But since we have had no other people using it, there is almost no documentation yet - I will start writing a tutorial on how to set it up. Maybe you can tell us some details about the system you have...

Reading the code in src-portal will probably need some explanation: there are two types of such backends. One is based on synchronizing a master scene graph to the client machines; the other starts the same application on each client and just broadcasts the tool events from the master machine. Since implementing the second backend we have used it exclusively, for performance reasons. However, for simple scenes, scene graph distribution should work fine, though it might need some testing...

These backends are based on smrj (smrj.sf.net - outdated, but it will be updated soon). This is a library for remote Java objects, which I wrote because RMI was not fast and flexible enough. smrj allows you to create proxy objects that correspond to identical objects living on different client machines (such as proxies for a geometry or a viewer). Calling a method on such a proxy invokes the method call on all the client objects. Parameters of the method call are serialized, or, if they are themselves proxies, resolved to the corresponding local object. It also includes remote garbage collection and takes care of distributing the classpath, so the clients download jars etc. from the master.
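
To illustrate the idea with plain JDK dynamic proxies (just a conceptual sketch - this is not the smrj API): a "broadcast proxy" forwards every method call to a list of local targets, the way an smrj proxy forwards calls to the identical objects on each client machine.

Code:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

// Conceptual sketch only -- NOT the smrj API. A "broadcast proxy" forwards
// every method call to a list of target objects, the way an smrj proxy
// forwards calls to the identical objects living on each client machine.
public class BroadcastProxyDemo {

    interface Viewer { void render(String scene); }

    @SuppressWarnings("unchecked")
    static <T> T broadcastProxy(Class<T> iface, final List<? extends T> targets) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class[]{iface},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    Object result = null;
                    for (T t : targets) result = m.invoke(t, args); // fan the call out
                    return result; // smrj additionally (de)serializes parameters and results
                }
            });
    }

    public static void main(String[] args) {
        Viewer left  = new Viewer() { public void render(String s) { System.out.println("left wall: "  + s); } };
        Viewer right = new Viewer() { public void render(String s) { System.out.println("right wall: " + s); } };
        Viewer all = broadcastProxy(Viewer.class, Arrays.asList(left, right));
        all.render("teapot"); // one call, executed on every "client"
    }
}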

Hope that helps a bit...

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Post by Simon » Wed 29. Apr 2009, 20:35

Hi,
steffen wrote: Maybe you can tell us some details about the system you have...
We are currently using C++ software based on OpenSG as a distributed scene graph and are quite curious how Java-based approaches perform in such a setup.

I would first try it on our CAVE (3 sides + floor), because jReality seems to support this use case (in theory) out of the box.
The second "display" is a curved powerwall driven by three projectors. Because of the curved surface it's not that easy to set up the viewports etc...

Both displays are driven by a multi-CPU/GPU shared-memory machine running IA64 Linux, which will probably cause most of the problems when trying to get things started, because there is no Java on there yet :)

Documentation, or at least more notes like http://www3.math.tu-berlin.de/jreality/ ... lity_Notes, would be nice, because I have not yet been able to get an example/tutorial running.

So an example of loading a scene (e.g. from a VRML file) that can then be explored would be a great starting point. The next step would involve some programmatic interaction with the scene graph.

Thanks in advance

Simon

gunn
Posts: 323
Joined: Thu 14. Dec 2006, 09:56
Location: TU Berlin
Contact:

Post by gunn » Thu 30. Apr 2009, 12:17

Hi Simon,

Regarding sample code to test out loading a VRML file: have you checked out the developer's tutorial? The first tutorial provides exactly what you want. You can find it at:

http://www3.math.tu-berlin.de/jreality/ ... hp/Intro01

The whole developer's tutorial can be found at http://www3.math.tu-berlin.de/jreality/ ... r_Tutorial.

jReality can currently only read VRML 1.0 (it writes both 1.0 and 2.0, however). It also reads other common formats like OBJ and 3DS. See http://www3.math.tu-berlin.de/jreality/ ... le_formats.
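
In code, loading and displaying such a file boils down to just a few lines - a minimal sketch (method names from memory; see the Intro01 tutorial for the exact API):

Code:

import de.jreality.reader.Readers;
import de.jreality.scene.SceneGraphComponent;
import de.jreality.ui.viewerapp.ViewerApp;
import de.jreality.util.Input;

// Minimal sketch: read a scene file (VRML 1.0, OBJ, 3DS, ...) and display it.
// Method names are from memory -- check the Intro01 tutorial for the exact API.
public class LoadSceneExample {
    public static void main(String[] args) throws Exception {
        SceneGraphComponent content = Readers.read(Input.getInput(args[0]));
        ViewerApp.display(content);
    }
}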

Note that you can run the tutorial examples as webstarts (see the link "Run as Java Webstart" on the above tutorial page), assuming you have Java 1.5 or higher installed on your machine. Simply choose "File->Load File" when running the ViewerApp version, or "File->Load Content" when running the PluginViewerApp version, and select your VRML 1.0 file to load.

Hope this helps,

Charles
jReality core developer

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Post by steffen » Thu 30. Apr 2009, 18:27

As I understood it, the question was how to run this tutorial in a CAVE... I am working on it, coming soon ;-)

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Post by Simon » Fri 1. May 2009, 20:24

steffen wrote: As I understood it, the question was how to run this tutorial in a CAVE... I am working on it, coming soon ;-)
Exactly ... perfect!

We've already set up Java (on IA64, which unexpectedly worked like a charm... sadly there is no Java Web Start for IA64), JOGL, and jReality, and tested the normal ViewerVR application.

We also tried passing -Djreality.jogl.quadBufferedStereo=true and enabling stereo via the menu, which caused our active-stereo shutter glasses to flicker on one eye ... so I am curious: has anyone tried active stereo yet, or only passive systems?

We also tried loading some larger models as a kind of benchmark ... is there an easy way to display statistics like FPS and the number of triangles in ViewerVR?

My final question (for the moment) concerns loading fully textured models ... when loading a model, it seems that only the geometry is set up and all materials/textures are omitted. We tried several different 3DS files...

Kind regards

Simon

gunn
Posts: 323
Joined: Thu 14. Dec 2006, 09:56
Location: TU Berlin
Contact:

Post by gunn » Fri 1. May 2009, 20:48

Simon wrote: We also tried loading some larger models as a kind of benchmark ... is there an easy way to display statistics like FPS and the number of triangles in ViewerVR?
The PluginViewerApp class (which you can invoke using the tutorial de.jreality.tutorial.intro.Intro01) has a plugin for this purpose. When the graphics window has the focus, type 'i' and you will see a display of the memory usage, the polygon count (including polygons used for tubes and spheres!), and the frame rate (for the JOGL backend only). The two frame rates shown are the pure rendering time and the clock frame rate. You can also attach this plugin to any plugin-based viewer, such as the ContentViewerVR class (but if you want the key shortcut 'i' you'll also need to include the ViewerKeyListenerPlugin).
Simon wrote: My final question (for the moment) concerns loading fully textured models ... when loading a model, it seems that only the geometry is set up and all materials/textures are omitted. We tried several different 3DS files...
The 3DS reader does attempt to read materials and textures ... whether it succeeds is of course another matter. In our group we're mostly concerned with content creation rather than content processing, so our readers tend to be written for a specific project and are in some cases not complete implementations. I would expect the VRML (1.0) reader to be more complete and robust than the 3DS one.
jReality core developer

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Post by steffen » Sat 2. May 2009, 00:08

Great to hear that Java, JOGL and jReality run! Do you work with Eclipse there?
Simon wrote: We also tried passing -Djreality.jogl.quadBufferedStereo=true and enabling stereo via the menu, which caused our active-stereo shutter glasses to flicker on one eye ... so I am curious: has anyone tried active stereo yet, or only passive systems?
No, I don't think so. But I guess the camera was not set to stereo. What you describe sounds promising, though - try to enable stereo; it needs to be set on the camera. In ViewerVR there is a menu entry Camera->Toggle stereo. Maybe that works...
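
For reference, doing the same programmatically would look roughly like this (a sketch - the setter names are from memory, check de.jreality.scene.Camera for the exact API):

Code:

import de.jreality.scene.Camera;

// Sketch: enable stereo on an existing camera. For quad-buffered (active)
// stereo the viewer must additionally be started with
// -Djreality.jogl.quadBufferedStereo=true. Setter names are from memory.
public class StereoSetup {
    public static void enableStereo(Camera camera) {
        camera.setStereo(true);
        camera.setEyeSeparation(0.065); // eye separation in scene units (example value)
    }
}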

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Post by Simon » Sat 2. May 2009, 12:38

gunn wrote: When the graphics window has the focus, type 'i' and you will see a display of the memory usage, the polygon count (including polygons used for tubes and spheres!), and the frame rate (for the JOGL backend only).
OK, I'll try that next week.
gunn wrote: The 3DS reader does attempt to read materials and textures ... whether it succeeds is of course another matter. In our group we're mostly concerned with content creation rather than content processing, so our readers tend to be written for a specific project and are in some cases not complete implementations. I would expect the VRML (1.0) reader to be more complete and robust than the 3DS one.
I see ... VRML 1.0 is also kind of limited, but I'll try to get some models in that format.
steffen wrote: Do you work with Eclipse there?
At the moment we just start things via the command line. We will probably install Eclipse next ... We also tried modifying the start(remote)client scripts for our use case (one machine instead of four), but nothing seemed to happen (no output, no window ...)
steffen wrote: But I guess the camera was not set to stereo. What you describe sounds promising, though - try to enable stereo; it needs to be set on the camera. In ViewerVR there is a menu entry Camera->Toggle stereo. Maybe that works...
You seem to have missed that I already tried that ;) Passing only the option (obviously) changes nothing until I enable stereo through the menu ... without glasses the two views seem to be rendered correctly, but the stereo sync signal or something appears to be wrong, and the shutter glasses flicker.

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Post by steffen » Tue 5. May 2009, 16:06

Hi, I have started a tutorial, see http://www3.math.tu-berlin.de/jreality/ ... t_Tutorial. I suggest you first set up Eclipse (there is also a tutorial for that: http://www3.math.tu-berlin.de/jreality/ ... p_tutorial) and try to get ViewerVR running with 3 displays (left, center, right). Note that I have made some changes to the code and the config files; please check out the latest version from SVN. For a fullscreen configuration, edit the file portal.props.

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Post by Simon » Tue 5. May 2009, 20:56

Great!

I just did some (slow and painful) dry runs using nested X servers to represent center, left, and right, and everything seemed to work.
I'll try to install Eclipse and run things on our CAVE in the next few days...
Then we'll have to figure out why stereo is flickering (our great ATI hardware?! :D) ...

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Re: CAVE/large powerwall scenario

Post by steffen » Thu 7. May 2009, 17:59

How is it going with stereo? Please let me know about your tracking system - do you use trackd?

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Re: CAVE/large powerwall scenario

Post by Simon » Fri 8. May 2009, 00:16

steffen wrote: How is it going with stereo?
Stereo seems to have worked all along :) ... we had just been victims of several broken shutter glasses :D

So I am happy to tell you that we ran ViewerVR on our CAVE (3 sides + floor, stereo)!
steffen wrote: Please let me know about your tracking system - do you use trackd?
We will try to include input tomorrow, because it is required for our next tests (moving through the scene, ...).
We have head tracking on one pair of our shutter glasses and a tracked wand, both using trackd. We discovered the XML files describing the input/tool configuration but have not yet figured out the best approach.

As a first step we tried normal keyboard+mouse input, but this didn't seem to work (we forwarded the master X window ...).

Because we are not that familiar with trackd, we will consult a colleague of ours. We have to figure out the correct ports/identifiers/mappings to use head tracking and the wand's tiny joystick for movement.

So we would be happy about any hints on how to approach this (especially a simple way to move around and to enable statistics/FPS).

steffen
Posts: 186
Joined: Fri 16. Jun 2006, 13:30
Location: TU Berlin
Contact:

Re: CAVE/large powerwall scenario

Post by steffen » Fri 8. May 2009, 12:06

Simon wrote: So I am happy to tell you that we ran ViewerVR on our CAVE (3 sides + floor, stereo)!
Great! Glad to hear that... What shape is your floor screen - square? Then I guess the pictures won't match up correctly, right?
We will implement a way to configure different screen settings via the config files, but that may take a week or so... You should be able to get at least the three walls set up correctly with what we have, though.
Simon wrote: As a first step we tried normal keyboard+mouse input, but this didn't seem to work (we forwarded the master X window ...).
With the setup described in the tutorial, the master viewer should have WASD navigation and jump on the space key. The mouse should also work, but that viewer component needs to have focus. Try enabling, for instance, the rotate tool in the ViewerVR control frame and dragging the geometry with the left mouse button. You should also see a small ball in the scene (on the different walls too) at the pick point of the mouse.

About trackd: if you are using trackd anyway, it should be fairly easy to get it working. First, look into the trackd folder; there is a small C file that needs to be compiled and linked as a JNI library. Don't ask me where to get the trackd API - maybe ask Mechdyne... Then compile the native library, add it to the Java library path, and adapt de.jreality.toolsystem.raw.Trackd (in src-portal) to the shared memory segments to which your trackd installation is configured (we may have the standard settings there, but I am not sure). The default config is /etc/trackd.conf, I think. Then Trackd.java should work and print out the trackd data in a JFrame.
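
The Java side of that glue has roughly this shape (an invented illustration - the real class is de.jreality.toolsystem.raw.Trackd, and the method names below are made up):

Code:

// Hypothetical sketch of JNI glue for trackd -- the real class is
// de.jreality.toolsystem.raw.Trackd in src-portal; all names here are invented.
public class TrackdJNI {
    static {
        // libtrackdjni.so (built from the small C file in the trackd folder)
        // must be on the java.library.path
        System.loadLibrary("trackdjni");
    }
    public native void init(int trackerShmKey, int controllerShmKey);
    public native int getNumSensors();
    // writes the 4x4 sensor matrix (16 floats) into the given array
    public native void getMatrix(float[] matrix16, int sensorIndex);
}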

That output may help when adapting the "toolconfig-portal.xml" file (if it is necessary).

When that works, look at DeviceTrackd; you need to adapt the shared memory segments there (yes, this should go into toolconfig.xml). Then take a look at the calibrate method in DevicePortalTrackd - we convert feet to meters and translate the coordinate system; maybe you need something else...
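
The kind of conversion done there is roughly the following (a sketch of the idea, not the actual DevicePortalTrackd code; the offset value is a made-up example):

Code:

// Sketch of the kind of calibration done in DevicePortalTrackd: convert a
// tracked sensor position from feet to meters and shift the origin so it
// matches the scene's coordinate system. NOT the actual jReality code;
// the offset below is a made-up example.
public class CalibrationSketch {
    static final double FEET_TO_METERS = 0.3048;

    // position = {x, y, z} as reported by trackd, in feet
    public static double[] calibrate(double[] position) {
        double[] result = new double[3];
        for (int i = 0; i < 3; i++) result[i] = position[i] * FEET_TO_METERS;
        result[1] -= 1.24; // example: move the origin from the floor to screen center
        return result;
    }
}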

Then add the VM flag "-Dde.jreality.scene.tool.Config=portal" when you run ViewerVR (and you can also remove the LocalViewer flag; maybe it is faster then).

Ah, and take a look at the class PortalCoordinateSystem (ignore the portal scale for now - it is required for non-Euclidean stuff) and adapt the screen sizes to your setup...

Sorry that this has not yet moved to a config file...

Good luck!

Simon
Posts: 8
Joined: Wed 29. Apr 2009, 10:17

Re: CAVE/large powerwall scenario

Post by Simon » Fri 8. May 2009, 12:27

Well, the input "suddenly" worked today, so WASD and some mouse actions work when I use the local viewer.
Another strange thing: yesterday the local viewer didn't render the scene at all; now it is shown in cross-eyed stereo ... any ideas what could influence this behaviour? I am quite sure that the configs stayed the same...

I adapted portal.props to use fullscreen and our (square) projector resolution ... (which in retrospect seems kind of redundant if fullscreen=true :))

I'll now try to build the required trackd glue/JNI ...

I already looked into the Trackd* classes, which seem to have the controller and other IDs hardcoded (is that what you meant by "adapt the shared mem segments"?).

Post Reply