The best thing I've gotten to work so far is this:
Code:
import java.awt.Component;
import java.awt.EventQueue;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import de.jreality.geometry.Primitives;
import de.jreality.plugin.JRViewer;
import de.jreality.scene.Viewer;
import de.jreality.util.ImageUtility;

public class HiddenImageBufferRenderTest {
    public static void main(String[] unused) {
        JRViewer v = JRViewer.createJRViewer(Primitives.wireframeSphere());
        v.startupLocal();
        final Viewer viewer = v.getViewer();

        // Park the frame far off-screen so it (hopefully) never appears.
        JFrame jf = new JFrame("Active");
        jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        jf.getContentPane().add((Component) viewer.getViewingComponent());
        jf.setSize(512, 512);
        jf.setLocation(5000, 5000);
        jf.setVisible(true);
        jf.setLocation(5000, 5000); // again, in case the window manager repositioned it on show
        jf.toFront();

        // Capture only after the pending show/paint events have been processed.
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                BufferedImage img = ImageUtility.captureScreenshot(viewer);
                JFrame jf2 = new JFrame("Snapshot");
                jf2.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                jf2.getContentPane().add(new JLabel(new ImageIcon(img)));
                jf2.pack();
                jf2.setVisible(true);
            }
        });
    }
}
This works, but it has drawbacks:
1. Extra system overhead is required to set up a GUI just to take a picture of it.
2. There are potential race conditions if the window gets covered, minimized, etc.
3. Artifacts of being on the screen (frame decorations, etc.) may show up.
I consider what I'm doing now to be a kludge. Is there any way to get the rendering without relying on java.awt.Robot and its idiosyncrasies?
I'd rather place a Camera on a path, and make a call that returns the corresponding BufferedImage on demand, without requiring anything to be rendered to the physical screen.
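For comparison, this is the call shape I'm after. Plain Java2D already does it for 2D with no window at all; here's a minimal sketch (not jReality-specific, and renderFrame is just a hypothetical name for the method I wish existed):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class HeadlessRenderSketch {
    // Hypothetical signature: render one frame at the given size,
    // entirely off-screen, and hand back the pixels as a BufferedImage.
    static BufferedImage renderFrame(int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, width, height);   // clear to background
        g.setColor(Color.WHITE);
        g.drawOval(20, 20, 60, 60);        // stand-in for the actual 3D scene
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = renderFrame(100, 100);
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}
```

No JFrame, no Robot, no race conditions. I'd like the 3D equivalent of renderFrame from the JOGL backend.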
I've tried editing JOGL FBO Viewer, but couldn't get it to return data in the BufferedImage.
Code:
BufferedImage img = fbo.renderOffscreen(100,100);
Does anyone have a clue how FBOs work, and whether I can use one to render offscreen and capture the result in a BufferedImage?
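From what I've read so far, the usual FBO readback pattern looks roughly like this (pseudocode only; the call names come from the OpenGL framebuffer-object API, and I haven't gotten this working through jReality yet):

```
// Standard FBO render-to-texture + readback pattern (pseudocode):
fbo = glGenFramebuffers()
tex = glGenTextures()                        // color attachment
glBindTexture(TEXTURE_2D, tex)
glTexImage2D(TEXTURE_2D, ..., width, height, ..., null)
glBindFramebuffer(FRAMEBUFFER, fbo)
glFramebufferTexture2D(FRAMEBUFFER, COLOR_ATTACHMENT0, TEXTURE_2D, tex, 0)
check glCheckFramebufferStatus(FRAMEBUFFER) == FRAMEBUFFER_COMPLETE

glViewport(0, 0, width, height)
renderScene()                                // draws into the FBO, not the screen

pixels = glReadPixels(0, 0, width, height, RGBA, UNSIGNED_BYTE)
glBindFramebuffer(FRAMEBUFFER, 0)            // restore the default framebuffer

// GL rows come back bottom-up, so flip vertically while copying:
img = new BufferedImage(width, height, TYPE_INT_ARGB)
for y in 0 .. height-1:
    copy row (height-1-y) of pixels into row y of img
```

If that's roughly what JOGLFBOViewer does internally, then my question reduces to: where in that sequence does the BufferedImage get lost?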
Some text explaining how the JOGLFBOTextureExample in the Programmer's tutorial works would be nice. I wouldn't mind writing up how it's done as a tutorial if I could get it to work.
Thanks,
_-T