I've spent a good amount of my weekend reading about and playing with Chromium. I've still not totally "evaluated" it, but I think I've been looking at it enough to write something.

For one, all of the pictures of display walls in their documentation are very encouraging. They also have pictures of Quake 3 running. Essentially, Chromium lets you modify the OpenGL command stream by directing it through different servers. The servers may be running on the same machine or across the network, and they can do all kinds of things with the stream: substitute OpenGL commands, log them, or pass them on to a GPU's driver. Zero or more network hops may be along the way. They also use the word "parallel" a lot, which probably gets people's blood pumping. Parallel in this context means, I think, that Chromium can be intelligent about how it divides the OpenGL command stream so it can execute in parallel across multiple machines.
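For concreteness, this is roughly what one of Chromium's Python "mothership" configuration scripts looks like. This is a sketch from memory of the sample configs that ship with Chromium, so treat the exact class and SPU names as approximate; the host names are made up. The idea is that an app node packs its OpenGL stream and ships it to a server node that actually renders it:

```python
# Sketch of a Chromium mothership config (names modeled on Chromium's
# bundled samples; this is illustrative, not a verified working config).
from mothership import *

# The server node receives packed GL commands and hands them to a real GPU.
server_node = CRNetworkNode('render-host')      # hypothetical host name
server_node.AddSPU(SPU('render'))

# The app node runs the unmodified OpenGL program; the 'pack' SPU
# serializes its GL calls and sends them over TCP/IP to the server.
app_node = CRApplicationNode('app-host')        # hypothetical host name
pack_spu = SPU('pack')
app_node.AddSPU(pack_spu)
pack_spu.AddServer(server_node, 'tcpip')

cr = CR()
cr.AddNode(server_node)
cr.AddNode(app_node)
cr.Go()
```

The nice part is that the whole pipeline topology lives in this one script, so swapping in a logging SPU or adding more render nodes is a config change, not an application change.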

There are some gotchas, though. The big question for our project: is it worth our time? For an *ideal* version of our table, something like Chromium could be very cool if we made little table modules that could be connected into an arbitrarily large table/wall.

In the real world, though, it is unclear that it would be a home run for our project. For one, we would need a PC for each head, plus an application PC that programs will actually run on. This is how they intend you to use Chromium, and also how we want things to run. It would be very baroque if we broke with what everyone else is doing and required table apps to be collections of smaller processes that coordinate their own display output.

Second, to run a network of PCs in our table, we'd need to install a network too. That would add a bit to our cost (we'd probably want a gigabit switch), but more importantly it would add latency.

I've heard the idea that maybe (and this sounds like it could be a rumor) it would be a good idea to run the cameras on the nodes attached to the projectors. While it sounds elegant, it is also unnecessarily complicated: because the table applications would (presumably) run on the app node, the individual camera data would need to be accumulated there anyway.

The worst part, though, is that Chromium may be too crusty to be reliable. The last release was in August 2006. I did get their simplest demo compiled and running in a virtual machine: all it does is redirect the rendering of an OpenGL app through the Chromium pipeline into another window on the same desktop. I have not yet tried to make it go across the network to a second VM (which specifically is hardly novel: remote X accomplishes it too).

For the display wall setup, Chromium relies on something called Distributed Multihead X (DMX). DMX is very cool - I'm almost embarrassed I had NEVER heard of it before. It is a pretty elegant program: all it does is run as another X server that proxies draw commands to other X servers over the network. A very simple idea, but that is actually all you need for a display wall. If we just used DMX directly, I'm not sure where we'd still need Chromium, except that Chromium might handle OpenGL faster - but I don't really know, since I haven't gotten either running well yet.
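To make that concrete, here is roughly how a DMX wall gets launched (a sketch going from the `Xdmx` man page - the host names and display numbers are placeholders I've invented): you point `Xdmx` at the back-end X servers, and it presents them as one big logical display.

```shell
# Start a DMX front-end X server as display :1, spanning two back-end
# X servers (one per projector PC). +xinerama merges them into a single
# large logical screen. Host names here are made up.
Xdmx :1 -display left-node:0 -display right-node:0 +xinerama

# Any X client pointed at :1 now draws across the whole wall; DMX
# proxies the draw commands to the back-end servers over the network.
DISPLAY=:1 glxgears
```

This is also where Chromium would plug in: it can treat the DMX layout as the description of the wall and route OpenGL around DMX's slower indirect rendering path.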

DMX segfaults in every configuration I've tried. This may be another symptom of age (the newest article I found about it was also from 2006). I could bang on it some more tomorrow, and *MAYBE* try rolling back to an older distro in a VM, but I do not feel too hopeful.

The biggest drawback to these solutions, for me, is that I already have four-head output running fine at my desk. I'm pretty confident that I could drop another card in there to get our six heads. If I try to run Compiz on it, my config gets a bit more arcane and actual OpenGL performance (say, trying to play Q3 or WoW inside it) drops off, but it still works. And it is all one computer. And data going over PCI Express is probably faster than data going over the network. And it appears to be stable as well - I was nervous about the Nvidia drivers being flaky.

If I *DON'T* run Compiz, it is more reliable and performs a bit better. I don't really need Compiz unless I go for the display correction, which the setup above actually complicates a little.

As an aside - and I've mentioned this to Jas already - I'm pretty much at the point where going much further into the display work would mean trying to understand a MASSIVE codebase that could absorb a lot of my time and may simply be out of my league.

So, beyond checking out Chromium/DMX and getting this multihead box working, I would like to move on and just make sure we do a really good job mounting the projectors in the final table.

In case anyone is interested though, these are the options I have for the display correction:
  • Customizing XGL (hardest)

  • Customizing Compiz (hard)

  • Asking Nvidia's driver to give me custom resolutions (not too bad)
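
On that last option: with Nvidia's proprietary driver, custom resolutions are mostly an xorg.conf affair. A sketch of what I mean (the option names are from the Nvidia driver's README, but the modeline timings and identifiers below are invented for illustration - real numbers would come from the projector's specs or a modeline calculator):

```
Section "Monitor"
    Identifier "Projector0"
    # Hypothetical custom modeline for a slightly shrunken 1024x740 mode.
    Modeline "1024x740" 63.50 1024 1072 1176 1328 740 743 753 769
EndSection

Section "Screen"
    Identifier "Screen0"
    Monitor    "Projector0"
    # Tell the driver to accept modes the projector's EDID doesn't advertise.
    Option "ModeValidation" "AllowNonEdidModes"
    # TwinView layout: place the two heads of this card side by side.
    Option "MetaModes" "1024x740 +0+0, 1024x740 +1024+0"
EndSection
```

The appeal is that the driver does the scaling per head with no extra software in the render path, which is why I rank it "not too bad."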
