- Custom resolutions in video driver
- Custom screen offsets
- Dialing settings into projectors
- Seeing if the projectors have any useful DDC controls (they don't)
- Running a custom Compiz
- Running a custom Xgl
- Trying RANDR 1.3, which just came out and has options for custom screen transforms
- Running Chromium
- Running DMX
- Running DMX amongst multiple instances of Xnest
(There may be other things I tried that I don't recall at the moment.)
I view a final solution as falling into 3 tiers: individual screen transformation, individual screen trimming, and doing nothing.
Screen transformation would require some combination of Compiz, Xgl, DMX, and Xnest, or perhaps RANDR. There are wrinkles to all of these, however. I want to describe the current X architecture and then discuss the theory. The architecture has mostly been in my head since we started, and the possible solutions piled up as time went on.
The X Window System (also called X or X11) is a networked window system. A server controls the video output, the keyboard, and the mouse. Any program that runs, like a terminal, is a client. The server listens over TCP/IP or over a Unix socket. Clients connect and tell the server what to draw, and the server sends back user input. This is commonly referred to as network transparency: the EXECUTION of GUI applications and the INTERFACE with them can occur on different machines. This may sound convoluted, but it can be useful at times.
A simple example is that VNC-like capabilities have been there since day one. A more complex example is Chromium: because the system is architected with the assumption that an app and its display may be decoupled, Chromium is able to grab and reroute applications.
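To make the split concrete, here is what running a program on one machine while interacting with it on another looks like in practice (the hostname and address are made up):

```shell
# Run a client on another machine but see and use it here.
# ssh -X sets up X forwarding automatically:
ssh -X buildbox xterm

# Or by hand: point the client at a server listening on TCP.
# (The server must permit the connection, e.g. via xhost or xauth.)
DISPLAY=192.168.0.5:0 xterm
```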
Apps speak a (relatively) simple protocol with the server, and fancier things like dialog boxes are implemented by higher-level libraries which aren't strictly standard. Communication at this level is probably analogous to Quartz on OS X or GDI+ on Windows. One of the driving philosophies here is that the system provides mechanism, not policy, which is why the GUI can be weird sometimes.
Part of the design of this weird, arguably feature-poor system is that it is extensible. If you look at /var/log/Xorg.0.log on your machine, you can see a lot of the extensions that have been added to the most common X server on Linux, X.org. For what I have been working on, the extensions I care about are Xinerama, Composite, and RANDR.
This is the description of Composite from the X.org wiki: "This extension causes an entire sub-tree of the window hierarchy to be rendered to an off-screen buffer. Applications can then take the contents of that buffer and do whatever they like. The off-screen buffer can be automatically merged into the parent window or merged by external programs, called compositing managers. Compositing managers enable lots of fun effects."
To digress a little: a window manager is a privileged X application that is charged with managing windows. A given X program can only draw what is inside its own window; everything outside it, like the border or the desktop, is implemented by the window manager. Consequently, window movement and minimization are handled by the window manager (it manages windows, after all!)
Window managers like Compiz rely on the Composite extension to perform distortions on the windows themselves, since the drawn output sits in an off-screen buffer rather than having been painted directly onto the screen. I don't know exactly how Compiz is structured internally, but it essentially uses OpenGL to drive the entire screen and presumably imports drawn windows as textures on 3D objects.
This suggests a possible technique for screen transformation: since Compiz is using OpenGL to draw everything, it should not be *too* complicated to modify the object(s) representing the entire screen to be slightly distorted, as needed.
Problem 1: Nvidia DISABLES the Composite extension in a multi-gpu setup like ours, so Compiz will NOT run. There are a couple of possible solutions to this, however:
1) Use XGL
When you configure a multi-gpu setup, you typically use the Xinerama extension. It is apparently straightforward to do what amounts to running an individual X server on each gpu. This doesn't achieve the desired effect, however, as applications are stuck on whichever server they connected to (actually, this description is pretty hand-wavy and this isn't what really happens, but this is how it looks from a user perspective.)
When you use Xinerama, the different gpus are united: the extension figures out how to split the drawing commands and so on between the different heads. It even splits OpenGL between the heads, with heavy assistance from Nvidia's driver, I'm sure. Key point here: demanding fullscreen OpenGL works fine in a Xinerama arrangement.
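For reference, turning this on is mostly a matter of declaring the screens and flipping the Xinerama option in Xorg.conf. The identifiers and layout below are illustrative, not our actual config:

```
Section "ServerLayout"
    Identifier "ProjectorWall"
    Screen      0 "Screen0" 0 0
    Screen      1 "Screen1" RightOf "Screen0"
    # Unite the screens into one logical desktop.
    Option      "Xinerama" "on"
EndSection
```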
Enter Xgl: Xgl is a pure software X server which renders into an OpenGL window rather than directly onto the video card. As such, you can turn on Xinerama to get nice multi-gpu OpenGL, and then run Xgl, which places its own window so it perfectly covers all the screen area Xinerama tells it about.
Problem solved, right? I've gotten this running in lab, and it looks nice. You can imagine a performance hit, and there is one, but it isn't too bad unless you do serious 3D work, which the table technically won't do. This method likely shifts the coding work into either Xgl or Compiz: one of them would need to be informed about the layout of the various heads and be told to draw onto N objects with various distortions rather than the one. This is possibly doable.
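Launching it looks roughly like this; treat this as a sketch, since the accepted flags vary between Xgl snapshots and the display numbers are arbitrary:

```shell
# Start Xgl as display :1 on top of the Xinerama display,
# covering the whole combined screen area.
Xgl :1 -fullscreen -ac -accel glx:pbuffer -accel xv:pbuffer &

# Then point Compiz at the new software server.
DISPLAY=:1 compiz --replace &
```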
Problem 2: the code for Xgl is huge and intimidating. It may be a huge time sink understanding it and modifying it.
Problem 3: Xgl is deprecated! It isn't in any version of Ubuntu past 8.04, but maybe, if we are lucky, it can be coerced into running on 9.04. This of course only enables me to dump time into rewriting part of it as well; see Problem 2.
Possible Solution: Either stick with an older distro (no MPX support then) or try to get Xgl running on a newer one.
2) Use DMX
When experimenting with Chromium, which isn't a good fit for our project as it exists now, I found out about DMX, which stands for Distributed Multihead X. It is what Chromium relies on for a display wall. The concept of DMX is simple: it is a software-only X server like Xgl, but instead of drawing into its own window, it draws onto OTHER X servers. So a possible method is to ditch Xinerama, run one X server per gpu, and run DMX on top of all of them. Then run Compiz inside DMX.
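The plumbing would look something like this; the display numbers are made up, and this is exactly the sequence that falls over for me:

```shell
# One X server already running per gpu, as :1 and :2.
# Xdmx stitches them into a single logical display :5.
Xdmx :5 -display :1 -display :2 +xinerama &

# A window manager then runs against the combined display.
DISPLAY=:5 compiz --replace &
```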
Problem 4: DMX segfaults on everything I've tried it with.
Possible Solution: run an older version of DMX, which apparently works (I have not tried this; it sounds shady to me and apparently has rendering glitches.)
3) Use Xnest
Xnest is another pure-software X server. It simply draws into a normal window on your desktop. It could be possible to run one Xnest server for each projector, and then have DMX draw into them.
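Sketched out, with made-up geometry for two projectors:

```shell
# One Xnest window per projector, positioned side by side
# on the real desktop.
Xnest :1 -geometry 1024x768+0+0 &
Xnest :2 -geometry 1024x768+1024+0 &

# DMX then treats the two Xnest windows as its two heads.
Xdmx :5 -display :1 -display :2 +xinerama &
```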
Problem 5: This would perform horribly (no hardware OpenGL happens here, and we rather want to use an OpenGL window manager), and DMX crashes all the time anyway.
Possible Solution: I could fix DMX (either by fixing the code or running an old version of it), and then, if Compiz runs like a pig, I could try to modify Xnest to do the display transformations itself.
This post has almost exhausted the pure software solutions. There are two more: RANDR and writing my own weird custom display layer.
RANDR is an extension for controlling the resizing and rotation of screens; it is what we use when we set up multihead on our lab computers. In the newest version of X, which came out a few weeks ago and ships with Ubuntu 9.04, RANDR has been updated to allow arbitrary transformations. I was excited when I found out about this, and I rushed to try it in lab. Sadness: the xrandr tool simply segfaults. I suspect it is buggy and/or needs driver support from Nvidia. If the latter, I doubt we can rely on Nvidia to give us something usable anytime soon, partly because the feature is so new, and partly because Nvidia already disables RANDR in multi-gpu setups, so I am not optimistic it would even be usable in our situation anyway.
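For what it's worth, the new option takes a 3x3 projective transform matrix, nine values in row-major order. The output name below is a placeholder (it is whatever xrandr reports for the head), and the matrix is just a mild example distortion:

```shell
# Apply a slight shear/keystone to one head. The nine values are
# the rows of a 3x3 matrix applied to the output's coordinates.
# This is the call that currently segfaults for me.
xrandr --output DVI-0 --transform 1.05,0.02,-15,0,1.0,0,0,0.0001,1
```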
The other solution is writing my own custom display layer. This sounds very hefty, and to be at all useful, it would need to be on the order of Xnest or Xgl anyway. It is unclear whether I am good enough or can justify the time.
One glimmer of hope is that when I first brought up the projector wall, I had a minor misconfiguration left over from when I was tiling 4 of our widescreens before. It caused one of the projectors to have a horizontally trimmed picture. I have not been able to recreate this effect in lab, sadly. If I did figure it out, it would be a matter of adjusting the MetaModes or Xinerama layout specified in Xorg.conf.
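If it does turn out to be reproducible, the knob would be something like the MetaModes line in the Nvidia driver's Screen section. The modes and offsets here are made up; my guess is that offsets that don't line up with the mode widths are what produced the trimmed picture:

```
Section "Screen"
    Identifier "Screen0"
    # Two heads; the second offset overlaps the first head's
    # 1024-wide mode by 64 pixels, which could clip its picture.
    Option "MetaModes" "1024x768 +0+0, 1024x768 +960+0"
EndSection
```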
Doing the correction in software is tough. Full distortion looks like it would require me to dig in and write what is arguably OS-level code. I don't feel comfortable facing that, in terms of either my programming skill or the time available for this project. Trimming *may* be achievable, simply because I may have accidentally done it last week.
I want to resolve the possibility of trimming and, in parallel, work on physically aligning the projectors "well enough". I've experimented with this a little, and it doesn't seem *too* bad. Not having perfectly adjustable surfaces for mounting the projectors is the biggest impediment in my tests. It may be all we can get away with, though.