Big summer update 4bis – The updated version

To anyone who was waiting for BSU 5 (which, let’s spoil it, will be about file management and virtual file systems), bad news! This new article is instead a revised version of the last one, whose plans have changed considerably since its publication, thanks to Brendan’s feedback.

Going further into the “long-term thoughts” part of this summer update, I’ll now bring up some long-term ideas about how I’d like to do graphics in the early days of this OS’ graphics stack, starting with a framebuffer-based backend, even though I obviously keep the option of doing things otherwise later. I’ll notably explain what motivates this choice, what the dirty implications are, and how I plan to live with them.

Ode to the joy of modern graphics

Graphics are a mess, both on IBM compatibles and on ARM devices. Every GPU manufacturer in this world has forgotten what the word “standard” means, and the only thing which allows user-space software to run consistently across different hardware is abstraction layers implemented in software. And when I say “consistently”… Well… You know why consoles and Apple iDevices are so popular among game developers as compared to desktop PCs and Android devices respectively, don’t you?

When you begin in the world of alternative OSs, you are pretty much alone with your keyboard and your chair. As such, there is strictly no way you’ll muster the manpower it takes to write a range of hardware-specific drivers for every GPU in existence, and rewrite them each time a new chipset family comes out because manufacturers enjoy breaking stuff for the fun of it. Even mature codebases like Linux, with thousands of developers, don’t manage to get it right, so the logical conclusion is that it’s simply impossible without having the manufacturer write the driver for you. Even that won’t save you, considering that according to Microsoft’s statistics, manufacturer-provided GPU drivers are the main source of crashes on Windows, the OS which they are mainly developed for.

As such, there is a need for GPU-independent abstractions which allow one to draw stuff on screen and retrieve information about said screen in a standard way, even if the full power of the GPU is not available. Thankfully, such a thing exists in the x86 world, in the form of the VESA BIOS Extensions (VBE) for all BIOS-compatible computers sold today, and the equivalent UEFI functionality.

Linear framebuffers

VESA BIOS Extensions

The VESA BIOS Extensions (VBE) are a set of BIOS services that you are guaranteed to find on any desktop or laptop sold today and for many years to come. They essentially provide two useful functionalities:

  • Switching pixel resolutions of the main screen and giving software access to an abstract version of the screen, called a linear framebuffer, which is basically a giant bitmap of known format, possibly with some extra padding inside each pixel and at the end of each scanline (an abridged view of the mode description VBE returns is sketched right after this list).
  • Getting the EDID information of said screen, which basically tells you everything about it, including its physical size and the pixel resolutions it supports.
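
For concreteness, here is an abridged C view of the mode description which VBE hands back for a given video mode (function 4F01h). Field names paraphrase the VBE specification, only the fields a framebuffer consumer cares about are spelled out, and the exact layout should of course be double-checked against the spec rather than against my memory:

    /* Abridged view of the VBE "mode information block" (INT 10h, AX=4F01h). */
    #include <stdint.h>

    struct vbe_mode_info {
        uint16_t mode_attributes;         /* bit 7 set => linear framebuffer available */
        uint8_t  legacy_window_stuff[14]; /* banked-access fields, unused here */
        uint16_t bytes_per_scanline;      /* pitch: the scanline padding lives here */
        uint16_t x_resolution;            /* width in pixels */
        uint16_t y_resolution;            /* height in pixels */
        uint8_t  x_char_size, y_char_size;
        uint8_t  number_of_planes;
        uint8_t  bits_per_pixel;
        uint8_t  number_of_banks;
        uint8_t  memory_model;            /* 0x06 = direct color */
        uint8_t  bank_size, image_pages, reserved0;
        uint8_t  red_mask_size,   red_field_position;   /* pixel format description */
        uint8_t  green_mask_size, green_field_position;
        uint8_t  blue_mask_size,  blue_field_position;
        uint8_t  rsvd_mask_size,  rsvd_field_position;
        uint8_t  direct_color_mode_info;
        uint32_t phys_base_ptr;           /* physical address of the linear framebuffer */
        /* ...the block is 256 bytes in total; the rest is omitted here. */
    } __attribute__((packed));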

The other side of the coin is…

  • 16-bit code: Like most other BIOS services, VBE functions require you to switch your computer back to its startup state or to emulate their real-mode code. Emulating is complex and switching is only a reasonable thing to do at boot time, so this forces a specific design on us: the bootstrap component has to do the mode-switching job, then transmit the EDID and framebuffer information to the OS through some standard means (a hypothetical sketch of such a hand-off follows this list).
  • Poor manufacturer support: Although VBE has kept up with the evolution of screen technology (even supporting stereoscopic “3D” displays in its latest edition), GPU manufacturers have not kept up nearly as well with the evolution of VBE. The main consequence is that on most hardware, only 4:3 screen resolutions are available, which on modern 16:9 and 16:10 screens means sub-optimal graphics and distortion due to a wrong pixel aspect ratio, unless some sort of software correction is applied.
  • No multiple displays: VBE only supports displaying data on one screen at a time, and that generally happens to be the main computer screen. Supporting stuff such as video projectors and multi-head setups requires dedicated video drivers.
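
As for the “standard means” mentioned in the first bullet, nothing is set in stone yet, but it would probably boil down to a small structure which the bootstrap component fills in and leaves at an agreed-upon location for the kernel. A purely hypothetical sketch, with made-up names:

    /* Hypothetical boot-time hand-off structure: the 16-bit bootstrap code
     * queries VBE, fills this in, then leaves it at an agreed-upon address
     * for the kernel. Names and layout are illustrative, not any spec. */
    #include <stdint.h>

    struct boot_framebuffer_info {
        uint64_t phys_base;        /* physical address of the linear framebuffer */
        uint32_t pitch;            /* bytes per scanline, padding included */
        uint32_t width, height;    /* pixel resolution chosen at boot time */
        uint8_t  bits_per_pixel;
        uint8_t  red_pos, green_pos, blue_pos;  /* bit position of each channel */
        uint8_t  edid[128];        /* raw EDID block of the attached display */
    };
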
Equivalent UEFI functionality

The PC BIOS is a piece of software which strongly shows its age, and many big industry players are fed up with it. A replacement has hence been devised, in the form of the Unified Extensible Firmware Interface (UEFI), which is basically a cleaner, more powerful and modern, well-standardized, but also weirdly licensed and incompatible version of most BIOS services. Deployment is currently under way, and goes at a snail’s pace, but by the time this OS is released UEFI could actually be commonplace.

UEFI is based on the notion of protocols. My current mental model of the concept is that a number of virtual or physical EFI-compatible devices (bus controllers, GPUs’ frame buffers, displays, the CMOS clock, etc.) register themselves in the EFI database as being able to handle a number of protocols, which are themselves standard firmware services. One may look up the EFI database for devices supporting a specific protocol, and then use that protocol to interact with the device. UEFI also provides facilities to identify a device found this way within the system, by providing the information necessary to look it up on the PCI bus or in the ACPI tables.
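
To make this lookup pattern more tangible, here is a minimal sketch using EDK2-style names (gEfiGraphicsOutputProtocolGuid, LocateHandleBuffer(), HandleProtocol()), with error handling reduced to the bare minimum:

    #include <Uefi.h>
    #include <Protocol/GraphicsOutput.h>

    /* Enumerate every handle that advertises the Graphics Output Protocol,
     * then open the protocol interface on each of them. */
    EFI_STATUS EnumerateGops(EFI_BOOT_SERVICES *BootServices)
    {
        EFI_HANDLE *Handles = NULL;
        UINTN HandleCount = 0;
        EFI_STATUS Status;

        /* Ask the firmware for every handle supporting the GOP. */
        Status = BootServices->LocateHandleBuffer(ByProtocol,
                                                  &gEfiGraphicsOutputProtocolGuid,
                                                  NULL, &HandleCount, &Handles);
        if (EFI_ERROR(Status)) return Status;

        for (UINTN i = 0; i < HandleCount; i++) {
            EFI_GRAPHICS_OUTPUT_PROTOCOL *Gop;
            Status = BootServices->HandleProtocol(Handles[i],
                                                  &gEfiGraphicsOutputProtocolGuid,
                                                  (VOID **)&Gop);
            if (EFI_ERROR(Status)) continue;
            /* Gop->Mode now describes this frame buffer's current video mode. */
        }

        BootServices->FreePool(Handles);
        return EFI_SUCCESS;
    }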

Among UEFI’s protocols, two are of particular interest to us.

  • The Graphics Output Protocol (EFI_GRAPHICS_OUTPUT_PROTOCOL) is provided by each compatible frame buffer (that is, each independent video output of a GPU) within the system. It provides information on the video modes supported by both the frame buffer and the display (as opposed to VBE, which gives information about the frame buffer and the display separately), allows one to set such video modes, and gives access to a linear framebuffer similar to the one provided by VBE (a usage sketch follows this list).
  • The EFI_EDID_ACTIVE_PROTOCOL protocol is provided by each active display, allowing one to retrieve its EDID without caring whether said EDID comes directly from the display itself or from the UEFI-compliant firmware (which can optionally override the display’s values).
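
Here is the promised usage sketch of the Graphics Output Protocol: walk the modes which a GOP instance advertises (they already take the attached display into account), set one, and grab the resulting linear framebuffer. EDK2-style names again, and in real code the Info blocks returned by QueryMode() should be released with FreePool():

    #include <Uefi.h>
    #include <Protocol/GraphicsOutput.h>

    /* Pick the mode with the most pixels, set it, and note the framebuffer. */
    EFI_STATUS PickLargestMode(EFI_GRAPHICS_OUTPUT_PROTOCOL *Gop)
    {
        UINT32 Best = Gop->Mode->Mode;
        UINTN BestPixels = 0;

        for (UINT32 i = 0; i < Gop->Mode->MaxMode; i++) {
            EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *Info;
            UINTN InfoSize;
            if (EFI_ERROR(Gop->QueryMode(Gop, i, &InfoSize, &Info)))
                continue;
            UINTN Pixels = (UINTN)Info->HorizontalResolution * Info->VerticalResolution;
            if (Pixels > BestPixels) { BestPixels = Pixels; Best = i; }
        }

        EFI_STATUS Status = Gop->SetMode(Gop, Best);
        if (EFI_ERROR(Status)) return Status;

        /* Linear framebuffer address and size for the mode we just set: */
        EFI_PHYSICAL_ADDRESS FbBase = Gop->Mode->FrameBufferBase;
        UINTN FbSize = Gop->Mode->FrameBufferSize;
        (void)FbBase; (void)FbSize;
        return EFI_SUCCESS;
    }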

An internal hierarchy of parent and child handles allows one to relate a display to the GPU which outputs to it through mechanisms which I’ve not figured out yet, and a naming system (EFI_COMPONENT_NAME2_PROTOCOL) helps the user figure out what each EFI device and display is.

Characteristics of the UEFI graphics output architecture are:

  • Only works at boot: This is for a different reason than VBE, though. BIOS services are always there, but since they are written in 16-bit code we can’t easily execute them. In the EFI world, on the other hand, there is an explicitly defined boundary between “boot-time services” and “run-time services”, and once the OS has declared that boot time is over, all boot-time services are stopped and freed from memory. This may be a problem, as the linear framebuffer should probably remain mapped in memory, but nothing in the spec guarantees this. Testing is required there… (A sketch of the resulting “capture everything, then exit boot services” sequence follows this list.)
  • Uncertainty: Apart from the problem above, another potential issue is that nothing in the EFI spec forces graphics adapters to implement the Graphics Output Protocol. EFI only specifies how it should be implemented, if it is implemented. Another potential issue is that UEFI-compliant drivers are not forced to provide software with direct access to the hardware, and may instead only provide a blitting function. This is a problem because, as stated above, the driver which provides the blitting function goes the way of the dodo once boot services are exited, so that blitting function would obviously end up not being accessible afterwards.
  • More modern hardware support: As stated before, EFI’s Graphics Output Protocol supports multiple displays well and only advertises the modes which the connected displays support. The spec also attempts to make EFI firmwares less limited than their VBE counterparts in the range of supported resolutions, by asking that integrated graphics controllers support the native resolution of the host display, which makes it difficult for IGP firmwares *not to support* widescreen framebuffers. The spec does give them an escape hatch, though, in the form of the vague option to implement “a mode required in a platform design guide” instead. Let’s hope that not too many EFI firmwares will choose this latter option and just put 640x480x8bit as a requirement in the platform design guide…
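
The “capture everything, then exit boot services” sequence mentioned in the first bullet could look roughly like this. It is only a sketch: real code must size the memory map buffer properly and retry the GetMemoryMap()/ExitBootServices() pair if the memory map changes under its feet.

    #include <Uefi.h>
    #include <Protocol/GraphicsOutput.h>

    struct saved_fb {                  /* what the kernel keeps after boot */
        EFI_PHYSICAL_ADDRESS base;
        UINTN                size;
        UINT32               width, height, pixels_per_scanline;
    };

    EFI_STATUS LeaveBootServices(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable,
                                 EFI_GRAPHICS_OUTPUT_PROTOCOL *Gop, struct saved_fb *Out)
    {
        /* 1. Copy the framebuffer description while the GOP driver still exists. */
        Out->base   = Gop->Mode->FrameBufferBase;
        Out->size   = Gop->Mode->FrameBufferSize;
        Out->width  = Gop->Mode->Info->HorizontalResolution;
        Out->height = Gop->Mode->Info->VerticalResolution;
        Out->pixels_per_scanline = Gop->Mode->Info->PixelsPerScanLine;

        /* 2. Fetch the memory map to obtain the key ExitBootServices() demands. */
        UINTN MapSize = 0, MapKey, DescSize;
        UINT32 DescVersion;
        EFI_MEMORY_DESCRIPTOR *Map = NULL;
        SystemTable->BootServices->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
        MapSize += 2 * DescSize;
        SystemTable->BootServices->AllocatePool(EfiLoaderData, MapSize, (VOID **)&Map);
        SystemTable->BootServices->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);

        /* 3. From here on, only the saved copy (and run-time services) remain. */
        return SystemTable->BootServices->ExitBootServices(ImageHandle, MapKey);
    }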

To sum it up, UEFI offers a framebuffer abstraction that’s potentially much better than VBE’s in every way, but that’s not yet available on most current hardware and emulators, and can go wrong in a number of ways depending on manufacturers’ interpretation of the EFI spec. Still, it’s a good effort already, and maybe the UEFI mafia will have me join their secret society of my own free will after all…

Some graphics stack ideas

Now that we’ve seen what kind of graphics hardware we’ll build drivers on to begin with, let’s see what kind of graphics stack we could build on top of that. Please note that layers do not equal processes, even though they lend themselves nicely to being isolated components: two layers may be united in one single process (for example by implementing one of them as a library), one layer may be split into several processes (the desktop shell, typically), etc.

Now that this is said, let’s explore the current graphics stack concept, from bottom to top.

Layer 1: Video driver

The role of this device-dependent layer is to abstract away the differences between heterogeneous hardware interfaces behind a small number of OS-defined, hardware-independent interfaces. Drivers should stay lean, with OS-defined interfaces that remain relatively close to the bare metal, as every piece of functionality that is handled within drivers has to be rewritten over and over in every single driver. That causes reliability issues, effort duplication, and platform inconsistency, which, as any Linux OpenGL user can attest, is not a good thing.

From a “bare metal” point of view, there are two families of hardware which this OS will support. On one side, there are framebuffers, like those mentioned above, which are used for software-rendered 2D graphics. On the other side, there are full-blown GPUs, for the hardware which gets the luxury of native drivers, and those are designed for hardware-accelerated raster 3D graphics and have a very complex internal structure.

My way of managing this dichotomy is to have two different system abstractions at this level.

One is for framebuffers. It is very simple, as framebuffers are pretty clean hardware abstractions already. The abstract OS framebuffer would probably mostly differ from actual hardware framebuffers by having a consistent 32-bit RGB pixel layout, if the software rendering stack can afford the performance hit of converting a screen-sized bitmap between pixel formats that this implies. Run-time display resolution changes will not be required of framebuffer drivers, as they have limited use and none of the modern framebuffer interfaces seems to support them cleanly. If there is a need for resolution upscaling, it will be done in software, as software upscaling offers better quality anyway.
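
To give an idea of what the pixel conversion mentioned above amounts to, here is a sketch (with hypothetical names) of encoding one abstract 0x00RRGGBB pixel into whatever channel layout the VBE/GOP mode information describes:

    #include <stdint.h>

    /* Channel layout as reported by VBE or the GOP for the current mode. */
    struct fb_format {
        uint8_t red_pos, red_size;
        uint8_t green_pos, green_size;
        uint8_t blue_pos, blue_size;
    };

    /* Convert one 0x00RRGGBB pixel into the device's native layout. */
    static inline uint32_t fb_encode_pixel(const struct fb_format *f, uint32_t rgb)
    {
        uint32_t r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return ((r >> (8 - f->red_size))   << f->red_pos)
             | ((g >> (8 - f->green_size)) << f->green_pos)
             | ((b >> (8 - f->blue_size))  << f->blue_pos);
    }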

The other abstraction is for the native drivers of GPUs. The abstract hardware has a structure close to… well… an abstract GPU. Linux drivers attempt to do this by having the driver controlled with OpenGL commands, but OpenGL has proven to be so high-level that this is a very bad idea. So what I suggest instead is to do as Linux engineers now want to do, and use the Gallium3D GPU abstraction, which is closer to the bare metal and allows us to reuse native drivers from other OSs as a nice bonus. OpenGL functionality would then be provided as a layer on top of the Gallium3D driver, typically as a shared system library.

Layer 2: Primitive rendering

This layer, which sits on top of each driver, draws on screen the primitive shapes that are used within the GUI. To start with, I’d probably use the catalogue of common basic 2D shapes found in image editors: pixels, lines, rectangles, gradients, text, etc. The initial implementation will work on top of framebuffers, since that’s all we’ll initially have. Then at some point, support for rendering these same primitives with OpenGL will be added, allowing the GUI to work on top of native drivers without requiring those to masquerade as a 2D framebuffer.
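
As an illustration of what this layer does at its simplest, here is a sketch of a filled-rectangle primitive working directly on the 32-bit framebuffer abstraction described at the driver level (names are hypothetical):

    #include <stdint.h>
    #include <stddef.h>

    struct framebuffer {
        uint8_t *base;      /* mapped linear framebuffer */
        size_t   pitch;     /* bytes per scanline, padding included */
        uint32_t width, height;
    };

    /* Fill an axis-aligned rectangle, clipped to the framebuffer bounds. */
    void fill_rect(struct framebuffer *fb, uint32_t x, uint32_t y,
                   uint32_t w, uint32_t h, uint32_t color)
    {
        if (x >= fb->width || y >= fb->height) return;
        if (x + w > fb->width)  w = fb->width - x;
        if (y + h > fb->height) h = fb->height - y;

        for (uint32_t row = 0; row < h; row++) {
            uint32_t *line = (uint32_t *)(fb->base + (size_t)(y + row) * fb->pitch) + x;
            for (uint32_t col = 0; col < w; col++)
                line[col] = color;
        }
    }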

This layer also abstracts away the various color spaces of video output devices, in order to reach a consistent visual look on every device which has ICC color profiles installed. To this end, the primitive renderer internally works with a hardware-agnostic color space such as CIE L*a*b* or CIE XYZ (I tend to lean towards the former, as its perceptual uniformity means nicely uniform color mixing). Ideally, it would only do the conversion to RGB at the last moment, for perfect visual uniformity across devices, but the expensive nature of CIE-to-RGB conversions may force some approximations in this area.
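
For reference, this is roughly what such a conversion implies, going from CIE L*a*b* (D65 white point) through CIE XYZ to gamma-encoded sRGB. Doing this for every pixel of every frame is exactly the cost I am worried about:

    #include <math.h>

    static double lab_finv(double t)
    {
        const double d = 6.0 / 29.0;
        return (t > d) ? t * t * t : 3.0 * d * d * (t - 4.0 / 29.0);
    }

    static double srgb_gamma(double c)
    {
        return (c <= 0.0031308) ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
    }

    /* L in [0,100], a/b roughly in [-128,128]; outputs are clamped to [0,1]. */
    void lab_to_srgb(double L, double a, double b, double *r, double *g, double *bl)
    {
        /* L*a*b* -> XYZ, relative to the D65 reference white */
        double fy = (L + 16.0) / 116.0;
        double X = 0.95047 * lab_finv(fy + a / 500.0);
        double Y = 1.00000 * lab_finv(fy);
        double Z = 1.08883 * lab_finv(fy - b / 200.0);

        /* XYZ -> linear sRGB */
        double R =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
        double G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
        double B =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;

        /* gamma-encode and clamp */
        *r  = fmin(fmax(srgb_gamma(R), 0.0), 1.0);
        *g  = fmin(fmax(srgb_gamma(G), 0.0), 1.0);
        *bl = fmin(fmax(srgb_gamma(B), 0.0), 1.0);
    }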

One exciting idea, suggested by Brendan along with the one above, is to also use a “material system” where the screen is rendered as if lit from front and behind, and the color of every object on screen is defined by its light transmission and reflection properties, just like the color of real-world objects is. In a hypothetical future where native drivers would be commonplace and the computational power available for eye candy would have no limit, 3D primitives could also be added, and this material system could be effortlessly extended into a full raytracing system handling refractive properties, specularity, diffraction, etc.

Layer 3: Window manager

At this point, we manage separate displays in a separate fashion. Although this workflow may be desirable when some displays are totally unrelated, like when a video projector is plugged into a laptop, there are other situations where users actually want several screens to be handled by the system as one single continuous drawing surface, as in so-called “multi-head” setups. For multi-screen support, we thus want to create a number of abstract “virtual screens” which do not necessarily have a 1:1 mapping to their physical counterparts.

Conversely, proper software isolation practice would require that we give such an isolated “virtual screen” to each of our GUI processes, so that they don’t mess with each other’s UI. This abstraction sounds much less exotic once one puts its usual name on it: windows. Several virtual screens within the boundaries of one (or more) physical screens.

I don’t know yet whether those two abstractions should be kept separate, as they sound very much related to each other. In both cases, we make software use virtual coordinates in its drawing jobs, and then hook and adjust these drawing jobs to make them fit the reality of physical screens. For now, let’s put both of these abstractions under the common name of “window manager”.
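
To make the “virtual coordinates, then hook and adjust” idea a bit more concrete, here is a sketch (with hypothetical names) of translating a point from window coordinates to the coordinates of one physical screen:

    #include <stdint.h>
    #include <stdbool.h>

    struct rect { int32_t x, y; uint32_t w, h; };

    struct window {
        struct rect bounds;      /* position and size on the virtual desktop */
    };

    struct physical_screen {
        struct rect region;      /* portion of the virtual desktop it displays */
    };

    /* Translate a point in window coordinates to coordinates on one physical
     * screen; returns false if the point falls outside that screen. */
    bool window_point_to_screen(const struct window *win,
                                const struct physical_screen *scr,
                                int32_t wx, int32_t wy,
                                int32_t *sx, int32_t *sy)
    {
        int32_t vx = win->bounds.x + wx;          /* window -> virtual desktop */
        int32_t vy = win->bounds.y + wy;
        *sx = vx - scr->region.x;                 /* virtual desktop -> screen */
        *sy = vy - scr->region.y;
        return *sx >= 0 && *sy >= 0 &&
               (uint32_t)*sx < scr->region.w && (uint32_t)*sy < scr->region.h;
    }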

Layer 4: Widget toolkit

There is a level above the window abstraction and below applications where standard system controls are drawn. In my opinion, this is also the level where resolution independence, that is, independence from the display’s output resolution and the input device’s precision, should be implemented. Why there?

First, because input resolution independence requires relatively high-level knowledge to achieve, as it only affects input controls (a button would be resized, but a picture or PDF document would stay the way it is). This knowledge is not available before the widget toolkit level, unless we choose to “taint” the output of primitive rendering with metadata about whether what we draw is an input control or not. Leaking abstractions this way would be a form of spaghetti code, as it makes input event management more entangled with graphics output and harder to service independently.

The second reason why I think that resolution independence should be implemented at the widget level is that resolution independence, the input variety in particular, makes the size of things unpredictable for anything above the resolution independence layer itself. So if I put resolution independence at the primitive rendering level, some widgets will always end up being rendered outside of an application’s window (and thus become invisible) at some point, in a conflict for space that should definitely be managed by the widget toolkit. One solution would be to have the primitive rendering code communicate with the widget rendering code to determine whether a widget would be visible or not, but that’s both cumbersome and inelegant.
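
As a sketch of what resolution independence at the widget level could look like, widgets could specify their sizes in physical units and let the toolkit convert them to pixels using the DPI derived from the display’s EDID (names are hypothetical, and real code would also account for the input device’s precision):

    #include <stdint.h>

    struct display_metrics {
        uint32_t width_px, height_px;   /* pixel resolution */
        uint32_t width_mm, height_mm;   /* physical size reported by EDID */
    };

    /* Convert a length in millimetres to pixels on a given display. */
    static inline uint32_t mm_to_px(const struct display_metrics *d, double mm)
    {
        double px_per_mm = (double)d->width_px / (double)d->width_mm;
        double px = mm * px_per_mm;
        return px < 1.0 ? 1u : (uint32_t)(px + 0.5);   /* round, at least 1 px */
    }

    /* Example: a button that should always be about 9 mm tall for comfortable use. */
    static inline uint32_t button_height_px(const struct display_metrics *d)
    {
        return mm_to_px(d, 9.0);
    }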

As for the drawbacks of this approach, it makes it impossible to get a smooth multihead setup across two screens of different I/O resolutions. However, I’m not sure that putting totally unrelated screens side by side and asking them to display a consistent picture is a valid use case. If only the color response differs, I can understand, but if physical screen size and DPI change too, that’s pretty much a no-no. A way to ease the pain would perhaps be to put input resolution independence in the widget toolkit and output resolution independence in the primitive rendering engine, but splitting resolution independence like that sounds quite inelegant, so I’d first like to know whether it is really necessary/useful.

Layer 5: Applications/Desktop shell

Above the widget toolkit layer, all system GUI abstractions for developers are available and applications can run, sending their resolution-independent GUI spec sheets to the lower layers. One special application (or, perhaps, set of applications) will be the desktop shell, which groups together the global system controls that allow one to switch tasks, close programs, open new software, see status information and notifications, perhaps log in too, etc…

9 thoughts on “Big summer update 4bis – The updated version”

  1. Brendan August 28, 2011 / 1:49 am

    Hi,

    I was thinking it might be fun to have a poll, to find out what people are using for monitor/s; so…

    My info!

    The left monitor is a 16:10 LCD monitor running at a resolution of 1600*1200. The right monitor is a 16:9 LCD monitor running at a resolution of 1920*1200. These are the monitor’s native/preferred resolutions. For physical dimensions, the right monitor is larger – slightly taller and about 10% wider than the left monitor. Both monitors are using a “32 bits per pixel” colour depth, and KDE happily uses both of them (such that maximizing a window makes it fill one screen; but I can make a single window cover both screens without any problem by dragging the window’s borders, which can be fun for “full field of view” gaming).

    There’s also a third monitor, which is a smaller 16:10 LCD. This is an auxiliary thing which I rarely use from this computer (KDE doesn’t know about it at the moment). It’s to my far left all on its lonesome. I had planned to set it up for TV/movies one day but I haven’t got around to actually doing it (there’s an ageing 4:3 TV to my right which will hopefully stop working one day and provide some incentive). Both monitors on the left are also hooked up to other computers via KVMs, but it’s probably best I don’t go into that.

    The video cards in this machine are both the same (“lspci” calls them “Radeon HD 4770”), and they’ll handle up to 2 monitors each. The monitor on the right is connected to the first video card. Both of the monitors on the left are connected to the second video card. I don’t worry about crossfire (even though it should work fine) and treat them as individual video cards (it wasn’t intended as a gaming machine).

    – Brendan

  2. Hadrien August 28, 2011 / 10:04 am

    Many apologies, you caught me using the word “resolution” for two unrelated concepts which it covers: pixel resolution and DPI. I’ll correct that in the article later.

    If your second monitor is 1200 pixels tall (pixel resolution) and is only “slightly taller”, then it very likely has the same DPI (“output resolution”), the difference in height being only a matter of physical build mismatch. What I was questioning was whether it is reasonable for a user to expect a working (smooth) multihead setup with two screens of different DPI (which requires a whole lot more work than two screens of identical, but unknown, DPI).

    A horizontal characteristics mismatch is not as big an issue, as there’s no transition to manage there unless you get into vertical multihead setups, which I’ve only seen in big “screen walls”. When you’re building such a thing, you’ll probably want screens of similar physical size anyway, both horizontally and vertically, as you can order them in bulk to reduce costs and make the physical build easier.

    Another thing whose validity I questioned is a strong vertical physical screen size mismatch between screens of the same DPI. I have fewer problems with it because it can easily be managed in software, by artificially “letterboxing” the image on the tallest screen, but it effectively means that there’s all this screen area which will always display black and never be used…

    A poll could be useful, yup, there are only two problems:
    1/ There are not enough users on this website to get relevant results. Perhaps on OSdev’s forums?
    2/ Won’t people’s answers be altered by their experience of existing multihead operating systems, which have their own limitations, even if I word it the “what do you think would be reasonable?” way?

  3. Brendan August 28, 2011 / 6:46 pm

    Hi,

    I probably should’ve mentioned that I didn’t wake up one day and say “I want to buy 3 completely different monitors” and go out and get them. The history of my setup is a bit convoluted.

    Originally I was using a mixture of old (4:3) CRT monitors. The 16:10 LCD was the first LCD monitor I got, and for a while I was using that as the main monitor on the right, with a 17-inch 4:3 CRT on the left (and an older 15-inch CRT as the “auxiliary” on the far left). After a while I wanted more, and got the larger 16:9 monitor, and started using both LCD monitors as a dual-head setup on my main machine. The 17-inch CRT became the “auxiliary” on the far left (shared by other computers and not connected to my main computer at all).

    Then I decided to get a new development machine, and with 3 monitors around me I decided a pair of dual-head video cards would be nice. That was the start of the triple-head “one CRT and 2 different LCDs” era.

    After a while the power supply in the original 16:10 LCD blew up (cheap electrolytic capacitors). I tried to cope with dual-head “CRT + 16:9 monitor” and that lasted about 1 day. That’s when I got the third LCD monitor – I went into the local computer shop and asked “what have you got in stock that will fill a 520 mm wide hole?” and walked out with it 10 minutes later. The plan was to replace the older 16:10 monitor that failed, and end up with triple-head “CRT + smaller 16:9 + larger 16:9”.

    I can be a bit crafty when I want to be though. With nothing much to lose I pulled the old 16:10 monitor apart, found some suspect electrolytic capacitors in the power supply, wrote down the details for all of the power supply’s capacitors, went to an electronics shop and bought replacement capacitors, and got my soldering iron out. That was a few years ago, and the old 16:10 monitor has been working fine since then.

    That’s how I ended up with the “smaller 16:9 + older 16:10 + larger 16:9” triple head arrangement I’m using now. It’s the result of piecemeal evolution; and any similarity between DPIs is pure coincidence.

    The thing is, I wouldn’t be surprised if a lot of people with multiple monitors have similar stories that result in a strange mixture of monitors. I’d expect the most common would be “bought a new LCD monitor to replace an old one, but decided to use both old and new after I tried it”. You can’t really assume all end users will have a matching set of monitors.

    Probably one of the most messed up situations would be a laptop (with an inbuilt LCD screen) that is also connected to a projector, where the DPI difference is massive. This is far from rare – almost everyone has seen this arrangement in classrooms and/or meetings.

    A poll on OSdev’s forums would work fine. Another alternative might be OSNews (if Thom can go shopping for mini-ITX HTPC motherboards as a featured article, then… ;-)

    If you ask how many monitors, and the physical size and preferred/native resolution of those monitors; then you’d be able to determine hardware characteristics (and things like DPI differences, how many people are still using CRTs, etc) without too much trouble. You’re right though – trying to determine how people would like to use multi-head (rather than how they currently use it on existing OSs) wouldn’t be easy.

    – Brendan

  4. Hadrien August 29, 2011 / 10:17 am

    Probably one of the most messed up situations would be a laptop (with an inbuilt LCD screen) that is also connected to a projector, where the DPI difference is massive. This is far from rare – almost everyone has seen this arrangement in classrooms and/or meetings.

    It’s even worse than that: the DPI of a projector is an unknown, since it depends on the physical setup and in particular the distance between the projector and the wall.

    For this situation, and the one where two totally unrelated screens are plugged in (e.g. a netbook connected to a TV or Apple Cinema Display), I think that seamless multihead is not enough. A good OS must also support treating separate physical screens as separate virtual screens. And perhaps a “clone” mode where the secondary screen mirrors what happens on the primary screen would also be useful.

  5. Brendan August 29, 2011 / 5:47 pm

    Hi,

    You’re right – seamless multihead (or “one virtual screen spread across multiple physical monitors”) isn’t enough. There’s a whole bunch of different cases that could be taken into account.

    For multi-user; you’d want to be able to group available monitors together and assign them to users in arbitrary ways. Worst case would be a server with many thin clients, where each thin client may have multiple monitors and/or where each user might use multiple thin clients to simulate multihead (e.g. one user that uses 2 thin clients with 2 monitors on each thin client to get a quad-head setup).

    Even without multi-user, you can still have a single user that uses a combination of local video cards and remote desktops to simulate multi-head.

    Of course “remote desktop” might mean that the OS has virtual video drivers which actually use networking and protocols like X, VNC, RDP, etc, so that other software (e.g. the window manager) needn’t know or care if it’s using remote desktops or not.

    I’d also be tempted to consider “split screen”, where the same physical monitor is treated as separate virtual screens. For example, you might have a large screen where the first user uses the left half and the second user uses the right half.

    For “clone mode”, there’s actually 3 cases. The first is where the same video signal is sent to 2 or more monitors (that may be different physical sizes and different aspect ratios), which is common in low-end laptops (with external monitors) as splitting the video signals from one video card is cheaper than having separate video cards. The second case is where there’s separate video cards for each monitor, which is a bit worse as each monitor may be using a different video mode (but also a bit better as you can generate different video for each different monitor – e.g. use “letterboxing” or something to cope with different physical aspect ratios).

    The last case for “clone mode” is multi-user clone mode (e.g. where different users share the same desktop). A common example of this is Microsoft’s “Windows Remote Assistance”, where administrators and/or help desk operators can assist end users by logging into the user’s computer remotely.

    Then there’s screens used for special purposes, like my “auxiliary” third monitor. I’d be tempted to assume these screens are for presentation purposes – watching videos, showing promotional material (e.g. a receptionist’s computer being used for normal stuff by a receptionist, that’s also used to run a promotional video in a waiting room). The “laptop with a projector” example would probably fit here too, where the projector is used to show a power point presentation or slide show or something to a room full of people.

    For handling all of this…

    At the lowest level, I’d have a “physical screen” abstraction, where device drivers (for video cards and remote desktops) provide an interface for controlling what is shown on that screen. You seem to want to split this into 2 layers (drivers and primitive renderers) but it’s effectively the same (e.g. a single layer containing 2 sub-layers if you like). The interface provided by this layer should be device independent, because the next layer would be a massive nightmare if it’s not. For example, in my case there’d be 3 physical screens.

    At the next level I’d have a “virtual screen” abstraction where virtual screens are mapped to physical screens, which takes care of things like screen cloning and split screen; and supports things like the “unix style virtual desktops” (for e.g. I’ve got X/KDE set up for 6 virtual desktops, but support is poor because I can’t have different GUIs running on different virtual desktops and “control+tab” between them). I’d also group virtual screens together into logical groups at this layer, and track things like physical size and physical location, and which user is in control of which logical groups of virtual screens. For example, in my case there’d be 6 virtual screens for the first physical monitor, 6 virtual screens for the second physical monitor and one more virtual screen for the third/auxiliary monitor. The virtual screens for the first and second monitors would be grouped into pairs (e.g. 6 groups of virtual screens with 2 virtual screens per group) while the other virtual screen for the third/auxiliary monitor would be in its own group. All groups of logical screens would belong to the same user; who would be able to cycle through the 7 logical groups of virtual screens by pressing some hotkey (like “control + tab”) – the first 6 “control+tab” presses switch between what is displayed on the 2 main monitors, and the seventh time “control+tab” shifts focus (keyboard/mouse) to the auxiliary monitor.

    The next level would be your Window Manager (and my “GUI”), which creates a logical desktop from a group of one or more virtual screens and the “physical size and physical location” information for each virtual screen. A logical desktop may not be rectangular. For example, in my case I’d have 7 separate window managers or logical desktops – 6 logical desktops (one for each group of virtual screens) that share both of the 2 main monitors, plus another logical desktop that corresponds to the only virtual screen for my third/auxiliary monitor. The first 6 logical desktops would be a rectangle with a horizontal strip missing from the bottom left. The last logical desktop would be rectangular.

    I think that covers everything…

    Note: I should learn to shut up (giving away my ideas for free is going to come back and bite me one day); but I’ve been considering 3D positioning as an extra/additional way of tracking the relationships between virtual screens; where (for my first 6 logical desktops) the logical desktop would be described as a shape with a vertical strip missing down the middle and a horizontal strip missing from the bottom left; that isn’t flat and has a bend/angle in the middle. Applications, etc would still use the “flattened” representation of the logical desktop; but things like 3D games would use the alternative “3D positioning” representation. The idea of this is to allow for a “holes cut out of a sphere” effect; where the monitors are the “holes”, and where the user is at the centre of the sphere and 3D graphics are projected onto the holes/monitors. It’s a bit hard to explain without a diagram, so I created a diagram that might help: http://i.imgur.com/RRdb4.png

    – Brendan

  6. Hadrien September 4, 2011 / 7:47 pm

    I think we pretty much agree; I just wanted to say a bit about your 3D screen positioning idea: I like it, and it could also help manage things like projectors, but do you have an idea of how it would be operated by the end user?

  7. Brendan September 5, 2011 / 3:36 am

    Hi,

    I’ve been imagining a utility that allows users/administrators to do “advanced desktop setup”. They start this utility, and the OS draws where it thinks the user’s head, monitors, speakers, microphones, web cams, etc are in 3D space based on anything it can autodetect (which would typically include the number of monitors, speakers, microphones and cameras; and the physical size/s of the monitor/s). Then the utility lets the user/administrator drag the items around inside that 3D space to reposition them.

    It would also need to support profiles (e.g. for things like laptops and docking stations); and in some cases I may be able to provide default profiles from a web service – e.g. if you install the OS on a “model 1234XYZ laptop” the OS might be able to use information from SMBIOS (model and vendor) and ask the web service where the laptop’s built in screen, microphone, speakers, etc are so that the user doesn’t need to do it. Of course when no information is available it’d make default assumptions (like all monitors are positioned next to each other horizontally with any speakers to the left and right of those monitor/s and the microphone near the middle, with the user’s head roughly 1 meter from the centre of the monitors), where the default assumptions are likely to be wrong (but are hopefully a “good enough” starting point, especially for simple single-monitor setups).

    There’s a few other more advanced ideas I’ve been having; like allowing a microphone to be used to automatically determine the dynamic range of any speakers, allowing a web cam to be used to auto-adjust the colour/gamma/contrast/brightness of multi-monitor setups, allowing things like Microsoft’s Kinect to detect where the user’s head is, etc. These ideas would need a lot more research though, and there’s plenty of reasons why some might not be viable (e.g. glare from monitors and strobe effects causing too many problems for the “auto-adjust colour correction via. web-cam” idea).

    After it’s set up, the user wouldn’t need to do anything – the 3D positioning would automatically be used for all 3D graphics.

    Cheers,

    Brendan
