Plasma 5.8 will be our first long-term supported release in the Plasma 5 series. We want to make this release as polished and stable as possible. One area we weren’t quite happy with was our multi-screen user experience. While it works quite well for most of our users, a number of problems made our multi-screen support sub-par.
Let’s take a step back to define what we’re talking about.
Multi-screen support means connecting more than one screen to your computer. The following use cases give a good idea of the scope:
- Static workstation: A desktop computer with more than one display connected; the desktop typically spans both screens to give more screen real estate.
- Docking station: A laptop computer hooked up to a docking station with additional displays connected. This is a more interesting case, since different configurations may be picked depending on whether the laptop’s lid is closed and on how the user switches between displays.
- Projector: The computer is connected to a projector or TV.
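The docking-station case involves an actual policy decision: which outputs should be on, given the lid state and what is connected? The sketch below is purely illustrative (the struct and function names are invented, not KScreen's actual logic), but it shows the shape of such a rule: with the lid closed and an external display available, the internal panel stays off.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a lid policy, not KScreen's real implementation.
struct OutputInfo {
    std::string name;
    bool internal = false;   // the laptop's built-in panel
    bool connected = false;
};

std::vector<std::string> outputsToEnable(const std::vector<OutputInfo> &outputs,
                                         bool lidClosed)
{
    bool haveExternal = false;
    for (const auto &o : outputs) {
        if (o.connected && !o.internal) {
            haveExternal = true;
        }
    }

    std::vector<std::string> enable;
    for (const auto &o : outputs) {
        if (!o.connected) {
            continue;
        }
        // Lid closed and an external display present: keep the
        // internal panel off. Otherwise, enable every connected output.
        if (o.internal && lidClosed && haveExternal) {
            continue;
        }
        enable.push_back(o.name);
    }
    return enable;
}
```

The real daemon has to cope with far messier inputs, but the core decision per hardware event boils down to a rule of this kind.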
The idea is that when the user plugs in a display or starts up with a given hardware combination, and has already configured that combination, the saved setup is restored. Otherwise, a reasonable guess is made to give the user a good starting point for fine-tuning the setup.
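Restoring a setup for "this hardware combination" implies some stable identifier for the combination. One way to get one (illustrative only; this is not KScreen's actual scheme, and `configurationId` is an invented name) is to key on the set of connected displays' EDIDs, sorted so the ID doesn't depend on enumeration order:

```cpp
#include <algorithm>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: derive a stable identifier for a combination of
// connected displays, so that a previously saved configuration can be
// looked up when the same hardware appears again.
std::string configurationId(std::vector<std::string> edids)
{
    // Sort first, so the ID is independent of output enumeration order.
    std::sort(edids.begin(), edids.end());
    std::ostringstream joined;
    for (const auto &edid : edids) {
        joined << std::hash<std::string>{}(edid) << ':';
    }
    return joined.str();
}
```

The daemon could then use such an ID as the filename or key under which the matching screen configuration is saved and restored.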
This is the job of KScreen. At a technical level, KScreen consists of three parts:
- system settings module: The graphical configuration UI, reached through System Settings, where the user can adjust the screen setup.
- kscreen daemon: Running as a background process, this component saves, restores and creates initial screen configurations.
- libkscreen: The library providing the API for reading and writing screen setups. It has backends for X11, Wayland, and others, allowing callers to talk to the exact same programming interface independent of the display server in use.
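The backend idea can be sketched in a few lines. The class and method names below are invented for illustration, not libkscreen's real API; the point is that client code programs against one abstract interface, while a concrete backend per display server does the actual talking:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of a per-display-server backend abstraction.
struct Mode {
    int width = 0;
    int height = 0;
};

class ScreenBackend {
public:
    virtual ~ScreenBackend() = default;
    virtual std::string name() const = 0;
    virtual std::vector<Mode> readModes() = 0;   // query the current setup
    virtual void applyMode(const Mode &m) = 0;   // write a new setup
};

class X11Backend : public ScreenBackend {
public:
    std::string name() const override { return "XRandR"; }
    std::vector<Mode> readModes() override { return {{1920, 1080}}; } // stub
    void applyMode(const Mode &) override { /* talk to XRandR here */ }
};

class WaylandBackend : public ScreenBackend {
public:
    std::string name() const override { return "Wayland"; }
    std::vector<Mode> readModes() override { return {{2560, 1440}}; } // stub
    void applyMode(const Mode &) override { /* talk to the compositor here */ }
};

// Callers never need to know which display server is running.
std::unique_ptr<ScreenBackend> loadBackend(bool onWayland)
{
    if (onWayland) {
        return std::make_unique<WaylandBackend>();
    }
    return std::make_unique<X11Backend>();
}
```

Both the daemon and the settings module can then be written once against the abstract interface, which is what makes the code re-use across display servers possible.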
At an architectural level, this is a sound design: the roles are clearly separated, the low-level bits are suitably abstracted to allow code re-use, and the API presents what matters to the user while hiding implementation details. Most importantly, aside from a few bugs, it works as expected, and in principle there’s no reason why it shouldn’t.
So much for the theory. In reality, we’re dealing with a huge amount of complexity. There are hardware events such as suspending and waking up with a different configuration; the laptop’s lid may be closed or opened (and we don’t even get an event when it closes); displays come and go; depending on its connection, the same piece of hardware might support completely different resolutions; hardware comes with broken EDID information; display connectors come and go, and so do display controllers (CRTCs). And on top of all that, the only way we get to know what actually works in reality for the user is the “throw stuff against the wall and observe what sticks” tactic.
This is the fabric of nightmares. Since I prefer to not sleep, but hack at night, I seemed to …