Does Redox have a latency target?

This article confirmed my own nagging suspicions:

https://danluu.com/input-lag/

It’s a bit absurd that a modern gaming machine running at 4,000x the speed of an Apple 2, with a CPU that has 500,000x as many transistors (and a GPU that has 2,000,000x as many transistors), can maybe manage the same latency as an Apple 2, and only in very carefully coded applications on a monitor with nearly 3x the refresh rate.

I suspect my sensitivity to low latency will soon be the norm (if it’s not already) as computer-driven peripherals become ever more commonplace in our daily lives.

Not only is it too early to put Redox to the test for this, but I assume a high-speed camera isn’t always so easy to come by. But I’m wondering if “input latency < x on SomeDevice” could be a valuable design goal for Redox OS.

IMO this is conceptually similar to bufferbloat, applied to operating systems: too many context switches, too many subsystems, too many messages being sent, too many clever abstractions, etc. between keypress and display output.


I am a complete idiot, so don’t take the below too seriously, but:

It seems like most of the latency due to layers and locking could be avoided by dynamically creating very tiny monolithic event handlers that pierce the stack. You could call this pre-assembling a pipeline.
For instance, when user input is happening, instead of checking what the current terminal is, then the terminal passing the input to the window manager, then the window manager passing it to the application with the active window, the hardware events should go straight to the window.
How? By recreating the handler function whenever, for instance, a different window gets focus.
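
To make that concrete, here’s a minimal Rust sketch of the idea (all of the names here, KeyEvent, Window, InputRouter, are made up for illustration, not Redox or Orbital APIs): the layered lookup happens only when focus changes, and the per-keypress hot path is a single pre-assembled call.

```rust
use std::rc::Rc;

/// A raw key event as it might arrive from the input driver (hypothetical).
#[derive(Debug, Clone, Copy)]
struct KeyEvent {
    scancode: u8,
    pressed: bool,
}

/// Stand-in for "the place a focused window receives input".
struct Window {
    title: String,
}

impl Window {
    fn deliver(&self, ev: KeyEvent) {
        // In a real system this would write to the window's event queue/socket.
        println!("{} got {:?}", self.title, ev);
    }
}

/// The router owns exactly one pre-assembled handler. The hot path
/// (`handle`) is a single indirect call; the layered routing decision
/// is made only in `set_focus`, i.e. when focus actually changes.
struct InputRouter {
    handler: Box<dyn Fn(KeyEvent)>,
}

impl InputRouter {
    fn new() -> Self {
        InputRouter {
            handler: Box::new(|ev| println!("no focused window, dropping {:?}", ev)),
        }
    }

    /// Rebuild the handler when a different window gains focus.
    fn set_focus(&mut self, window: Rc<Window>) {
        self.handler = Box::new(move |ev| window.deliver(ev));
    }

    /// Hot path: called per key event, no terminal/WM lookup here.
    fn handle(&self, ev: KeyEvent) {
        (self.handler)(ev);
    }
}

fn main() {
    let editor = Rc::new(Window { title: "editor".into() });
    let browser = Rc::new(Window { title: "browser".into() });

    let mut router = InputRouter::new();

    router.set_focus(editor);
    router.handle(KeyEvent { scancode: 0x1E, pressed: true }); // goes straight to the editor

    router.set_focus(browser);
    router.handle(KeyEvent { scancode: 0x30, pressed: true }); // goes straight to the browser
}
```

The point is just that the routing cost is paid once per focus change instead of once per event; whether that’s actually where Redox’s latency goes is an open question.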

I suppose the same thing is already happening on the output front, with compositors directly letting windowed apps paint to their GPU area instead of passing through copious layers of software composition to create a full-screen image.