It calls out IPC and async I/O as two particular areas where POSIX doesn’t cover very much. I’m curious whether Redox folks have written about plans for those things anywhere? I noticed this issue discussing IPC, so maybe that’s a good place to follow along right now? I didn’t find a similar issue for async I/O though.
I have not yet read the linked paper, but honestly speaking it is not just IPC and async I/O where people don’t use POSIX. For example, many languages have their own non-POSIX interface for managing threads and processes, and I don’t know anyone who likes directly using POSIX threads instead of some library abstracting over them.
If anyone wants to look into async IO here are some sources:
https://people.freebsd.org/~jlemon/papers/kqueue.pdf
a paper describing how kqueue works (from what I have read, kqueue is technically superior to epoll, not just because of the problems many people find with epoll, but also because it seems to be easier to extend (add new types of events) and it seems to have a more ergonomic API; but I’m not too deep into the epoll vs. kqueue debate)
other techniques to look into, both for the good and the bad parts (to avoid repeating the same mistakes), should include: AIO (Linux), IOCP (Windows), signals, /dev/poll (Solaris).
I personally would prefer a kqueue-like API, but it might be quite a big thing to implement and I don’t have much experience with system-level async I/O. Also, it has to be made with the Redox design principles in mind.
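For anyone who has not used it, here is a minimal sketch of what the kqueue API looks like on FreeBSD/macOS (plain C, assuming an already-open fd and with error handling trimmed; just an illustration, not Redox code):

```c
/* Wait until `fd` becomes readable using kqueue. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void wait_readable(int fd)
{
    int kq = kqueue();
    if (kq == -1) { perror("kqueue"); exit(1); }

    /* Describe the interest: ident, filter, flags, fflags, data, udata. */
    struct kevent change;
    EV_SET(&change, fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    /* Registration and waiting go through the same call. NULL timeout = block. */
    struct kevent event;
    int n = kevent(kq, &change, 1, &event, 1, NULL);
    if (n == -1) { perror("kevent"); exit(1); }

    if (n > 0 && event.filter == EVFILT_READ)
        printf("fd %d readable, %lld bytes pending\n", fd, (long long)event.data);

    close(kq);
}
```

New event types are added as new filters (e.g. EVFILT_SIGNAL, EVFILT_TIMER, EVFILT_VNODE) that reuse the same struct, which is part of why the API is comparatively easy to extend.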
Regarding POSIX, I wonder how close / “compatible” Redox wants to be to Linux and POSIX.
Mainly, how many of the “bad” parts it wants to implement for compatibility reasons. In Linux (as well as POSIX) there are often multiple layers/revisions of a solution for a specific problem, where newer ones tend to fix problems found with the previous ones, making the previous one kinda obsolete.
An example would be select -> poll -> epoll (Linux)/kqueue (FreeBSD/macOS), or signal -> sigaction -> signalfd/sigwaitinfo.
Especially with regard to signals, not supporting signal() (but supporting sigaction()) would probably be fine, as signal() is not really compatible with multi-threading and is discouraged even by the POSIX standard (as far as I know, I might be mistaken).
So I think it would be a good idea to start by designing the “state of the art” kind of solutions, e.g. epoll/kqueue/something_else_but_kinda_similar, and then add the older compat functions if needed.
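To make the signal vs. sigaction point a bit more concrete, here is a minimal sketch of the newer sigaction() layer (the handler and flags are just for illustration, not a recommendation for how Redox should expose this):

```c
/* Register a SIGINT handler with sigaction() instead of the old signal(). */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;            /* only async-signal-safe work in a handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);  /* extra signals to block while the handler runs */
    sa.sa_flags = SA_RESTART;  /* restart semantics are explicit, unlike with signal() */

    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    pause();                   /* wait for any signal */
    if (got_sigint)
        puts("got SIGINT");
    return 0;
}
```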
As a side note, when someone designs a system like epoll/kqueue for Redox, it would be nice if signals could be handled through it as the default way of handling signals on Redox. On Linux this is possible with signalfd+epoll; kqueue, on the other hand, has only limited signal handling capabilities: while it can tell you which signals happened since the last kqueue query, it is limited to exactly that, so you don’t have access to the siginfo_t struct that can be retrieved when using sigwaitinfo.

Btw. of the many ways to handle signals (signal/sigaction/sigwaitinfo/sigwait/signalfd/sigtimedwait), the “best”[1], i.e. least brittle and race-condition prone, seems to be this: block/mask the signals in all threads, then have one dedicated signal-handling thread in which you basically build a simple event loop with sigwaitinfo (which suspends the thread until a signal happens), and when you get a signal event you forward this information through some other means (e.g. channels) to all other interested threads. Naturally, handling of inter-thread signals might require a bit of different work, but “typical” signal handlers are prone to all kinds of race conditions. So it would be awesome if this could just be part of the standard async I/O/event system, as the “best”[1] solution is already quite similar to it anyway.

[1]: I.e. the solution mentioned above. Note that there are probably other “best” solutions, e.g. building a similar event loop with signalfd+epoll. I’m much too used to libraries which abstract this away to be certain in any way.
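Here is a rough sketch of that pattern in plain C, with a pipe standing in for whatever channel mechanism the rest of the program uses (illustration only, error handling kept minimal):

```c
/* One dedicated thread waits for signals with sigwaitinfo() and forwards
 * them over a pipe; everything else keeps the signals blocked. */
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int event_pipe[2];          /* read end feeds the normal event loop */

static void *signal_thread(void *arg)
{
    sigset_t *set = arg;
    for (;;) {
        siginfo_t info;
        int signo = sigwaitinfo(set, &info);   /* suspends until a signal arrives */
        if (signo == -1) {
            if (errno == EINTR) continue;
            perror("sigwaitinfo");
            exit(1);
        }
        /* Forward the interesting parts (here as text) to other threads. */
        char msg[64];
        int len = snprintf(msg, sizeof msg, "signal %d from pid %ld\n",
                           signo, (long)info.si_pid);
        (void)write(event_pipe[1], msg, (size_t)len);
        if (signo == SIGTERM)
            break;
    }
    return NULL;
}

int main(void)
{
    if (pipe(event_pipe) == -1) { perror("pipe"); return 1; }

    /* Block the signals we care about before spawning any other thread;
     * new threads inherit this mask, so they all stay blocked. */
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGTERM);
    sigaddset(&set, SIGCHLD);
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, &set);

    /* Demo: send ourselves a signal; the rest of the program just treats
     * event_pipe[0] like any other fd (e.g. registers it with epoll/kqueue). */
    kill(getpid(), SIGTERM);

    char buf[64];
    ssize_t n = read(event_pipe[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

    pthread_join(tid, NULL);
    return 0;
}
```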
EDIT:
here is an article discussing why signalfd does not “solve” the problem with signals. It also explains a “best” solution similar to what I mentioned above, but in more detail, and proposes how something like signalfd could actually solve the problem if it were designed slightly differently: https://ldpreload.com/blog/signalfd-is-useless?reposted-on-request
(btw. it is from the person behind Rust’s MIO)
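For comparison, a minimal Linux-only sketch of the signalfd+epoll variant from the footnote; it only shows the mechanics, and the article above explains why this still doesn’t fully fix signals:

```c
/* Turn blocked signals into a readable fd and poll it with epoll. */
#include <sys/epoll.h>
#include <sys/signalfd.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGTERM);
    sigprocmask(SIG_BLOCK, &mask, NULL);   /* must block, or the default action still fires */

    int sfd = signalfd(-1, &mask, SFD_CLOEXEC);
    int ep  = epoll_create1(EPOLL_CLOEXEC);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, sfd, &ev);

    kill(getpid(), SIGTERM);               /* demo: make something happen */

    struct epoll_event out;
    if (epoll_wait(ep, &out, 1, -1) == 1 && out.data.fd == sfd) {
        struct signalfd_siginfo si;
        if (read(sfd, &si, sizeof si) == (ssize_t)sizeof si)
            printf("signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);
    }

    close(sfd);
    close(ep);
    return 0;
}
```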
There was a proposed CLONE_FD/clone4 interface on Linux to make it possible to wait on child processes asynchronously from a library without needing to own the process-wide SIGCHLD handler. Unfortunately I don’t think it ever landed. But that’s another good example of what Could Have Been.
Maybe also related, I think EINTR exists because lots of blocking system calls were designed before async IO was a thing. (select comes along in 1983: idea.popcount.org/2016-11-01-a-brief-history-of-select2.) Handling interruptions properly is so tricky that Python reversed its behavior in 2014. In an OS that has async IO from the beginning, hopefully EINTR doesn’t need to exist at all? Maybe all blocking calls could be userland wrappers around non-blocking APIs?
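For context, this is the kind of retry boilerplate that EINTR forces onto essentially every blocking call today (a sketch of the usual idiom):

```c
/* Retry a read() that was interrupted by a signal before any data arrived. */
#include <errno.h>
#include <unistd.h>

static ssize_t read_retry(int fd, void *buf, size_t count)
{
    for (;;) {
        ssize_t n = read(fd, buf, count);
        if (n == -1 && errno == EINTR)
            continue;           /* interrupted by a signal: just try again */
        return n;               /* success, EOF, or a real error */
    }
}
```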
While we’re at it, can PID reuse and FD reuse be not a thing? Honestly there must be some book somewhere of “things we all regret from 50 years of Unix”?
Wouldn’t you be able to run out of IDs at some point (on a long-running, heavily used system which frequently spawns many short-lived processes)? But then there is no reason not to wait until that point before starting to reuse IDs. This would make some bugs very unlikely to ever take effect (bugs where a program remembers the PID of a long-gone process and e.g. sends a signal to it).
So a good idea, I think.
If the PID was a 64-bit counter, and you created 1 billion new processes every second, it would be 2**64 / (10**9 * 60 * 60 * 24 * 365) = 585 years before you ran out. If the PID was a 128-bit counter, running out of PIDs would take as much work as brute forcing a modern TLS encryption key, at which point PID reuse would be the least of our problems. But yeah, 64 bits is probably enough. I’d be surprised if there wasn’t some other 64-bit counter somewhere in the kernel that would overflow before you got that far?
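A quick sketch to double-check that arithmetic (same constants as above):

```c
/* 2**64 PIDs consumed at one billion new processes per second. */
#include <stdio.h>

int main(void)
{
    double pid_space     = 18446744073709551616.0;   /* 2**64 */
    double per_second    = 1e9;                      /* 10**9 new processes/s */
    double secs_per_year = 60.0 * 60 * 24 * 365;

    printf("%.0f years\n", pid_space / (per_second * secs_per_year));  /* ~585 */
    return 0;
}
```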
You are totally right, I should have done a bit of math before posting
And like you said, 64 bits should be enough, and not just because some other counter could run out before then. If we continue your calculation and assume some really long-running system, let’s say 50 years without a reboot[1], we would need to either start more than 11.7 * 10**9 processes per second or create as many file descriptors in a single process. That is about 11 processes/file descriptors per nanosecond. I really had been quite naive to think that running out of IDs could happen in some “extreme case”.
[1]: Is there even PC hardware which can keep a system running for 50 years? Maybe with some distributed OS or some hot-swappable CPU/motherboard, but this is quite far-fetched…
MAIN:
I think one possible reason for reusing process IDs is to make them easier to display/type, e.g. typing kill 94648321 compared to kill 28842 is a bit harder / easier to get wrong. But then, today we live in a world with copy-paste (even in terminals), kill-by-name and other tools, which makes this mostly irrelevant, even if you would have to type kill 1125899906842612 (I always felt typing out a process ID is, somehow, wrong.)
Though an interesting idea would be being able to “name” (mainly) long-running processes so that other processes can refer to them by name when sending signals, but then I think such things might be better solved through a system IPC mechanism. Nevertheless, running ps a and getting | bla-server pts/3 S+ 1:32 /usr/lib/foo_system/foo-server | instead of | 294314 ... | sounds nice, but also incompatible…
This is fun to brainstorm about. I don’t know how many times “normal” systems cycle through their 32768 PIDs, but I assume it’s “usually only a few times”? So hopefully even with an enormous PID space, most machines won’t actually end up generating PIDs more than a digit or two longer than what they do today?
My Linux laptop has by now been “up” for ~13 days and my highest PID is 32604.
I’m not sure what I can read from that, as 1. PIDs are being reused and 2. it might be that certain processes get high PIDs from the get-go even if lower ones are available.
So it probably(?) cycled through the IDs at least once (as a Linux desktop system with a bit of Python development in the last 13 days). So having 6-digit PIDs would probably become a common case, at least for servers.