Article: "POSIX has become outdated"

This article recently made the rounds on HackerNews: http://www.cs.columbia.edu/~vatlidak/resources/POSIXmagazine.pdf

It calls out IPC and async I/O as two particular areas where POSIX doesn’t cover very much. I’m curious whether Redox folks have written about plans for those things anywhere? I noticed this issue discussing IPC, so maybe that’s a good place to follow along right now? I didn’t find a similar issue for async I/O though.

I have not yet read the linked paper, but honestly speaking it is not just IPC and async I/O where people don’t use POSIX. For example, many languages have their own non-POSIX interfaces for managing threads and processes, and I don’t know anyone who likes using POSIX threads directly instead of some library abstracting over them.

If anyone wants to look into async I/O, here are some sources:

Other techniques to look into, both for the good and the bad parts (to avoid repeating the same mistakes), would include: AIO (Linux), IOCP (Windows), signals, /dev/poll (Solaris).

I personally would prefer a kqueue-like API, but it might be quite a big thing to implement and I don’t have much experience with system-level async I/O. Also it has to be made with the Redox design principles in mind.
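Not a Redox proposal, just for reference: this is roughly what the kqueue style of readiness notification looks like on FreeBSD/macOS (a minimal sketch, error handling mostly omitted):

```c
/* Minimal sketch of the kqueue readiness-notification style (FreeBSD/macOS).
 * This only illustrates the API shape being discussed, not a Redox design. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int watch_readable(int fd)
{
    int kq = kqueue();
    if (kq < 0)
        return -1;

    /* Register interest: "tell me when fd becomes readable". */
    struct kevent change;
    EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    if (kevent(kq, &change, 1, NULL, 0, NULL) < 0) {
        close(kq);
        return -1;
    }

    for (;;) {
        /* Block until at least one registered event fires. */
        struct kevent event;
        int n = kevent(kq, NULL, 0, &event, 1, NULL);
        if (n <= 0)
            break;
        /* event.data is the number of bytes ready to read. */
        printf("fd %d readable, %lld bytes buffered\n",
               (int)event.ident, (long long)event.data);
    }

    close(kq);
    return 0;
}
```

One nice property of this shape is that registration and waiting go through the same kevent() call, and the same queue can watch sockets, files, timers, signals (EVFILT_SIGNAL) and child processes (EVFILT_PROC).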

1 Like

Somewhat older, but still relevant to this topic: Ghost of Unix Past

1 Like

Wrt. POSIX, I wonder how close / “compatible” Redox wants to be with Linux and POSIX.
Mainly, how many of the “bad” parts it wants to implement for compatibility reasons. In Linux (as well as POSIX) there are often multiple layers/revisions of a solution to a specific problem, where newer ones tend to address problems found with the previous ones, making the previous ones somewhat obsolete.

An example would be select -> poll -> epoll (Linux)/kqueue (FreeBSD/macOS), or signal -> sigaction -> signalfd/sigwaitinfo.

Especially wrt. signals, not supporting signal() (only sigaction()) would probably be fine, as signal() is not really compatible with multi-threading and is deprecated even in the POSIX standard (as far as I know, might be mistaken).
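For comparison, registering a handler with sigaction() instead of the legacy signal() call looks roughly like this (a minimal sketch):

```c
/* Sketch: registering a handler with sigaction() rather than the legacy
 * signal() call, whose semantics are historically underspecified. */
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t got_sigterm = 0;

static void on_sigterm(int signo)
{
    (void)signo;
    got_sigterm = 1;            /* only async-signal-safe work in a handler */
}

int install_sigterm_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigterm;
    sigemptyset(&sa.sa_mask);   /* signals to block while the handler runs */
    sa.sa_flags = SA_RESTART;   /* restart interrupted syscalls instead of EINTR */
    return sigaction(SIGTERM, &sa, NULL);
}
```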

So I think it would be a good idea to start by designing the “state of the art” kind of solutions, e.g. epoll/kqueue/something_else_but_kinda_similar, and then add the older functions for compatibility if needed.

As a side note, when someone designs a system like epoll/kqueue for Redox it would be nice to be able to handle signals through it as the default way of handling signals on Redox. In Linux this is possible with signalfd+epoll; kqueue, on the other hand, has only limited signal handling capabilities: while it can tell you which signals happened since the last kqueue query, it is limited to exactly that, so you don’t have access to the siginfo_t struct which can be retrieved when using sigwaitinfo.

Btw., of the many ways to handle signals (signal/sigaction/sigwaitinfo/sigwait/signalfd/sigtimedwait), the “best”[1], i.e. least brittle and race-condition-prone, seems to be: block/mask the signals on all other threads, then have one dedicated thread for handling them, in which you basically build a simple event loop with sigwaitinfo (which suspends the thread until one of those signals is pending); when you get a signal event, you forward this information through some other means (e.g. channels) to all other interested threads. Naturally, handling of inter-thread signals might require a bit of different work, but “typical” signal handlers are prone to all kinds of race conditions. So it would be awesome if this could just be part of the standard async I/O/event system, as the “best”[1] solution is already quite similar to it anyway.

[1]: Aka the solution mentioned above. Note that there are probably other “best” solutions, e.g. building a similar event loop with signalfd+epoll. I’m much too used to using libraries which abstract this away to be certain in any way.
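A minimal sketch of that pattern on a plain POSIX system (the forwarding to other interested threads is left out; names are placeholders):

```c
/* Sketch of the "block everywhere, wait in one thread" pattern described
 * above: pthread_sigmask() + sigwaitinfo(). Forwarding to interested
 * threads (channels, a pipe, ...) is left out. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg)
{
    const sigset_t *set = arg;
    for (;;) {
        siginfo_t info;
        /* Suspends until one of the signals in *set is pending; unlike
         * kqueue's EVFILT_SIGNAL we get the full siginfo_t back. */
        int signo = sigwaitinfo(set, &info);
        if (signo < 0)
            continue;                     /* e.g. EINTR */
        printf("signal %d from pid %ld\n", signo, (long)info.si_pid);
        /* ...forward the event to other interested threads here... */
    }
    return NULL;
}

int main(void)
{
    static sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGTERM);
    sigaddset(&set, SIGINT);

    /* Block these signals now; threads created afterwards inherit the mask,
     * so the dedicated thread is the only place they are ever consumed. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, &set);

    /* ...the rest of the program runs without asynchronous signal handlers... */
    pthread_join(tid, NULL);
    return 0;
}
```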

EDIT:
Here is an article discussing why signalfd does not “solve” the problem with signals. It also explains a “best” solution similar to what I mentioned above, but in more detail, and proposes how something like signalfd could actually solve the problem if it were made slightly differently:
https://ldpreload.com/blog/signalfd-is-useless?reposted-on-request
(btw. it is from the person behind Rust’s mio)

1 Like

There was a proposed CLONE_FD/clone4 interface on Linux to make it possible to wait on child processes asynchronously from a library without needing to own the process-wide SIGCHLD handler. Unfortunately I don’t think it ever landed. But that’s another good example of what Could Have Been.

Kind of related, the existence of interfaces like dup3 and F_DUPFD_CLOEXEC makes it seem like the Unix defaults for child process inheritance were really unfortunate, similar to the signal masking issues you mentioned. (Windows has a perma-bug around this too: support.microsoft.com/en-us/help/315939/prb-child-inherits-unintended-handles-during-createprocess-call)
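For readers who haven’t run into them: both exist so that close-on-exec can be set atomically at duplication time, rather than racing against a fork/exec happening in another thread. A rough sketch:

```c
/* Sketch: duplicating a descriptor with close-on-exec set atomically,
 * instead of dup() followed by fcntl(FD_CLOEXEC), which leaves a window
 * in which a concurrently forked child can inherit the descriptor. */
#define _GNU_SOURCE              /* for dup3() on Linux */
#include <fcntl.h>
#include <unistd.h>

int dup_cloexec(int fd)
{
    /* POSIX.1-2008: lowest free descriptor >= 0, FD_CLOEXEC already set. */
    return fcntl(fd, F_DUPFD_CLOEXEC, 0);
}

int dup2_cloexec(int oldfd, int newfd)
{
    /* Linux-specific dup3(): like dup2(), but can set O_CLOEXEC atomically. */
    return dup3(oldfd, newfd, O_CLOEXEC);
}
```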

Maybe also related, I think EINTR exists because lots of blocking system calls were designed before async IO was a thing. (select comes along in 1983: idea.popcount.org/2016-11-01-a-brief-history-of-select2.) Handling interruptions properly is so tricky that Python reversed its behavior in 2014. In an OS that has async IO from the beginning, hopefully EINTR doesn’t need to exist at all? Maybe all blocking calls could be userland wrappers around non-blocking APIs?
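For context, this is the retry boilerplate that EINTR pushes onto every caller of a blocking call today (a minimal sketch):

```c
/* Sketch of the retry boilerplate EINTR forces on callers: every blocking
 * call has to be wrapped in a loop like this (or registered with SA_RESTART). */
#include <errno.h>
#include <unistd.h>

ssize_t read_retrying(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n < 0 && errno == EINTR);    /* interrupted by a signal: retry */
    return n;
}
```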

Another set of What Could Have Been thoughts along these lines: news.ycombinator.com/item?id=13624276

[The two-link-limit for new users is biting me here :p]

While we’re at it, can PID reuse and FD reuse be not a thing? :slight_smile: Honestly there must be some book somewhere of “things we all regret from 50 years of Unix”?

I really like this idea :rofl:

Wouldn’t you be able to run out of IDs at some point (on a long-running, heavily used system which frequently spawns many short-lived processes)? But then there is no reason not to wait until that point before starting to reuse IDs. This would make some bugs very unlikely to ever take effect (bugs where a program remembers the PID of a long-gone process and e.g. sends a signal to it).
So a good idea, I think :slight_smile:

If the PID was a 64-bit counter, and you created 1 billion new processes every second, it would be 2**64 / (10**9 * 60 * 60 * 24 * 365) = 585 years before you ran out. If the PID was a 128-bit counter, running out of PIDs would take as much work as brute forcing a modern TLS encryption key, at which point PID reuse would be the least of our problems. But yeah, 64 bits is probably enough :slight_smile: I’d be surprised if there wasn’t some other 64-bit counter somewhere in the kernel that would overflow before you got that far?
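The same estimate as a quick sanity check (just re-running the arithmetic above):

```c
/* Quick sanity check of the estimate above: years until a 64-bit PID
 * counter wraps at one billion new processes per second. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double pids_per_second  = 1e9;                /* 1 billion spawns/s */
    const double seconds_per_year = 60.0 * 60 * 24 * 365;
    double years = (double)UINT64_MAX / (pids_per_second * seconds_per_year);
    printf("%.1f years\n", years);                      /* prints ~584.9 */
    return 0;
}
```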

SIDE NOTE:

You are totally right, I should have done a bit of math before posting :sweat_smile:

And like you said, 64 bits should be enough, and not just because some other counter could run out before then. If we continue your calculation and assume some really long-running system, let’s say 50 years without a reboot[1], we would need to either start more than 11.7 × 10^9 processes per second or create that many file descriptors per second in a single process. That is about 11.7 processes/file descriptors per nanosecond. I really was quite naive to think that running out of IDs could happen in some “extreme case” :upside_down:

[1]: Is there even PC hardware which can make a system run for 50 years? Maybe with some distributed OS or some hot-swappable CPU/motherboard, but this is quite far-fetched…

MAIN:
I think one possible reason for reusing process IDs is to make them easier to display/type, e.g. typing kill 94648321 compared to kill 28842 is a bit harder / easier to get wrong. But then, today we live in a world with copy-paste (even in terminals), kill-by-name and other tools, which makes this mostly irrelevant, even if you had to type kill 1125899906842612 :wink: (Typing out a process ID has always felt somehow wrong to me.)

Though an interesting idea would be to be able to “name” (mainly) long-running processes so that other processes can refer to them by name when sending signals, but then I think such things might be better solved through a system-level IPC mechanism. Nevertheless, running ps a and getting | bla-server pts/3 S+ 1:32 /usr/lib/foo_system/foot-server | instead of | 294314 ... | sounds nice, but also incompatible …

This is fun to brainstorm about :slight_smile: I don’t know how many times “normal” systems cycle through their 32768 PIDs, but I assume it’s “usually only a few times”? So hopefully even with an enormous PID space, most machines won’t actually end up generating PIDs more than a digit or two longer than what they do today?

My Linux laptop has now been “up” for ~13 days and my highest PID is 32604.
I’m not sure what I can read from that, as 1. PIDs are being reused and 2. certain processes might get high PIDs from the get-go even if lower ones are available.

So it probably(?) cycled through the IDs at least once (as a Linux desktop system with a bit of Python development in the last 13 days). So 6-digit PIDs would probably become a common case, at least for servers.

Apparently there’s a new Linux IPC proposal called BUS1 that might be superseding kdbus: