 >>/9186/
One thread by definition uses a single CPU core. If you want to maximize throughput, you need to run multiple threads (or forks, or both). I'm just saying that a single thread/fork can handle multiple fds, and that's how it's done in existing software. Since threads share memory (and fds), they can even cooperatively manage a single socket (e.g. one accepts, another handles reads, another handles writes). I can't say much about CPU cache optimization (or handling race conditions) because I haven't done enough systems programming to be proficient in that area - I assume the makers of Apache or nginx did their homework. Of course, a call such as accept or recv will block if there's no data available, and that's why you use select (or similar), which works basically like an OR switch - it blocks until ANY of the sockets has an incoming connection or data, and then you can handle that particular one. See the rough sketch below.
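
Here's a minimal sketch of that select-based pattern in C, assuming a plain blocking TCP listener - the port number and the echo behaviour are just placeholders, not taken from any real server:

```
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);            /* placeholder port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    fd_set master;                           /* every fd we care about */
    FD_ZERO(&master);
    FD_SET(listen_fd, &master);
    int max_fd = listen_fd;

    for (;;) {
        fd_set readable = master;            /* select() modifies its argument */
        /* blocks until ANY tracked fd has an incoming connection or data */
        if (select(max_fd + 1, &readable, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listen_fd) {
                /* new connection: accept() won't block, select said it's ready */
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > max_fd)
                        max_fd = client;
                }
            } else {
                /* existing client: recv() won't block either */
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {                /* closed or error: stop tracking it */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    send(fd, buf, n, 0);     /* trivial echo, just to do something */
                }
            }
        }
    }
    return 0;
}
```

One single-threaded loop, many clients - that's the whole point. Real servers use epoll/kqueue instead of select for large fd counts, but the shape of the loop is the same.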

Yes, those calls are useful if you write your own network server; you don't control what existing software uses - but you can be almost sure it does use them where they are available (some are OS specific, and it's the configure script's job to determine whether they are present on a given platform).
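
For instance, a portable codebase often ends up with a compile-time switch like this after configure runs - HAVE_EPOLL is just an illustrative macro name here, not from any particular project:

```
#include <unistd.h>

#ifdef HAVE_EPOLL
/* Linux-specific path: register the fd, then wait for readiness */
#include <sys/epoll.h>

int wait_readable(int fd) {
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
    struct epoll_event out;
    int n = epoll_wait(ep, &out, 1, -1);     /* -1 = block indefinitely */
    close(ep);
    return n > 0;
}
#else
/* portable fallback: plain POSIX select() */
#include <sys/select.h>

int wait_readable(int fd) {
    fd_set set;
    FD_ZERO(&set);
    FD_SET(fd, &set);
    return select(fd + 1, &set, NULL, NULL, NULL) > 0;
}
#endif
```

The calling code doesn't care which branch got compiled in - that's exactly why you can be fairly confident the OS-specific calls get used whenever the platform has them.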