I have been thinking about this. Let's consider an example: we have two programs running on a PC. The first is, say, an internet browser, and the second is some WiFi scanning software. Now the browser wants to use the internet connection over WiFi, but the WiFi scanner needs the WiFi adapter to be switched into scan mode...
OK, so which part of a modern OS architecture is responsible for handling these kinds of collisions: some OS layer, the device driver, or the program itself?
For example, say the WiFi scanner has switched the WiFi adapter into scan mode. Now the browser is launched, so the OS switches CPU time to the browser. The browser calls some abstract OS networking layer, which calls the WiFi driver and wants to receive data from it.
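To make the collision concrete, here is roughly what I picture the two call paths looking like. The device node and the ioctl code are made up just for illustration; I know a real system would go through something like nl80211/cfg80211 on Linux rather than raw ioctls:

```c
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <fcntl.h>

#define HYPOTHETICAL_SET_SCAN_MODE 0x1234   /* invented ioctl code */

/* Process A: the WiFi scanner puts the adapter into scan mode. */
void scanner_process(void)
{
    int fd = open("/dev/wifi0", O_RDWR);        /* hypothetical device node */
    ioctl(fd, HYPOTHETICAL_SET_SCAN_MODE, 1);   /* driver reconfigures the radio */
    close(fd);
}

/* Process B: the browser only asks the socket API for data; it never
 * talks to the WiFi driver directly -- the kernel's network stack does. */
void browser_process(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    /* connect() and send() of an HTTP request elided, then: */
    char buf[4096];
    recv(sock, buf, sizeof(buf), 0);   /* data has to come via the same adapter */
    close(sock);
}
```

Both requests end up at the same piece of hardware, but one of them wants it in a state that makes the other impossible.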
Basically, I want to know how these situations are resolved. I have been thinking about it a lot, but I never actually figured it out, because there are several options:
One option is OS built-in API functions, like the basic API for printing text to the console, or for drawing to windows. There the OS itself decides which process's output goes to which actual frame on the screen and calls the GPU driver itself, so there is no collision there.
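What I mean is something like the following: both programs just hand their output to the OS through a syscall, and the console driver behind it belongs to the kernel, not to them (plain write() used only as an illustration):

```c
#include <unistd.h>
#include <string.h>

/* Neither program touches the video hardware or the console driver
 * directly, so the kernel can order the two requests however it likes. */
void program_one(void)
{
    const char *msg = "hello from program 1\n";
    write(STDOUT_FILENO, msg, strlen(msg));   /* syscall into the kernel's console/tty layer */
}

void program_two(void)
{
    const char *msg = "hello from program 2\n";
    write(STDOUT_FILENO, msg, strlen(msg));   /* same driver, but the OS mediates */
}
```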
But you can also write your own drivers that don't sit under some heavy OS API, and then two applications can use that driver for two opposite purposes. My main problem in understanding this is the multitasking environment: how can process 2 call some driver function if process 1 called it before and got switched away in favor of process 2 before the driver request was completed?
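Here is how I imagine such a driver entry point might have to look. The mutex is pure guesswork on my part, and I'm using pthreads only to have something that compiles; I know a real kernel driver would use the kernel's own locking primitives instead:

```c
#include <pthread.h>

static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
static int adapter_mode;   /* 0 = normal traffic, 1 = scan mode */

/* Runs on behalf of whichever process issued the request. If the
 * scheduler switches processes in the middle of this, does the second
 * caller block here, get an error, or corrupt the first caller's state? */
int driver_set_mode(int new_mode)
{
    pthread_mutex_lock(&device_lock);    /* serialize access to the hardware? */
    adapter_mode = new_mode;
    /* ... program the hardware registers ... */
    pthread_mutex_unlock(&device_lock);
    return 0;
}
```

Is something like this lock what actually happens inside a driver, or is the collision handled at a different layer entirely? Thanks.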