3

Yes, this is a broad question, but I would argue quite a valid one. Sometimes programs and scripts take too long or use too much memory and really start to slow my system down. Fine. Sometimes the system slows down so much that I can barely slide-show my mouse over to the terminal and spam Ctrl+C. It baffles me why an OS doesn't give scheduling priority to user input so that the mouse, the keyboard, and killing things keep working. Ever seen this?

> ./program
^C^C^C^C^C^C^C^C^C^C^C^Z^Z^Z^C^C^C^C^C^Clsdhafjkasdf

Now, Ctrl+C isn't as stern as some other signals (it can be handled by the app and even ignored, but that's not the case here). Ctrl+Z would do the job just fine too, since I could kill -9 %1 right after, but it doesn't work either.

Another method might be to jump to a virtual console with Ctrl+Alt+F2, log in and kill the offending app, but since the system is busy this doesn't work and I just get a black screen. Likewise I can't open new terminals (the window pops up but never drops me into a shell). Other terminals that are already open may not respond or run commands.

I suspect one reason the system is so inoperable is that the offending program is hitting swap and pushing the more core apps out of main memory. Not even the simplest command to give me a bash prompt or execute kill can get a cycle in edgeways. I have no evidence of this because I can't run top when it happens. Are there any options to improve the chances of the original Ctrl+C working? Maybe something along the lines of increasing X and terminal priority, or automatically killing programs that use a large portion of memory or start to swap too much?
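For example, I imagine something like the following, run ahead of time from the terminal, is what I mean by raising terminal priority, though I have no idea whether it actually helps once swapping starts:

# Raise the priority of the current interactive shell (negative nice values need root);
# $$ is the shell's own PID
sudo renice -n -10 -p $$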

Is there any other linux-fu I could use to regain control when this happens (e.g. SysRq commands)?

Update: After some more tests I'm pretty sure it's apps using too much memory and hitting swap. Even after killing the app in question, others take a very long time to become responsive again, as though they've been pushed out of main memory. I'd really like some way to automatically limit high-memory-usage programs to main memory. If it hits swap it's going to be too slow anyway, so what's the point of letting it continue?

NOTE: I'm not after a solution for a specific app, and I don't know ahead of time when some operation is going to chew up memory. I want to solve this kind of slowdown system-wide, i.e. many programs cause this. AFAIK I haven't messed with the system config and it's a pretty standard Fedora install. I'm not surprised by these slowdowns, but I do want more control.


I'd like to keep my window manager running, and these are the last resorts I'm hoping to avoid. I generally only need them if my GPU is stuck in a loop and blocking X. If enabled, Ctrl+Alt+Backspace is a handy shortcut to kill X and all your apps, taking you back to the login screen. A more potent command, again if enabled, is Alt+SysRq+K. If that doesn't work, it's holding-the-power-button time.
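For reference, the magic SysRq key has to be enabled for the Alt+SysRq combinations to do anything; as far as I know this is controlled by the kernel.sysrq sysctl (1 enables all functions, and some distros default to a more restrictive bitmask):

# Check the current setting (1 = all SysRq functions enabled, 0 = disabled)
cat /proc/sys/kernel/sysrq

# Enable everything for this boot (as root), and persist it via sysctl.d
echo 1 > /proc/sys/kernel/sysrq
echo "kernel.sysrq = 1" > /etc/sysctl.d/90-sysrq.conf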


Alt+SysRq+F (thanks, @Hastur), which kills memory-hogging processes, is quite destructive but can help as a last resort. Update: I'm not entirely sure of all the consequences here, but @Xen2050's suggestion of ulimit seems to solve many problems...

# MemTotal from /proc/meminfo is in KiB, which matches the units ulimit -v expects
TOTAL_PHYSICAL_MEMORY=$(grep MemTotal /proc/meminfo | awk '{print $2}')
# Soft-limit each process's virtual memory to half of physical RAM
ulimit -Sv $(( TOTAL_PHYSICAL_MEMORY * 4 / 8 ))

Going to leave this in my bashrc and see how things go.

Update: Things mostly seem good, except for some apps that share large libraries or map large files; they have a huge virtual size even though they consume barely any actual memory and aren't likely to hit swap. There doesn't seem to be a number low enough to kill deadly swap-hitting apps while leaving regular ones (such as Amarok with 4.6 GB VIRT) running.
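For anyone trying to diagnose the same thing, a rough way to see whether a process is actually sitting in swap (as opposed to just having a huge VIRT) is the VmSwap line in its /proc status file; amarok here is just an example:

# VmRSS = resident memory, VmSwap = amount of the process swapped out (both in kB)
grep -E 'VmRSS|VmSwap' /proc/$(pidof -s amarok)/status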

Related: https://unix.stackexchange.com/questions/134414/how-to-limit-the-total-resources-memory-of-a-process-and-its-children/174894, but that still leaves the issue of limiting applications that start to hit swap a lot.
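The cgroups approach from that link can also be driven through systemd; something like the following should cap a single run of a program, though the property name and required privileges depend on your systemd/cgroup version (MemoryLimit= on cgroup-v1 setups, MemoryMax= on newer ones):

# Run the program in its own transient scope with a memory cap
systemd-run --scope -p MemoryLimit=2G ./program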


This is exactly the kind of solution it turns out I'm after: Is it possible to make the OOM killer intervene earlier?
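For what it's worth, one related knob is oom_score_adj: it doesn't make the OOM killer fire earlier, but raising it for a known-greedy process makes that process the preferred victim when memory does run out ("badprogram" is just a placeholder name):

# oom_score_adj ranges from -1000 (never kill) to 1000 (kill this one first)
echo 1000 > /proc/$(pidof -s badprogram)/oom_score_adj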

jozxyqk
  • 2,914

2 Answers

1

Your particular case doesn't sound like just a process using all the available CPU; it sounds more like a display issue, or possibly running out of RAM. Limiting RAM should be possible with something like cgroups or ulimit / user limits.
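If you want a limit applied automatically at login rather than per shell, the usual place is /etc/security/limits.conf (pam_limits); roughly like this, where the address-space values are in KiB and "youruser" is just an example name:

# /etc/security/limits.conf
# <user>   <type>  <item>  <value>   ("as" = max address space, in KiB)
youruser   soft    as      4000000
youruser   hard    as      8000000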

But if you want to try limiting the CPU usage of some processes, this might work:
If you know exactly which process(es) are running away with your CPU, you could use cpulimit to slow them down. I use it regularly on a low-priority process that sometimes runs away with the CPU, and it works great. It:

sends the SIGSTOP and SIGCONT signals to a process, both to verify that it can control it and to limit the average amount of CPU it consumes. This can result in misleading (annoying) job control messages that indicate that the job has been stopped (when actually it was, but immediately restarted). This can also cause issues with interactive shells that detect or otherwise depend on SIGSTOP/SIGCONT. For example, you may place a job in the foreground, only to see it immediately stopped and restarted in the background. (See also http://bugs.debian.org/558763.)

There are examples of running it in its man page, like:

   Assuming you have started `foo --bar` and you find out with  top(1)  or
   ps(1) that this process uses all your CPU time you can either

   # cpulimit -e foo -l 50
          limits  the CPU usage of the process by acting on the executable
          program file (note: the argument "--bar" is omitted)

   # cpulimit -p 1234 -l 50
          limits the CPU usage of the process by acting  on  its  PID,  as
          shown by ps(1)

   # cpulimit -P /usr/bin/foo -l 50
          same as -e but uses the absolute path name
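cpulimit typically isn't installed by default; on Fedora it should be available from the standard repositories, something like:

sudo yum install cpulimit    # "dnf install cpulimit" on newer Fedora releases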
Xen2050
  • 14,391
0

You may install "xkill" application and assign "xkill" to some keyboard short cut like Ctrl+shift+k and whenever any script or program lags, just press crtl+shift+k and click on the application u want to kill. That's it