Change priority of the current process in C

On Windows I can do:
HANDLE hCurrentProcess = GetCurrentProcess();
SetPriorityClass(hCurrentProcess, ABOVE_NORMAL_PRIORITY_CLASS);
How can I do the same thing on *nix?

Try:
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    /* 0 = the calling process; -20 = highest priority */
    setpriority(PRIO_PROCESS, 0, -20);
    return 0;
}
Note that you must be running as superuser to raise priority (i.e., to set a negative nice value).
(for more info, type 'man setpriority' at a prompt.)

If you're doing something like this under Unix, you want to (as root) chmod your executable and set the setuid bit (chmod u+s). Then you can change who you are running as, what your priority is, your thread scheduling, and so on at run time; a minimal sketch follows below.
It is great as long as you are not writing a massively multithreaded app with a bug in it, such that you take over a 48-CPU box and nobody can shut you down, because you have every CPU spinning at 100% with all threads set to SCHED_FIFO (runs to completion), running as root.
Nah .. I wouldn't be speaking from experience ....
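A minimal sketch of that pattern, assuming the binary is owned by root with the setuid bit set:

#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    /* while the effective uid is still root: raise our priority
       (negative nice values normally require privileges) */
    setpriority(PRIO_PROCESS, 0, -10);

    /* then drop privileges back to the real invoking user */
    if (setuid(getuid()) != 0)
        return 1;

    /* ... the actual work runs unprivileged, at raised priority ... */
    return 0;
}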

allain asks: Can you lower your own process' priority without being superuser?
Sure. Be aware, however, that this is a one-way street: you can't even get back to where you started. And even fairly small reductions in priority can have startlingly large effects on running time when there is significant load on the system.
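For example, a minimal sketch of lowering your own priority (no privileges needed for positive nice values):

#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    /* positive value = lower priority; any user may do this,
       but undoing it requires privileges */
    setpriority(PRIO_PROCESS, 0, 10);
    /* ... low-priority work ... */
    return 0;
}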

Related

What is the efficient way to continuously check until a condition is true

So I have this program that continuously checks until a condition becomes true. My problem is that whenever I run it, my computer slows down because of the loop. Can anyone suggest the best and most efficient way to do this? Thank you in advance for your responses.
To illustrate my problem, here is a code that represents it:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <conio.h>
#include <windows.h>

int main(void)
{
    time_t now;
    struct tm *local;
    while (1) {
        time(&now);
        local = localtime(&now);
        if (local->tm_min > 55) {
            printf("Time:\t%d:%d:%d\n", local->tm_hour, local->tm_min, local->tm_sec);
            getch();
            exit(0);
        }
    }
    return 0;
}
If polling is really what you want, or you have to use it, then you must give the system breathing room by sleeping between iterations.
So, how much should you sleep in each iteration? It can be a fixed value (and even sleeping just 1 millisecond is surprisingly effective). A fixed value of, say, 20-30 milliseconds is fine if you are checking for slow events like keystrokes from a real user. If, say, you are monitoring a serial port, you may need lower values.
Then, depending on the application, you can also implement a variable sleep time. For example (this is a little contrived, but it is just to explain): you wait for keystrokes and sleep 30 milliseconds. Then you use your program in a pipe and discover that it is painfully slow. A solution could be to keep the maximum at 30 ms, but after reading a character, lower the value to 0 so that the sleep is skipped. Every time the condition fails, the value is raised back up toward the maximum limit (20-30 milliseconds for a keyboard). A sketch of this idea follows below.
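One possible shape for that adaptive sleep, assuming a hypothetical non-blocking try_read_char() that returns -1 when no input is available:

#include <unistd.h>

#define MAX_SLEEP_MS 30

extern int try_read_char(void);   /* hypothetical non-blocking read; -1 = no data */

void poll_loop(void)
{
    int sleep_ms = MAX_SLEEP_MS;
    for (;;) {
        int c = try_read_char();
        if (c >= 0) {
            /* data available: drop the delay so a burst of input
               (e.g. from a pipe) is consumed at full speed */
            sleep_ms = 0;
            /* ... handle c ... */
        } else if (sleep_ms < MAX_SLEEP_MS) {
            /* condition failed: raise the delay back toward the max */
            sleep_ms++;
        }
        if (sleep_ms > 0)
            usleep(sleep_ms * 1000);
    }
}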
-- EDIT AFTER COMMENTS --
It has been pointed out that keyboards and serial ports do not need polling, or should not be polled. Generally speaking this is true, but it depends on the hardware and the operating system (which is in turn a piece of software: if the hardware does not provide an interrupt for a given condition, even the OS has to poll). About keyboards, for example, I was thinking of the little ones implemented as a matrix of buttons: some small CPUs have special facilities to generate an interrupt on any I/O change, but others don't; in that case polling is the only solution, and it is also ideal for implementing debouncing (this kind of polling is not necessarily performed inside a loop).
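For instance, a minimal debounce-by-polling sketch, where read_button() is a hypothetical raw read called from a periodic timer tick:

#define DEBOUNCE_TICKS 5

extern int read_button(void);   /* hypothetical raw button read */

int debounced_read(void)
{
    static int stable, last, count;
    int raw = read_button();
    if (raw == last) {
        /* accept the new state only after it has been steady
           for DEBOUNCE_TICKS consecutive polls */
        if (count < DEBOUNCE_TICKS && ++count == DEBOUNCE_TICKS)
            stable = raw;
    } else {
        last = raw;
        count = 0;
    }
    return stable;
}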
For serial ports, it is almost true that nobody would implement one without an interrupt (precisely to avoid polling). But even so, it is difficult to manage the incoming data in an event-driven fashion; often a flag is set, and some other part of the program, which polls that flag, works out the message.
Event-driven programming seems easy at first, but as the program gets bigger the complexity grows too.
There are other situations to consider, for example loops which read data from somewhere and process it. If something else has to be done inside the loop, for example checking how much time has passed, but the read is blocking, then the read must be implemented in a non-blocking way, and the whole loop turns into a kind of polling for one or more conditions, unless you use multi-threading.
Anyway, I agree that polling is evil and should only be used when necessary.
Efficiently? One way or the other you need to put your process to sleep until the condition WILL BE TRUE - then wake up and die (so to speak :-). Since your code includes windows.h I'll assume you're running on Windows and thus have the Sleep() function available.
#include <stdio.h>
#include <windows.h>
#include <time.h>

int main(void)
{
    time_t now;
    struct tm *local;
    long msecs;   /* signed, so a negative result is detectable */

    time(&now);
    local = localtime(&now);
    /* (55 * 60000) = msecs until minute 55 of the current hour */
    msecs = (55L * 60000) - ((local->tm_min * 60000) + (local->tm_sec * 1000));
    if (msecs > 0)
        Sleep((DWORD)msecs);
    return 0;
}

Why doesn't Linux prevent spawning infinite number of processes and crashing?

With the very simple code below, my system (Ubuntu Linux 14.04) simply crashes, not even letting my mouse respond. I had to force quit with the power button. I thought Linux was a stable OS capable of tolerating such basic program errors. Did I miss something?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>

void check(int isOkay)
{
    if (!isOkay) {
        printf("error\n");
        abort();
    }
}

int main(void)
{
#define n 1000000
    int array[n];
    sem_t blocker;
    int i;
    while (1) {
        if (!fork()) {
            /* child: touch ~4 MB of stack, then block forever */
            for (i = 0; i < n; ++i) {
                array[i] = rand();
            }
            check(sem_init(&blocker, 0, 0) == 0);
            check(sem_wait(&blocker) == 0);
        }
    }
    return 0;
}
Congratulations, you've discovered the fork bomb. There are shell one-liners that can wreak the same sort of havoc with a lot less typing on your part.
It is in fact possible to limit the number of processes that a user can spawn using ulimit -- see the bottom of the linked Wikipedia article for details. A sketch of the underlying system call follows below.
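If you want to set the limit from inside a program, a minimal sketch using setrlimit() (RLIMIT_NPROC caps the process count for this user; the value 100 here is arbitrary):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* cap this user's process count at an arbitrary 100 */
    struct rlimit rl = { .rlim_cur = 100, .rlim_max = 100 };
    if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* from here on, fork() fails with EAGAIN once the limit is hit */
    return 0;
}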
A desktop install of Ubuntu is not exactly a hardened server, though. It's designed for usability first and foremost. If you need a locked down system that can't crash, there are better options.
The command ulimit -u shows the maximum number of processes that you can start. However, do not actually start that many processes in the background: your machine would spend all its time switching between processes and would never get around to doing actual work.
Linux does its job of processing your request to create a process; it is up to the user to write code that respects this limit.
The main problem here is determining the best limit. A lot of software doesn't use fork() at all, so do you set the limit to something small like 5? Some software might create a new process whenever it receives a request from network, so do you set the limit to "max. number of network packets"? If you assume most software isn't buggy, then you'd be tempted to set the limit relatively high so that correct software works properly.
The other problem is one of scheduling priorities. In a well designed system things like the GUI would be "high priority" and if it wants CPU time it'd preempt normal/lower priority work immediately. If this was the case, a massive fork bomb running at normal/lower priority would have no effect on the system's ability to respond to the user, and the user would be able to kill the fork bomb without much problem.
Sadly, for a variety of reasons, the scheduler in Linux doesn't work like that. It does support priorities, but to use them you have to be a "real time" process and have to be running as root (which is a massive security disaster). Without sane priorities, Linux assumes that every forked process is as important as everything else, and the CPU(s) end up so busy forking that there is no CPU time left to respond to the user.

How to check for computer power loss?

Hopefully this is a simple question to answer.
Basically I would like to use C to see if there was a power loss in my computer. This will decide how a program runs. If there was a power loss then it would go one way. Otherwise it would respond another way:
#include "nopower.h"
#include "power.h"
//------------------------
if (!powerloss) {
    power_procedure();
} else {
    no_power_procedure();
}
//--------------------------
I'm running Ubuntu 12.04 LTS. I'm hoping that this can be run directly on the computer running this code. In other words, is there a way to check a registry status to see if power was lost? The operating system knows when there is an improper shutdown, and I'd like to know if I can tap into the same or a similar resource. I'd rather not constantly write to a file.
#include <stdio.h>

int main(void)
{
    printf("power is currently on\n");
    return 0;
}
Coding the "power is currently off" case is somewhat trickier.
Alternatively, if you want to know the time since the last boot, and put up a message if it was recent, then see Uptime under linux in C; a sketch follows below.
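A minimal sketch of that uptime check using the Linux sysinfo() call (the 5-minute threshold is an arbitrary choice):

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo info;
    if (sysinfo(&info) != 0) {
        perror("sysinfo");
        return 1;
    }
    /* info.uptime = seconds since boot */
    if (info.uptime < 5 * 60)
        printf("booted recently: %ld seconds ago\n", info.uptime);
    else
        printf("up for %ld seconds\n", info.uptime);
    return 0;
}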

How to prevent linux soft lockup/unresponsiveness in C without sleep

How would be the correct way to prevent a soft lockup/unresponsiveness in a long running while loop in a C program?
(dmesg is reporting a soft lockup)
Pseudo code is like this:
while( worktodo ) {
worktodo = doWork();
}
My code is of course way more complex, and also includes a printf statement which gets executed once a second to report progress, but the problem is, the program ceases to respond to ctrl+c at this point.
Things I've tried which do work (but I want an alternative):
doing printf every loop iteration (don't know why, but the program becomes responsive again that way (???)) - wastes a lot of performance due to unneeded printf calls (each doWork() call does not take very long)
using sleep/usleep/... - also seems like a waste of (processing-)time to me, as the whole program will already be running several hours at full speed
What I'm thinking about is some kind of process_waiting_events() function or the like, and normal signals seem to be working fine as I can use kill on a different shell to stop the program.
Additional background info: I'm using GWAN and my code is running inside the main.c "maintenance script", which seems to be running in the main thread as far as I can tell.
Thank you very much.
P.S.: Yes I did check all other threads I found regarding soft lockups, but they all seem to ask about why soft lockups occur, while I know the why and want to have a way of preventing them.
P.P.S.: Optimizing the program (making it run shorter) is not really a solution, as I'm processing a 29GB bz2 file which extracts to about 400GB xml, at the speed of about 10-40MB per second on a single thread, so even at max speed I would be bound by I/O and still have it running for several hours.
While the proposed answer using threads might be an option, it would in reality just shift the problem to a different thread. My solution in the end was using
sleep(0)
I also tested sched_yield / pthread_yield, neither of which really helped. Unfortunately I've been unable to find a good resource documenting sleep(0) on Linux, but for Windows the documentation states that a value of 0 makes the thread yield the remainder of its current CPU slice.
It turns out that sleep(0) is most probably relying on what is called timer slack in linux - an article about this can be found here: http://lwn.net/Articles/463357/
Another possibility is using nanosleep(&(struct timespec){0}, NULL), which seems not to rely on timer slack: the Linux man pages for nanosleep state that if the requested interval is below clock granularity, it is rounded up to clock granularity, which on Linux depends on CLOCK_MONOTONIC according to the man pages. Thus, a value of 0 nanoseconds is perfectly valid and should always work, as clock granularity can never be 0. A sketch follows below.
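A minimal sketch of the loop with that zero-length nanosleep (doWork() and worktodo as in the question):

#include <time.h>

extern int doWork(void);   /* the question's work function */

void run(void)
{
    int worktodo = 1;
    while (worktodo) {
        worktodo = doWork();
        /* zero-length sleep: a scheduling point for the kernel,
           without adding a measurable delay */
        nanosleep(&(struct timespec){0}, NULL);
    }
}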
Hope this helps someone else as well ;)
Your scenario is not really a soft lockup; it is a process that is busy doing something.
How about this approach (the pseudocode fleshed out into compilable C with pthreads):

#include <pthread.h>
#include <signal.h>

static volatile sig_atomic_t threadSignalled = 0;
static int workToDo = 1;

extern int doWork(void);          /* the question's work function */

static void *workerThread(void *arg)
{
    (void)arg;
    while (workToDo) {
        if (threadSignalled)
            break;                /* asked to finish early */
        workToDo = doWork();
    }
    return NULL;
}

static void sighandler(int sig)
{
    (void)sig;
    threadSignalled = 1;          /* tell the worker thread to finish */
}

int main(void)
{
    pthread_t worker;
    signal(SIGINT, sighandler);   /* install the signal handler */
    pthread_create(&worker, NULL, workerThread, NULL);
    pthread_join(&worker, NULL);  /* wait for the worker to finish */
    return 0;
}
Clearly a timing issue. Using a signalling mechanism should remove the problem.
The use of printf solves the problem because printf accesses the console, which is an expensive and time-consuming operation; in your case it gives the system enough time to service other events while the worker completes its work.

How do I ensure my program runs from beginning to end without interruption?

I'm attempting to time code using RDTSC (no other profiling software I've tried is able to time to the resolution I need) on Ubuntu 8.10. However, I keep getting outliers from task switches and interrupts firing, which are causing my statistics to be invalid.
Considering my program runs in a matter of milliseconds, is it possible to disable all interrupts (which would inherently switch off task switches) in my environment? Or do I need to go to an OS which allows me more power? Would I be better off using my own OS kernel to perform this timing code? I am attempting to prove an algorithm's best/worst case performance, so it must be totally solid with timing.
The relevant code I'm using currently is:
#include <stdint.h>
#include <stdio.h>

/* note: the "=A" constraint reads the edx:eax pair and is only
   correct on 32-bit x86; it does not work as intended on x86-64 */
inline uint64_t rdtsc()
{
    uint64_t ret;
    asm volatile("rdtsc" : "=A" (ret));
    return ret;
}
void test(int readable_out, uint32_t start, uint32_t end,
          uint32_t (*fn)(uint32_t, uint32_t))
{
    int i;
    for (i = 0; i <= 100; i++) {
        uint64_t clock1 = rdtsc();
        uint32_t ans = fn(start, end);
        uint64_t clock2 = rdtsc();
        uint64_t diff = clock2 - clock1;
        if (readable_out)
            printf("[%3d]\t\t%u [%llu]\n", i, ans, diff);
        else
            printf("%llu\n", diff);
    }
}
Extra points to those who notice I'm not properly handling overflow conditions in this code. At this stage I'm just trying to get a consistent output without sudden jumps due to my program losing the timeslice.
The nice value for my program is -20.
So to recap, is it possible for me to run this code without interruption from the OS? Or am I going to need to run it on bare hardware in ring0, so I can disable IRQs and scheduling? Thanks in advance!
If you call nanosleep() to sleep for a second or so immediately before each iteration of the test, you should get a "fresh" timeslice for each test. If you compile your kernel with 100Hz timer interrupts, and your timed function completes in under 10ms, then you should be able to avoid timer interrupts hitting you that way.
To minimise other interrupts, deconfigure all network devices, configure your system without swap and make sure it's otherwise quiescent.
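For instance, a hypothetical test_rested() sketching the question's loop with that pre-iteration sleep (rdtsc() and fn as in the question's code):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

uint64_t rdtsc(void);   /* the question's TSC reader */

/* same loop as the question's test(), but with a ~1 s nanosleep
   before each run so every iteration starts on a fresh timeslice */
void test_rested(uint32_t start, uint32_t end,
                 uint32_t (*fn)(uint32_t, uint32_t))
{
    for (int i = 0; i <= 100; i++) {
        struct timespec pause = { 1, 0 };   /* one second */
        nanosleep(&pause, NULL);
        uint64_t clock1 = rdtsc();
        uint32_t ans = fn(start, end);
        uint64_t clock2 = rdtsc();
        printf("%llu\n", (unsigned long long)(clock2 - clock1));
        (void)ans;
    }
}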
Tricky. I don't think you can turn the operating system 'off' and guarantee strict scheduling.
I would turn this upside down: given that it runs so fast, run it many times to collect a distribution of outcomes. Given that standard Ubuntu Linux is not a real-time OS in the narrow sense, all alternative algorithms would run in the same setup --- and you can then compare your distributions (using anything from summary statistics to quantiles to qqplots). You can do that comparison with Python, or R, or Octave, ... whichever suits you best.
You might be able to get away with running FreeDOS, since it's a single process OS.
Here's the relevant text from the second link:
Microsoft's DOS implementation, which is the de facto standard for DOS systems in the x86 world, is a single-user, single-tasking operating system. It provides raw access to hardware, and only a minimal layer for OS APIs for things like the file I/O. This is a good thing when it comes to embedded systems, because you often just need to get something done without an operating system in your way.

DOS has (natively) no concept of threads and no concept of multiple, on-going processes. Application software makes system calls via the use of an interrupt interface, calling various hardware interrupts to handle things like video and audio, and calling software interrupts to handle various things like reading a directory, executing a file, and so forth.
Of course, you'll probably get the best performance actually booting FreeDOS onto actual hardware, not in an emulator.
I haven't actually used FreeDOS, but I assume that since your program seems to be standard C, you'll be able to use whatever the standard compiler is for FreeDOS.
If your program runs in milliseconds, and if you are running on Linux:
Make sure that your timer frequency is set to 100Hz (not 1000Hz).
(cd /usr/src/linux; make menuconfig, and look at "Processor type and features" -> "Timer frequency")
This way your CPU will get interrupted every 10ms.
Furthermore, consider that the default CPU time slice on Linux is 100ms, so with a nice level of -20 you will not get descheduled if you only run for a few milliseconds.
Also, you are looping 101 times on fn(). Consider making fn() a no-op for one run, so you can calibrate your system properly.
Compute statistics (average + stddev) instead of printing every sample (too many printfs would consume your scheduled timeslice, and the terminal will eventually get scheduled, etc.; avoid that). A sketch follows after the link below.
RDTSC benchmark sample code
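As a sketch of that statistics idea, with a hypothetical report() helper over the collected diff samples:

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* collect every diff into an array during the loop, then
   summarize once at the end instead of printf'ing each sample */
void report(const uint64_t *diff, int count)
{
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < count; i++) {
        sum   += (double)diff[i];
        sumsq += (double)diff[i] * (double)diff[i];
    }
    double mean = sum / count;
    printf("mean: %.1f cycles, stddev: %.1f cycles\n",
           mean, sqrt(sumsq / count - mean * mean));
}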
You can use chrt -f 99 ./test to run ./test with the maximum realtime priority. Then at least it won't be interrupted by other user-space processes.
Also, installing the linux-rt package will install a real-time kernel, which will give you more control over interrupt handler priority via threaded interrupts.
If you run as root, you can call sched_setscheduler() and give yourself a real-time priority. Check the documentation; a minimal sketch follows below.
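A minimal sketch (priority 99 is the maximum for SCHED_FIFO; this requires root):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 99 };
    /* 0 = the calling process */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    /* ... timing-sensitive code now runs ahead of all
       normal-priority user-space processes ... */
    return 0;
}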
Maybe there is some way to disable preemptive scheduling on Linux, but it might not be needed. You could potentially use information from /proc/<pid>/schedstat or some other object in /proc to sense when you have been preempted, and disregard those timing samples; a sketch follows below.
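A sketch of that idea, assuming the third field of /proc/self/schedstat is the number of timeslices run; read it before and after each timed sample, and discard the sample if the count changed:

#include <stdio.h>

static unsigned long long timeslices(void)
{
    unsigned long long oncpu = 0, waited = 0, slices = 0;
    FILE *f = fopen("/proc/self/schedstat", "r");
    if (f) {
        fscanf(f, "%llu %llu %llu", &oncpu, &waited, &slices);
        fclose(f);
    }
    return slices;   /* number of timeslices run so far */
}

Call timeslices() around each rdtsc() pair and keep only the samples where the count did not change.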
