Need some help optimizing C code (poll + delay/sleep)

Currently I'm polling the register to get the expected value, and now I want to reduce the CPU usage and improve the performance.
So my idea is: poll for a short period (say 10 ms), and if we didn't get the expected value, wait for some time (like udelay(10*1000) or usleep(10*1000), i.e. a delay/sleep in ms), then continue polling for a longer period (say 100 ms); if we still didn't get the expected value, sleep/delay for 100 ms, and so on, until the maximum timeout value is reached.
Please let me know if you need anything else.
This is the old code:
#include <stdio.h>    /* for printf */
#include <stdlib.h>   /* for exit */
#include <sys/time.h> /* for setitimer */
#include <unistd.h>   /* for pause */
#include <signal.h>   /* for signal */

#define INTERVAL 500  /* timeout in ms */

static int timedout = 0;
struct itimerval it_val; /* for setting itimer */
char temp_reg[2];

void DoStuff(void);       /* SIGALRM handler, defined below */
int main(void)
{
    /* Upon SIGALRM, call DoStuff().
     * Set interval timer. We want frequency in ms,
     * but the setitimer call needs seconds and useconds. */
    if (signal(SIGALRM, (void (*)(int)) DoStuff) == SIG_ERR)
    {
        perror("Unable to catch SIGALRM");
        exit(1);
    }

    it_val.it_value.tv_sec  = INTERVAL / 1000;
    it_val.it_value.tv_usec = (INTERVAL * 1000) % 1000000;
    it_val.it_interval = it_val.it_value;
    if (setitimer(ITIMER_REAL, &it_val, NULL) == -1)
    {
        perror("error calling setitimer()");
        exit(1);
    }

    do
    {
        /* Read the register here and copy the value into the char array (temp_reg) */
        temp_reg[0] = read_reg();   /* read_reg() is the hardware-specific read */
        if (timedout == 1)
            return -1;              /* timed out */
    } while (temp_reg[0] != 0);     /* check the value; if it is not yet 0, read the register again (poll) */
}
/*
 * DoStuff
 */
void DoStuff(void)
{
    timedout = 1;
    printf("Timer went off.\n");
}
Now I want to optimize this to reduce the CPU usage and improve the performance.
Can anyone help me with this issue?
Thanks for your help on this.

Currently I'm polling the register to get the expected value [...]
wow wow wow, hold on a moment here, there is a huge story hidden behind this sentence; what is "the register"? what is "the expected value"? What does read_reg() do? are you polling some external hardware? Well then, it all depends on how your hardware behaves.
There are two possibilities:
Your hardware buffers the values that it produces. This means that the hardware will keep each value available until you read it; it will detect when you have read the value, and then it will provide the next value.
Your hardware does not buffer values. This means that values are being made available in real time, for an unknown length of time each, and they are replaced by new values at a rate that only your hardware knows.
If your hardware is buffering, then you do not need to be afraid that some values might be lost, so there is no need to poll at all: just try reading the next value once and only once, and if it is not what you expect, sleep for a while. Each value will be there when you get around to reading it.
If your hardware is not buffering, then there is no strategy of polling and sleeping that will work for you. Your hardware must provide an interrupt, and you must write an interrupt-handling routine that will read every single new value as quickly as possible from the moment that it has been made available.
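For the buffered case, here is a minimal sketch of the read-once-then-sleep idea; read_reg(), the expected value 0 and the 10 ms sleep are placeholders taken from the question, not a real API:
#include <unistd.h>

extern unsigned char read_reg(void);   /* your hardware-specific read (placeholder) */

/* Read once; if the expected value is not there yet, sleep and let the
 * caller retry until its own timeout. No busy polling is needed because
 * the hardware holds each value until it is read. */
int poll_once_or_sleep(void)
{
    if (read_reg() == 0)      /* 0 is the value we are waiting for */
        return 0;             /* got it */
    usleep(10 * 1000);        /* nothing yet: give the CPU away for ~10 ms */
    return -1;                /* not yet; caller retries */
}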

Here is some pseudo code that might help:
do
{
    // Pseudo code
    start_time = get_current_time();
    do
    {
        // Read the register and copy the value into the char array (temp_reg)
        temp_reg[0] = read_reg();
        if (timedout == 1)
            return -1; // timed out
        // Pseudo code
        stop_time = get_current_time();
        if (stop_time - start_time > some_limit) break;
    } while (temp_reg[0] != 0);

    if (temp_reg[0] != 0)
    {
        usleep(some_time);
        start_time = get_current_time();
    }
} while (temp_reg[0] != 0);
To turn the pseudo code into real code, see https://stackoverflow.com/a/2150334/4386427
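For illustration, here is a minimal sketch of what the real code could look like, using clock_gettime(CLOCK_MONOTONIC) for the time measurement; read_reg(), the burst length, the sleep time and the overall timeout are placeholders you would adapt to your hardware:
#include <time.h>
#include <unistd.h>

extern unsigned char read_reg(void);   /* hardware-specific read (placeholder) */

static double elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1000.0 +
           (now.tv_nsec - start->tv_nsec) / 1e6;
}

/* Poll in short busy bursts separated by sleeps until the register reads
 * the expected value (0 here) or the total timeout expires.
 * Returns 0 on success, -1 on timeout. */
int wait_for_reg(long burst_ms, long sleep_ms, long total_ms)
{
    struct timespec total_start, burst_start;
    clock_gettime(CLOCK_MONOTONIC, &total_start);

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &burst_start);
        while (elapsed_ms(&burst_start) < burst_ms) {   /* busy poll for one burst */
            if (read_reg() == 0)
                return 0;                               /* got the expected value */
        }
        if (elapsed_ms(&total_start) >= total_ms)
            return -1;                                  /* overall timeout */
        usleep(sleep_ms * 1000);                        /* give the CPU away for a while */
    }
}
Called as wait_for_reg(10, 10, 500), this gives roughly the 10 ms poll / 10 ms sleep pattern from the question with a 500 ms overall timeout.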

Related

Setting up a timer with Microblaze?

What is the best way to create a timer with Microblaze that would let me use it like a delay_ms() or sleep() function in more conventional programs?
Easily, I can create a stupid function like this:
void delay_ms(int i) {
    //mind that I am doing this off the top of my head
    for (delays = 0; delays < (i * ((1 / frequency_of_the_device) / 2)); delays++) {
    }
}
... but that would only have the processor do nothing until it finishes, while in reality I need a function that lets me stop one process for a certain period of time while another one continues working.
Such thing is possible, no doubt about that, but what would the simplest solution to this problem be?
(I am using a Spartan-3A, but I believe the solution would work for other kits and FPGAs as well.)
TL;DR
Use a micro OS, like FreeRTOS.
Bad answer
Well, if you have no OS and no task switching but do have a hardware timer, you can use the following approach:
Enable the interrupt for your hardware timer, and manage a counter driven by this interrupt.
You should have something like
/**timer.c**/

/* The internal counters:
 * each task has its own counter
 */
static int s_timers[NUMBER_OF_TASKS] = {0, 0};

/* on each timer tick, decrease the counters */
void timer_interrupt()
{
    int i;
    for (i = 0; i < NUMBER_OF_TASKS; ++i)
    {
        if (s_timers[i] > 0)
        {
            s_timers[i]--;
        }
    }
}

/* set wait counter:
 * each task says how many ticks it wants to wait
 */
void timer_set_wait(int task_num, int tick_to_wait)
{
    s_timers[task_num] = tick_to_wait;
}

/**
 * each task can ask whether its time has run out
 */
int timer_timeout(int task_num)
{
    return (0 == s_timers[task_num]);
}
Once you have something like a timer (the code above can easily be improved),
program your tasks:
/**task-1.c**/

/* TASK ID must be valid and unique in s_timers */
#define TASK_1_ID 0

void task_1()
{
    if (timer_timeout(TASK_1_ID))
    {
        /* the task has waited long enough, it can run again */
        /* DO TASK 1 STUFF */
        printf("hello from task 1\n");
        /* ask to wait for 150 ticks */
        timer_set_wait(TASK_1_ID, 150);
    }
}

/**task-2.c**/

/* TASK ID must be valid and unique in s_timers */
#define TASK_2_ID 1

void task_2()
{
    if (timer_timeout(TASK_2_ID))
    {
        /* the task has waited long enough, it can run again */
        /* DO TASK 2 STUFF */
        printf("hello from task 2\n");
        /* ask to wait for 250 ticks */
        timer_set_wait(TASK_2_ID, 250);
    }
}
And schedule (a big word here) the tasks:
/** main.c **/

int main()
{
    /* init the program, e.g. set up the timer interrupt */
    init();
    /* run the tasks, forever */
    while (1)
    {
        task_1();
        task_2();
    }
    return 0;
}
I think what I have described is a lame solution that should not be used seriously.
The code I gave is full of problems, like what happens if a task becomes too slow to execute...
Instead, you --could-- should use an RTOS like FreeRTOS, which is very helpful for this kind of problem.
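For comparison, a minimal sketch of the same two-task idea on FreeRTOS might look like the following; the task names, stack sizes and priorities are made up, and you would need the FreeRTOS port for your Microblaze/Spartan setup:
#include "FreeRTOS.h"
#include "task.h"

static void task_1(void *params)
{
    (void)params;
    for (;;) {
        /* DO TASK 1 STUFF */
        vTaskDelay(pdMS_TO_TICKS(150));  /* block ~150 ms; other tasks keep running */
    }
}

static void task_2(void *params)
{
    (void)params;
    for (;;) {
        /* DO TASK 2 STUFF */
        vTaskDelay(pdMS_TO_TICKS(250));  /* block ~250 ms */
    }
}

int main(void)
{
    xTaskCreate(task_1, "task1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(task_2, "task2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();  /* does not return if the scheduler starts successfully */
    for (;;) { }
}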

proper use of time_is_before_jiffies()

My Linux device driver has some obstinate logic which twiddles with some hardware and then waits for a signal to appear. The seemingly proper way is:
ulong timeout, waitcnt = 0;
...
/* 2. Establish programming mode */
gpio_bit_write (MPP_CFG_PROGRAM, 0);   /* assert */
udelay (3);                            /* one microsecond should be long enough */
gpio_bit_write (MPP_CFG_PROGRAM, 1);   /* de-assert */

/* 3. Wait for the FPGA to initialize. */
/* 100 ms timeout should be nearly 100 times too long */
timeout = jiffies + msecs_to_jiffies(100);
while (gpio_bit_read (MPP_CFG_INIT) == 0 &&
       time_is_before_jiffies (timeout))
    ++waitcnt;  /* do nothing */

if (!time_is_before_jiffies (timeout))  /* timed out? */
{
    /* timeout error */
}
This always exercises the "timeout error" path and doesn't increment waitcnt at all. Perhaps I don't understand the meaning of time_is_before_jiffies(), or it is broken. When I replace it with the much more understandable direct comparison of jiffies:
while (gpio_bit_read (MPP_CFG_INIT) == 0 &&
       jiffies <= timeout)
    ++waitcnt;  /* do nothing */
It works just fine: it loops for a while (1600 µs), sees the INIT bit come on, and then proceeds without triggering a timeout error.
The comment for time_is_before_jiffies() is:
/* time_is_before_jiffies(a) return true if a is before jiffies */
#define time_is_before_jiffies(a) time_after(jiffies, a)
As the sense of the comparison seemed nonsensically backward, I replaced both with time_is_after_jiffies(), but that doesn't work either.
What am I doing wrong? Maybe I should replace use of this confusing macro with the straightforward jiffies <= timeout logic, though that seems less portable.
The jiffies <= timeout comparison does not work when jiffies wraps around, so you must not use it; the time_* macros handle the wrap-around correctly.
The condition you want to use can be described as "has not yet timed out".
This means that the current time (jiffies) has not yet reached the timeout time (timeout), i.e., jiffies is before the variable you are comparing it to, which means that your variable is after jiffies.
(All the time_is_ functions have jiffies on the right side of the comparison.)
So you have to use time_is_after_jiffies() in the while loop.
(And the <= implies that you actually want to use time_is_after_eq_jiffies().)
The timed-out check is better done by reading the GPIO bit again, because it would be a shame if your code reported a timeout although it got the signal right at the end.
Furthermore, busy-looping for a hundred milliseconds is extremely evil; you should release the CPU if you don't need it:
unsigned long timeout = jiffies + msecs_to_jiffies(100);
bool ok = false;

for (;;) {
    ok = gpio_bit_read(MPP_CFG_INIT) != 0;
    if (ok || time_is_before_eq_jiffies(timeout))
        break;
    /* you should do msleep(...) or cond_resched() here, if possible */
}

if (!ok) /* timed out? */
    ...
(This loop uses time_is_before_eq_jiffies() because the condition is reversed.)
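As an aside: depending on your kernel version, <linux/iopoll.h> provides helpers that wrap exactly this poll/sleep/timeout pattern, so a sketch using your gpio_bit_read() might look like this (the argument values are just examples):
#include <linux/iopoll.h>

/* Sketch only: gpio_bit_read() and MPP_CFG_INIT come from the driver above. */
static int wait_for_init_bit(void)
{
    u32 val;

    /* Poll gpio_bit_read(MPP_CFG_INIT) roughly every 100 µs until it is
     * non-zero, giving up after 100 ms; returns 0 or -ETIMEDOUT. */
    return readx_poll_timeout(gpio_bit_read, MPP_CFG_INIT, val, val != 0,
                              100, 100 * 1000);
}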

C Linux Bandwidth Throttling of Application

What are some ways I can throttle back a send()/sendto() call inside a loop? I am creating a port scanner for my network and I tried two methods, but they only seem to work locally (they work when I test them on my home machine, but when I try them against another machine they don't produce the appropriate throttling).
method 1
I was originally parsing /proc/net/dev, reading in the "bytes sent" attribute and basing my sleep time off that. That worked locally (the sleep delay was adjusting to control the flow of bandwidth), but as soon as I tried it on another server (which also has /proc/net/dev) it didn't seem to adjust the rate correctly. I ran dstat on a machine I was scanning locally and it was outputting too much data too fast.
method 2
I then tried to keep track of how many bytes in total I was sending, adding them to a total_sent variable which my bandwidth thread would read and use to compute a sleep time. This also worked on my local machine, but when I tried it on a server it reported that only 1-2 packets were sent each time my bandwidth thread checked total_sent, making the thread reduce the sleep to 0; yet even at 0 the total_sent variable did not increase as the reduced sleep time should allow, and instead stayed the same.
Overall, I want a way to monitor the bandwidth usage of the Linux machine and calculate a sleep time I can pass into usleep() before or after each of my send/sendto() socket calls to throttle back the bandwidth.
Edit: some other things I forgot to mention are that I do have a speedtest function that calculates the upload speed of the machine, and I have 2 threads. Thread 1 adjusts a global sleep timer based on bandwidth usage, and thread 2 sends the packets to the ports on a remote machine to test whether they are open and to fingerprint them (right now I am just using UDP packets with a sendto() to test all this).
How can I implement bandwidth throttling for a send/sendto() call using usleep()?
Edit: Here is the code for my bandwidth monitoring thread. Don't concern yourself about the structure stuff, its just my way of passing data to a thread.
void *bandwidthmonitor_cmd(void *param)
{
    int i = 0;
    double prevbytes = 0, elapsedbytes = 0, byteusage = 0, maxthrottle = 0;

    //recreating my param struct I passed to the thread
    command_struct bandwidth = *((command_struct *)param);
    free(param);

    //set SLEEP (global variable) to a base time in case it was edited and not reset
    SLEEP = 5000;

    //find the maximum throttle speed in kb/s (takes the global var UPLOAD_SPEED,
    //which is in kb/s, times the bandwidth % you want to use,
    //and divides by 100 to find the maximum in kb/s
    //ex: UPLOAD_SPEED = 60, throttle = 90, maxthrottle = 54
    maxthrottle = (UPLOAD_SPEED * bandwidth.throttle) / 100;
    printf("max throttle: %.1f\n", maxthrottle);

    while(1)
    {
        //find out how many bytes elapsed since the last polling of the thread
        elapsedbytes = TOTAL_BYTES_SEND - prevbytes;
        printf("elapsedbytes: %.1f\n", elapsedbytes);

        //set prevbytes to our current bytes so we can have results next loop
        prevbytes = TOTAL_BYTES_SEND;

        //convert our bytes to kb/s
        byteusage = 8 * (elapsedbytes / 1024);

        //throttle control to make it adjust sleep 20 times every 30~
        //iterations of the loop
        if(i & 0x40)
        {
            //adjust SLEEP by 1.1 gain
            SLEEP += (maxthrottle - byteusage) * -1.1;
            if(SLEEP < 0){
                SLEEP = 0;
            }
            printf("sleep:%.1f\n\n", SLEEP);
        }

        //sleep the thread for a short bit then start the process over
        usleep(25000);

        //increment variable i for our iteration throttling
        i++;
    }
}
My sending thread is just a simple sendto() routine in a while(1) loop sending UDP packets for testing. sock is my sockfd, buff is a 64-byte character array filled with "A", and sin is my sockaddr_in.
while(1)
{
    TOTAL_BYTES_SEND += 64;
    sendto(sock, buff, strlen(buff), 0, (struct sockaddr *) &sin, sizeof(sin));
    usleep(SLEEP);
}
I know my socket functions work because I can see the usage in dstat on my local machine and on the remote machine. This bandwidth code works on my local system (all the variables change as they should), but on the server I tested, elapsedbytes does not change (it is always 64/128 per iteration of the thread), which makes SLEEP throttle down to 0. That should in theory make the machine send packets faster, but even with SLEEP at 0, elapsedbytes stays at 64/128. I've also wrapped the sendto() call in an if statement checking for a -1 return value and printing the error code, but there hasn't been one in the tests I've done.
It seems like this could be most directly solved by calculating the throttle sleep time in the send thread. I'm not sure I see the benefit of another thread to do this work.
Here is one way to do this:
Select a time window in which you will measure your send rate. Based on your target bandwidth this will give you a byte maximum for that amount of time. You can then check to see if you have sent that many bytes after each sendto(). If you do exceed the byte threshold then sleep until the end of the window in order to perform the throttling.
Here is some untested code showing the idea. Sorry that clock_gettime and struct timespec add some complexity. Google has some nice code snippets for doing more complete comparisons, addition, and subtraction with struct timespec.
#define MAX_BYTES_PER_SECOND (128L * 1024L)
#define TIME_WINDOW_MS       50L
#define MAX_BYTES_PER_WINDOW ((MAX_BYTES_PER_SECOND * TIME_WINDOW_MS) / 1000L)

#include <time.h>
#include <stdlib.h>

/* sock, buff and sin are the same variables as in the question's code */
int foo(void) {
    struct timespec window_start_time;
    size_t bytes_sent_in_window = 0;
    clock_gettime(CLOCK_REALTIME, &window_start_time);

    while (1) {
        ssize_t bytes_sent = sendto(sock, buff, strlen(buff), 0,
                                    (struct sockaddr *) &sin, sizeof(sin));
        if (bytes_sent < 0) {
            // error handling
        } else {
            bytes_sent_in_window += bytes_sent;
            if (bytes_sent_in_window >= MAX_BYTES_PER_WINDOW) {
                struct timespec now;
                struct timespec thresh;

                // Calculate the end of the window
                thresh.tv_sec = window_start_time.tv_sec;
                thresh.tv_nsec = window_start_time.tv_nsec;
                thresh.tv_nsec += TIME_WINDOW_MS * 1000000;
                if (thresh.tv_nsec >= 1000000000L) {
                    thresh.tv_sec += 1;
                    thresh.tv_nsec -= 1000000000L;
                }

                // Get the current time
                clock_gettime(CLOCK_REALTIME, &now);

                // If we have not gotten to the end of the window yet
                if (now.tv_sec < thresh.tv_sec ||
                    (now.tv_sec == thresh.tv_sec && now.tv_nsec < thresh.tv_nsec)) {
                    struct timespec remaining;

                    // Calculate the time remaining in the window
                    // (see Google for a more complete timespec subtract algorithm)
                    remaining.tv_sec = thresh.tv_sec - now.tv_sec;
                    if (thresh.tv_nsec >= now.tv_nsec) {
                        remaining.tv_nsec = thresh.tv_nsec - now.tv_nsec;
                    } else {
                        remaining.tv_nsec = 1000000000L + thresh.tv_nsec - now.tv_nsec;
                        remaining.tv_sec -= 1;
                    }

                    // Sleep to the end of the window
                    nanosleep(&remaining, NULL);
                }

                // Reset counters and timestamp for the next window
                bytes_sent_in_window = 0;
                clock_gettime(CLOCK_REALTIME, &window_start_time);
            }
        }
    }
}
If you'd like to do this at the application level, you could use a utility such as trickle to limit or shape the socket transfer rates available to the application.
For instance,
trickle -s -d 50 -w 100 firefox
would start firefox with a max download rate of 50KB/s and a peak detection window of 100KB. Changing these values may produce something suitable for your application testing.

Output text one letter at a time in C

How would I output text one letter at a time like it's typing without using Sleep() for every character?
Sleep is the best option, since it doesn't waste CPU cycles.
The other option is busy waiting, meaning you spin constantly executing NoOps. You can do that with any loop structure that does absolutely nothing. I'm not sure what this is for, but it seems like you might also want to randomize the time you wait between characters to give it a natural feel.
I would have a Tick() method that would loop through the letters and only progress if a random number was smaller than a threshold I set.
Some pseudocode may look like:
/* needs <stdio.h>, <stdlib.h> and <string.h> */
int escapeIndex = 0;
int escapeMax = 1000000;
int exportCharacter = 0;            /* boolean flag */
int letterIndex = 0;
double someThresh = 0.000001;
const char *typedText = "somethingOrOther...";
int letterMax = strlen(typedText);

while (letterIndex < letterMax) {
    escapeIndex++;
    /* progress with a small random probability... */
    if ((double)rand() / RAND_MAX < someThresh) {
        exportCharacter = 1;
    }
    /* ...or when the escape counter hits its maximum */
    if (escapeIndex > escapeMax) {
        exportCharacter = 1;
    }
    if (exportCharacter) {
        putchar(typedText[letterIndex]);
        escapeIndex = 0;
        exportCharacter = 0;
        letterIndex++;
    }
}
If I were doing this in a video game, let's say to simulate a player typing text into a terminal, this is how I would do it. It's going to be different every time, and its escape mechanism provides a maximum time limit for the operation.
Sleeping is the best way to do what you're describing, as the alternative, busy waiting, is just going to waste CPU cycles. From the comments, it sounds like you've been trying to manually hard-code every single character you want printed with a sleep call, instead of using loops...
Since there's been no indication that this is homework after ~20 minutes, I thought I'd post this code. It uses usleep from <unistd.h>, which sleeps for a given number of microseconds; if you're using Windows, try Sleep().
#include <stdio.h>
#include <unistd.h>

void type_text(char *s, unsigned ms_delay)
{
    unsigned usecs = ms_delay * 1000; /* 1000 microseconds per ms */

    for (; *s; s++) {
        putchar(*s);
        fflush(stdout); /* alternatively, do once: setbuf(stdout, NULL); */
        usleep(usecs);
    }
}

int main(void)
{
    type_text("hello world\n", 100);
    return 0;
}
Since stdout is buffered, you're going to have to either flush it after printing each character (fflush(stdout)), or set it to not buffer the output at all by running setbuf(stdout, NULL) once.
The above code will print "hello world\n" with a delay of 100ms between each character; extremely basic.
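If you also need this to run on Windows, a small wrapper is one way to keep the rest of the code identical; this is only a sketch, assuming windows.h's Sleep(), which takes milliseconds:
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void delay_ms(unsigned ms) { Sleep(ms); }          /* Sleep() takes milliseconds */
#else
#include <unistd.h>
static void delay_ms(unsigned ms) { usleep(ms * 1000); }  /* usleep() takes microseconds */
#endif

void type_text(const char *s, unsigned ms_delay)
{
    for (; *s; s++) {
        putchar(*s);
        fflush(stdout);        /* stdout is buffered, so flush after each character */
        delay_ms(ms_delay);
    }
}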

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is for creating an API for our existing hardware.
The API wait_events takes one argument, a time in milliseconds. So I am trying to get the start time before the while loop, using time() to get the number of seconds; then, after each iteration of the loop, get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If the wait times out before an event occurs, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);

    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }

        /* Get the end time after each iteration */
        end = time(NULL);

        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling function will be like this:
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */

    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }

    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep         -> 99.7% - 100% CPU
2) Setting usleep(10)    -> 25% CPU
3) Setting usleep(100)   -> 13% CPU
4) Setting usleep(1000)  -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);

for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int (depending on your platform) counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
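Putting these notes together, a corrected sketch of wait_events() could look like this; difftime() takes the later time first, the division is by 1000, and a short usleep() keeps the CPU usage down (open_device_flag, device_evt, TRUE and FALSE are from the original code):
#include <time.h>
#include <unistd.h>

int wait_events(int timeout_ms)
{
    time_t start = time(NULL);
    double timeout_s = timeout_ms / 1000.0;  /* 1000 ms per second */

    while (TRUE) {
        if (open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* difftime(end, start): the later time goes first */
        if (difftime(time(NULL), start) > timeout_s)
            return FALSE;   /* timed out before getting an event */
        usleep(1000);       /* ~1 ms: avoids burning 100% CPU (see the edit above) */
    }
}
Note that time() only has one-second resolution, so a 500 ms timeout effectively rounds up to a whole second; for sub-second precision you would want gettimeofday() or clock_gettime() instead.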
Without calling a Sleep() function this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the time-comparing stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V, so you can synchronize using IPC.
