Shortest Remaining Time Scheduling Timing Issue - C

I am finishing an Operating Systems process-scheduling project, and I am running into a timing issue with the Shortest Remaining Time First algorithm. My teammates and I hand-calculated the waiting time, turnaround time, etc., and compared the results against an online source, so we think the problem is in our code, specifically in the waiting time.
Here is the relevant function where we calculate the timing and run the algorithm:
bool shortest_remaining_time_first(dyn_array_t *ready_queue, ScheduleResult_t *result)
{
    if (!ready_queue || !result)
        return false;

    const int numProcesses = (int)ready_queue->size;
    /* sort the array into ascending arrival-time order */
    dyn_array_sort(ready_queue, &compare_arrival_times);
    dyn_array_t *run_queue = dyn_array_create(ready_queue->capacity, ready_queue->data_size, NULL);
    /* ProcessControlBlock_t looks like:
           typedef struct {
               uint32_t arrival;
               uint32_t remaining_burst_time;
               uint32_t priority;
               bool started;                <-- set when first scheduled
           } ProcessControlBlock_t;
       and dyn_array_t looks like:
           typedef struct dyn_array {
               size_t capacity;
               size_t size;
               const size_t data_size;
               void *array;                 <-- an array of ProcessControlBlock_t's
               void (*destructor)(void *);
           } dyn_array_t; */
    ProcessControlBlock_t pcb, pcbRun;
    dyn_array_extract_front(ready_queue, &pcb);

    /* we need three of these, used separately */
    int processCount = numProcesses;
    int loopCounter = numProcesses;
    int runCounter = 0;
    /* timing bookkeeping */
    int ticks = 0;
    int waitTime = 0;
    int timeComplete = 0;
    /* used to check whether the arrival/burst times match, i.e. whether we are still on the same process */
    int arrivalCheck = 0;
    int burstCheck = 0;

    /* while there are still processes to run */
    while (loopCounter > 0)
    {
        /* wait to push a new process onto run_queue */
        while ((int)pcb.arrival <= ticks && processCount > 0)
        {
            runCounter++;
            /* this will not push a new value until that much time has passed */
            dyn_array_push_front(run_queue, &pcb);
            if (processCount > 0)
            {
                dyn_array_extract_front(ready_queue, &pcb);
                processCount--;
            }
        }
        if (runCounter > 0)
        {
            /* re-sort each time so we can always pull the smallest burst time */
            dyn_array_sort(run_queue, &compareBurstTimes);
            dyn_array_extract_front(run_queue, &pcbRun);
            /* assuming there is still a process to run */
            if (pcbRun.remaining_burst_time > 0)
            {
                /* if we are on the same process, do nothing;
                   otherwise, update waitTime */
                if ((int)pcbRun.arrival == arrivalCheck && (int)pcbRun.remaining_burst_time == burstCheck) {}
                else
                {
                    /* this is where we think the problem is occurring */
                    waitTime += ticks - pcbRun.arrival;
                }
                /* the process has started; run it on the "CPU" and advance time */
                pcbRun.started = true;
                virtual_cpu(&pcbRun);
                ticks++;
                /* if still > 0 after being decremented */
                if (pcbRun.remaining_burst_time > 0)
                {
                    dyn_array_push_front(run_queue, &pcbRun);
                    arrivalCheck = pcbRun.arrival;
                    burstCheck = pcbRun.remaining_burst_time;
                }
            }
            /* if the process has run all the way through, decrease the number of processes */
            if (pcbRun.remaining_burst_time == 0)
            {
                loopCounter--;
                runCounter--;
                timeComplete += ticks - pcbRun.arrival;
            }
        }
        else
        {
            /* there is a gap between arrival times; no process is ready yet */
            ticks++;
        }
    }

    /* calculate timing values */
    result->average_waiting_time = (float)waitTime / (float)numProcesses;
    result->average_turnaround_time = (float)timeComplete / (float)numProcesses;
    result->total_run_time = ticks;
    /* this frees run_queue AND run_queue->array */
    dyn_array_destroy(run_queue);
    return true;
}
Our input Process Control Blocks come in as a table that looks like this:
pid | arrival | remaining_burst_time | priority
----+---------+----------------------+---------
  0 |       0 |                   15 |        0
  1 |       1 |                   10 |        0
  2 |       2 |                    5 |        0
  3 |       3 |                   20 |        0
From these values, we hand-calculated and verified the average waiting time to be 11.75, yet our program calculates a waiting time of 12.25. Any idea why this may be?
Our approach of recalculating each value on essentially every tick has made it harder to find where the discrepancy comes from.
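For reference, the hand calculation can be cross-checked with the standard identities turnaround = completion - arrival and waiting = turnaround - burst. Below is a minimal sketch of that cross-check; the arrays are hypothetical (not the project's API), and the completion ticks are read off a hand-simulated SRTF Gantt chart for the table above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hand-simulated SRTF on the four processes above:
       P2 finishes at tick 7, P1 at 16, P0 at 30, P3 at 50. */
    const uint32_t arrival[]    = {0, 1, 2, 3};
    const uint32_t burst[]      = {15, 10, 5, 20};
    const uint32_t completion[] = {30, 16, 7, 50};
    float wait_sum = 0.0f, tat_sum = 0.0f;
    for (int i = 0; i < 4; i++) {
        uint32_t tat = completion[i] - arrival[i];   /* turnaround time */
        tat_sum  += (float)tat;
        wait_sum += (float)(tat - burst[i]);         /* waiting time */
    }
    /* Prints: avg waiting: 11.75, avg turnaround: 24.25 */
    printf("avg waiting: %.2f, avg turnaround: %.2f\n",
           wait_sum / 4.0f, tat_sum / 4.0f);
    return 0;
}

Computing the waiting time this way, once per process at completion, avoids having to account for it tick by tick.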

Related

Calculating and returning CPU usage on a one second interval

I am trying to build a function that returns the CPU usage of my VM's processors over a period of 1 second. The goal is to use fairly basic C library functions. The method takes 3 arguments: the path, a cpu_stats *prev structure, and a cpu_stats *curr structure. Both structures store previous and current values so that the method becomes accurate once it has run twice. The problem seems to be in accurately returning the value. For now I am adding every value of the first line of /proc/stat and using that as my total, and taking the value of the 3rd column as my idle value (no idea if that is the right one; different sites give different answers about what each column is). Let me know if you know where to start and what to change. For now, all the tests my code goes through say that my result is always 100.0%, but the expected values are like 32.2%/72.1%/49.0%/etc...
Here is my code:
double pfs_cpu_usage(char *proc_dir, struct cpu_stats *prev, struct cpu_stats *curr)
{
    long idleOne, idleTwo, totalOne, totalTwo = 0;
    idleOne = prev->idle;
    totalOne = prev->total;
    int fd = open_path(proc_dir, "stat");
    if (fd <= 0) {
        perror("open_path");
        return -1;
    }
    size_t line_sz = 0;
    char line[256];
    while ((line_sz = one_lineread(fd, line, 256)) > 0) {
        char *next_tok = line;
        char *curr_tok;
        char *endPtr;
        int counter = 1;
        while ((curr_tok = next_token(&next_tok, "\n\t: ")) != NULL) {
            if (counter == 5) {
                counter++;
                idleTwo = strtol(curr_tok, &endPtr, 32);
                curr->idle = idleTwo;
            }
            else if (strcmp(curr_tok, "cpu") == 0) {
                counter++;
            }
            else {
                counter++;
                totalTwo += strtol(curr_tok, &endPtr, 32);
                curr->total = totalTwo;
            }
        }
    }
    long diffIdle = idleTwo - idleOne;
    long diffTotal = totalTwo - totalOne;
    double cpuUsage = (1.0 - ((double)diffIdle) * 1.0 / ((double)diffTotal) * 100);
    close(fd);
    return cpuUsage;
}
Here is the first line of my /proc/stat file:
cpu 12836188 17450 280277082 121169501 1538 0 2490 5206 0 0
From my debugging, the stored idle value seems to be off.
OK, then what about this answer? https://stackoverflow.com/a/23376195/13307070
That answer is based on htop (https://htop.dev/), which uses /proc/stat.
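For reference, here is a minimal sketch of the usual delta computation on top of the cpu_stats fields from the question. The parsing here is illustrative (sscanf instead of the question's tokenizer); the two substantive differences from the code above are that the columns are read in base 10, and the ratio is taken before scaling to a percentage:

#include <stdio.h>

struct cpu_stats { long idle; long total; };

double cpu_usage_pct(const struct cpu_stats *prev, const struct cpu_stats *curr)
{
    long diffIdle  = curr->idle  - prev->idle;
    long diffTotal = curr->total - prev->total;
    if (diffTotal <= 0)
        return 0.0;                              /* no ticks elapsed yet */
    return (1.0 - (double)diffIdle / (double)diffTotal) * 100.0;
}

int main(void)
{
    /* Parse the sample /proc/stat line from the question. */
    const char *sample = "cpu 12836188 17450 280277082 121169501 1538 0 2490 5206 0 0";
    long v[10] = {0};
    struct cpu_stats curr = {0, 0};
    sscanf(sample, "cpu %ld %ld %ld %ld %ld %ld %ld %ld %ld %ld",
           &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7], &v[8], &v[9]);
    curr.idle = v[3];                            /* the 4th numeric column is idle */
    for (int i = 0; i < 10; i++)
        curr.total += v[i];                      /* total includes idle */
    struct cpu_stats prev = {curr.idle - 50, curr.total - 200};  /* fabricated earlier sample */
    printf("usage: %.1f%%\n", cpu_usage_pct(&prev, &curr));      /* 75.0% here */
    return 0;
}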

How can I have a conditional statement that performs two tasks? First one and then the other, in a loop

So I am trying to write an Arduino program that does one task for 2 minutes and then another task for 5 minutes, alternating in a loop until a different condition is met.
I've been trying to do this with if statements and while loops, but I'm getting lost in the timing part, I think.
//Pneumatic feed system
double minute = 1000 * 60;
double vibTime = 2 * minute;  //2 mins
//double vibTime = 5000;
double execTime = millis();
double waitTime = 5 * minute; //time to wait between vibrating
//double waitTime = 10000;

void DoPneuFeed() {
    //Serial.print("Delta:");
    //Serial.println(millis()-execTime);
    if (Temp_Data[T_9] < 500)
    {
        if (execTime + vibTime < millis())
        {
            //turn on vibration for 2 mins
            execTime = millis();
            Serial.println("VIBRATING");
        }
        if (execTime + waitTime < millis())
        {
            //turn off vibration for 5 mins
            execTime = millis();
            Serial.println("WAITING");
        }
    }
    if (Temp_Data[T_9] > 500)
    {
        relayOff(3);
        relayOff(7);
    }
}

void PneuFeed()
{
    if (execTime + vibTime > millis())
    {
        //air pressure open
        relayOn(3);
        //vibrator on
        relayOn(7);
    }
    else if (execTime + waitTime > millis())
    {
        relayOff(3);
        relayOff(7);
    }
}
I want to turn on the vibration mode for 2 minutes and then turn it off for 5 minutes, as long as Temp_Data[T_9] < 500. If it is greater than 500, it should stay off.
Without going into the specifics of your code (since I cannot build it to test), I have just one quick suggestion:
Instead of using a series of if-then-else (or variations of it), consider using a state machine. Implementations often use a switch() inside a while (expression) {...} loop. The following is a very simple example of how you might do the steps you need in this construct:
(Note: this is a mix of C and pseudo-code for illustration. It is close to compilable, but contains a few undefined items.)
typedef enum {
    START,
    SET_VIB,
    CHECK_TEMP,
    GET_TIME,
    CLEANUP,
    SLEEP,
    MAX_STATE
} STATE;

enum {
    IN_WORK,
    FAILED,
    SUCCESS
};

enum { // durations in minutes
    MIN_2,
    MIN_5,
    MIN_10,
    MAX_DUR // change/add durations as the profile needs
};

const int dur[MAX_DUR] = {120, 300, 600};

int RunProcess(STATE state);

int main(void)
{
    STATE state = START;
    RunProcess(state);
    return 0;
}

int RunProcess(STATE state) //Add arguments here for temperature,
{                           //sleep duration, cycles, etc. so they are not hardcoded.
    int status;
    time_t elapsed = 0; //s
    BOOL running = TRUE;
    double temp, setpoint;
    int duration;
    time_t someMsValue;
    while (running)
        switch (state) {
        case START:
            //Power to oven
            //Power to vibe
            //initialize state and status variables
            duration = dur[MIN_2];
            state = SLEEP;
            status = IN_WORK;
            break;
        case SET_VIB:
            //test and control vibe
            if (duration == dur[MIN_2])
            {
                duration = dur[MIN_5];
                //& turn off vibe
            }
            else
            {
                duration = dur[MIN_2];
                //& turn on vibe
            }
            //init elapsed
            elapsed = 0;
            status = IN_WORK;
            state = CHECK_TEMP;
            break;
        case CHECK_TEMP:
            //read temperature
            if (temp < setpoint)
            {
                state = SLEEP;
                status = IN_WORK;
            }
            else
            {
                state = CLEANUP;
                status = SUCCESS;
            }
            break;
        case GET_TIME:
            elapsed = getTime();
            if (elapsed > duration) state = SET_VIB;
            else state = SLEEP;
            status = IN_WORK;
            break;
        case CLEANUP:
            //turn off heat
            //turn off vibe
            running = FALSE;
            break;
        case SLEEP:
            Sleep(someMsValue);
            state = GET_TIME;
            status = IN_WORK;
            break;
        default:
            // called with an incorrect state value
            // error message and exit
            status = FAILED;
            state = CLEANUP;
            break;
        }
    return status;
}
Suggestion for improvement to this illustration: expand this code to read in a "profile" file. It could include parameter values typical of your process, such as temperature profiles, vibration profiles, number of cycles, etc. With such information, all of the hard-coding used here for illustration could be replaced with run-time configurable parameters, allowing the system to use many different profiles without having to recompile the executable each time.
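Applied back to the Arduino question, here is a rough non-blocking sketch of the same state-machine idea, reusing relayOn/relayOff, Temp_Data, and T_9 from the question (everything else is illustrative):

// Rough non-blocking state machine for the pneumatic feed.
// millis()-based timing; unsigned subtraction keeps it overflow-safe.
enum FeedState { VIBRATING, WAITING };
static enum FeedState feedState = VIBRATING;
static unsigned long stateStart = 0;                      // millis() at state entry
static const unsigned long VIB_MS  = 2UL * 60UL * 1000UL; // 2 minutes
static const unsigned long WAIT_MS = 5UL * 60UL * 1000UL; // 5 minutes

void DoPneuFeed() {
    if (Temp_Data[T_9] >= 500) {           // too hot: everything stays off
        relayOff(3);
        relayOff(7);
        return;
    }
    unsigned long now = millis();
    switch (feedState) {
    case VIBRATING:
        relayOn(3);                        // air pressure open
        relayOn(7);                        // vibrator on
        if (now - stateStart >= VIB_MS) {  // 2 minutes elapsed
            feedState = WAITING;
            stateStart = now;
        }
        break;
    case WAITING:
        relayOff(3);
        relayOff(7);
        if (now - stateStart >= WAIT_MS) { // 5 minutes elapsed
            feedState = VIBRATING;
            stateStart = now;
        }
        break;
    }
}

Calling DoPneuFeed() on every pass of loop() keeps the relays in the right state without any blocking delay.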
Another state machine example.

Why nice is defined as long in 'set_user_nice'

I am a student learning OS. When I read the source code of set_user_nice, I found that nice is defined as a long. But I have learned that the range of nice is -20 to 19, so why not define nice as an int to save room in RAM?
Is this related to x86 vs x64 systems?
Here is the source code (./kernel/sched/core.c):
void set_user_nice(struct task_struct *p, long nice)
{
    bool queued, running;
    int old_prio, delta;
    struct rq_flags rf;
    struct rq *rq;

    if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
        return;
    /*
     * We have to be careful, if called from sys_setpriority(),
     * the task might be in the middle of scheduling on another CPU.
     */
    rq = task_rq_lock(p, &rf);
    /*
     * The RT priorities are set via sched_setscheduler(), but we still
     * allow the 'normal' nice value to be set - but as expected
     * it wont have any effect on scheduling until the task is
     * SCHED_DEADLINE, SCHED_FIFO or SCHED_RR:
     */
    if (task_has_dl_policy(p) || task_has_rt_policy(p)) {
        p->static_prio = NICE_TO_PRIO(nice);
        goto out_unlock;
    }
    queued = task_on_rq_queued(p);
    running = task_current(rq, p);
    if (queued)
        dequeue_task(rq, p, DEQUEUE_SAVE);
    if (running)
        put_prev_task(rq, p);

    p->static_prio = NICE_TO_PRIO(nice);
    set_load_weight(p);
    old_prio = p->prio;
    p->prio = effective_prio(p);
    delta = p->prio - old_prio;

    if (queued) {
        enqueue_task(rq, p, ENQUEUE_RESTORE);
        /*
         * If the task increased its priority or is running and
         * lowered its priority, then reschedule its CPU:
         */
        if (delta < 0 || (delta > 0 && task_running(rq, p)))
            resched_curr(rq);
    }
    if (running)
        set_curr_task(rq, p);
out_unlock:
    task_rq_unlock(rq, p, &rf);
}
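One observation on the RAM concern: for a function parameter, the storage the caller actually uses is dictated by the calling convention, not by the declared type; on x86-64, for example, both an int and a long argument travel in a single 64-bit register. A small illustrative sketch (ordinary user-space code, not kernel code):

#include <stdio.h>

static long take_long(long nice) { return nice; }   /* passed in a register */
static int  take_int(int nice)   { return nice; }   /* also passed in a register */

int main(void)
{
    /* Both calls cost the same at the call site on a 64-bit ABI. */
    printf("%ld %d\n", take_long(-20), take_int(19));
    printf("sizeof(long)=%zu, sizeof(int)=%zu\n", sizeof(long), sizeof(int));
    return 0;
}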

Thread pool - handle a case when there are more tasks than threads

I've just entered multithreaded programming, and as part of an exercise I am trying to implement a simple thread pool using pthreads.
I have tried to use a condition variable to signal worker threads that there are jobs waiting in the queue, but for a reason I can't figure out, the mechanism is not working.
Below are the relevant code snippets:
typedef struct thread_pool_task
{
    void (*computeFunc)(void *);
    void *param;
} ThreadPoolTask;

typedef enum thread_pool_state
{
    RUNNING = 0,
    SOFT_SHUTDOWN = 1,
    HARD_SHUTDOWN = 2
} ThreadPoolState;

typedef struct thread_pool
{
    ThreadPoolState poolState;
    unsigned int poolSize;
    unsigned int queueSize;
    OSQueue* poolQueue;
    pthread_t* threads;
    pthread_mutex_t q_mtx;
    pthread_cond_t q_cnd;
} ThreadPool;

static void* threadPoolThread(void* threadPool){
    ThreadPool* pool = (ThreadPool*)(threadPool);
    for(;;)
    {
        /* Lock must be taken to wait on condition variable */
        pthread_mutex_lock(&(pool->q_mtx));
        /* Wait on condition variable, check for spurious wakeups.
           When returning from pthread_cond_wait(), we own the lock. */
        while( (pool->queueSize == 0) && (pool->poolState == RUNNING) )
        {
            pthread_cond_wait(&(pool->q_cnd), &(pool->q_mtx));
        }
        printf("Queue size: %d\n", pool->queueSize);
        /* --- */
        if (pool->poolState != RUNNING){
            break;
        }
        /* Grab our task */
        ThreadPoolTask* task = osDequeue(pool->poolQueue);
        pool->queueSize--;
        /* Unlock */
        pthread_mutex_unlock(&(pool->q_mtx));
        /* Get to work */
        (*(task->computeFunc))(task->param);
        free(task);
    }
    pthread_mutex_unlock(&(pool->q_mtx));
    pthread_exit(NULL);
    return(NULL);
}

ThreadPool* tpCreate(int numOfThreads)
{
    ThreadPool* threadPool = malloc(sizeof(ThreadPool));
    if(threadPool == NULL) return NULL;
    /* Initialize */
    threadPool->poolState = RUNNING;
    threadPool->poolSize = numOfThreads;
    threadPool->queueSize = 0;
    /* Allocate OSQueue and threads */
    threadPool->poolQueue = osCreateQueue();
    if (threadPool->poolQueue == NULL)
    {
    }
    threadPool->threads = malloc(sizeof(pthread_t) * numOfThreads);
    if (threadPool->threads == NULL)
    {
    }
    /* Initialize mutex and condition variable */
    pthread_mutex_init(&(threadPool->q_mtx), NULL);
    pthread_cond_init(&(threadPool->q_cnd), NULL);
    /* Start worker threads */
    for(int i = 0; i < threadPool->poolSize; i++)
    {
        pthread_create(&(threadPool->threads[i]), NULL, threadPoolThread, threadPool);
    }
    return threadPool;
}

int tpInsertTask(ThreadPool* threadPool, void (*computeFunc) (void *), void* param)
{
    if(threadPool == NULL || computeFunc == NULL) {
        return -1;
    }
    /* Check state and create ThreadPoolTask */
    if (threadPool->poolState != RUNNING) return -1;
    ThreadPoolTask* newTask = malloc(sizeof(ThreadPoolTask));
    if (newTask == NULL) return -1;
    newTask->computeFunc = computeFunc;
    newTask->param = param;
    /* Add task to queue */
    pthread_mutex_lock(&(threadPool->q_mtx));
    osEnqueue(threadPool->poolQueue, newTask);
    threadPool->queueSize++;
    pthread_cond_signal(&(threadPool->q_cnd));
    pthread_mutex_unlock(&threadPool->q_mtx);
    return 0;
}
The problem is that when I create a pool with 1 thread and add a lot of jobs to it, it does not execute all the jobs.
[EDIT:]
I have tried running the following code to test basic functionality:
void hello (void* a)
{
int i = *((int*)a);
printf("hello: %d\n", i);
}
void test_thread_pool_sanity()
{
int i;
ThreadPool* tp = tpCreate(1);
for(i=0; i<10; ++i)
{
tpInsertTask(tp,hello,(void*)(&i));
}
}
I expected output like the following:
hello: 0
hello: 1
hello: 2
hello: 3
hello: 4
hello: 5
hello: 6
hello: 7
hello: 8
hello: 9
Instead, I sometimes get the following output:
Queue size: 9 //printf added for debugging within threadPoolThread
hello: 9
Queue size: 9 //printf added for debugging within threadPoolThread
hello: 0
And sometimes I don't get any output at all.
What am I missing?
When you call tpInsertTask(tp, hello, (void*)(&i)); you are passing the address of i, which is on the stack. There are multiple problems with this:
Every task gets the same address. The hello function takes that address and prints *param, and every task's param points to the same location on the stack.
Since i is on the stack, once test_thread_pool_sanity returns, the last value is lost and will be overwritten by other code, so the value is undefined.
Depending on when the worker thread works through the tasks versus when your main test thread schedules them, you will get different results.
You need the parameter to be saved as part of the task in order to guarantee it is unique per task.
EDIT: You should also check the return code of pthread_create to see if it is failing.
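To make that concrete, here is a minimal sketch of one common fix, reusing the tpCreate/tpInsertTask API from the question: heap-allocate a copy of i per task and free it inside the task, so no task aliases the loop variable on the stack.

#include <stdio.h>
#include <stdlib.h>

void hello(void *a)
{
    int i = *((int *)a);
    printf("hello: %d\n", i);
    free(a);                      /* each task owns and releases its copy */
}

void test_thread_pool_sanity(void)
{
    ThreadPool *tp = tpCreate(1);
    for (int i = 0; i < 10; ++i) {
        int *arg = malloc(sizeof *arg);
        if (arg == NULL)
            continue;
        *arg = i;                 /* unique value captured per task */
        tpInsertTask(tp, hello, arg);
    }
    /* A real test would also drain and destroy the pool here, so the
       process does not exit before the worker finishes the queue. */
}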

C multithread performance issue

I am writing a multi-threaded program to traverse an n x n matrix, where the elements in the main diagonal are processed in a parallel manner, as shown in the code below:
int main(int argc, char * argv[])
{
    /* VARIABLES INITIALIZATION HERE */
    gettimeofday(&start_t, NULL); //start timing
    for (int slice = 0; slice < 2 * n - 1; ++slice)
    {
        z = slice < n ? 0 : slice - n + 1;
        int L = 0;
        pthread_t threads[slice - z - z + 1];
        struct thread_data td[slice - z - z + 1];
        for (int j = z; j <= slice - z; ++j)
        {
            td[L].index = L;
            printf("create:%d\n", L);
            pthread_create(&threads[L], NULL, mult_thread, (void *)&td[L]);
            L++;
        }
        for (int j = 0; j < L; j++)
        {
            pthread_join(threads[j], NULL);
        }
    }
    gettimeofday(&end_t, NULL);
    printf("Total time taken by CPU: %ld \n", ((end_t.tv_sec - start_t.tv_sec) * 1000000 + end_t.tv_usec - start_t.tv_usec));
    return (0);
}

void *mult_thread(void *t)
{
    struct thread_data *my_data = (struct thread_data *) t;
    /* SOME ADDITIONAL CODE LINES HERE */
    printf("ThreadFunction:%d\n", (*my_data).index);
    return (NULL);
}
The problem is that this multithreaded implementation gives much worse performance than the serial (naive) implementation.
Are there adjustments that could be made to improve the performance of the multithreaded version?
A thread pool may make it better. Define a new struct type as follows:
typedef struct {
    struct thread_data *data;
    int status;     // 0: ready
                    // 1: adding data
                    // 2: data handling
                    // 3: done
    int next_free;
} thread_node;
Init:
size_t thread_size = 8;
thread_node *nodes = (thread_node *)malloc(thread_size * sizeof(thread_node));
for (int i = 0; i < thread_size - 1; i++) {
    nodes[i].next_free = i + 1;
    nodes[i].status = 0;
}
nodes[thread_size - 1].next_free = -1;
int current_free_node = 0;
pthread_mutex_t mutex;
Get a thread:
int alloc() {
    pthread_mutex_lock(&mutex);
    int rt = current_free_node;
    if (current_free_node != -1) {
        current_free_node = nodes[current_free_node].next_free;
        nodes[rt].status = 1;
    }
    pthread_mutex_unlock(&mutex);
    return rt;
}
Return a thread:
void back(int idx) {
    pthread_mutex_lock(&mutex);
    nodes[idx].next_free = current_free_node;
    current_free_node = idx;
    nodes[idx].status = 0;
    pthread_mutex_unlock(&mutex);
}
Create the threads first, then use alloc() to try to get an idle thread and update the free-list pointer.
Don't use join to judge the status.
Modify your mult_thread into a loop, and after a job finishes, just change its status to 3.
On each pass of the loop in the thread, you can give it more work.
I hope this gives you some help.
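To make the worker-loop idea concrete, here is a rough self-contained sketch with illustrative names (C11 atomics for the status flag; the answer's status values are kept, and a production version would block on a condition variable instead of spinning):

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct {
    int value;            /* stand-in for struct thread_data */
    atomic_int status;    /* 0: ready, 2: data handling, 3: done */
} work_node;

static void *worker(void *arg)
{
    work_node *node = (work_node *)arg;
    while (atomic_load(&node->status) != 2)  /* wait until data is published */
        sched_yield();
    printf("processed %d\n", node->value);   /* the per-slice work goes here */
    atomic_store(&node->status, 3);          /* done: the owner can recycle it via back() */
    return NULL;
}

int main(void)
{
    work_node node = {42, 0};
    pthread_t tid;
    pthread_create(&tid, NULL, worker, &node);
    atomic_store(&node.status, 2);           /* publish: data is ready */
    pthread_join(tid, NULL);
    return 0;
}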
------------ UPDATED Apr. 23, 2015 -------------------
Here is an example. Compile and run it with:
yu:thread_pool yu$ g++ tp.cc -o tp -pthread --std=c++11 && ./tp
1227135.147 1227176.546 1227217.944 1227259.340...
time cost 1 : 1068.339091 ms
1227135.147 1227176.546 1227217.944 1227259.340...
time cost 2 : 548.221607 ms
You may also remove the timer, and the example can then be compiled as a standard C99 file.
Currently the thread size is limited to 2. You may adjust the thread_size parameter, then recompile and run again. More threads may give you more of an advantage (on my PC, changing the thread size to 4 finishes the task in 280 ms), while a very large thread count may not help much if you do not have enough CPU threads.
