I am a newbie to CGI programming in C. I was trying to generate random numbers and keep displaying them on a web page. I wrote a program in C, compiled it, and placed it at /var/www/cgi-bin/rand_number.cgi on a Fedora Core machine. I can access this CGI page via localhost and the results are fine. But when I try to access the page from another machine on the network, there is a problem: the page keeps loading and finally times out. My analysis is that the browser will not render the page until the program has terminated, and in the example below the program never terminates. A related example is the local time shown after a Google search: it doesn't keep updating on its own, it needs a refresh click. But my requirement is that the page should keep updating constantly, not waiting for program completion or a refresh click. As each random number is generated it should be displayed on the web page, and the program should then generate the next one. This process should keep repeating rather than the web page waiting for the program to terminate.
Here is the program, which I compiled and renamed to rand.cgi.
Please advise me on a method to keep updating and displaying the web page.
It should update the page continuously as each random number is generated.
It is a never-ending program.
#include <stdio.h>
#include <stdlib.h>   /* for rand() */
#include <pthread.h>
#include <unistd.h>

pthread_t tid;

void *print_series_forever(void *arg)
{
    int n;
    int k = 0;
    while (1)
    {
        n = rand() % 100 + 1;
        printf("%d", n);
        k++;
        if (k == 110)
        {
            printf("\n"); /** start the next line **/
            k = 0;
        }
        usleep(100);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int err;
    err = pthread_create(&tid, NULL, &print_series_forever, NULL);
    pthread_join(tid, NULL);
    sleep(2);
    return 0;
}
Any help is welcome.
Related
I'm new to the programming world; I recently began my programming path with C, and I made a program that determines whether a number is perfect or not. I use the Code::Blocks IDE and it works just fine. The problem is that when I click the "Build and run" option, the IDE executes the program and it works perfectly, but when I launch the .exe file from my desktop, a window opens but doesn't show any output; it just closes suddenly. Does someone have any idea how to solve this issue?
Code:
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>

int main() {
    int N;
    int j;
    int Sum = 0;
    printf("Input a number.\n");
    scanf("%d", &N);
    for (j = 1; j < N; j++)
    {
        if (N % j == 0)
        {
            Sum += j;
        }
    }
    if (Sum == N)
    {
        printf("The number is perfect.\n");
    } else {
        printf("The number is not perfect.\n");
    }
    return 0;
}
(Screenshot: running the program with the Code::Blocks "Build and run" option)
(Screenshot: the brief window shown by the desktop .exe before it closes)
If someone can suggest a solution, I will be very thankful!
As was said, the program exits immediately after its completion. If you want to run the program by double-clicking it, you can place a pause at the end: for instance, if you put getchar(); just before the return 0; statement, the program will only exit after it receives input from the keyboard, that is, after you press a key.
I have a C application, one of whose jobs is to call an executable file. That file has performance-measurement routines inserted during compilation, at the level of intermediate code. It can measure time or L1/L2/L3 cache misses. In other words, I have modified the LLVM compiler to insert a call to that function and print the result to stdout for any compiled program.
Now, as I mentioned at the beginning, I would like to execute that program (with the result going to stdout) from a separate C application and save the result. The way I'm doing it right now is:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void executeProgram(const char* filename, char* time) {
    printf("Executing selected program %s...\n", filename);
    char filePath[100] = "/home/michal/thesis/Drafts/output/";
    strcat(filePath, filename);

    FILE *fp = popen(filePath, "r");
    char str[30];
    if (fp == NULL) {
        printf("Failed to run command\n");
        exit(1);
    }
    while (fgets(str, sizeof(str) - 1, fp) != NULL) {
        strcat(time, str);
    }
    pclose(fp);
}
where filename is the name of the compiled executable to run, and the result is saved into the time string.
The problem is that the results I'm getting are quite different and unstable compared to those returned by simply running the executable 'by hand' from the command line (./test16). They look like:
231425
229958
230450
228534
230033
230566
231059
232016
230733
236017
213179
90515
229775
213351
229316
231642
230875
So they're mostly around 230000 us, with some occasional drops. The same executable, run from within the other application, produces:
97097
88706
91418
97970
97972
94597
95846
95139
91070
95918
107006
89988
90882
91986
90997
88824
129136
94976
102191
94400
95215
95061
92115
96319
114091
95230
114500
95533
102294
108473
105730
Note that it is the same executable being called, yet the measured time it returns is different. The program being measured consists of a function call to a simple nested loop accessing array elements. Here is the code:
#include "test.h"
#include <stdio.h>

float data[1000][1000] = {0};

void test(void)
{
    int i0, i1;
    int N = 80;
    float mean[1000];
    for (i0 = 0; i0 < N; i0++)
    {
        mean[i0] = 0.0;
        for (i1 = 0; i1 < N; i1++) {
            mean[i0] += data[i0][i1];
        }
        mean[i0] /= 1000;
    }
}
I suspect there is something wrong with the way the program is invoked in the code; maybe the process should be forked or something? Any ideas?
You didn't specify where exactly your time-measuring subroutines are inserted, so all I can really offer is guesswork.
The results seem to hint at the exact opposite: running the application from the shell is slower, so I wouldn't worry about the way you're starting the process from the C code. My guess would be that when you run your program from the shell, it's the terminal that's slowing you down. When you run the process from your C code, you pipe the output back to your 'starter' application, which is already waiting for input on the pipe.
As a side note, consider switching from strcat to something safer, like strncat.
I'm hoping someone can help me out. I have not written much C in over a decade and only picked it back up two days ago, so please bear with me, as I am rusty. Thank you!
What:
I'm working on creating a very simple thread pool for an application. This code is written in C on CodeBlocks using GNU GCC for the compiler. It is built as a command line application. No additional files are linked or included.
The code should create X threads (in this case I have it set to 10), each of which sits and waits, watching an array entry (identified by the thread's index) for any incoming data it might need to process. Once a given child has processed the data coming in via the array, there is no need to pass the data back to the main thread; rather, the child should simply reset that array entry to 0 to indicate that it is ready to process another input. The main thread will receive requests and dole them out to whatever thread is available. If none are available, it will refuse to handle that input.
For simplicity's sake, the code below is a complete and working but trimmed-down version that DOES exhibit the stack corruption I am trying to track down. It compiles fine and initially runs fine, but after a few passes the threadIndex value in the child thread routine (workerThread) becomes corrupt and jumps to weird values, generally becoming the number of milliseconds I have put into the 'Sleep' call.
What I have checked:
The threadIndex variable is not a global or shared variable.
All arrays are plenty big enough to handle the max number of threads I am creating.
All loops have the loop variable reset to 0 before running.
I have not named multiple variables with the same name.
I use atomic_load to make sure I don't write to the same global array variable from two different threads at once (please note I am rusty; I may be misunderstanding how this part works).
I have placed test cases all over to see where the variable goes nuts, and I am stumped.
Best Guess
All of my research confirms what I recall from years back: I am likely going out of bounds somewhere and causing stack corruption. I have looked at numerous problems like this on Google and on Stack Overflow, and while they all point me to the same conclusion, I have been unable to figure out what specifically is wrong in my code.
#include <stdio.h>
#include <string.h>    /* for strerror */
#include <pthread.h>
#include <stdlib.h>
#include <conio.h>
#include <windows.h>   /* for Sleep */
//#include <unistd.h>

#define ESCAPE 27

int maxThreads = 10;
pthread_t tid[21];
int ret[21];
int threadIncoming[21];
int threadRunning[21];

struct arg_struct {
    char* arg1;
    int arg2;
};

//sick of the stupid upper/lowercase nonsense... boom... fixed
void* sleep(int time){ Sleep(time); return NULL; }

void* workerThread(void *arguments)
{
    //get the stuff passed in to us
    struct arg_struct *args = (struct arg_struct *)arguments;
    char *address = args->arg1;
    int threadIndex = args->arg2;

    //hold how many we have processed - we are unlikely to ever hit the max, so no need to round-robin this number at this point
    unsigned long processedCount = 0;

    //this never triggers, so it IS coming in correctly
    if(threadIndex > 20){
        printf("INIT ERROR! ThreadIndex = %d", threadIndex);
        sleep(1000);
    }

    unsigned long x = 0;
    pthread_t id = pthread_self();

    //as long as we should be running
    while(__atomic_load_n(&threadRunning[threadIndex], __ATOMIC_ACQUIRE)){
        //if and only if we have something to do...
        if(__atomic_load_n(&threadIncoming[threadIndex], __ATOMIC_ACQUIRE)){
            //simulate us doing something
            //for(x=0; x<(0xFFFFFFF); x++);
            sleep(2001);
            //the value going into sleep is CLEARLY somehow ending up in threadIndex, because you can change it to any number you want
            //and next thing you know the next line says "First thread processing done on (the value given to sleep)"
            printf("\n First thread processing done on %d\n", threadIndex);

            //all done doing something, so clear the incoming flag so we can reuse it for our next one
            //this error should not EVER be able to be thrown, but it is... something is corrupting our stack and going into memory that it shouldn't
            if(threadIndex > 20){ printf("ERROR! ThreadIndex = %d", threadIndex); }
            else{ __atomic_store_n(&threadIncoming[threadIndex], 0, __ATOMIC_RELEASE); }

            //increment the processed count
            ++processedCount;
        }
        else{ Sleep(10); }
    }

    //no need for atomicity here, I don't think, as this is only set on exit and not read until after everything is done
    ret[threadIndex] = processedCount;
    pthread_exit(&ret[threadIndex]);
    return NULL;
}

int main(void)
{
    int i = 0;
    int err;
    int *ptr[21];
    int doLoop = 1;

    //initialize these all to set the threads to running and the incoming status to NOT processing
    for(i = 0; i < maxThreads; i++){
        threadIncoming[i] = 0;
        threadRunning[i] = 1;
    }

    //create our threads
    for(i = 0; i < maxThreads; i++)
    {
        struct arg_struct args;
        args.arg1 = "here";
        args.arg2 = i;
        err = pthread_create(&(tid[i]), NULL, &workerThread, (void *)&args);
        if (err != 0){ printf("\ncan't create thread :[%s]", strerror(err)); }
    }

    //loop until we hit escape
    while(doLoop){
        //see if escape was pressed
        if(kbhit()){ if(getch() == ESCAPE){ doLoop = 0; } }
        //just for testing - the actual version would load only as needed
        for(i = 0; i < maxThreads; i++){
            //make sure we synchronize so we don't end up pointing into a garbage address or half-loading when a thread accesses us
            if(!__atomic_load_n(&threadIncoming[i], __ATOMIC_ACQUIRE)){
                __atomic_store_n(&threadIncoming[i], 1, __ATOMIC_RELEASE);
            }
        }
    }

    //exiting...
    printf("\n'Esc' pressed. Now exiting...\n");

    //tell them all to end...
    for(i = 0; i < maxThreads; i++){ __atomic_store_n(&threadRunning[i], 0, __ATOMIC_RELEASE); }

    //join them all back up - if we had an actual worthwhile value here we could use it
    for(i = 0; i < maxThreads; i++){
        pthread_join(tid[i], (void**)&(ptr[i]));
        printf("\n return value from thread %d is [%d]\n", i, *ptr[i]);
    }
    return 0;
}
Output
Here is the output I get. Note that how long it takes before it starts going crazy seems to vary, but not by much.
(Screenshot: output screen with the error)
I don't trust your handling of args; there seems to be a race condition. What if you create N threads before the first one of them gets to run? Then the first thread created will probably see the args for the Nth thread rather than for the first, and so on.
I don't believe there's any guarantee that automatic variables used in a loop like that are created in non-overlapping areas; after all, they go out of scope with each iteration of the loop.
This is my program:
#include <stdio.h>

int main() {
    FILE *logh;
    logh = fopen("/home/user1/data.txt", "a+");
    if (logh == NULL)
    {
        printf("error creating file \n");
        return -1;
    }

    // write some data to the log handle and check if it gets written..
    int result = fprintf(logh, "this is some test data \n");
    if (result > 0)
        printf("write successful \n");
    else
        printf("couldn't write the data to filesystem \n");

    while (1) {
    };

    fclose(logh);
    return 0;
}
When I run this program, I see that the file gets created but it does not contain any data. What I understand is that the data is cached in memory before it is actually written to the filesystem, to avoid multiple IOs and increase performance. I also know that I can call fsync/fdatasync inside the program to force a sync, but can I force the sync from outside, without having to change the program?
I tried running the sync command from the Linux shell, but it does not make the data appear in the file. :(
Please help if anybody knows an alternative way to do this.
One useful piece of information: I was researching this some more and finally found that, to remove the internal buffering altogether, the FILE stream can be set to the _IONBF mode using int setvbuf(FILE *stream, char *buf, int mode, size_t size).
The IO functions using FILE pointers cache the data to be written in an internal buffer within the program's memory until they decide to perform a system call to 'really' write it (for normal files, usually when the amount of cached data reaches BUFSIZ).
Until then, there is no way to force writing from outside the program.
The problem is that your program never closes the file, because of your while statement. Remove these lines:
while (1) {
};
If the intent is to wait forever, then close the file with fclose before entering the while loop.
I have a program that communicates through a TCP socket, with a server and a client.
Among other things, I have a buffer of pending requests from the client, and I also have one thread that prints the requests the main thread places in the buffer.
So, for example, I have 3 requests to print 3 files, and the printer thread has to print the 3 files one after another. For this I have a function "get" that gets the file to print and a function "put" that puts files in the buffer. When I get something from the buffer it works pretty well, and the printing of the file works too.
The problem arises when the client wants to know how many files are in the buffer waiting to be printed. I need a counter that is incremented every time I put something in the buffer and decremented every time I get something; it should be easy.
But it doesn't work: my program only increments the variable and never decrements it.
int count = 0;
struct prodcons buffer;
/* some other code that is not important for now and works well */

void main_thread(int port_number){
    /* more code */
    put(&buffer, f_open);
    count++;   /* <-- this increments every time I do a put */
    nw = myWriteSocket(sc, "File has been Queued.", ARGVMAX);
    /* more code */
}

void *printing(void *arg){
    /* variables and other code that works */
    file_desc = get(&buffer);
    count--;   /* <-- this never decrements, but the get is working because the files are printed */
    /* ... */
}

int main (int argc, char *argv[]) {
    /* more code */
    pthread_create(&printer_thread, NULL, printing, (void *)terminal);
    main_thread(port_number);
}
What can the problem be? The get is working and everything else works too, so why doesn't the count-- take effect?
Sorry if the question is not well structured.