What's the smartest way to run an application continuously so that it doesn't exit when it reaches the bottom of main? Instead it should start again from the top of main and only exit when commanded. (This is in C.)
You should always have some way of exiting cleanly. I'd suggest moving the code off to another function that returns a flag to say whether to exit or not.
int DoStuff(void);   /* forward declaration so main can call it */

int main(int argc, char *argv[])
{
// param parsing, init code
while (DoStuff());
// cleanup code
return 0;
}
int DoStuff(void)
{
// code that you would have had in main
if (we_should_exit)
return 0;
return 1;
}
Most applications that don't just fall through to the end of main enter some kind of event-processing loop that allows for event-driven programming.
Under Win32 development, for instance, you'd write your WinMain function to continually handle new messages until it receives the WM_QUIT message telling the application to finish. This code typically takes the following form:
// ...meanwhile, somewhere inside WinMain()
MSG msg;
while (GetMessage(&msg, NULL, 0, 0))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
If you are writing a game using SDL, you would loop on SDL events until deciding to exit, such as when you detect that the user has hit the Esc key. Some code to do that might resemble the following:
bool done = false;
while (!done)
{
SDL_Event event;
while (SDL_PollEvent(&event))
{
switch (event.type)
{
case SDL_QUIT:
done = true;
break;
case SDL_KEYDOWN:
if (event.key.keysym.sym == SDLK_ESCAPE)
{
done = true;
}
break;
}
}
}
You may also want to read about Unix Daemons and Windows Services.
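For the Unix side, the core of a daemon is just a process that detaches itself from the terminal and then sits in its own service loop until told to stop. A rough sketch, assuming a POSIX system (real daemons also handle signals, logging, and a PID file):
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

static int daemonize(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;          /* fork failed */
    if (pid > 0)
        exit(0);            /* parent exits; the child keeps running */

    if (setsid() < 0)       /* detach from the controlling terminal */
        return -1;
    umask(0);
    if (chdir("/") != 0)
        return -1;

    /* after this, the usual while-loop runs until the daemon is told to stop */
    return 0;
}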
while (true)
{
....
}
To elaborate a bit more, you want to put something in that loop that lets the user perform repeated actions, whether that's reading keystrokes and acting on the keys pressed, or reading data from a socket and sending back a response.
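As a minimal sketch of such a loop (the names and behaviour here are purely illustrative), this reads lines from standard input, performs an action for each one, and exits only when the user types "quit":
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[128];

    while (fgets(line, sizeof line, stdin))
    {
        line[strcspn(line, "\n")] = '\0';    /* strip the trailing newline */
        if (strcmp(line, "quit") == 0)
            break;                           /* the "command" to exit */
        printf("you typed: %s\n", line);     /* the repeated action */
    }
    return 0;
}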
There are a number of ways to "command" your application to exit (such as a global exit flag or return codes). Some have already touched on using a return code, so I'll put forward an easy modification to an existing program using an exit flag.
Let's assume your program executes a system call to output a directory listing (full directory or a single file):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (int argCount, char *argValue[]) {
    char *cmdLine;
    if (argCount < 2) {
        system ("ls");
    } else {
        cmdLine = malloc (strlen (argValue[1]) + 4);
        sprintf (cmdLine, "ls %s", argValue[1]);
        system (cmdLine);
        free (cmdLine);
    }
    return 0;
}
How do we go about making that loop until an exit condition is met? The following steps are taken:
Change main() to oldMain().
Add new exitFlag.
Add new main() to continuously call oldMain() until exit flagged.
Change oldMain() to signal exit at some point.
This gives the following code:
static int exitFlag = 0;
static int oldMain (int argCount, char *argValue[]);   /* forward declaration */

int main (int argCount, char *argValue[]) {
    int retVal = 0;
    while (!exitFlag) {
        retVal = oldMain (argCount, argValue);
    }
    return retVal;
}

static int oldMain (int argCount, char *argValue[]) {
    char *cmdLine;
    if (argCount < 2) {
        system ("ls");
    } else {
        cmdLine = malloc (strlen (argValue[1]) + 4);
        sprintf (cmdLine, "ls %s", argValue[1]);
        system (cmdLine);
        free (cmdLine);
    }
    if (someCondition)
        exitFlag = 1;
    return 0;
}
I want to control the log level displayed by my C program.
For example, if I launch it with ./my_program -log=warning, it displays my warning logs on standard output.
Same for ./my_program -log=debug or ./my_program -log=errors.
I first thought about doing it manually in C with some printf calls, but I also use GLib in my program for other reasons, and I am pretty sure it has its own way of handling this.
So my question is: how do I handle debug logs in a clean and robust way with GLib?
You can do this with GLib's logging, but it's not direct.
Two parts:
debug messages
other messages.
debug
By default, debug messages are hidden; you can show them by setting the environment variable G_MESSAGES_DEBUG to all.
other
To hide the other messages, you can install your own log function that discards them, using g_log_set_handler.
This little code will do what you want:
#include <stdlib.h>     /* setenv */
#include <string.h>     /* strcmp */
#include <glib.h>
/* a log function that will eat the message without displaying it */
void log_quiet(const gchar* domain, GLogLevelFlags level, const gchar *message, gpointer user_data)
{
// nop
}
int main(int argc, char *argv[])
{
/* message filter */
int filter = 0;
/* decode the log level given as the first argument (e.g. ./my_program warning) */
if (argc > 1)
{
if (0 == strcmp(argv[1], "debug")) {
filter = G_LOG_LEVEL_DEBUG;
} else if (0 == strcmp(argv[1], "message")) {
filter = G_LOG_LEVEL_MESSAGE;
} else if (0 == strcmp(argv[1], "warning")) {
filter = G_LOG_LEVEL_WARNING;
} else if (0 == strcmp(argv[1], "critical")) {
filter = G_LOG_LEVEL_CRITICAL;
} else if (0 == strcmp(argv[1], "error")) {
filter = G_LOG_LEVEL_ERROR;
}
}
/* set the message handler*/
switch (filter) {
case G_LOG_LEVEL_DEBUG:
/* in case of debug, we must make it visible */
setenv("G_MESSAGES_DEBUG", "all", 1);
break;
/* other case, hide messages progressively */
case G_LOG_LEVEL_ERROR:
g_log_set_handler(NULL, G_LOG_LEVEL_CRITICAL, log_quiet, NULL);
case G_LOG_LEVEL_CRITICAL:
g_log_set_handler(NULL, G_LOG_LEVEL_WARNING, log_quiet, NULL);
case G_LOG_LEVEL_WARNING:
g_log_set_handler(NULL, G_LOG_LEVEL_MESSAGE, log_quiet, NULL);
case G_LOG_LEVEL_MESSAGE:
g_log_set_handler(NULL, G_LOG_LEVEL_DEBUG, log_quiet, NULL);
}
/* test */
g_debug("Hello from g_debug");
g_message("Hello from g_message");
g_warning("Hello from g_warning");
g_critical("Hello from g_critical");
g_error("Hello from g_error");
return 0;
}
I'm having trouble terminating my server in my multithreaded program (one server, multiple clients).
When the variable global_var, which counts the number of currently connected clients, gets set to 0, the server should terminate, but it doesn't.
What I think is happening is that, since accept() is blocking, the code never reaches the break condition in the main loop.
It's breaking correctly out of thread_func but then it blocks inside the while loop, just before the accept() call and after printing "Exiting thread_func".
volatile int finished = 0; // Gets set to 1 by catching SIGINT/SIGSTOP
int global_var = 0; // When it gets to 0, server should terminate
int server_fd;
void * thread_func(void* arg)
{
do_some_pre_stuff();
while(1)
{
if(!global_var)
{
close(server_fd);
finished = 1;
break;
}
if(recv(...) > 0)
{
do_more_stuff()
}
else
{
disconnect_client();
global_var--;
break;
}
}
free_more_ressources();
return NULL;
}
int main()
{
do_initial_stuff();
init_socket();
listen();
while (!finished)
{
if( (fd = accept(server_fd,...)) == -1)
exit(-1);
global_var++;
/* Some intermediate code */
if(!global_var)
break;
// Thread for the newly connected player
if(pthread_create(&thread_id[...], NULL, thread_func, (void*)some_arg))
exit(-1);
}
free_resources();
puts("Exiting thread_func");
}
I tried the advice listed here without success (except the pipe answer, not trying to mess with pipes).
I'm new to socket programming, but what I tried so far looked correct to me, and none of the solutions worked (including semaphores, pthread_cancel, etc.).
PS: synchronization has been implemented, just omitted here for readability
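For illustration only, one common way to keep accept() from blocking forever is to wait on the listening socket with poll() and a timeout, so the loop wakes up regularly to re-check the flag. A rough sketch reusing the variables from the question (finished, server_fd, global_var), assuming a POSIX system:
#include <poll.h>

/* fragment of main(), replacing the bare accept() call */
while (!finished)
{
    struct pollfd pfd;
    pfd.fd = server_fd;
    pfd.events = POLLIN;

    int ready = poll(&pfd, 1, 500);     /* 0 = timeout, <0 = error or EINTR */
    if (ready < 0)
        break;
    if (ready == 0)
        continue;                       /* nothing pending: re-check 'finished' */

    int fd = accept(server_fd, NULL, NULL);
    if (fd == -1)
        continue;
    global_var++;
    /* create the client thread as before */
}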
I am writing a concurrent C program where I want to wait for all threads to finish in the main().
Based on this solution, I wrote the following code in main():
// Create threads
pthread_t cid[num_mappers];
int t_iter;
for (t_iter = 0; t_iter < num_mappers; t_iter++){
pthread_create(&(cid[t_iter]), NULL, &map_consumer, NULL);
}
// Wait for all threads to finish
for (t_iter = 0; t_iter < num_mappers; t_iter++){
printf("Joining %d\n", t_iter);
int result = pthread_join(cid[t_iter], NULL);
}
printf("Done mapping.\n");
The function passed into threads is defined as:
// Consumer function for mapping phase
void *map_consumer(void *arg){
while (1){
pthread_mutex_lock(&g_lock);
if (g_cur >= g_numfull){
// No works to do, just quit
return NULL;
}
// Get the file name
char *filename = g_job_queue[g_cur];
g_cur++;
pthread_mutex_unlock(&g_lock);
// Do the mapping
printf("%s\n", filename);
g_map(filename);
}
}
The threads are all successfully created and executed, but the join loop will never finish if num_mappers >= 2.
You return without unlocking the mutex:
pthread_mutex_lock(&g_lock);
if (g_cur >= g_numfull){
// No works to do, just quit
return NULL; <-- mutex is still locked here
}
// Get the file name
char *filename = g_job_queue[g_cur];
g_cur++;
pthread_mutex_unlock(&g_lock);
So only one thread ever returns and ends - the first one, but since it never unlocks the mutex, the other threads remain blocked.
You need something more like
pthread_mutex_lock(&g_lock);
if (g_cur >= g_numfull){
// No works to do, just quit
pthread_mutex_unlock(&g_lock);
return NULL;
}
// Get the file name
char *filename = g_job_queue[g_cur];
g_cur++;
pthread_mutex_unlock(&g_lock);
I'm new to using the pthread library in C and I have an assignment for my class to write a simple program using them. The basic description of the program is it takes 1 or more input files containing website names and 1 output file name. I then need to create 1 thread per input file to read in the website names and push them onto a queue. Then I need to create a couple of threads to pull those names off of the queue, find their IP Address, and then write that information out to the output file. The command line arguments are expected as follows:
./multi-lookup [one or more input files] [single output file name]
My issue is this. Whenever I run the program with only 1 thread to push information to the output file, everything works properly. When I make it two threads, the program hangs and none of my testing "printf" statements are even printed. My best guess is that deadlock is occurring somehow and that I'm not using my mutexes properly, but I can't figure out how to fix it. Please help!
If you need any information that I'm not providing then just let me know. Sorry for the lack of comments in the code.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <pthread.h>
#include "util.h"
#include "queue.h"
#define STRING_SIZE 1025
#define INPUTFS "%1024s"
#define USAGE "<inputFilePath> <outputFilePath>"
#define NUM_RESOLVERS 2
queue q;
pthread_mutex_t locks[2];
int requestors_finished;
void* requestors(void* input_file);
void* resolvers(void* output_file);
int main(int argc, char* argv[])
{
FILE* inputfp = NULL;
FILE* outputfp = NULL;
char errorstr[STRING_SIZE];
pthread_t requestor_threads[argc - 2];
pthread_t resolver_threads[NUM_RESOLVERS];
int return_code;
requestors_finished = 0;
if(queue_init(&q, 10) == QUEUE_FAILURE)
fprintf(stderr, "Error: queue_init failed!\n");
if(argc < 3)
{
fprintf(stderr, "Not enough arguments: %d\n", (argc - 1));
fprintf(stderr, "Usage:\n %s %s\n", argv[0], USAGE);
return 1;
}
pthread_mutex_init(&locks[0], NULL);
pthread_mutex_init(&locks[1], NULL);
int i;
for(i = 0; i < (argc - 2); i++)
{
inputfp = fopen(argv[i+1], "r");
if(!inputfp)
{
sprintf(errorstr, "Error Opening Input File: %s", argv[i]);
perror(errorstr);
break;
}
return_code = pthread_create(&(requestor_threads[i]), NULL, requestors, inputfp);
if(return_code)
{
printf("ERROR: return code from pthread_create() is %d\n", return_code);
exit(1);
}
}
outputfp = fopen(argv[i+1], "w");
if(!outputfp)
{
sprintf(errorstr, "Errord opening Output File: %s", argv[i+1]);
perror(errorstr);
exit(1);
}
for(i = 0; i < NUM_RESOLVERS; i++)
{
return_code = pthread_create(&(resolver_threads[i]), NULL, resolvers, outputfp);
if(return_code)
{
printf("ERROR: return code from pthread_create() is %d\n", return_code);
exit(1);
}
}
for(i = 0; i < (argc - 2); i++)
pthread_join(requestor_threads[i], NULL);
requestors_finished = 1;
for(i = 0; i < NUM_RESOLVERS; i++)
pthread_join(resolver_threads[i], NULL);
pthread_mutex_destroy(&locks[0]);
pthread_mutex_destroy(&locks[1]);
return 0;
}
void* requestors(void* input_file)
{
char* hostname = (char*) malloc(STRING_SIZE);
FILE* input = input_file;
while(fscanf(input, INPUTFS, hostname) > 0)
{
while(queue_is_full(&q))
usleep((rand()%100));
if(!queue_is_full(&q))
{
pthread_mutex_lock(&locks[0]);
if(queue_push(&q, (void*)hostname) == QUEUE_FAILURE)
fprintf(stderr, "Error: queue_push failed on %s\n", hostname);
pthread_mutex_unlock(&locks[0]);
}
hostname = (char*) malloc(STRING_SIZE);
}
printf("%d\n", queue_is_full(&q));
free(hostname);
fclose(input);
pthread_exit(NULL);
}
void* resolvers(void* output_file)
{
char* hostname;
char ipstr[INET6_ADDRSTRLEN];
FILE* output = output_file;
int is_empty = queue_is_empty(&q);
//while(!queue_is_empty(&q) && !requestors_finished)
while((!requestors_finished) || (!is_empty))
{
while(is_empty)
usleep((rand()%100));
pthread_mutex_lock(&locks[0]);
hostname = (char*) queue_pop(&q);
pthread_mutex_unlock(&locks[0]);
if(dnslookup(hostname, ipstr, sizeof(ipstr)) == UTIL_FAILURE)
{
fprintf(stderr, "DNSlookup error: %s\n", hostname);
strncpy(ipstr, "", sizeof(ipstr));
}
pthread_mutex_lock(&locks[1]);
fprintf(output, "%s,%s\n", hostname, ipstr);
pthread_mutex_unlock(&locks[1]);
free(hostname);
is_empty = queue_is_empty(&q);
}
pthread_exit(NULL);
}
Although I'm not familiar with your "queue.h" library, you need to pay attention to the following:
When you check whether your queue is full (or empty) you are not holding the mutex, which means the following scenario can happen:
Some requestor thread (let's call it thread1) checks whether the queue is full, and just before it executes pthread_mutex_lock(&locks[0]); (and after if(!queue_is_full(&q))) thread1 gets context-switched out.
Other requestor threads fill the queue up, and when our thread1 finally gets hold of the mutex it will try to insert into the full queue. Now, if your queue implementation crashes when you try to insert more elements into an already full queue, thread1 will never unlock the mutex and you'll have a deadlock.
Another scenario:
Some resolver thread runs first: requestors_finished is initially 0, so (!requestors_finished) || (!is_empty) is initially true.
But because the queue is still empty, is_empty is true.
This thread will reach while(is_empty) usleep((rand()%100)); and sleep forever, because is_empty is never updated inside that loop. Since you pthread_join this thread, your program will never terminate.
The general idea to remember is that when you access some resource that is not atomic and might be accessed by other threads you need to make sure you're the only one performing actions on this resource.
Using a mutex is OK, but you should consider that you cannot anticipate when a context switch will occur. So if you want to check, for example, whether the queue is empty, you should do so while holding the mutex and not unlock it until you're finished with it; otherwise there is no guarantee that it will stay empty when the next line executes.
You might also want to consider reading more about the producer-consumer problem.
To know (and control) when the consumer (resolver) threads should run and when the producer threads produce, you should consider using condition variables.
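For example, a resolver built around a condition variable could look roughly like this. It is only a sketch: it reuses the queue, locks[0] and the helpers from the question, adds a hypothetical queue_not_empty condition variable, and assumes the requestor threads call pthread_cond_signal after each push and pthread_cond_broadcast (while holding locks[0]) once requestors_finished is set:
pthread_cond_t queue_not_empty = PTHREAD_COND_INITIALIZER;

void* resolvers(void* output_file)
{
    FILE* output = output_file;
    char ipstr[INET6_ADDRSTRLEN];

    while (1)
    {
        pthread_mutex_lock(&locks[0]);
        /* pthread_cond_wait releases the mutex while sleeping and re-acquires it on wakeup */
        while (queue_is_empty(&q) && !requestors_finished)
            pthread_cond_wait(&queue_not_empty, &locks[0]);
        if (queue_is_empty(&q) && requestors_finished)
        {
            pthread_mutex_unlock(&locks[0]);
            break;                          /* no more work will ever arrive */
        }
        char* hostname = (char*) queue_pop(&q);
        pthread_mutex_unlock(&locks[0]);

        if (dnslookup(hostname, ipstr, sizeof(ipstr)) == UTIL_FAILURE)
        {
            fprintf(stderr, "DNSlookup error: %s\n", hostname);
            strncpy(ipstr, "", sizeof(ipstr));
        }
        pthread_mutex_lock(&locks[1]);
        fprintf(output, "%s,%s\n", hostname, ipstr);
        pthread_mutex_unlock(&locks[1]);
        free(hostname);
    }
    pthread_exit(NULL);
}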
Some misc. stuff:
pthread_t requestor_threads[argc - 2]; uses a VLA, and not in a good way: think about what happens if I give no parameters to your program. Either decide on some maximum and define it, or allocate the array dynamically after having checked the validity of the input.
IMHO the requestors threads should open the file themselves
There might be some more problems but start by fixing those.
I'm trying to find out how to fix the memory leaks Valgrind reports when I run this program. The leaks come from the two allocations in nShell_client_main, but I'm not sure how to properly free them.
I've tried freeing them in nShell_Connect, but that causes libuv to abort the program. I've tried freeing them at the end of nShell_client_main, but then I get read/write errors when closing the loop. Does anyone know how I'm supposed to close these handles? I've read this, which got me started, but it seems outdated because uv_ip4_addr has a different prototype in the latest version.
(nShell_main is the "entry" point)
#include "nPort.h"
#include "nShell-main.h"
void nShell_Close(
uv_handle_t * term_handle
){
}
void nShell_Connect(uv_connect_t * term_handle, int status){
uv_close((uv_handle_t *) term_handle, 0);
}
nError * nShell_client_main(nShell * n_shell, uv_loop_t * n_shell_loop){
int uv_error = 0;
nError * n_error = 0;
uv_tcp_t * n_shell_socket = 0;
uv_connect_t * n_shell_connect = 0;
struct sockaddr_in dest_addr;
n_shell_socket = malloc(sizeof(uv_tcp_t));
if (!n_shell_socket){
// handle error
}
uv_error = uv_tcp_init(n_shell_loop, n_shell_socket);
if (uv_error){
// handle error
}
uv_error = uv_ip4_addr("127.0.0.1", NPORT, &dest_addr);
if (uv_error){
// handle error
}
n_shell_connect = malloc(sizeof(uv_connect_t));
if (!n_shell_connect){
// handle error
}
uv_error = uv_tcp_connect(n_shell_connect, n_shell_socket, (struct sockaddr *) &dest_addr, nShell_Connect);
if (uv_error){
// handle error
}
uv_error = uv_run(n_shell_loop, UV_RUN_DEFAULT);
if (uv_error){
// handle error
}
return 0;
}
nError * nShell_loop_main(nShell * n_shell){
int uv_error = 0;
nError * n_error = 0;
uv_loop_t * n_shell_loop = 0;
n_shell_loop = malloc(sizeof(uv_loop_t));
if (!n_shell_loop){
// handle error
}
uv_error = uv_loop_init(n_shell_loop);
if (uv_error){
// handle error
}
n_error = nShell_client_main(n_shell, n_shell_loop);
if (n_error){
// handle error
}
uv_loop_close(n_shell_loop);
free(n_shell_loop);
return 0;
}
The assertion is happening at the end of the switch statement in this excerpt of code (taken from Joyent's libUV page on Github):
void uv_close(uv_handle_t* handle, uv_close_cb close_cb) {
assert(!(handle->flags & (UV_CLOSING | UV_CLOSED)));
handle->flags |= UV_CLOSING;
handle->close_cb = close_cb;
switch (handle->type) {
case UV_NAMED_PIPE:
uv__pipe_close((uv_pipe_t*)handle);
break;
case UV_TTY:
uv__stream_close((uv_stream_t*)handle);
break;
case UV_TCP:
uv__tcp_close((uv_tcp_t*)handle);
break;
case UV_UDP:
uv__udp_close((uv_udp_t*)handle);
break;
case UV_PREPARE:
uv__prepare_close((uv_prepare_t*)handle);
break;
case UV_CHECK:
uv__check_close((uv_check_t*)handle);
break;
case UV_IDLE:
uv__idle_close((uv_idle_t*)handle);
break;
case UV_ASYNC:
uv__async_close((uv_async_t*)handle);
break;
case UV_TIMER:
uv__timer_close((uv_timer_t*)handle);
break;
case UV_PROCESS:
uv__process_close((uv_process_t*)handle);
break;
case UV_FS_EVENT:
uv__fs_event_close((uv_fs_event_t*)handle);
break;
case UV_POLL:
uv__poll_close((uv_poll_t*)handle);
break;
case UV_FS_POLL:
uv__fs_poll_close((uv_fs_poll_t*)handle);
break;
case UV_SIGNAL:
uv__signal_close((uv_signal_t*) handle);
/* Signal handles may not be closed immediately. The signal code will */
/* itself close uv__make_close_pending whenever appropriate. */
return;
default:
assert(0); // assertion is happening here
}
uv__make_close_pending(handle);
}
I could call uv__tcp_close manually, but it's not in the public headers (and probably not the right solution anyway).
libuv is not done with a handle until its close callback is called. That is the exact moment when you can free the handle.
I see you call uv_loop_close, but you don't check for the return value. If there are still pending handles, it will return UV_EBUSY, so you should check for that.
If you want to close a loop and close all handles, you need to do the following (a rough sketch in code follows the list):
Use uv_stop to stop the loop
Use uv_walk and call uv_close on all handles which are not closing
Run the loop again with uv_run so all close callbacks are called and you can free the memory in the callbacks
Call uv_loop_close, it should return 0 now
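Put together, a rough sketch of those four steps might look like this (assuming, for the sake of the example, that every handle was allocated with malloc; adjust the close callback if that is not the case):
#include <stdlib.h>
#include <uv.h>

static void on_close_free(uv_handle_t* handle)
{
    free(handle);                        /* the close callback is the safe point to free */
}

static void on_walk_close(uv_handle_t* handle, void* arg)
{
    (void) arg;
    if (!uv_is_closing(handle))
        uv_close(handle, on_close_free);
}

static void shutdown_loop(uv_loop_t* loop)
{
    uv_stop(loop);                       /* 1. stop the loop */
    uv_walk(loop, on_walk_close, NULL);  /* 2. request close on every handle */
    uv_run(loop, UV_RUN_DEFAULT);        /* 3. run again so the close callbacks fire */
    uv_loop_close(loop);                 /* 4. should return 0 now */
}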
I finally figured out how to stop a loop and clean up all handles.
I created a bunch of handles and SIGINT signal handle:
uv_signal_t *sigint = new uv_signal_t;
uv_signal_init(uv_default_loop(), sigint);
uv_signal_start(sigint, on_sigint_received, SIGINT);
When SIGINT is received (Ctrl+C in console is pressed) the on_sigint_received callback is called.
The on_sigint_received looks like:
void on_sigint_received(uv_signal_t *handle, int signum)
{
int result = uv_loop_close(handle->loop);
if (result == UV_EBUSY)
{
uv_walk(handle->loop, on_uv_walk, NULL);
}
}
It triggers the callback function on_uv_walk:
void on_uv_walk(uv_handle_t* handle, void* arg)
{
uv_close(handle, on_uv_close);
}
It tries to close each open libuv handle.
Note that, unlike saghul's answer, I do not call uv_stop before uv_walk.
After the on_sigint_received function is called, the libuv loop continues execution and on the next iteration calls on_uv_close for each open handle. If you call the uv_stop function, then the on_uv_close callbacks will not be called.
void on_uv_close(uv_handle_t* handle)
{
if (handle != NULL)
{
delete handle;
}
}
After that, libuv has no open handles and finishes the loop (exits from uv_run):
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
int result = uv_loop_close(uv_default_loop());
if (result)
{
cerr << "failed to close libuv loop: " << uv_err_name(result) << endl;
}
else
{
cout << "libuv loop is closed successfully!\n";
}
I like the solution given by Konstantin Gindemit.
I did run into a couple of problems, however: his on_uv_close() function ends with a core dump, and the uv_signal_t variable was causing Valgrind to report a "definitely lost" memory leak.
I am using his code with fixes for these two situations.
void on_uv_walk(uv_handle_t* handle, void* arg) {
uv_close(handle, NULL);
}
void on_sigint_received(uv_signal_t *handle, int signum) {
int result = uv_loop_close(handle->loop);
if(result == UV_EBUSY) {
uv_walk(handle->loop, on_uv_walk, NULL);
}
}
int main(int argc, char *argv[]) {
uv_signal_t *sigint = new uv_signal_t;
uv_signal_init(uv_default_loop(), sigint);
uv_signal_start(sigint, on_sigint_received, SIGINT);
uv_loop_t* main_loop = uv_default_loop();
...
uv_run(main_loop, UV_RUN_DEFAULT);
uv_loop_close(uv_default_loop());
delete sigint;
return 0;
}