How to implement a "Quit Y/N" prompt when the close button is pressed (C)

I'm trying to understand the Windows threading system, but I've run into a problem I cannot fully comprehend:
I want my console not to close when the close button is pressed, but instead to ask whether it should close. Everything runs in a Thread() function, which checks a global static volatile bool bActive; if it is false, the thread just ends. I'd expect the handler to print a message and, if the answer is no, to continue working normally.
I've tried this:
// in console creation function
SetConsoleCtrlHandler((PHANDLER_ROUTINE)CloseHandler, TRUE);
and
static bool CloseHandler(DWORD event)
{
    if (event == CTRL_CLOSE_EVENT)
    {
        printf("close event");
        if (bActive)
        {
            InterlockedDecrement(&bActive);
            WaitForSingleObject(hThread, INFINITE); // just wait for the thread to finish or restart
        }
    }
    return true;
}
but the app closes regardless of the choice it is supposed to offer.
here is the rest of the code:
DWORD _stdcall VGameThread(void* _self)
{
    while (bActive)
    {
        while (bActive)
        {
            // do its things
        }
        if (on())
            break;
        else
            InterlockedIncrement(&bActive);
    }
    return 0;
}

int on(void)
{
    int i;
    scanf("%d", &i);
    if (i == 1)
        return 1;
    else
        return 0;
}

Related

Closing Libuv Loop Correctly During Initialization

I am initializing a loop in libuv, but if I need to return after I initialized the loop but before I have called uv_run, how do I correctly clean up all memory and file descriptors? Here is my example code, loop being uv_loop_t* and server being uv_tcp_t*:
if (uv_loop_init(loop) < 0) {
    return -1;
}
if (uv_tcp_init(loop, server) < 0) {
    // What code here?
    return -1;
}
if (some_other_function() < 0) {
    // What code here?
    return -1;
}
uv_run(loop, UV_RUN_DEFAULT);
According to this question, I should stop, walk and run the loop, closing all the handles; but that assumes I'm already running the loop, which I'm not. I could just call uv_loop_close(loop), but that doesn't free the handles.
As mentioned in the link, you need to do something like this. Note that the close callbacks only fire once the loop turns again, so uv_run must be called after uv_walk, not inside the walk callback:
uv_loop_init(&loop);
uv_tcp_init(&loop, &server);
uv_walk(&loop,
    [](uv_handle_t* handle, void* arg) {
        printf("closing...%p\n", handle);
        uv_close(handle, [](uv_handle_t* handle) {
            printf("closed...%p\n", handle);
        });
    },
    NULL);
uv_run(&loop, UV_RUN_ONCE);  // let the close callbacks run
uv_loop_close(&loop);

Using flock, open and close to implement a many-readers/single-writer lock

I've got a project consisting of multiple processes that can read or write a single database. I wish to implement a single-writer/multiple-readers lock synchronized by a lock file, using the system calls flock/open/close.
Upon lock failure, any re-attempt to take the lock will be made by the higher level that requested it (unlike a spin-lock).
Unfortunately, while testing this model, it failed in a scenario where an unlock wasn't preceded by a lock.
Perhaps you can help me find what I did wrong here:
// Keep read and write file descriptors as global variables.
// Assuming no more than 1 thread can access the db in each process.
int write_descriptor = 0;
int read_descriptor = 0;

int lock_write() {
    if ((write_descriptor = open(LOCKFILE, O_RDWR | O_CREAT, 0644)) < 0) {
        return LOCK_FAIL;
    }
    if (flock(write_descriptor, LOCK_EX) < 0) {
        close(write_descriptor);
        write_descriptor = 0;
        return LOCK_FAIL;
    }
    return LOCK_SUCCESS;
}

int unlock_write() {
    if (!write_descriptor) {
        // Sanity: tried to unlock before locking.
        return LOCK_FAIL;
    }
    if (flock(write_descriptor, LOCK_UN) < 0) {
        // Doing nothing, because even if the unlock failed we
        // will close the fd anyway, which releases all locks.
    }
    close(write_descriptor);
    write_descriptor = 0;
    return LOCK_SUCCESS;
}

int lock_read() {
    if ((read_descriptor = open(LOCKFILE, O_RDONLY)) < 0) {
        return LOCK_FAIL;
    }
    if (flock(read_descriptor, LOCK_SH) < 0) {
        close(read_descriptor);
        return LOCK_FAIL;
    }
    return LOCK_SUCCESS;
}

int unlock_read() {
    if (!read_descriptor) {
        // Sanity: tried to unlock before locking.
        return LOCK_FAIL;
    }
    if (flock(read_descriptor, LOCK_UN) < 0) {
        // Doing nothing, because even if the unlock failed we
        // will close the fd anyway, which releases all locks.
    }
    close(read_descriptor);
    read_descriptor = 0;
    return LOCK_SUCCESS;
}

int read_db() {
    if (lock_read() != LOCK_SUCCESS) {
        return DB_FAIL;
    }
    // read from db
    if (unlock_read() != LOCK_SUCCESS) {
        // Closing the fd also unlocks, so we can fail here (can I assume that?)
    }
    return 0; // success
}

int write_db() {
    if (lock_write() != LOCK_SUCCESS) {
        return DB_FAIL;
    }
    // write to db
    if (unlock_write() != LOCK_SUCCESS) {
        // Closing the fd also unlocks, so we can fail here (can I assume that?)
    }
    return 0; // success
}
In both lock_read and lock_write, add this as the first line:
assert((read_descriptor == 0) && (write_descriptor == 0));
In unlock_read, add this:
assert(read_descriptor != 0);
And in unlock_write, add this:
assert(write_descriptor != 0);
And change code like:
if (flock(read_descriptor, LOCK_SH) < 0) {
    close(read_descriptor);
    return LOCK_FAIL;
}
to:
if (flock(read_descriptor, LOCK_SH) < 0) {
    close(read_descriptor);
    read_descriptor = 0;
    return LOCK_FAIL;
}
Do the same for the write code, so that any time a descriptor is closed the corresponding global is set to zero. (You really should use -1 for an invalid file descriptor, since zero is a legal descriptor.)
Make a debug build and run it. When an assert trips, you'll have your culprit.

Closing libUV Handles Correctly

I'm trying to find out how to fix the memory leaks Valgrind reports while running this program. The leaks come from the two allocations in nShell_client_main, but I'm not sure how to properly free them.
I've tried freeing them in nShell_Connect, but that causes libUV to abort the program. I've tried freeing them at the end of nShell_client_main, but then I get read/write errors when closing the loop. Does anyone know how I'm supposed to close these handles? I've read this, which got me started, but it seems outdated because uv_ip4_addr has a different prototype in the latest version.
(nShell_main is the "entry" point)
#include "nPort.h"
#include "nShell-main.h"
void nShell_Close(
    uv_handle_t * term_handle
){
}

void nShell_Connect(uv_connect_t * term_handle, int status){
    uv_close((uv_handle_t *) term_handle, 0);
}

nError * nShell_client_main(nShell * n_shell, uv_loop_t * n_shell_loop){
    int uv_error = 0;
    nError * n_error = 0;
    uv_tcp_t * n_shell_socket = 0;
    uv_connect_t * n_shell_connect = 0;
    struct sockaddr_in dest_addr;
    n_shell_socket = malloc(sizeof(uv_tcp_t));
    if (!n_shell_socket){
        // handle error
    }
    uv_error = uv_tcp_init(n_shell_loop, n_shell_socket);
    if (uv_error){
        // handle error
    }
    uv_error = uv_ip4_addr("127.0.0.1", NPORT, &dest_addr);
    if (uv_error){
        // handle error
    }
    n_shell_connect = malloc(sizeof(uv_connect_t));
    if (!n_shell_connect){
        // handle error
    }
    uv_error = uv_tcp_connect(n_shell_connect, n_shell_socket, (struct sockaddr *) &dest_addr, nShell_Connect);
    if (uv_error){
        // handle error
    }
    uv_error = uv_run(n_shell_loop, UV_RUN_DEFAULT);
    if (uv_error){
        // handle error
    }
    return 0;
}

nError * nShell_loop_main(nShell * n_shell){
    int uv_error = 0;
    nError * n_error = 0;
    uv_loop_t * n_shell_loop = 0;
    n_shell_loop = malloc(sizeof(uv_loop_t));
    if (!n_shell_loop){
        // handle error
    }
    uv_error = uv_loop_init(n_shell_loop);
    if (uv_error){
        // handle error
    }
    n_error = nShell_client_main(n_shell, n_shell_loop);
    if (n_error){
        // handle error
    }
    uv_loop_close(n_shell_loop);
    free(n_shell_loop);
    return 0;
}
The assertion is happening at the end of the switch statement in this excerpt of code (taken from Joyent's libUV page on Github):
void uv_close(uv_handle_t* handle, uv_close_cb close_cb) {
    assert(!(handle->flags & (UV_CLOSING | UV_CLOSED)));
    handle->flags |= UV_CLOSING;
    handle->close_cb = close_cb;
    switch (handle->type) {
    case UV_NAMED_PIPE:
        uv__pipe_close((uv_pipe_t*)handle);
        break;
    case UV_TTY:
        uv__stream_close((uv_stream_t*)handle);
        break;
    case UV_TCP:
        uv__tcp_close((uv_tcp_t*)handle);
        break;
    case UV_UDP:
        uv__udp_close((uv_udp_t*)handle);
        break;
    case UV_PREPARE:
        uv__prepare_close((uv_prepare_t*)handle);
        break;
    case UV_CHECK:
        uv__check_close((uv_check_t*)handle);
        break;
    case UV_IDLE:
        uv__idle_close((uv_idle_t*)handle);
        break;
    case UV_ASYNC:
        uv__async_close((uv_async_t*)handle);
        break;
    case UV_TIMER:
        uv__timer_close((uv_timer_t*)handle);
        break;
    case UV_PROCESS:
        uv__process_close((uv_process_t*)handle);
        break;
    case UV_FS_EVENT:
        uv__fs_event_close((uv_fs_event_t*)handle);
        break;
    case UV_POLL:
        uv__poll_close((uv_poll_t*)handle);
        break;
    case UV_FS_POLL:
        uv__fs_poll_close((uv_fs_poll_t*)handle);
        break;
    case UV_SIGNAL:
        uv__signal_close((uv_signal_t*) handle);
        /* Signal handles may not be closed immediately. The signal code will */
        /* itself close uv__make_close_pending whenever appropriate. */
        return;
    default:
        assert(0); // assertion is happening here
    }
    uv__make_close_pending(handle);
}
I could call uv__tcp_close manually, but it's not in the public headers (and probably not the right solution anyway).
libuv is not done with a handle until its close callback has been called; that is the exact moment when you can free the handle.
I see you call uv_loop_close, but you don't check its return value. If there are still pending handles, it returns UV_EBUSY, so you should check for that.
If you want to close a loop and close all handles, you need to do the following:
Use uv_stop to stop the loop
Use uv_walk and call uv_close on all handles which are not closing
Run the loop again with uv_run so all close callbacks are called and you can free the memory in the callbacks
Call uv_loop_close, it should return 0 now
I finally figured out how to stop a loop and clean up all handles.
I created a bunch of handles and a SIGINT signal handle:
uv_signal_t *sigint = new uv_signal_t;
uv_signal_init(uv_default_loop(), sigint);
uv_signal_start(sigint, on_sigint_received, SIGINT);
When SIGINT is received (Ctrl+C in console is pressed) the on_sigint_received callback is called.
The on_sigint_received looks like:
void on_sigint_received(uv_signal_t *handle, int signum)
{
    int result = uv_loop_close(handle->loop);
    if (result == UV_EBUSY)
    {
        uv_walk(handle->loop, on_uv_walk, NULL);
    }
}
It triggers the callback function on_uv_walk:
void on_uv_walk(uv_handle_t* handle, void* arg)
{
    uv_close(handle, on_uv_close);
}
It tries to close each opened libuv handle.
Note that, unlike saghul's steps above, I do not call uv_stop before uv_walk.
After on_sigint_received is called, the libuv loop continues executing and on its next iteration calls on_uv_close for each open handle. If you call uv_stop, the on_uv_close callbacks will not be called.
void on_uv_close(uv_handle_t* handle)
{
    if (handle != NULL)
    {
        delete handle;
    }
}
After that, libuv has no open handles and finishes the loop (exits from uv_run):
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
int result = uv_loop_close(uv_default_loop());
if (result)
{
    cerr << "failed to close libuv loop: " << uv_err_name(result) << endl;
}
else
{
    cout << "libuv loop is closed successfully!\n";
}
I like the solution given by Konstantin Gindemit, but I ran into a couple of problems with it: his on_uv_close() function ends with a core dump, and the uv_signal_t variable was causing Valgrind to report a "definitely lost" memory leak.
I am using his code with fixes for these two situations.
void on_uv_walk(uv_handle_t* handle, void* arg) {
    uv_close(handle, NULL);
}

void on_sigint_received(uv_signal_t *handle, int signum) {
    int result = uv_loop_close(handle->loop);
    if (result == UV_EBUSY) {
        uv_walk(handle->loop, on_uv_walk, NULL);
    }
}

int main(int argc, char *argv[]) {
    uv_signal_t *sigint = new uv_signal_t;
    uv_signal_init(uv_default_loop(), sigint);
    uv_signal_start(sigint, on_sigint_received, SIGINT);
    uv_loop_t* main_loop = uv_default_loop();
    ...
    uv_run(main_loop, UV_RUN_DEFAULT);
    uv_loop_close(uv_default_loop());
    delete sigint;
    return 0;
}

First time using select(), maybe a basic question?

I've been working for a few days on this server using select(). I have two arrays of clients (one of "suppliers" and the other of "consumers"), and the server's job is to check whether the suppliers have something to send to the consumers and, if so, send it.
The second part is that, once the consumers have received the suppliers' info, they send a confirmation message back to the suppliers that sent it.
When a client connects, it is classified as "undefined" until it sends a message with the word "supplier" or "consumer" (in Spanish, as I'm from there), at which point the server moves it into the correct client array.
What the server does isn't very important here. What matters is that I'm doing both parts with two different for loops, and that's where I'm having problems. When the first user connects to the server (be it a supplier or a consumer), the server gets stuck in the first or second loop instead of continuing its execution. As this is my first time using select(), I may be missing something. Could you give me any sort of help?
Thanks a lot in advance.
for (;;)
{
    rset = allset;
    nready = select(maxfd+1, &rset, NULL, NULL, NULL);
    if (FD_ISSET(sockfd, &rset))
    {
        clilen = sizeof(cliente);
        if ((connfd = accept(sockfd, (struct sockaddr *)&cliente, &clilen)) < 0)
        {
            printf("Error");
        }
        IP = inet_ntoa(cliente.sin_addr);
        for (i = 0; i < COLA; i++)
        {
            if (indef[i] < 0)
            {
                indef[i] = connfd;
                IPind[i] = IP;
                break;
            }
        }
        FD_SET(connfd, &allset);
        if (connfd > maxfd)
        {
            maxfd = connfd;
        }
        if (i > maxii)
        {
            maxii = i;
        }
        if (--nready <= 0)
        { continue; }
    } // end ISSET(sockfd)
    for (i = 0; i <= maxii; i++)
    {
        if ((sockfd1 = indef[i]) < 0)
        { continue; } //!
        if (FD_ISSET(sockfd1, &rset))
        {
            if ((n = read(sockfd1, comp, MAXLINE)) == 0)
            {
                close(sockfd1);
                FD_CLR(sockfd1, &allset);
                indef[i] = -1;
                printf("Cliente indefinido desconectado \n");
            }
            else
            {
                comp[n] = '\0';
                if (strcmp(comp, "suministrador") == 0)
                {
                    for (j = 0; j <= limite; j++)
                    {
                        if (sumi[j] < 0)
                        {
                            IPsum[j] = IPind[i];
                            sumi[j] = indef[i];
                            indef[i] = -1;
                            if (j > maxis)
                            {
                                maxis = j;
                            }
                            break;
                        }
                    }
                }
                else if (strcmp(comp, "consumidor") == 0)
                {
                    for (o = 0; o <= limite; o++)
                    {
                        if (consum[o] < 0)
                        {
                            IPcons[o] = IPind[i];
                            consum[o] = indef[i];
                            indef[i] = -1;
                            if (o > maxic)
                            {
                                maxic = o;
                            }
                            break;
                        }
                    }
                }
                if (--nready <= 0)
                {
                    break;
                }
            }
        }
    } // end of maxii for loop
    for (i = 0; i <= maxis; i++)
    {
        if ((sockfd2 = sumi[i]) < 0)
        { continue; }
        if (FD_ISSET(sockfd2, &rset))
        {
            if ((n = read(sockfd2, buffer2, MAXLINE)) == 0)
            {
                close(sockfd2);
                FD_CLR(sockfd2, &allset);
                sumi[i] = -1;
                printf("Suministrador desconectado \n");
            }
            else
            {
                buffer2[n] = '\0';
                for (j = 0; j <= maxic; j++)
                {
                    if ((sockfd3 = consum[j]) < 0)
                    { continue; }
                    else
                    {
                        strcpy(final, IPsum[i]);
                        strcat(final, ":");
                        strcat(final, buffer2);
                        write(sockfd3, final, sizeof(final));
                        respuesta[i] = 1;
                    }
                }
                break; // ?
            }
        }
    } // end of maxis for loop
    for (i = miniic; i <= maxic; i++)
    {
        if ((sockfd4 = consum[i]) < 0)
        { continue; }
        if (FD_ISSET(sockfd4, &rset))
        {
            if ((n = read(sockfd4, buffer3, MAXLINE)) == 0)
            {
                close(sockfd4);
                FD_CLR(sockfd4, &allset);
                consum[i] = -1;
                printf("Consumidor desconectado \n");
            }
            else
            {
                buffer3[n] = '\0';
                IP2 = strtok(buffer3, ":");
                obj = strtok(NULL, ":");
                for (j = 0; j < 100; j++)
                {
                    if ((strcmp(IPsum[j], IP2) == 0) && (respuesta[j] == 1))
                    {
                        write(sumi[j], obj, sizeof(obj));
                        miniic = i+1;
                        respuesta[j] = 0;
                        break;
                    }
                }
            }
        }
    }
Hmm, I think your logic is all wrong. It should look something more like this (warning, untested pseudo-code):
for (;;)
{
    // First, set up the fd_sets to specify the sockets we want to be notified about
    fd_set readSet;  FD_ZERO(&readSet);
    fd_set writeSet; FD_ZERO(&writeSet);
    int maxFD = -1;
    for (int i = 0; i < num_consumers; i++)
    {
        if (consumer_sockets[i] > maxFD) maxFD = consumer_sockets[i];
        FD_SET(consumer_sockets[i], &readSet);
        if (consumer_has_data_he_wants_to_send[i]) FD_SET(consumer_sockets[i], &writeSet);
    }
    for (int i = 0; i < num_producers; i++)
    {
        if (producer_sockets[i] > maxFD) maxFD = producer_sockets[i];
        FD_SET(producer_sockets[i], &readSet);
        if (producer_has_data_he_wants_to_send[i]) FD_SET(producer_sockets[i], &writeSet);
    }
    // Now we block in select() until something is ready to be handled on a socket
    int selResult = select(maxFD+1, &readSet, &writeSet, NULL, NULL);
    if (selResult < 0) {perror("select"); exit(10);}
    for (int i = 0; i < num_consumers; i++)
    {
        if (FD_ISSET(consumer_sockets[i], &readSet))
        {
            // There is some incoming data ready to be read from consumer_sockets[i], so recv() it now
            [...]
        }
        if (FD_ISSET(consumer_sockets[i], &writeSet))
        {
            // There is buffer space in consumer_sockets[i] to hold more outgoing
            // data, so send() it now
            [...]
        }
    }
    for (int i = 0; i < num_producers; i++)
    {
        if (FD_ISSET(producer_sockets[i], &readSet))
        {
            // There is some data ready to be read from producer_sockets[i], so recv() it now
            [...]
        }
        if (FD_ISSET(producer_sockets[i], &writeSet))
        {
            // There is buffer space in producer_sockets[i] to hold more outgoing
            // data, so send() it now
            [...]
        }
    }
}
Note that to really do this properly, you'd need to set all of your sockets to non-blocking I/O and be able to handle partial reads and writes (by storing the partial data into a local memory-buffer associated with that consumer/producer, until you have enough data to act on), otherwise you risk having a call to recv() or send() block, which would prevent the event loop from being able to service any of the other consumers or producers. Ideally the only place you should ever block in is select()... every other call should be non-blocking. But if you want to keep things simple to start out with, you may be able to get away with using blocking I/O for a while.
You might read the introductory tutorial
Look at blocking vs non-blocking connections

Run application continuously

What's the smartest way to run an application continuously so that it doesn't exit after it hits the bottom of main? Instead it should start again from the top and only exit when commanded. (This is in C.)
You should always have some way of exiting cleanly. I'd suggest moving the code off to another function that returns a flag to say whether to exit or not.
int DoStuff(void);

int main(int argc, char *argv[])
{
    // param parsing, init code
    while (DoStuff())
        ;
    // cleanup code
    return 0;
}

int DoStuff(void)
{
    // code that you would have had in main
    if (we_should_exit)
        return 0;
    return 1;
}
Most applications that don't fall through enter some kind of event processing loop that allows for event-driven programming.
Under Win32 development, for instance, you'd write your WinMain function to continually handle new messages until it receives the WM_QUIT message telling the application to finish. This code typically takes the following form:
// ...meanwhile, somewhere inside WinMain()
MSG msg;
while (GetMessage(&msg, NULL, 0, 0))
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
If you are writing a game using SDL, you would loop on SDL events until deciding to exit, such as when you detect that the user has hit the Esc key. Some code to do that might resemble the following:
bool done = false;
while (!done)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
        case SDL_QUIT:
            done = true;
            break;
        case SDL_KEYDOWN:
            if (event.key.keysym.sym == SDLK_ESCAPE)
            {
                done = true;
            }
            break;
        }
    }
}
You may also want to read about Unix Daemons and Windows Services.
while (true)
{
    ....
}
To elaborate a bit more, you want to put something in that loop that allows you to let the user do repeated actions. Whether it's reading key strokes and performing actions based on the keys pressed, or reading data from the socket and sending back a response.
There are a number of ways to "command" your application to exit (such as a global exit flag or return codes). Some have already touched on using an exit code so I'll put forward an easy modification to make to an existing program using an exit flag.
Let's assume your program executes a system call to output a directory listing (full directory or a single file):
int main (int argCount, char *argValue[]) {
    char *cmdLine;
    if (argCount < 2) {
        system ("ls");
    } else {
        cmdLine = malloc (strlen (argValue[1]) + 4);
        sprintf (cmdLine, "ls %s", argValue[1]);
        system (cmdLine);
        free (cmdLine);
    }
    return 0;
}
How do we go about making that loop until an exit condition. The following steps are taken:
Change main() to oldMain().
Add new exitFlag.
Add new main() to continuously call oldMain() until exit flagged.
Change oldMain() to signal exit at some point.
This gives the following code:
static int exitFlag = 0;
static int oldMain (int argCount, char *argValue[]);

int main (int argCount, char *argValue[]) {
    int retVal = 0;
    while (!exitFlag) {
        retVal = oldMain (argCount, argValue);
    }
    return retVal;
}

static int oldMain (int argCount, char *argValue[]) {
    char *cmdLine;
    if (argCount < 2) {
        system ("ls");
    } else {
        cmdLine = malloc (strlen (argValue[1]) + 4);
        sprintf (cmdLine, "ls %s", argValue[1]);
        system (cmdLine);
        free (cmdLine);
    }
    if (someCondition)
        exitFlag = 1;
    return 0;
}
