Permission denied when trying to write into log file - C

I have a problem writing into a log file in my C/C++ program.
Here's an example of the code where the problem occurs:
EnterCriticalSection(&critical);
printf("\nWaiting for a connection on TCP port %d (nbr of current threads = %d)...\n", pServer->TCPServerPort, (*pServer->lChildInfo));
AddLog("Waiting for a connection on TCP port %d (nbr of current threads = %d)...", pServer->TCPServerPort, (*pServer->lChildInfo));
LeaveCriticalSection(&critical);
// creating variables to be passed to the thread
// (the struct type name was garbled in the original; THREADDATA below is a stand-in)
struct THREADDATA *ThreadData = (struct THREADDATA *) malloc(sizeof(struct THREADDATA));
ThreadData->csock = (int*)malloc(sizeof(int));
memcpy(&ThreadData->pServer, &pServer, sizeof(pServer));
if ((*ThreadData->csock = accept(pServer->ListenSocket, (SOCKADDR*)&sadr, &addr_size)) != INVALID_SOCKET) {
    ThreadData->dwIP = sadr.sin_addr.s_addr;
    ThreadData->wPort = sadr.sin_port;
    printf("Received connection from %s:%d \n", inet_ntoa(sadr.sin_addr), ntohs(sadr.sin_port));
    AddLog("Received connection from %s:%d ", inet_ntoa(sadr.sin_addr), ntohs(sadr.sin_port));
AddLog is the function I wrote in order to write into the file:
void AddLog(const char *log, ...)   // signature inferred from the va_start(ap, log) below
{
    FILE *fichier = NULL;
    va_list ap;
    va_start(ap, log);
    //fichier = fopen("log.log","a");
    fichier = _fsopen("log.log", "a", SH_DENYNO);
    if (fichier == NULL)
        printf("Error log: %d (%s)\n", errno, strerror(errno));
    else {
        fprintf(fichier, ":");
        vfprintf(fichier, log, ap);
        fprintf(fichier, "\n");
        va_end(ap);
        fclose(fichier);
    }
}
What I can't really explain is that the first AddLog calls ("Waiting for..." and all the ones before) are correctly written into the file. But when I try a connection, the logs that come after ("Received connection from...") are not written into the file, and I always get error 13, "Permission denied".
I used chmod 777 on the file, and I also tried the _fsopen function, but I still get this error once I enter the thread.
If someone has any idea it would be really helpful.
Thanks to all.

I don't know if this is exactly the problem, but I would suggest using "a+" inside _fsopen for shared append, since the file is being used by another thread or process.
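For example, the question's call with the shared-append mode would become:

fichier = _fsopen("log.log", "a+", SH_DENYNO);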

I don't know if it is still relevant, but I would suggest a slightly better solution (I encountered the same problem a few days ago, and the solution was more than trivial):
I implemented a shared queue and added all the logs to the queue; then a worker thread checked the queue and wrote to the file whenever the queue wasn't empty, along the lines of the sketch below.
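A minimal sketch of that idea, under assumed names (log_enqueue and log_worker are illustrative, not the poster's actual code); the point is that only the single worker thread ever opens the file:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_CAP    128
#define LINE_MAX_LEN 256

static char            queue[QUEUE_CAP][LINE_MAX_LEN];
static int             q_head, q_tail, q_count;
static pthread_mutex_t q_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_avail = PTHREAD_COND_INITIALIZER;

/* Called from any thread: copy an already-formatted line into the queue. */
void log_enqueue(const char *line)
{
    pthread_mutex_lock(&q_lock);
    if (q_count < QUEUE_CAP)  /* drop the line if the queue is full */
    {
        strncpy(queue[q_tail], line, LINE_MAX_LEN - 1);
        queue[q_tail][LINE_MAX_LEN - 1] = '\0';
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_count++;
        pthread_cond_signal(&q_avail);
    }
    pthread_mutex_unlock(&q_lock);
}

/* The single writer thread: the only place the log file is ever opened. */
void *log_worker(void *arg)
{
    (void)arg;
    for (;;)
    {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0)
            pthread_cond_wait(&q_avail, &q_lock);
        char line[LINE_MAX_LEN];
        strcpy(line, queue[q_head]);
        q_head = (q_head + 1) % QUEUE_CAP;
        q_count--;
        pthread_mutex_unlock(&q_lock);

        FILE *f = fopen("log.log", "a");
        if (f != NULL)
        {
            fprintf(f, ":%s\n", line);
            fclose(f);
        }
    }
    return NULL;
}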
I hope it helped, have a nice day :)

Related

Where would my data be getting lost within this mutex/pthread_cond_wait structure?

FINAL EDIT: The solution to the problem is stated in the answer I have selected. The representative example code is shown in the diff here.
EDIT: Full compilable code at the bottom of the post.
I have this rudimentary multithreaded server that simply accepts a connection and is supposed to pass the file descriptor off to a thread to allow this thread to handle it directly until the client disconnects.
For some reason, even with the following code flow inside of the server, some clients "Fall through the cracks" and get stuck in limbo. (They never get handled by the server so they just hang after accepting the connection)
The following block is my server main running loop:
while (g_serv.b_running)
{
    //printf("Awaiting connection.\n");
    client_fd = accept(g_serv.serv_listener_fd,
                       (struct sockaddr*)&cli_addr,
                       &clilen);
    if (0 > client_fd)
    {
        fprintf(stderr,
                "Error accepting connection. [%s]\n",
                strerror(errno));
        continue;
    }
    err = sem_trywait(&(g_serv.client_count_sem));
    if (0 > err)
    {
        fprintf(stderr,
                "Max connections reached. [%s]\n",
                strerror(errno));
        notify_client_max_connections(client_fd);
        close(client_fd);
        client_fd = 0;
        continue;
    }
    printf("A client has connected.\n");
    char byte[2] = "0";
    err = send(client_fd, byte, 1, 0);
    // Set up client FD in global position and wake up a thread to grab it
    //
    pthread_mutex_lock(&(g_serv.new_connection_fd_lock));
    g_serv.new_connection_fd = client_fd;
    if (0 != g_serv.new_connection_fd)
    {
        pthread_cond_signal(&(g_serv.new_connection));
    }
    pthread_mutex_unlock(&(g_serv.new_connection_fd_lock));
}
This block is the thread handling function:
void* thread_handler(void* args)
{
    serv_t* p_serv = (serv_t*)args;
    bool thread_client_connected;
    int thread_client_fd;
    while (p_serv->b_running)
    {
        pthread_mutex_lock(&(p_serv->new_connection_fd_lock));
        while (0 == p_serv->new_connection_fd && p_serv->b_running)
        {
            pthread_cond_wait(&(p_serv->new_connection),
                              &(p_serv->new_connection_fd_lock));
        }
        thread_client_fd = p_serv->new_connection_fd;
        p_serv->new_connection_fd = 0;
        pthread_mutex_unlock(&(p_serv->new_connection_fd_lock));
        // In the case of a pthread cond broadcast for exiting the server.
        //
        if (0 == thread_client_fd)
        {
            continue;
        }
        thread_client_connected = true;
        while (thread_client_connected)
        {
            thread_client_connected = handle_client(thread_client_fd);
        }
        close(thread_client_fd);
        thread_client_fd = 0;
        sem_post(&(p_serv->client_count_sem));
    }
    return NULL;
} /* thread_handler */
Just for data reference here is my serv_t struct:
typedef struct serv_t {
    bool            b_running;
    int             max_connections;
    int             serv_listener_fd;
    sem_t           client_count_sem;
    pthread_mutex_t new_connection_fd_lock;
    pthread_cond_t  new_connection;
    int             new_connection_fd;
    pthread_t*      p_thread_ids;
} serv_t;
Basically, if I run netcat or a client program I have against it with multiple instances via a bash command to "background" the application, some of these instances get stuck. I have it redirecting the output to a file, but what's happening is that particular instance of the client/netcat is just getting stuck after the accept call.
More specifically, if I run my program with two threads, one instance of a program gets stuck and no subsequent copies get stuck, even running 6500 instances against the server.
If I run it with ten threads, as many as 8 or 9 instances get stuck, but the threads still function properly within the server.
EDIT:
Client code I refer to, starting from the server letting the client know that the server is ready to receive data:
char buff[2] = { 0 };
err = recv(client_socket_fd, buff, 1, 0);
if ('0' != buff[0] && 1 != err)
{
    fprintf(stderr,
            "Server handshake error. [%s]\n",
            strerror(errno));
    close(client_socket_fd);
    return EXIT_FAILURE;
}
if (NULL != p_infix_string)
{
    if (MAX_BUFFER_SIZE < strlen(p_infix_string))
    {
        fprintf(stderr,
                "Infix string is over 100 characters long.\n");
        return EXIT_FAILURE;
    }
    errno = 0;
    char* p_postfix = infix_to_postfix(p_infix_string);
    if (EINVAL == errno || NULL == p_postfix)
    {
        fprintf(stderr, "Error converting provided string.\n");
    }
    bool success = send_postfix(p_postfix, client_socket_fd);
    free(p_postfix);
    if (false == success)
    {
        fprintf(stderr,
                "An error occurred while sending the equation to the server.\n");
        close(client_socket_fd);
        return EXIT_FAILURE;
    }
}
The client is getting stuck at the receive call here:
bool send_postfix(char* p_postfix, int client_socket_fd)
{
    if (NULL == p_postfix)
    {
        fprintf(stderr, "No postfix string provided to send to server.\n");
        return false;
    }
    printf("Sending postfix to server\n");
    int err = send(client_socket_fd,
                   p_postfix,
                   strnlen(p_postfix, MAX_BUFFER_SIZE),
                   0);
    if (strnlen(p_postfix, MAX_BUFFER_SIZE) > err)
    {
        fprintf(stderr,
                "Unable to send message to server. [%s]\n",
                strerror(errno));
        return false;
    }
    char response[MAX_BUFFER_SIZE] = { 0 };
    printf("Waiting for receive\n");
    err = recv(client_socket_fd, &response, MAX_BUFFER_SIZE, 0);
    if (0 == err)
    {
        fprintf(stderr,
                "Connection to server lost. [%s]\n",
                strerror(errno));
        return false;
    }
    else if (0 > err)
    {
        fprintf(stderr,
                "Unable to receive message on socket. [%s]\n",
                strerror(errno));
        return false;
    }
    printf("Server responded with: \n%s\n", response);
    return true;
} /* send_postfix */
EDIT: https://github.com/TheStaplergun/Problem-Code
I uploaded the code to this repo and removed the need for the extraneous files I use and filled them with placeholders.
You can recreate the problem by running the server with ./postfix_server -p 8888 -n 2 and then, in another terminal, running the clients with for i in {1..4}; do ./postfix_client -i 127.0.0.1 -p 8888 -e "3 + $i" &> $i.txt & done
The output of each client is forcefully flushed because of the setbuf call at the top of the client. Run it and see if any instances hang; if not, run the command again. Just run ps to see if one of them is hanging, and look at the resulting text file. You will see it is stuck at the receive call.
If you SIGINT the server (Ctrl + C), the client that was stuck closes with a "Connection reset by peer" response from the server, so the server does still have that file descriptor locked up somewhere.
I believe a race condition is happening somehow, because it only happens randomly.
A curious thing is it only happens ONCE PER SERVER INSTANCE.
If I kill that hung instance and proceed to do it again 10000 times it never does another hang until the server is reset.
For some reason, even with the following code flow inside of the server, some clients "Fall through the cracks" and get stuck in limbo. (They never get handled by the server so they just hang after accepting the connection)
There may be other issues, but the first one I see is that the main loop does not ensure that a new connection is actually picked up by any handler thread before it tries to hand off the next connection. Even if there are handler threads already blocked on the CV when a new connection is accepted, it is possible for the main server thread to signal the CV, loop back around, accept another connection, reacquire the mutex, and overwrite the new-connection FD before any handler thread picks up the previous one. The chances of that increase if you have more threads than cores.
Note that this will also interfere with your semaphore-based counting of available handlers -- you decrement the semaphore for every connection accepted, but you increment it again only for those that are successfully handled.
There are various ways you could make the main server thread wait for the new connection to be picked up by a handler. One group of solutions would involve the server waiting on a CV itself and relying on a handler to signal it after picking up the connection. Another, perhaps simpler, approach would use a semaphore to similar effect. But I would suggest not waiting at all: create a thread-safe queue for available connections, so that the server doesn't have to wait. That would even allow queueing more connections than there are presently available handlers, if that would be useful to you. A sketch of such a queue follows.
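For illustration, a minimal sketch of such a queue (names like fd_queue_t and FD_QUEUE_CAP are assumptions, not from the posted code); the accept loop would call fd_queue_push() and each handler thread would loop on fd_queue_pop():

#include <pthread.h>
#include <stddef.h>

#define FD_QUEUE_CAP 64

typedef struct fd_queue {
    int             fds[FD_QUEUE_CAP];
    size_t          head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    pthread_cond_t  not_full;
} fd_queue_t;

static fd_queue_t g_queue = {
    .lock      = PTHREAD_MUTEX_INITIALIZER,
    .not_empty = PTHREAD_COND_INITIALIZER,
    .not_full  = PTHREAD_COND_INITIALIZER,
};

/* Main server thread: blocks only while the queue is full, so no
 * accepted FD can ever be overwritten before a handler takes it. */
void fd_queue_push(fd_queue_t *q, int fd)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == FD_QUEUE_CAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->fds[q->tail] = fd;
    q->tail = (q->tail + 1) % FD_QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Handler threads: each pop hands exactly one FD to exactly one thread. */
int fd_queue_pop(fd_queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    int fd = q->fds[q->head];
    q->head = (q->head + 1) % FD_QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return fd;
}

A graceful shutdown would additionally need a way to wake blocked poppers, e.g. pushing one sentinel FD of 0 per handler thread, mirroring the broadcast-on-exit logic the code already has.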

Correctly close file descriptor when opened through network mount

I'm currently trying to figure out how to correctly close a file descriptor when it points to a remote file and the connection is lost.
I have a simple example program which opens a file descriptor on an sshfs-mounted folder and starts writing to the file.
I'm not able to find out how to handle the case where the connection is lost.
void *write_thread(void* arg);

int main()
{
    pthread_t thread;
    int fd = -1;
    if (-1 == (fd = open("/mnt/testfile.txt", O_CREAT | O_RDWR | O_NONBLOCK, S_IRWXU)))
    {
        fprintf(stderr, "Error opening file : %m\n");
        return EXIT_FAILURE;
    }
    else
    {
        if (0 > pthread_create(&thread, NULL, write_thread, &fd))
        {
            fprintf(stderr, "Error launching thread : %m\n");
            return EXIT_FAILURE;
        }
        fprintf(stdout, "Waiting 10 seconds before closing\n");
        sleep(10);
        if (0 > close(fd))
        {
            fprintf(stderr, "Error closing file descriptor: %m\n");
        }
    }
}

void *write_thread(void* arg)
{
    int fd = *(int*)arg;
    int ret;
    while (1)
    {
        fprintf(stdout, "Write to file %d\n", fd);
        if (0 > (ret = write(fd, "Test\n", 5)))
        {
            fprintf(stderr, "Error writing to file : %m\n");
            if (errno == EBADF)
            {
                if (-1 == close(fd))
                {
                    fprintf(stderr, "Close failed : %m\n");
                }
                return NULL;
            }
        }
        else if (0 == ret)
        {
            fprintf(stderr, "Nothing happened\n");
        }
        else
        {
            fprintf(stderr, "%d bytes written\n", ret);
        }
        sleep(1);
    }
}
When the connection is lost (i.e. I unplug the Ethernet cable between my boards), the close in the main thread always blocks, whether I use the O_NONBLOCK flag or not.
The write call sometimes fails immediately with EBADF, and sometimes continues for a long time before failing.
My problem is that the write call doesn't always fail when the connection is lost, so I can't trigger the event inside the thread, and I also can't trigger it from the main thread because close blocks forever.
So my question is: how do I correctly handle this case in C?
question is: how to correctly handle this case in C?
Simply put, you cannot. File handles are designed to be unified and simple, no matter where they point. When a device is mounted and the connection to it (physical or virtual) goes down, things get tricky even at the command-line level.
There is a fundamental problem with remote filesystems: on the one hand you have to cache things in order for performance to remain at a usable level, and on the other hand caching in multiple clients can lead to conflicts that are not seen by the server.
NFS, for example, chooses caching by default and if the cache is dirty, it will simply hang until the connection resumes.
Documentation for sshfs suggests similar behavior.
From grepping sshfs' source code, it seems that it doesn't support O_NONBLOCK at all.
None of that has anything to do with C.
IMO your best option is to switch to NFS and mount with e.g. -o soft -o timeo=15 -o retrans=1.
This could cause data corruption/loss in certain situations when there is a network disconnect, mainly when there are multiple clients or when the client crashes, but it does support O_NONBLOCK and in any case will return EIO if the connection is lost while a request is in-flight.
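For illustration, such a mount might look like this (server:/export and /mnt are placeholders; note that NFS interprets timeo in tenths of a second, so timeo=15 is 1.5 s):

mount -t nfs -o soft,timeo=15,retrans=1 server:/export /mnt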
After some digging around I found that the SSH mount can be configured to drop the connection and disconnect from the server if nothing happens.
Setting ServerAliveInterval X on the client side disconnects if the server is unresponsive after X seconds.
Setting ClientAliveInterval X on the server side disconnects if the client is unresponsive after X seconds.
ServerAliveCountMax Y and ClientAliveCountMax Y can also be used to retry Y times before dropping the connection.
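For reference, a sketch of where these options live (the standard OpenSSH configuration files; the values shown are placeholders):

# Client side: ~/.ssh/config (sshfs also forwards them, e.g. sshfs -o ServerAliveInterval=15 ...)
Host *
    ServerAliveInterval 15
    ServerAliveCountMax 3

# Server side: /etc/ssh/sshd_config
ClientAliveInterval 15
ClientAliveCountMax 3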
With this configuration applied, the sshfs mount is automatically removed by Linux when the connection is unresponsive.
With this configuration, the write call fails with Input/output error first and then with Transport endpoint is not connected.
This is enough to detect that the connection is lost and thus cleaning up the mess before exiting.

Broken Pipe C with sendfile on socket

I'm trying to recode an FTP server in C.
I open a data socket to my client (PASV), and when the client does a RETR on a valid file, I use sendfile from the requested file to the data socket:
int fd;
struct stat s;
if (cmd->arg && (fd = open(cmd->arg, O_RDWR)) != -1)
{
    fstat(fd, &s);
    if ((size = sendfile(client->data_soc, fd, NULL, s.st_size)) == -1)
        perror("sendfile failed:");
    else
        printf("data sent\n");
    close(client->data_soc);
}
client is a structure containing the data socket client->data_soc, already open, and cmd is the client's command, containing the name of the file to open, cmd->arg, which is a char *.
The problem is that when I do this, the sendfile call stops with SIGPIPE.
I really don't understand why; I think I'm using it correctly, and I can't find any solution to this particular issue.
Thanks for your help :)
This happens because:
1) the client closed the connection in the middle of the transfer; and
2) the system is configured to raise a signal instead of returning the EPIPE error.
So you need to fix both the client and the server: the client must not close the connection in the middle and the server must be robust against client abuse.
Use, for example, sigprocmask() to disable SIGPIPE.
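The simplest variant of that idea (shown here with signal() rather than sigprocmask(); a one-time setup during server initialization):

#include <signal.h>

/* Ignore SIGPIPE: sendfile()/send() on a connection the client has
 * already closed will then fail with errno == EPIPE instead of
 * killing the whole server process. */
signal(SIGPIPE, SIG_IGN);

After this, check sendfile's return value and treat EPIPE as an ordinary "client went away" error.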

Sending structs with ZeroMQ and ProtocolBuffers

I'm writing a program that's supposed to send C structures via ZeroMQ.
Therefore I'm using Google's ProtocolBuffers to serialize the structs.
I now have the problem that my subscriber side is not receiving anything.
The publisher prints out "Message successfully sent", so I think the error occurs on the subscriber's side.
Publisher:
int main (void)
{
    Message protomsg = MESSAGE__INIT;
    void *buf;
    unsigned len;
    void *context = zmq_ctx_new();
    void *subscriber = zmq_socket(context, ZMQ_PUB);
    zmq_bind(subscriber, "ipc://my.sock");
    //Initialising protomsg (not so important)
    //sending message
    len = message__get_packed_size(&protomsg);
    buf = malloc(len);
    message__pack(&protomsg, buf);
    zmq_msg_t output;
    zmq_msg_init_size(&output, len);
    zmq_msg_init_data(&output, buf, len, NULL, NULL);
    if (zmq_msg_send(&output, subscriber, 0) == -1)
        perror("Error sending message \n");
    else
        printf("Message successfully sent \n");
    zmq_msg_close(&output);
    free(buf);
    zmq_close(subscriber);
    zmq_ctx_destroy(context);
    return 0;
}
Subscriber:
int main (void)
{
    Message *protomsg;
    void *context = zmq_ctx_new();
    void *publisher = zmq_socket(context, ZMQ_SUB);
    zmq_connect(publisher, "ipc://my.sock");
    zmq_setsockopt(publisher, ZMQ_SUBSCRIBE, "", 0);
    // Read packed message from ZMQ.
    zmq_msg_t msg;
    zmq_msg_init(&msg);
    if (zmq_msg_recv(&msg, publisher, 0) == -1)
        perror("Error receiving message \n");
    else
        printf("Message received");
    memcpy((void *)protomsg, zmq_msg_data(&msg), zmq_msg_size(&msg));
    // Unpack the message using protobuf-c.
    protomsg = message__unpack(NULL, zmq_msg_size(&msg), (void *)&data);
    if (protomsg == NULL)
    {
        fprintf(stderr, "error unpacking incoming message\n");
        exit(1);
    }
    printf("Address: %u, Type: %u, Information[0]: %u, Information[1]: %u \n", protomsg->address-48, protomsg->frametype, protomsg->information[0], protomsg->information[1]);
    zmq_msg_close(&msg);
    // Free the unpacked message
    message__free_unpacked(protomsg, NULL);
    //close context,socket..
}
Don't know if anyone still cares about this, but here goes... I agree with @Steve-o that this is a timing issue, although I think the problem is that you are closing the publisher socket too soon.
Your publisher code publishes the message, then immediately closes the socket and terminates the context. So the message exists in the publisher for milliseconds and then is gone forever.
If you run the publisher first, it does its thing, exits, and the message is gone. When you start the subscriber, it attempts to connect to an IPC socket that is no longer there. ZeroMQ allows this, and the subscriber will block until there is an IPC socket to connect to.
I have not reviewed the ZeroMQ IPC source code, but I suspect that, under the covers, the subscriber is periodically attempting to connect to the publisher socket. Now if you run the publisher again, it might work, but you have a serious race condition: if you start the publisher at the exact instant the ZeroMQ worker attempts a retry, the connect might happen and you might even get your message before the publisher destroys everything.
I am pretty sure the problem has nothing to do with structs and protobuf. From the ZeroMQ point of view you are just sending bytes; there is no difference. If your test cases for ZeroMQ strings were truly identical to the test cases for ZeroMQ structs, then perhaps the code change added or removed a few nanoseconds, which was enough to break the race condition the wrong way.
Specific suggestions:
1) rename the socket in the publisher to "publisher" instead of "subscriber" (copy/paste error);
2) add a sleep for 30 seconds just before zmq_close(publisher); hopefully this will fix the problem for your test code;
3) if this does not fix it, consider switching to the tcp transport and use Wireshark to diagnose what is really going on.
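A minimal sketch of suggestions 1) and 2) applied to the end of the publisher:

/* with the socket renamed from subscriber to publisher */
sleep(30);   /* give the slow-joining subscriber time to connect and receive */
zmq_close(publisher);
zmq_ctx_destroy(context);

In real code some form of acknowledgment from the subscriber would be cleaner, but a sleep is enough to test the timing theory.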

Problem with C server function

Hi, I have a problem with my function, which is responsible for communication between client and server:
#define MAX 1024

void connection(int sock)
{
    char buffer[MAX];
    int newsock;
    int n;
    int r;
    if (write(sock, "Hello!\n", 6) < 0)
    {
        perror("Error: ");
    }
    do {
        if (write(sock, "\n> ", 3) < 0)
        {
            perror(" Error: ");
        }
        memset(buffer, '0', MAX); // fill buffer
        n = read(sock, buffer, MAX - 1);
        if (strncmp("get", buffer, 3) == 0)
        {
            execl("/usr/bin/top", "/usr/bin/top", "-n 1");
        }
        else if (strncmp("quit", buffer, 4) == 0)
        {
            write(sock, "Exit from program\n", 17);
            close(sock);
        }
        else
        {
            write(sock, "Wrong order!\n", 12);
        }
    }
    while (n);
}
When the client sends "get", the program should send back the view from the "top" command; unfortunately, it does not work in my program.
Secondly, please judge this code. This is my first server program, so I would be very grateful.
And finally, how can I change the function to give clients the possibility to keep interacting with the program after sending the "get" command?
Regards and Happy New Year!
You are calling exec without calling fork. So you are replacing your entire server process with a copy of top. This is really unlikely to do what you want.
Very likely, you could accomplish your aims by opening a suitable pseudo-file from the /proc file system, reading the information, and sending it into your socket.
If you really want to use top, you have to use pipe, fork, and exec(l) to run top, read its output from the pipe, and then send that output to the client.
It occurs to me that you might be running in an environment in which the server automatically forks for you (like some sort of CGI gateway), in which case your problem is that you need to make the socket descriptor #1 (e.g. with dup2) before exec-ing. It would really help if you would tell us about your environment by editing your question.
The output of "top" goes to the server's stdout, not out through the socket to the client. You'd have to adjust the stdout of the "top" process for this to work.
