Libssh channel request exec failed in C

I'm using libssh to log into an Ethernet switch and run commands. I can connect and log in just fine, but commands cannot be run: the program just returns "Channel request exec failed" as the SSH error. Normal ssh outside of the program works as expected and commands can be executed, and the program can run commands on my localhost just fine. I've tried different commands as well, including just the help command and things that would never produce errors if run. This is the function I'm using to send the command:
int send_command(ssh_session session, char *command){
    int rc;
    ssh_channel channel;

    channel = ssh_channel_new(session);
    if (channel == NULL){
        fprintf(stderr, "***Error in channel creation: %s***\n", ssh_get_error(session));
        exit(-1);
    }
    rc = ssh_channel_open_session(channel);
    if (rc != SSH_OK){
        fprintf(stderr, "***Error opening channel: %s***\n", ssh_get_error(session));
        exit(-1);
    }
    rc = ssh_channel_request_exec(channel, command);
    if (rc != SSH_OK){
        fprintf(stderr, "***Error sending command: %d, %s***\n", rc, ssh_get_error(session));
        exit(-1);
    }
    char buffer[256];
    int nbytes;
    nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
    while (nbytes > 0){
        if (fwrite(buffer, 1, nbytes, stdout) != (size_t)nbytes){
            fprintf(stderr, "***Error writing result: %s***\n", ssh_get_error(session));
            exit(-1);
        }
        nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
    }
    ssh_channel_send_eof(channel);
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    return SSH_OK;
}
I think it might have something to do with the custom CLI the switch is using - it's not a true Linux terminal. The switch processor I'm logging into runs some kind of simplistic Linux OS that I can't find information on. Are there different types of consoles I need to account for? Should I be opening a shell, even though the commands are programmatic and won't require user input? I've mostly been following the libssh tutorial to get this working, so I don't know much about why things work or don't work, just that they do. I'm running this from a Cygwin environment on a Windows machine, if that matters.
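If opening a shell is the right path, I imagine the replacement would look roughly like this untested sketch (ssh_channel_request_pty, ssh_channel_request_shell, and ssh_channel_read_timeout per the libssh docs; the 2000 ms timeout is just a guess):

#include <libssh/libssh.h>
#include <stdio.h>
#include <string.h>

/* Untested sketch: drive the switch CLI through an interactive shell
 * instead of an exec request, since the custom CLI may not support exec. */
int send_command_shell(ssh_session session, const char *command)
{
    char buffer[256];
    int nbytes;

    ssh_channel channel = ssh_channel_new(session);
    if (channel == NULL)
        return SSH_ERROR;
    if (ssh_channel_open_session(channel)  != SSH_OK ||
        ssh_channel_request_pty(channel)   != SSH_OK ||  /* the CLI may require a PTY */
        ssh_channel_request_shell(channel) != SSH_OK) {
        fprintf(stderr, "***%s***\n", ssh_get_error(session));
        ssh_channel_free(channel);
        return SSH_ERROR;
    }

    /* Send the command as if it were typed at the switch prompt. */
    ssh_channel_write(channel, command, strlen(command));
    ssh_channel_write(channel, "\n", 1);

    /* Drain output until the CLI goes quiet; a robust client would match
     * the prompt string instead of relying on a timeout. */
    while ((nbytes = ssh_channel_read_timeout(channel, buffer,
                                              sizeof(buffer), 0, 2000)) > 0)
        fwrite(buffer, 1, nbytes, stdout);

    ssh_channel_send_eof(channel);
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    return SSH_OK;
}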

Where would my data be getting lost at within this mutex/pthread_cond_wait structure?

FINAL EDIT: The solution to the problem was stated in the answer I selected. The representative example code is shown in the diff here.
EDIT: Full compilable code at the bottom of the post.
I have this rudimentary multithreaded server that simply accepts a connection and is supposed to pass the file descriptor off to a thread, allowing that thread to handle it directly until the client disconnects.
For some reason, even with the following code flow inside the server, some clients "fall through the cracks" and get stuck in limbo. (They never get handled by the server, so they just hang after the connection is accepted.)
The following block is my server main running loop:
while (g_serv.b_running)
{
    //printf("Awaiting connection.\n");
    client_fd = accept(g_serv.serv_listener_fd,
                       (struct sockaddr*)&cli_addr,
                       &clilen);
    if (0 > client_fd)
    {
        fprintf(stderr,
                "Error accepting connection. [%s]\n",
                strerror(errno));
        continue;
    }
    err = sem_trywait(&(g_serv.client_count_sem));
    if (0 > err)
    {
        fprintf(stderr,
                "Max connections reached. [%s]\n",
                strerror(errno));
        notify_client_max_connections(client_fd);
        close(client_fd);
        client_fd = 0;
        continue;
    }
    printf("A client has connected.\n");
    char byte[2] = "0";
    err = send(client_fd, byte, 1, 0);
    // Set up client FD in global position and wake up a thread to grab it
    //
    pthread_mutex_lock(&(g_serv.new_connection_fd_lock));
    g_serv.new_connection_fd = client_fd;
    if (0 != g_serv.new_connection_fd)
    {
        pthread_cond_signal(&(g_serv.new_connection));
    }
    pthread_mutex_unlock(&(g_serv.new_connection_fd_lock));
}
This block is the thread handling function:
void* thread_handler(void* args)
{
    serv_t* p_serv = (serv_t*)args;
    bool thread_client_connected;
    int thread_client_fd;
    while (p_serv->b_running)
    {
        pthread_mutex_lock(&(p_serv->new_connection_fd_lock));
        while (0 == p_serv->new_connection_fd && p_serv->b_running)
        {
            pthread_cond_wait(&(p_serv->new_connection),
                              &(p_serv->new_connection_fd_lock));
        }
        thread_client_fd = p_serv->new_connection_fd;
        p_serv->new_connection_fd = 0;
        pthread_mutex_unlock(&(p_serv->new_connection_fd_lock));
        // In the case of a pthread cond broadcast for exiting the server.
        //
        if (0 == thread_client_fd)
        {
            continue;
        }
        thread_client_connected = true;
        while (thread_client_connected)
        {
            thread_client_connected = handle_client(thread_client_fd);
        }
        close(thread_client_fd);
        thread_client_fd = 0;
        sem_post(&(p_serv->client_count_sem));
    }
    return NULL;
} /* thread_handler */
Just for data reference here is my serv_t struct:
typedef struct serv_t {
    bool            b_running;
    int             max_connections;
    int             serv_listener_fd;
    sem_t           client_count_sem;
    pthread_mutex_t new_connection_fd_lock;
    pthread_cond_t  new_connection;
    int             new_connection_fd;
    pthread_t*      p_thread_ids;
} serv_t;
Basically, if I run netcat or a client program I have against it, backgrounding multiple instances via a bash command, some of these instances get stuck. I have each instance redirecting its output to a file, and what's happening is that the particular instance of the client/netcat just gets stuck after the accept call.
More specifically, if I run my server with two threads, exactly one instance of the program gets stuck and no subsequent copies get stuck, even running 6500 instances against the server.
If I run it with ten threads, as many as 8 or 9 instances get stuck, but the threads themselves still function properly within the server.
EDIT:
Client code I refer to, starting from the server letting the client know that the server is ready to receive data:
char buff[2] = { 0 };
err = recv(client_socket_fd, buff, 1, 0);
if ('0' != buff[0] && 1 != err)
{
    fprintf(stderr,
            "Server handshake error. [%s]\n",
            strerror(errno));
    close(client_socket_fd);
    return EXIT_FAILURE;
}
if (NULL != p_infix_string)
{
    if (MAX_BUFFER_SIZE < strlen(p_infix_string))
    {
        fprintf(stderr,
                "Infix string is over 100 characters long.\n");
        return EXIT_FAILURE;
    }
    errno = 0;
    char* p_postfix = infix_to_postfix(p_infix_string);
    if (EINVAL == errno || NULL == p_postfix)
    {
        fprintf(stderr, "Error converting provided string.\n");
    }
    bool success = send_postfix(p_postfix, client_socket_fd);
    free(p_postfix);
    if (false == success)
    {
        fprintf(stderr,
                "An error occurred while sending the equation to the server.\n");
        close(client_socket_fd);
        return EXIT_FAILURE;
    }
}
The client is getting stuck at the receive call here:
bool send_postfix(char* p_postfix, int client_socket_fd)
{
    if (NULL == p_postfix)
    {
        fprintf(stderr, "No postfix string provided to send to server.\n");
        return false;
    }
    printf("Sending postfix to server\n");
    int err = send(client_socket_fd,
                   p_postfix,
                   strnlen(p_postfix, MAX_BUFFER_SIZE),
                   0);
    if (strnlen(p_postfix, MAX_BUFFER_SIZE) > err)
    {
        fprintf(stderr,
                "Unable to send message to server. [%s]\n",
                strerror(errno));
        return false;
    }
    char response[MAX_BUFFER_SIZE] = { 0 };
    printf("Waiting for receive\n");
    err = recv(client_socket_fd, &response, MAX_BUFFER_SIZE, 0);
    if (0 == err)
    {
        fprintf(stderr,
                "Connection to server lost. [%s]\n",
                strerror(errno));
        return false;
    }
    else if (0 > err)
    {
        fprintf(stderr,
                "Unable to receive message on socket. [%s]\n",
                strerror(errno));
        return false;
    }
    printf("Server responded with: \n%s\n", response);
    return true;
} /* send_postfix */
EDIT: https://github.com/TheStaplergun/Problem-Code
I uploaded the code to this repo, removed the need for the extraneous files I use, and filled them in with placeholders.
You can recreate this problem by starting the server with ./postfix_server -p 8888 -n 2 and triggering the client issue in another terminal with for i in {1..4}; do ./postfix_client -i 127.0.0.1 -p 8888 -e "3 + $i" &> $i.txt & done
The output of each client is forcefully flushed because of the setbuf at the top of the client. Run it and see if any instances hang; if not, run the command again. Just run ps, see if one of them is hanging, and look at the resulting text file. You will see it is stuck at the receive call.
If you SIGINT the server (CTRL+C), the client that was stuck closes with a "Connection reset by peer" response from the server, so the server does still have that file descriptor locked up somewhere.
I believe a race condition is happening somehow, because it only happens randomly.
A curious thing is that it only happens ONCE PER SERVER INSTANCE.
If I kill the hung instance and do it again 10000 times, it never hangs again until the server is restarted.
For some reason, even with the following code flow inside the server, some clients "fall through the cracks" and get stuck in limbo. (They never get handled by the server, so they just hang after the connection is accepted.)
There may be other issues, but the first one I see is that the main loop does not ensure that a new connection has actually been picked up by a handler thread before it tries to hand off the next one. Even if there are handler threads already blocked on the CV when a new connection is accepted, it is possible for the main server thread to signal the CV, loop back around, accept another connection, reacquire the mutex, and overwrite the new-connection FD before any handler thread picks up the previous one. The chances of that increase if you have more threads than cores.
Note that this will also interfere with your semaphore-based counting of available handlers -- you decrement the semaphore for every connection accepted, but you increment it again only for those that are successfully handled.
There are various ways to make the main server thread wait for the new connection to be picked up by a handler. One group involves the server waiting on a CV itself and relying on a handler to signal it after picking up the connection; another, perhaps simpler, approach uses a semaphore to similar effect. But I would suggest not waiting at all: create a thread-safe queue of accepted connections, as in the sketch below, so the server never has to block. That would even allow queueing more connections than there are presently available handlers, if that would be useful to you.
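For illustration, a minimal sketch of such a queue, using a fixed-size ring buffer guarded by a mutex and two condition variables (the names and the capacity are mine, not from your code):

#include <pthread.h>
#include <stddef.h>

#define FD_QUEUE_CAP 64  /* illustrative capacity */

typedef struct fd_queue_t {
    int             fds[FD_QUEUE_CAP];
    size_t          head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    pthread_cond_t  not_full;
} fd_queue_t;

/* Called by the accept loop; blocks only if the queue is full. */
void fd_queue_push(fd_queue_t* q, int fd)
{
    pthread_mutex_lock(&q->lock);
    while (FD_QUEUE_CAP == q->count)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->fds[q->tail] = fd;
    q->tail = (q->tail + 1) % FD_QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Called by handler threads; blocks until a connection is available. */
int fd_queue_pop(fd_queue_t* q)
{
    pthread_mutex_lock(&q->lock);
    while (0 == q->count)
        pthread_cond_wait(&q->not_empty, &q->lock);
    int fd = q->fds[q->head];
    q->head = (q->head + 1) % FD_QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return fd;
}

A real version would also need a shutdown path -- for example, checking b_running in the wait loops and broadcasting on both CVs -- mirroring the escape hatch your handlers already have for the 0 == thread_client_fd case.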

Correctly close file descriptor when opened through network mount

I'm currently trying to figure out how to correctly close a file descriptor when it points to a remote file and the connection is lost.
I have a simple example program that opens a file descriptor on an sshfs-mounted folder and starts writing to the file.
I'm not able to find how to handle the case when the connection is lost.
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

void *write_thread(void* arg);

int main()
{
    pthread_t thread;
    int fd = -1;
    if (-1 == (fd = open("/mnt/testfile.txt", O_CREAT | O_RDWR | O_NONBLOCK, S_IRWXU)))
    {
        fprintf(stderr, "Error opening file : %m\n");
        return EXIT_FAILURE;
    }
    else
    {
        if (0 != pthread_create(&thread, NULL, write_thread, &fd))
        {
            fprintf(stderr, "Error launching thread : %m\n");
            return EXIT_FAILURE;
        }
        fprintf(stdout, "Waiting 10 seconds before closing\n");
        sleep(10);
        if (0 > close(fd))
        {
            fprintf(stderr, "Error closing file descriptor: %m\n");
        }
    }
    return EXIT_SUCCESS;
}

void *write_thread(void* arg)
{
    int fd = *(int*)arg;
    int ret;
    while (1)
    {
        fprintf(stdout, "Write to file\n");
        if (0 > (ret = write(fd, "Test\n", 5)))
        {
            fprintf(stderr, "Error writing to file : %m\n");
            if (errno == EBADF)
            {
                if (-1 == close(fd))
                {
                    fprintf(stderr, "Close failed : %m\n");
                }
                return NULL;
            }
        }
        else if (0 == ret)
        {
            fprintf(stderr, "Nothing happened\n");
        }
        else
        {
            fprintf(stderr, "%d bytes written\n", ret);
        }
        sleep(1);
    }
}
When the connection is lost (i.e. I unplug the Ethernet cable between my boards), the close in the main thread always blocks, whether I use the O_NONBLOCK flag or not.
The write call sometimes fails immediately with an EBADF error and sometimes continues for a long time before failing.
My problem is that the write call doesn't always fail when the connection is lost, so I can't trigger the event in the thread, and I also can't trigger it from the main thread because close blocks forever.
So my question is: how to correctly handle this case in C?
question is: how to correctly handle this case in C?
Simply put, you cannot. File handles are designed to be unified and simple, no matter where they point. When a device is mounted and the connection to it (physical or virtual) goes down, things become tricky even at the command-line level.
There is a fundamental problem with remote filesystems, where on the one hand you have to cache things in order for performance remain at a usable level, and on the other hand caching in multiple clients can lead to conflicts that are not seen by the server.
NFS, for example, chooses caching by default and if the cache is dirty, it will simply hang until the connection resumes.
Documentation for sshfs suggests similar behavior.
From grepping sshfs' source code, it seems that it doesn't support O_NONBLOCK at all.
None of that has anything to do with C.
IMO your best option is to switch to NFS and mount with e.g. -o soft -o timeo=15 -o retrans=1.
This could cause data corruption/loss in certain situations when there is a network disconnect, mainly when there are multiple clients or when the client crashes, but it does support O_NONBLOCK and in any case will return EIO if the connection is lost while a request is in-flight.
After some digging around, I found that the SSH mount can be configured to drop the connection and disconnect from the server if nothing happens.
Setting ServerAliveInterval X on the client side disconnects if the server is unresponsive after X seconds.
Setting ClientAliveInterval X on the server side disconnects if the client is unresponsive after X seconds.
ServerAliveCountMax Y (client side) and ClientAliveCountMax Y (server side) can also be used to retry Y times before dropping the connection.
With this configuration applied, the sshfs mount is automatically removed by Linux when the connection becomes unresponsive.
With this configuration, the write call fails first with Input/output error and then with Transport endpoint is not connected.
This is enough to detect that the connection is lost, and thus to clean up the mess before exiting.
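For reference, roughly where each knob goes (the 15 and 3 are just example values; sshfs passes unrecognized -o options through to the underlying ssh):

# Client side: pass the keepalive options through the sshfs mount command
sshfs -o ServerAliveInterval=15 -o ServerAliveCountMax=3 user@host:/remote /mnt

# Server side: /etc/ssh/sshd_config
ClientAliveInterval 15
ClientAliveCountMax 3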

IOCTL: invalid argument for HDIO_GET_IDENTITY

I wrote a program to get the details of a hard disk drive using the HDIO_ ioctl calls.
While writing the program, I'm referring to Documentation/ioctl/hdio.txt in the kernel source (2.6.32).
Here is the main part of my code:
unsigned char driveid[512];
fd = open("/dev/sda", O_RDONLY); // validated fd.
retval = ioctl(fd, HDIO_GET_IDENTITY, &driveid);
if (retval < 0) {
    perror("ioctl(HDIO_GET_IDENTITY)");
    exit(3);
}
When I run the above code (as root), I get the following error:
ioctl(HDIO_GET_IDENTITY): Invalid argument
What is wrong in the program? Why am I getting this error?
Additional Info: OS: CentOS-6.5, kernel version: 2.6.32, IA:x86_64 (running on VMware).
The HDIO_GET_IDENTITY ioctl() doesn't take a raw character buffer as its third argument; it uses a struct defined in linux/hdreg.h:
struct hd_driveid driveid;
fd = open("/dev/sda", O_RDONLY); // validated fd.
retval = ioctl(fd, HDIO_GET_IDENTITY, &driveid);
if (retval < 0) {
    perror("ioctl(HDIO_GET_IDENTITY)");
    exit(3);
}
This way it should work. Be aware that it only works for IDE/SATA drives; SCSI is not supported.
If you are wondering how to get at the information after ioctl() has returned successfully, I suggest going through
http://lxr.free-electrons.com/source/include/linux/hdreg.h?v=2.6.36
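If you just want to sanity-check the result, here is a small self-contained example (field names per struct hd_driveid in linux/hdreg.h; note these strings are fixed-width and space-padded, not NUL-terminated):

#include <fcntl.h>
#include <linux/hdreg.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct hd_driveid id;
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0 || ioctl(fd, HDIO_GET_IDENTITY, &id) < 0) {
        perror("HDIO_GET_IDENTITY");
        return EXIT_FAILURE;
    }
    /* model/serial_no/fw_rev are fixed-width, space-padded byte arrays,
     * so cap the width with a precision instead of trusting a NUL. */
    printf("Model:    %.40s\n", id.model);
    printf("Serial:   %.20s\n", id.serial_no);
    printf("Firmware: %.8s\n",  id.fw_rev);
    close(fd);
    return EXIT_SUCCESS;
}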

Pseudo terminal problems (Mac/Linux): SIGTTOU & Inappropriate ioctl

I am working on a pseudo-terminal library. The code is implemented in C and is used by a web-based terminal. It works as long as I do not use sudo or login.
This is the error I get when I run the server on a Mac:
sh-3.2$ sudo ls
Password:
[1]+ Stopped(SIGTTOU)
sh-3.2$
The above works on Linux:
$ sudo ls
readme.txt
However, I get the following on Linux with sudo bash:
$ sudo bash
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
]0;root#ubuntu: /tmproot#ubuntu:/tmp#
Note: the above works, but I have no job control.
I have probably forgotten to set some controlling bits on the terminal, but Google has not been very helpful in finding this.
Also, do you know of any good books that explain pseudo-terminal management in great detail?
I have the setsid call, but I am not using openpty. I use the following code when opening the pty:
static int createPty(lua_State* L, char* ttyName, int* pty)
{
    *pty = getpt();
    if (*pty < 0 || grantpt(*pty) < 0 || unlockpt(*pty) < 0)
        return lDoErr(L, "Cannot open PTY: %s", strerror(errno));
    if (ptsname_r(*pty, ttyName, PTY_NAME_SIZE-1))
        return lDoErr(L, "ptsname_r: %s", strerror(errno));
    return 0;
}
I have edited the code below and it now works. The reason my first version did not work was that I tried to create two PTY channels; I wanted to be able to differentiate between stdout and stderr, but the Linux kernel does not allow multiple TIOCSCTTY calls.
static int
childOpenTTY(const char* ttyName)
{
    struct termios termbuf;
    int fd = open(ttyName, O_RDWR);
    if (fd < 0)
        doClientError("open %s: %s", ttyName, strerror(errno));
    tcsetpgrp(fd, getpid());
    ioctl(fd, TIOCSCTTY, NULL);
    tcgetattr(fd, &termbuf);
    cfmakeraw(&termbuf); /* turn off NL to CR/NL mapping on output. */
    tcsetattr(fd, TCSANOW, &termbuf);
    return fd;
}

if ((ret = createPty(L, ttyName, &te->pty)) != 0)
    return ret;
if ((te->pid = zzbafork()) < 0)
    return lDoErr(L, "fork: %s", strerror(errno));
if (te->pid == 0)
{   /* Child process */
    static const char efmt[] = {"Cannot set '%s' (dup2 err)"};
    int fd;
    if (setsid() < 0) /* make new process group */
        doClientError("setsid: %s", strerror(errno));
    fd = childOpenTTY(ttyName);
    if (dup2(fd, STDIN_FILENO) != STDIN_FILENO)
        doClientError(efmt, "stdin");
    if (dup2(fd, STDOUT_FILENO) != STDOUT_FILENO)
        doClientError(efmt, "stdout");
    if (dup2(fd, STDERR_FILENO) != STDERR_FILENO)
        doClientError(efmt, "stderr");
    if (fd != STDIN_FILENO && fd != STDOUT_FILENO && fd != STDERR_FILENO)
        close(fd);
    execve(cmd, (char**)cmdArgv, environ);
    /* execve should not return, unless exec of cmd failed */
    doClientError("Executing %s failed: %s", cmd, strerror(errno));
}
It's hard to be sure since there's no actual code shown here, but I suspect you're running into POSIX-style "session" management. You need to execute a setsid call, then open the pty (slave side) such that it becomes the controlling terminal. The openpty and login_tty routines do the low-level grunge work for you; are you using those? A minimal sketch of that approach follows.
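Here is a rough sketch (on Linux, openpty() lives in <pty.h> and login_tty() in <utmp.h>, and you link with -lutil; on a Mac both come from <util.h>; the bash invocation is just an example):

#include <stdlib.h>
#include <unistd.h>
#include <pty.h>   /* openpty(); on macOS use <util.h> instead */
#include <utmp.h>  /* login_tty() on Linux */

pid_t spawn_on_pty(int *master_out)
{
    int master, slave;
    if (openpty(&master, &slave, NULL, NULL, NULL) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: login_tty() calls setsid(), makes the slave the
         * controlling terminal (TIOCSCTTY), and dups it onto
         * stdin/stdout/stderr -- exactly the bits job control needs. */
        close(master);
        if (login_tty(slave) < 0)
            _exit(127);
        execl("/bin/bash", "bash", "-l", (char *)NULL);
        _exit(127); /* only reached if exec fails */
    }
    /* Parent: talk to the child through the master side. */
    close(slave);
    *master_out = master;
    return pid;
}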

Sending while receiving in C

I've written a piece of code that runs on my server with multiple threads.
The problem is that it doesn't send data while it's receiving on the other socket.
So if I send something from client 1 to client 2, client 2 only receives it if he sends something himself (and thereby jumps out of the recv function). How can I solve this?
/* Thread */
while (!stop_received) {
    nr_bytes_recv = recv(s, buffer, BUFFSIZE, 0);
    if (strncmp(buffer, "SEND", 4) == 0) {
        char *message = "Text asads \n";
        rv = send(users[0].s, message, strlen(message), 0);
        rv = send(users[1].s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    } else {
        char *message = "Unknown command \n";
        rv = send(s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    }
}
To be a little more specific, there are a few types of I/O. What you're doing currently is called blocking I/O. In general that means that when you call send or recv, the operation will "block" until it has completed.
In contrast to that, there is what is known as non-blocking I/O. In this I/O model an operation returns immediately if it's unable to complete. Typically the select function is used with this I/O model.
You can see an example program at the Select Tutorial; the full source code is at the bottom of the page.
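Here's a trimmed-down sketch of that select-based pattern, adapted to your chat-style server (listener setup and error handling omitted; client_fds and max_clients are illustrative):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* One select() pass: watch the listener and every client at once, so a
 * recv() on one socket can never stall sends to the others. */
void serve_once(int listen_fd, int client_fds[], int max_clients)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listen_fd, &readfds);
    int maxfd = listen_fd;

    for (int i = 0; i < max_clients; i++) {
        if (client_fds[i] > 0) {
            FD_SET(client_fds[i], &readfds);
            if (client_fds[i] > maxfd)
                maxfd = client_fds[i];
        }
    }

    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
        return;

    if (FD_ISSET(listen_fd, &readfds)) {
        /* New connection: accept() it and store the fd in client_fds. */
    }
    for (int i = 0; i < max_clients; i++) {
        if (client_fds[i] > 0 && FD_ISSET(client_fds[i], &readfds)) {
            /* Readable without blocking: recv() the command here and
             * send() replies to the other clients. */
        }
    }
}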
As others have noted, your other option is to use threads.
Your code will block on the recv() call. Either write a multi-threaded application, or investigate the use of the select() function.
Put send and receive in separate threads.
I notice that you are using perror() (the POSIX error function), which leads me to believe you are on a POSIX operating system, which makes me suspect GNU/Linux.
select() is portable, poll() is POSIX-centric, and epoll() is Linux-centric. If you're using GNU/Linux, I strongly suggest avoiding select() and using:
poll() if you are polling only a few dozen file descriptors
epoll() if you need to scale to thousands of connections and it's available
If your application need not be portable, and no requirement prohibits using extensions, use poll() or epoll(); a minimal poll() skeleton is sketched below. Once you learn how select() works, you'll be very happy to get rid of it, especially for anything that has to scale to serve many clients.
If portability is a requirement, check whether poll() or epoll() exists during your build configuration and use either in favor of select().
Note that epoll() did not appear until Linux 2.5(something), so it's best to get used to using both.
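For comparison, a minimal poll()-based skeleton (same caveats: setup and error handling trimmed; the fds array is whatever set of sockets you track):

#include <poll.h>
#include <unistd.h>

/* Watch a small set of sockets for readability with poll(). */
void poll_once(struct pollfd fds[], nfds_t nfds)
{
    /* Ask for POLLIN on every descriptor we track. */
    for (nfds_t i = 0; i < nfds; i++)
        fds[i].events = POLLIN;

    if (poll(fds, nfds, -1) <= 0)   /* -1: block until something is ready */
        return;

    for (nfds_t i = 0; i < nfds; i++) {
        if (fds[i].revents & (POLLHUP | POLLERR)) {
            close(fds[i].fd);       /* peer went away; drop the socket */
            fds[i].fd = -1;         /* poll() ignores negative fds */
        } else if (fds[i].revents & POLLIN) {
            /* recv() here will not block; handle the message and send(). */
        }
    }
}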
You should separate the code into two threads, one transmitter and one receiver.
Something like this:
/* 1st Thread */
while (!stop_received) {
    nr_bytes_recv = recv(s, buffer, BUFFSIZE, 0);
}

/* 2nd Thread */
while (!stop_received) {
    if (strncmp(buffer, "SEND", 4) == 0) {
        char *message = "Text asads \n";
        rv = send(users[0].s, message, strlen(message), 0);
        rv = send(users[1].s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    } else {
        char *message = "Unknown command \n";
        rv = send(s, message, strlen(message), 0);
        if (rv < 0) {
            perror("Error sending");
            exit(EXIT_FAILURE);
        }
    }
}
The concurrency will bring some issues of its own, like synchronizing access to the buffer variable.
There are two ways of achieving the goal you want:
1.) Implement the sending and receiving code in different threads. There will be some issues, though: a growing number of clients may get you into trouble handling the code, and there will be some concurrency problems (as mentioned by pcent).
You could go for non-blocking sockets, but I suggest not doing so, as I hope you don't want a CPU hog.
2.) The other way is to use the select() function, which will let you monitor multiple sockets of different types at the same time. For more description of select(), you can google it. :)
