Libssh remote commands not executing on server - c

Having read the relevant docs and tutorials and found a similar question, I am still unable to proceed. My apologies in advance if this is a common question; I did searches but I wasn't really sure what I was looking for...
I am experimenting with libssh in C on Debian.
rc = ssh_channel_request_exec(channel, "ls -l");
if (rc != SSH_OK) {
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    return rc;
}
This returns SSH_OK, indicating that the command was sent successfully. As I understand from a similar question, that is because the return value only reflects the successful 'sending' of the command; it does not indicate whether the command was successfully executed.
My question is, how can I:
Execute the command (which the above function apparently does not do; it merely sends it)
Listen for its execution
Print the returned output?
I am aware of the ssh_channel_read() function, but as the command never executes, I usually get the output
Read (256) buffered : 0 bytes. Window: 64000

Take a look at examples/exec.c in the libssh source code!
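That example boils down to reading from the channel after the exec request. A minimal sketch in its spirit (error handling trimmed; assumes channel is an open session channel as in the question):
char buffer[256];
int nbytes;

rc = ssh_channel_request_exec(channel, "ls -l");
if (rc != SSH_OK) {
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    return rc;
}

/* SSH_OK only means the exec request was accepted; the command's
   output arrives afterwards and has to be read from the channel. */
nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
while (nbytes > 0) {
    fwrite(buffer, 1, nbytes, stdout);
    nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
}

ssh_channel_send_eof(channel);
ssh_channel_close(channel);
ssh_channel_free(channel);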

Simple buffer overflow via xinetd

I'm trying to make a simple buffer overflow tutorial that runs the program below as a service on port 8000 via xinetd. Code was compiled using
gcc -o bof bof.c -fno-stack-protector
Ubuntu has stack protection turned off as well.
Exploiting locally, i.e.
python -c ---snippet--- | ./bof
is successful, and the hidden function is executed, displaying the text file contents.
However, running it as a service and performing
python -c ---snippet--- | nc localhost 8000
returns nothing when exploiting. Am I missing something here?
#include <stdio.h>

void secret()
{
    int c;
    FILE *file;
    file = fopen("congratulations.txt", "r");
    if (file) {
        while ((c = getc(file)) != EOF)
            putchar(c);
        fclose(file);
    }
}

void textdisplay()
{
    char buffer[56];
    scanf("%s", buffer);
    printf("You entered: %s\n", buffer);
}

int main()
{
    textdisplay();
    return 0;
}
Output is buffered by default. To disable this you can do the following at the top of main:
setbuf(stdout, NULL);
This should fix your issue.
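For reference, a sketch of that fix applied to the program above, so every printf reaches the socket immediately:
int main()
{
    setbuf(stdout, NULL);   /* disable stdout buffering for the xinetd socket */
    textdisplay();
    return 0;
}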
This is an issue that I am running into as well, almost exactly the same.
However, here is one piece that I have found that might be helpful to you. I believe the issue has something to do with xinetd not executing the binary under a terminal with job control.
So what I did was to have xinetd do:
server = /usr/bin/python
server_args = /opt/shell.py
Then within the /opt/shell.py I had:
import pty
pty.spawn("/opt/oflow.elf")
/opt/oflow.elf being my overflowed binary
When I do this, I can actually send and receive data. That's when I run the following command via netcat to try and overflow the service remotely:
printf "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80AAAAAAAAAAAAAAAAAAAAAAAAABCDEFGHIJKLMNOPQ\x7c\xfc\xff\xbf" | nc 192.168.1.2 9000
This does nothing. However, when I test the local version it works PERFECTLY, every time.
Not when it's being wrapped in a Python pty and xinetd.
When I run xinetd pointing directly to /opt/oflow.elf, I get absolutely nothing back from netcat.
So that doesn't exactly answer your question, but it should whittle it down for you.
UPDATED COMPLETE ANSWER:
I figured out why this wasn't working. No need to use Python at all. After every printf statement you must also include:
fflush(stdout);
Otherwise, xinetd doesn't know to send the buffered stdout.
You may also need to do this for stdin:
fflush(stdin);
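Applied to textdisplay() from the question, the stdout fix looks like this sketch (note that fflush(stdin), unlike fflush(stdout), is technically undefined by the C standard, though some platforms tolerate it):
void textdisplay()
{
    char buffer[56];
    scanf("%s", buffer);
    printf("You entered: %s\n", buffer);
    fflush(stdout);   /* force the reply out so xinetd forwards it over the socket */
}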

Using RPCGen to understand RPC

I am trying to understand the basics of RPC using rpcgen. I followed a basic tutorial and wrote the following myrpc.x file:
program MESSAGEPROG {
    version EVALMESSAGEVERS {
        int EVALMESSAGE(string) = 1;
    } = 1;
} = 0x20000002;
I compile it by running
rpcgen -a -C myrpc.x
In the resulting server.c file, I added a printf statement as below:
printf("Message is: %s,\n", *argp);
Then I run make -f Makefile.myrpc and start the server by running myrpc_server. Now when I run the client 'myrpc_client', I get the following message printed by the server:
Message is: H���5�
Now my question is: where does this argument "H���5�" come from, since this is not the argument I am passing when running the client? Also, can someone explain how I can start running more complex programs with rpcgen?
The garbage value comes from the code on line 15 in client.c, where an uninitialized variable is used as the argument for your RPC call. My version of rpc shows an error:
call failed: RPC: Can't encode arguments
15 char * evalmessage_1_arg;
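A minimal fix is to point that variable at real data before the call. A sketch, assuming the clnt handle and the stub names rpcgen generates for this .x file:
char *evalmessage_1_arg = "Hello, server";             /* initialize before the call */
int  *result_1 = evalmessage_1(&evalmessage_1_arg, clnt); /* generated client stub */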
"How do I start running complex programs with rpc?" It' just on you. We cannot say when you need to use rpc. You probably have some reason for what you chose this implementation.
Some use case for rpc is thin client on slow computer, which needs some expensive computation. Client sends data to powerful server, that do the hard work and returns result.

libmpdclient: detect lost connection to MPD daemon

I'm writing a plugin for my statusbar to print the MPD state, currently using the libmpdclient library. It has to be robust enough to properly handle lost connections in case MPD is restarted, but simply checking with mpd_connection_get_error on the existing mpd_connection object does not work; it can detect the error only when the initial mpd_connection_new fails.
This is simplified code I'm working with:
#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection* m_connection = NULL;
    struct mpd_status* m_status = NULL;
    char* m_state_str;

    m_connection = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // this check works only on start up (i.e. when mpd_connection_new failed),
        // not when the connection is lost later
        if (mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
            fprintf(stderr, "Could not connect to MPD: %s\n",
                    mpd_connection_get_error_message(m_connection));
            mpd_connection_free(m_connection);
            m_connection = NULL;
        }

        m_status = mpd_run_status(m_connection);
        if (mpd_status_get_state(m_status) == MPD_STATE_PLAY) {
            m_state_str = "playing";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_STOP) {
            m_state_str = "stopped";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_PAUSE) {
            m_state_str = "paused";
        } else {
            m_state_str = "unknown";
        }

        printf("MPD state: %s\n", m_state_str);
        sleep(1);
    }
}
When MPD is stopped during the execution of the above program, it segfaults with:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb2fd9557e0 in mpd_status_get_state () from /usr/lib/libmpdclient.so.2
The only way I can think of to make the program safe is to establish a new connection in every iteration, which I was hoping to avoid. But then what if the connection is lost between individual calls to libmpdclient functions? How often, and more importantly how exactly, should I check if the connection is still alive?
The only way I could find that really works (beyond reestablishing a connection with each run) is using the idle command. If mpd_recv_idle (or mpd_run_idle) returns 0, it is an error condition, and you can take that as a cue to free your connection and go from there.
It's not a perfect solution, but it does let you keep a live connection between runs, and it helps you avoid segfaults (though I don't think you can completely avoid them, because if you send a command and mpd is killed before you recv it, I'm pretty sure the library still segfaults). I'm not sure if there is a better solution. It would be fantastic if there was a reliable way to detect if your connection was still alive via the API, but I can't find anything of the sort. It doesn't seem like libmpdclient is well-built for very long-lived connections that have to deal with mpd instances that go up and down over time.
Another lower-level option is to use sockets to interact with MPD through its protocol directly, though in doing that you'd likely reimplement much of libmpdclient itself anyway.
EDIT: Unfortunately, the idle command does block until something happens, and can sit blocking for as long as a single audio track will last, so if you need your program to do other things in the interim, you have to find a way to implement it asynchronously or in another thread.
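For illustration, a sketch of that idle-based check (mpd_run_idle returning 0 signals the error condition described above; connection parameters as in the question):
enum mpd_idle events = mpd_run_idle(m_connection);
if (events == 0) {
    /* idle failed: treat the connection as dead and rebuild it */
    mpd_connection_free(m_connection);
    m_connection = mpd_connection_new(NULL, 0, 30000);
}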
Assuming "conn" is a connection created with "mpd_connection_new":
if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
    // mpd_connection_get_error_message(conn)
    // will return "Connection closed by the server"
}
You can run this check after almost any libmpdclient call, including "mpd_recv_idle" or (as per your example) "mpd_run_status".
I'm using libmpdclient 2.18, and this certainly works for me.
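Combining that with the loop from the question, a more defensive iteration might look like this sketch (the body of the while loop; mpd_run_status returns NULL on failure, which is what makes the original code segfault):
m_status = mpd_run_status(m_connection);
if (m_status == NULL) {
    /* the command failed; report the error and rebuild the connection */
    fprintf(stderr, "MPD error: %s\n",
            mpd_connection_get_error_message(m_connection));
    mpd_connection_free(m_connection);
    m_connection = mpd_connection_new(NULL, 0, 30000);
    sleep(1);
    continue;
}
/* ... inspect mpd_status_get_state(m_status) as before ... */
mpd_status_free(m_status);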

Using DHCP libraries results in an infinite loop

I have code that uses the DHCP libraries (package: 4.2.6) to get the hardware address of the DHCP client connected to the system. During this process, after the DHCP objects are initialized, I call dhcpctl_connect() as follows, which results in an infinite loop:
dhcpctl_initialize ();
status = dhcpctl_connect (&connection, "127.0.0.1", 7911, 0);
When I tried to debug the issue, I found that the function "omapi_wait_for_completion" (in omapi/dispatch.c) has a do-while loop that checks the waiter object and its status; the object should change its state to ready to break out of this loop, but that never happens, which results in an infinite loop.
Here I am copying the loop for reference:
do {
    status = omapi_one_dispatch ((omapi_object_t *)waiter, t);
    if (status != ISC_R_SUCCESS)
        return status;
} while (!waiter || !waiter -> ready);
NOTE:
There is no issue when I run the binary generated from the code on the system command line, but when I trigger the same command through an application, we have this issue.
The application that triggers my binary doesn't use the DHCP libraries or files.
Please note that the same binary with the same application works fine with the older DHCP package (3.0.5).
Thanks in advance for your help.

cygwin c sem_init

if((sem_init(sem, 1, 1)) == 1) perror("error initiating sem");
If I include this line of code, my program simply starts and exits. I just started learning how to use semaphores. I'm using Cygwin, and when this line is commented out, the printf's ABOVE it print to the console, but when I include it, nothing happens.
I did the following to get cygserver going:
CYGWIN=server
ran /bin/cygserver-config
ran /usr/sbin/cygserver
For the config it said that cygserver is already running.
And for the cygserver it says:
initializing complete
failed to create named pipe: is the daemon already running?
fatal error on IPC transport: closing down
Any ideas?
I figured out what was wrong. I was doing data = shmat() (where data is a pointer to my shared struct) before I had allocated any memory for data. That, for some reason, was stopping my printf from working.
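For anyone hitting the same thing, a minimal sketch of the order that works (the struct layout is hypothetical; the point is that the shared segment must be allocated and attached before the semaphore inside it is initialized):
#include <stdio.h>
#include <semaphore.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct shared {          /* hypothetical layout for illustration */
    sem_t sem;
};

int main(void) {
    /* allocate and attach the shared segment first... */
    int shmid = shmget(IPC_PRIVATE, sizeof(struct shared), IPC_CREAT | 0600);
    struct shared *data = shmat(shmid, NULL, 0);
    if (data == (void *) -1) {
        perror("shmat");
        return 1;
    }

    /* ...then initialize the semaphore that lives inside it.
       Note sem_init returns -1 on error, not 1. */
    if (sem_init(&data->sem, 1, 1) == -1)
        perror("error initiating sem");

    return 0;
}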
