When does a thread get its associated access token?

My problem could also be stated as: why am I getting ERROR_NO_TOKEN from a call to OpenThreadToken?
The faulting piece is:
hMainThread = OpenThread(THREAD_QUERY_INFORMATION, FALSE, threadID);
if (hMainThread == NULL)
{
    printf("Couldn't open thread: %lu\n", GetLastError());
    return -1;
}
if (!OpenThreadToken(hMainThread, TOKEN_READ, FALSE, &hMainThreadToken))
{
    printf("Couldn't open thread token: %lu\n", GetLastError());
    return -1;
}
I am getting the second error message, with code 1008. The process that owns the thread was started with runas /user:someoneelse.
I believe I misunderstand something about impersonation. Does runas not impersonate? A funny thing: I tried this code on several main thread IDs on my system, and it worked for the main thread of taskmgr.exe. So the code is probably fine, and so is the behaviour of Windows, leaving me thinking "if you use runas, your main thread automatically gets an access token set, right?", which is probably the only thing that is wrong here. So when does a thread get its associated access token?

RunAs does not use thread impersonation; the whole process runs as the specified user.
When OpenThreadToken fails with ERROR_NO_TOKEN, you usually just fall back to OpenProcessToken.
See this topic on MSDN for the ways a thread can impersonate a user/client. ImpersonateNamedPipeClient is perhaps the most commonly used function, especially when a service needs to impersonate the client.
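A minimal sketch of that fallback for a thread in another process (the helper name and the TOKEN_READ access level are illustrative):

#include <windows.h>

// Try the thread token first; on ERROR_NO_TOKEN (1008) the thread is not
// impersonating, so query the token of the owning process instead.
// processID is assumed to be the ID of the process that owns the thread.
HANDLE GetThreadOrProcessToken(HANDLE hThread, DWORD processID)
{
    HANDLE hToken = NULL;

    if (!OpenThreadToken(hThread, TOKEN_READ, FALSE, &hToken))
    {
        if (GetLastError() != ERROR_NO_TOKEN)
            return NULL;  // a real failure, not just "no impersonation token"

        HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, processID);
        if (hProcess == NULL)
            return NULL;

        if (!OpenProcessToken(hProcess, TOKEN_READ, &hToken))
            hToken = NULL;
        CloseHandle(hProcess);
    }
    return hToken;
}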


Let a daemon simulate keypress with xdo

I'm trying to make a daemon simulate a keypress. I already got it working for a non-daemon process.
#include <xdo.h>

int main()
{
    xdo_t *x = xdo_new(NULL);  // NULL: use the $DISPLAY environment variable
    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    xdo_free(x);
    return 0;
}
If I try the same code in my daemon, I get the following error:
Error: Can't open display: (null)
Is there a way to still make it work with xdo or something else?
Your process must have the $DISPLAY environment variable set, specifying the X session to send these keys to, and yours doesn't seem to have it set (hence "Can't open display: (null)").
It also needs the necessary privileges to talk to that display: access to the ~/.Xauthority file of the session's owner, and to the sockets in /tmp/.X11-unix.
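A minimal sketch of that idea; the display name ":0" and the .Xauthority path are assumptions for a typical single-user session:

#include <stdlib.h>
#include <xdo.h>

int main(void)
{
    // A daemon inherits no X11 environment, so set it explicitly.
    setenv("DISPLAY", ":0", 1);
    setenv("XAUTHORITY", "/home/someuser/.Xauthority", 1);

    xdo_t *x = xdo_new(NULL);   // now picks up $DISPLAY
    if (x == NULL)
        return 1;
    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    xdo_free(x);
    return 0;
}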

libmpdclient: detect lost connection to MPD daemon

I'm writing a plugin for my statusbar to print the MPD state, currently using the libmpdclient library. It has to be robust enough to handle lost connections in case MPD is restarted, but simply checking with mpd_connection_get_error on the existing mpd_connection object does not work: it detects an error only when the initial mpd_connection_new fails.
This is a simplified version of the code I'm working with:
#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection* m_connection = NULL;
    struct mpd_status* m_status = NULL;
    char* m_state_str;

    m_connection = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // this check works only on start up (i.e. when mpd_connection_new failed),
        // not when the connection is lost later
        if (mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
            fprintf(stderr, "Could not connect to MPD: %s\n",
                    mpd_connection_get_error_message(m_connection));
            mpd_connection_free(m_connection);
            m_connection = NULL;
        }

        m_status = mpd_run_status(m_connection);

        if (mpd_status_get_state(m_status) == MPD_STATE_PLAY) {
            m_state_str = "playing";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_STOP) {
            m_state_str = "stopped";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_PAUSE) {
            m_state_str = "paused";
        } else {
            m_state_str = "unknown";
        }

        printf("MPD state: %s\n", m_state_str);
        sleep(1);
    }
}
When MPD is stopped during the execution of the above program, it segfaults with:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb2fd9557e0 in mpd_status_get_state () from /usr/lib/libmpdclient.so.2
The only way I can think of to make the program safe is to establish a new connection in every iteration, which I was hoping to avoid. But then what if the connection is lost between individual calls to libmpdclient functions? How often, and more importantly how exactly, should I check if the connection is still alive?
The only way I could find that really works (beyond reestablishing a connection with each run) is using the idle command. If mpd_recv_idle (or mpd_run_idle) returns 0, it is an error condition, and you can take that as a cue to free your connection and run from there.
It's not a perfect solution, but it does let you keep a live connection between runs, and it helps you avoid segfaults (though I don't think you can avoid them completely: if you send a command and mpd is killed before you recv it, I'm pretty sure the library still segfaults). I'm not sure there is a better solution. It would be fantastic if there were a reliable way to detect via the API whether your connection is still alive, but I can't find anything of the sort. libmpdclient doesn't seem well suited to very long-lived connections that have to deal with mpd instances going up and down over time.
Another lower-level option is to use sockets to interact with MPD through its protocol directly, though in doing that you'd likely reimplement much of libmpdclient itself anyway.
EDIT: Unfortunately, the idle command does block until something happens, and can sit blocking for as long as a single audio track will last, so if you need your program to do other things in the interim, you have to find a way to implement it asynchronously or in another thread.
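A minimal sketch of the idle-based loop described above; the reconnect and retry details are illustrative:

#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection *conn = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // Free a dead connection and try to establish a fresh one.
        if (conn == NULL || mpd_connection_get_error(conn) != MPD_ERROR_SUCCESS) {
            if (conn != NULL)
                mpd_connection_free(conn);
            sleep(1);                                   /* wait before retrying */
            conn = mpd_connection_new(NULL, 0, 30000);  /* reconnect */
            continue;
        }

        /* Blocks until something changes on the server; a return of 0
         * signals an error, e.g. a lost connection. */
        enum mpd_idle events = mpd_run_idle(conn);
        if (events == 0)
            continue;  /* the check at the top frees and reconnects */

        printf("something changed; query the status here\n");
    }
}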
Assuming "conn" is a connection created with "mpd_connection_new":
if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
    // mpd_connection_get_error_message(conn)
    // will return "Connection closed by the server"
}
You can run this check after almost any libmpdclient call, including "mpd_recv_idle" or (as per your example) "mpd_run_status".
I'm using libmpdclient 2.18, and this certainly works for me.
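Applied to the status loop from the question, that check could look like this sketch (the reconnect handling is illustrative):

struct mpd_status *status = mpd_run_status(m_connection);
if (status == NULL) {
    // The command failed; inspect the connection error instead of
    // dereferencing the NULL status (which is what segfaulted above).
    if (mpd_connection_get_error(m_connection) == MPD_ERROR_CLOSED) {
        mpd_connection_free(m_connection);
        m_connection = mpd_connection_new(NULL, 0, 30000);  // reconnect
    }
} else {
    // ... use mpd_status_get_state(status) as before ...
    mpd_status_free(status);
}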

libssh2 session cleanup without blocking?

My app uses libssh2 to communicate over SSH, and generally works fine. One problem I have is when the remote host dies unexpectedly -- the remote host in this case is an embedded device that can lose power at any time, so this isn't uncommon.
When that happens, my app detects that the remote computer has stopped responding to pings, and tears down the local end of the SSH connection like this:
void SSHSession::CleanupSession()
{
    if (_uploadFileChannel)
    {
        libssh2_channel_free(_uploadFileChannel);
        _uploadFileChannel = NULL;
    }
    if (_sendCommandsChannel)
    {
        libssh2_channel_free(_sendCommandsChannel);
        _sendCommandsChannel = NULL;
    }
    if (_session)
    {
        libssh2_session_disconnect(_session, "bye bye");
        libssh2_session_free(_session);
        _session = NULL;
    }
}
Pretty straightforward, but the problem is that the libssh2_channel_free() calls can block for a long time waiting for the remote end to respond to the "I'm going away now" message, which it will never do because it's powered off... but in the meantime, my app is frozen (blocked in the cleanup-routine), which isn't good.
Is there any way (short of hacking libssh2) to avoid this? I'd like to just tear down the local SSH data structures, and never block during this tear-down. (I suppose I could simply leak the SSH session memory, or delegate it to a different thread, but those seem like ugly hacks rather than proper solutions)
I'm not experienced with libssh2, but perhaps you can get different behavior out of it by using libssh2_session_disconnect_ex with a different disconnect reason: SSH_DISCONNECT_CONNECTION_LOST.
libssh2_session_disconnect is equivalent to calling libssh2_session_disconnect_ex with the reason SSH_DISCONNECT_BY_APPLICATION. If libssh2 knows that the connection is lost, maybe it won't try to talk to the other side.
http://libssh2.sourceforge.net/doc/#libssh2sessiondisconnectex
http://libssh2.sourceforge.net/doc/#sshdisconnectcodes
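Sketched against the CleanupSession() above (whether this actually avoids the blocking wait is untested):

if (_session)
{
    // Tell libssh2 the transport is already gone;
    // SSH_DISCONNECT_CONNECTION_LOST is a standard disconnect code.
    libssh2_session_disconnect_ex(_session, SSH_DISCONNECT_CONNECTION_LOST,
                                  "connection lost", "");
    libssh2_session_free(_session);
    _session = NULL;
}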
Set the session to non-blocking mode and take control of reading data from the socket yourself by registering a read callback with libssh2_session_callback_set, using LIBSSH2_CALLBACK_RECV for cbtype:
void *libssh2_session_callback_set(LIBSSH2_SESSION *session, int cbtype, void *callback);
If you can't read data from the socket because of the error ENOTCONN, the remote end has closed the socket or the connection has failed; return -ENOTCONN from your callback function in that case.
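A sketch of such a callback, assuming libssh2's convention of returning negative errno values from I/O callbacks:

#include <errno.h>
#include <sys/socket.h>
#include <libssh2.h>

// Receive callback: report a dead peer as -ENOTCONN so libssh2 gives up
// instead of waiting for data that will never arrive.
static ssize_t recv_cb(libssh2_socket_t sock, void *buffer, size_t length,
                       int flags, void **abstract)
{
    (void)abstract;
    ssize_t n = recv(sock, buffer, length, flags);
    if (n < 0 && errno == ENOTCONN)
        return -ENOTCONN;  // remote end closed the socket / connection failed
    return n;
}

// After creating the session:
//   libssh2_session_set_blocking(_session, 0);
//   libssh2_session_callback_set(_session, LIBSSH2_CALLBACK_RECV, (void *)recv_cb);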

sem_unlink permission denied

I have a school assignment where we are supposed to solve the readers-writers problem. As I found out earlier, sem_init is not supported on OS X, so I went with sem_open. However, the code below does not work as expected.
if (sem_open(sem_reader, O_CREAT, 1, 0600) == SEM_FAILED)
    perror("sem_reader");
The semaphore IS created, but when I try to unlink it with following code:
if (sem_unlink(sem_reader) != 0)
    perror("unlink_sem_reader");
I get the output:
unlink_sem_reader: Permission denied
I tried to play with permissions like 0700, 0660, etc., but I always get permission denied. Both calls are wrapped in functions and nothing else is done with them. I am not sure where the problem is. My question is:
Did I set permissions incorrectly or the problem is somewhere else?
EDIT: Working in Xcode / 10.7
I think you swapped the mode and value arguments of sem_open.
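With the arguments in the documented order sem_open(name, oflag, mode, value), the call becomes:

// mode (0600) comes before the initial value (1)
if (sem_open(sem_reader, O_CREAT, 0600, 1) == SEM_FAILED)
    perror("sem_reader");

With mode 1 (i.e. --------x), the semaphore was created without read or write permission even for its owner, which is presumably why sem_unlink then failed with permission denied.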

correct way to run setuid programs in C

I have an executable with permissions 4750 (setuid). Two users exist on my Linux system: root and appz. The process inherits the permissions of a process manager that runs as the "appz" user.
I have two basic routines:
/* ruid and euid are globals captured at startup:
   ruid = getuid(); euid = geteuid(); */

/* regain root permissions */
void do_root (void)
{
    int status;
    status = seteuid (euid);
    if (status < 0) {
        exit (status);
    }
}

/* undo root permissions */
void undo_root (void)
{
    int status;
    status = seteuid (ruid);
    if (status < 0) {
        exit (status);
    }
    status = setuid(ruid);
    if (status < 0) {
        exit (status);
    }
}
My flow is the following:
int main() {
    undo_root();
    /* do some stuff */
    do_root();
    bind(port 80);  /* needs root perm */
    undo_root();
    while (1) {
        accept commands();
        if (command needs root user access)
        {
            do_root();
            execute();
            undo_root();
        }
    }
}
As you can see I want to execute some commands as root. I am trying to drop permissions temporarily and if the tasks needs root access I wrap the command between a do_root and undo_root call.
However it seems that my program is not working.
What is the canonical way to do it?
The old-school way is to use setreuid() in both do_root and undo_root to swap the real and effective UIDs:
setreuid(geteuid(), getuid());
This is perfectly acceptable if the program is small enough to do a complete security audit.
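For instance, a single helper could serve as both do_root and undo_root; this is a minimal sketch of the swap idea:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Swap real and effective UIDs. For a setuid-root binary started by an
 * ordinary user, the first call drops root (euid = user), the next call
 * regains it (euid = root), and so on. */
static void swap_uids(void)
{
    if (setreuid(geteuid(), getuid()) < 0) {
        perror("setreuid");
        exit(EXIT_FAILURE);
    }
}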
The new-school way is far more complex and involves fork()ing off a child that accepts directives for what to do as root, and then calling setuid(getuid()) in the parent to drop root permanently. The child is responsible for validating all directives it receives. For a large enough program, this reduces the amount of code that must be security-audited, and it allows the user to manage the process with job control, kill it, etc.
There is a paper 'Setuid Demystified' by Hao Chen, David Wagner, and Drew Dean. It was presented at USENIX 2002. It describes how setuid() and transitions work in great detail (correct as of 2002). It is well worth reading (several times - I must be a year or two overdue on a re-read of it).
Fundamentally, as Petesh noted in a comment, when a process with EUID 0 does setuid(nuid) with nuid != 0, there is no way back to root (EUID 0) privileges. And, indeed, it is vital that it is so. Otherwise, when you login, the root process that logs you in could not limit you to your own privileges - you'd be able to get back to root. Saved UID complicates things, but I don't believe it affects the one-way trap of EUID 0 doing setuid().
The setuid man page says the following:
... a set-user-ID-root program wishing to temporarily drop root
privileges, assume the identity of a non-root user, and then regain
root privileges afterwards cannot use setuid()
Meaning that you cannot use setuid(). You have to use seteuid() and, possibly, setreuid(). See Setuid Program Example for more details.
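A minimal sketch of the seteuid()-based pattern from that example (assuming a setuid-root binary; error handling trimmed for brevity):

#include <unistd.h>

static uid_t ruid, euid;   /* captured once at startup */

int main(void)
{
    ruid = getuid();       /* the invoking user */
    euid = geteuid();      /* 0 for a setuid-root binary */

    seteuid(ruid);         /* drop root temporarily */
    /* ... unprivileged setup ... */

    seteuid(euid);         /* regain root: works because the saved UID is 0 */
    /* ... bind to port 80, etc. ... */

    seteuid(ruid);         /* drop again; repeat as needed */

    setuid(ruid);          /* finally drop root permanently */
    return 0;
}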
