Run an external .exe on a thread - WPF

I need to run an external exe, "embed.exe", from my WPF project. Here's a snippet:
ProcessStartInfo processInf = new ProcessStartInfo("embed.exe");
processInf.Arguments = @"Some arguments";
processInf.WindowStyle = ProcessWindowStyle.Hidden;
Process run = Process.Start(processInf);
My problem is that it blocks my UI. Is there a way to run embed.exe on a thread, or any other approach that won't block the UI?

OK, try putting your snippet inside a method, then create a new thread and point it at that method. Here's how:
private void EmbedMethod()
{
    ProcessStartInfo processInf = new ProcessStartInfo("embed.exe");
    processInf.Arguments = @"Some arguments";
    processInf.WindowStyle = ProcessWindowStyle.Hidden;
    Process run = Process.Start(processInf);
}

Thread embedThread = new Thread(EmbedMethod);
embedThread.Start();

The process you started runs as a separate OS process, not on the thread your application used to start it.
To terminate your embed.exe process you need to keep a reference to the started Process - in this case the run variable. Then call either run.CloseMainWindow() or run.Kill().
Kill forces the process to terminate, while CloseMainWindow only requests that it close.

Related

Let a daemon simulate keypress with xdo

I'm trying to make a daemon simulate a keypress. I got it already working for a non daemon process.
#include <xdo.h>

int main()
{
    xdo_t *x = xdo_new(NULL);
    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    return 0;
}
If I try the same code for my daemon I get the following error
Error: Can't open display: (null)
Is there a way to still make it work with xdo or something else?
Your process must have the $DISPLAY environment variable set, telling it which X display to send these keys to, and your daemon doesn't seem to have it (hence "Can't open display: (null)").
It also needs the necessary privileges, which means access to the ~/.Xauthority of the owner of the session (usually via $XAUTHORITY) and to the sockets in /tmp/.X11-unix.

CreateJobObject behaviour when already inside a job

My scenario: I have a C program in Windows that, in running time, corresponds to process A; it needs to start another process (process B) and monitor it so that, under some external event (say, lock file removed) it can terminate B and all its eventual children (processes started by B).
My approach was to place B into a job created with CreateJobObject, so that process A can terminate it (together with its children) with TerminateJobObject - and then it can terminate itself.
HANDLE jobHandle = CreateJobObject(NULL, NULL); // create the job
...
res = CreateProcess(NULL, cmdline, ...., &pi);    // create process B
AssignProcessToJobObject(jobHandle, pi.hProcess); // add process B to the job
...
if (...) {
    TerminateJobObject(jobHandle, exitCode); // terminate job: process B and children
    ....
}
This works. Except that, under certain circumstances [*] the process A happens to be already included in a job. In this case CreateJobObject(NULL, NULL) does not create a new job, but it returns the current one - which is not what I want.
How do I create a wholly new job?
I don't want to rely on nested jobs, because I want to support Windows 7.
[*] I'm looking at you, Eclipse - but that does not matter much now.
From comments:
The solution was to add the flag CREATE_BREAKAWAY_FROM_JOB to the CreateProcess call:
res=CreateProcess(NULL,cmdline, NULL, // Process handle not inheritable
NULL, // Thread handle not inheritable
FALSE, // Set handle inheritance to FALSE
CREATE_BREAKAWAY_FROM_JOB, // don't place inside old job
NULL, // Use parent's environment block
NULL, // Use parent's starting directory
&si, // Pointer to STARTUPINFO structure
&pi ); // Pointer to PROCESS_INFORMATION structure
Two caveats:
For this to work, the old job must have the JOB_OBJECT_LIMIT_BREAKAWAY_OK flag enabled - it was in my case.
I had wrongly believed that, in my original code, CreateJobObject(NULL, NULL) returned the old job, and that this was why the new process ended up in the old job - if that were true, this solution would not work. But it was not true: CreateJobObject(NULL, NULL) does return a new job. What actually happened was that AssignProcessToJobObject failed (my fault for not checking the return code) because the newly created process had already been placed in the old job.

How to configure GDB in Eclipse so that all processes keep running, including the one being debugged?

I am new in C programming and I have been trying hard to customize an opensource tool written in C according to my organizational needs.
IDE: Eclipse,
Debugger: GDB,
OS: RHEL
The tool is multi-process: the main process starts first and spawns several child processes using fork(), and they share values at run time.
While debugging in Eclipse (using GDB), I find that only the process being debugged is running; the other processes are suspended. Thus the one running process cannot do its intended job, because the processes it depends on are suspended.
I read that the GDB/MI command "set non-stop on" could keep the other processes running, so I added it to the gdbinit file shown below.
Note: I have overridden the default .gdbinit with another gdbinit, because the default one does not let me debug child processes - the debugger terminates after the main process finishes.
Unfortunately, the debugger stops responding after adding this command. Commenting out the non-stop line lets Eclipse continue its usual debugging of the current process.
Adding: you can see in the image below that only one process is running while the others are suspended.
Can anyone please help me to configure GDB according to my requirement?
Thanks in advance.
OK @n.m.: you were right. I should have taken more time to understand the flow of the code.
The tool first creates 3 processes; the third process then creates 5 threads and wait()s for any child to terminate.
The top 5 entries (highlighted in blue) in the image below are threads, children of process ID 17991.
The first two processes just initiate the tool's basic functionality and then exit(0), as you can see below:
if (0 != (pid = zbx_fork()))
    exit(0);

setsid();
signal(SIGHUP, SIG_IGN);

if (0 != (pid = zbx_fork()))
    exit(0);
That was the reason I was not able to step into these 3 processes: whenever I tried, the whole main process terminated immediately, which in turn terminated all the others.
So I learned that I was supposed to "step into" the threads only. And yes, now I can debug :)
This worked once I removed the MI command "set follow-fork-mode child" and went back to the default .gdbinit with "Automatically debug forked process" enabled.
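For reference, a sketch of the stock GDB settings involved here (this is an assumption - the exact gdbinit from the screenshots is not reproduced in the question):

```
# Hypothetical gdbinit sketch - standard GDB fork-debugging settings
set detach-on-fork off        # keep both parent and child under the debugger
set schedule-multiple on      # resume all inferiors on continue
# set non-stop on             # the command that made the debugger stop responding
# set follow-fork-mode child  # removed, as described above
```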
Thanks everyone for your input. Stackoverflow is an awesome place to learn and share. :)

Underlying mechanism when pausing a process

I have a program that needs to pause and resume another program. To do this, I use the kill function, either from code:
kill(pid, SIGSTOP); // to pause
kill(pid, SIGCONT); // to resume
or from the command line with the equivalent:
kill -STOP <pid>
kill -CONT <pid>
I can trace what's going on with the threads using this code, taken from Mac OS X Internals.
If a program is paused and immediately resumed, the state of threads can show as UNINTERRUPTIBLE. Most of the time, they report as WAITING, which is not surprising and if a thread is doing work, it will show as RUNNING.
What I don't understand is why, when I pause a program and view the states of its threads, they still show as WAITING; I would have expected STOPPED or HALTED.
Can someone explain why they still show as WAITING and when would they be STOPPED or HALTED. In addition, is there another structure somewhere that shows the state of the program and its threads being halted in this way?
"Waiting" is shown in your case because you paused the program rather than terminating it, whereas the Stopped or Halted state usually occurs when a program stops immediately because of a runtime error. As for your second question, I do not think there is another structure that shows the state of the program. Cheers
After researching and experimenting with the available structures, I've discovered that it is possible to show the state of a program being halted by looking at the process suspend count. This is my solution: -
int ProcessIsSuspended(unsigned int pid)
{
    kern_return_t kr;
    mach_port_t task;
    mach_port_t mytask = mach_task_self();

    kr = task_for_pid(mytask, pid, &task);
    // handle error here...

    char infoBuf[TASK_INFO_MAX];
    struct task_basic_info_64 *tbi;
    int infoSize = TASK_INFO_MAX;

    kr = task_info(task, TASK_BASIC_INFO_64, (task_info_t)infoBuf, (mach_msg_type_number_t *)&infoSize);
    // handle error here....

    tbi = (struct task_basic_info_64 *)infoBuf;
    if (tbi->suspend_count > 0) // process is suspended
        return 1;
    return 0;
}
If suspend_count is 0, the program is running, else it is in a paused state, waiting to be resumed.

Two mains in one program

Here is the situation: I need to send data to a neighbour (socket) and then switch to listening mode. I've got a client part in client.c, which just listens, and a server part in server.c, which sends data. Using sockets, I need a main() in both of them. How do I get them to "cooperate" so that the two mains don't result in an error?
Or any other ideas how to solve this issue with sending and listening?
Thanks in advance!
Lucas
You can always create two executables from the sources. Each of them will have its own main.
Or, you can create a single executable and let it fork another process or create another thread. When creating a new thread, you specify the second "main" as the thread function.
When forking, you should create two functions, main_server and main_client, and let the actual main decide which of them to call just after the fork. See this snippet:
int main_server(int argc, char *argv[]) {
    //TODO: complete
    return 0;
}

int main_client(int argc, char *argv[]) {
    //TODO: complete
    return 0;
}

int main(int argc, char *argv[]) {
    //TODO: parse args and build argv_server, argv_client, argc_server, argc_client
    pid_t pid = fork();
    if (pid < 0) {
        //TODO: handle error and leave
    } else if (pid > 0) {
        // parent: run the client here, for example
        main_client(argc_client, argv_client);
        waitpid(pid, NULL, 0); // reap the server child
    } else {
        // child: run the server
        main_server(argc_server, argv_server);
    }
    return 0;
    /* TODO: each of the above calls should be checked for errors */
}
Hope it helps.
Note: it's better to create a separate executable but if you are required to have only one, use the above snippet.
The thing to remember is that these programs will compile into separate binaries that become separate processes. You will start the "server" program (which will run its main) and then the client program (which will run its main). They communicate over the socket you're creating.
Another way to do this is the select() call, used in socket programming on Linux/Unix. With select() you can do both the sending and the listening work in the same main(). Here's a tutorial for this method:
http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html#selectman
Instead of fork()-ing, you put all the sockets into a read set and enter a loop. On Linux/Unix every socket is a file descriptor (FD), and the console (standard input) can be treated as an FD too. So you add the console FD and all the listening sockets to the read set, then wait for data on any of them: when input arrives on the console, select() wakes up and you run the code you've written for it; otherwise the program stays in listening mode until you close it.
This can also be safer than fork(): a fork() in a loop that is not handled properly can turn into a fork bomb, recursively creating processes and exhausting memory, whereas here a single process handles both tasks.
