Taskeng.exe multiple process - batch-file

I've created two batch scripts that I launch via Windows 7's Task Scheduler, and I've noticed that from time to time the execution of one of these scripts freezes. When this happens I get a Taskeng.exe window, and the only way to get rid of it is to kill the corresponding process.
And since I'm running those scripts on different machines, whenever I log on in the morning I find multiple Taskeng.exe windows left over from previous executions.
Is there any way to solve this problem? Is there a way to kill those processes whenever their execution is incomplete?
Thanks for reading.
Hicham

Related

Kill particular gstreamer instance running in background

I have a shell script that launches multiple instances of gstreamer (for different cameras) when called from within my C program using system().
All my gstreamer instances are running in background.
Now I want to be able to get the process ID corresponding to a particular instance only and kill that instance (while not terminating other instances).
Right now, I have a kill.sh script that simply gets the PIDs of all gstreamer instances and kills them all.
Is there any way to kill gstreamer instances selectively? Any help or ideas would be really helpful!
Thanks!
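One approach is to match on something unique in each instance's command line instead of the process name, using pgrep/pkill -f. Here is a minimal sketch, assuming the instances are started via gst-launch-1.0 and differ by the camera device they read from (both of those details are assumptions; substitute whatever actually distinguishes your pipelines):

#!/bin/sh
# Kill only the gstreamer instance whose command line mentions a given
# camera. The binary name and device path below are assumptions.
CAMERA="/dev/video1"

# -f makes pgrep/pkill match the full command line, not just the name
if pgrep -f "gst-launch-1.0.*$CAMERA" > /dev/null; then
    pkill -f "gst-launch-1.0.*$CAMERA"
else
    echo "no gstreamer instance found for $CAMERA" >&2
fi

Calling a script like this (a hypothetical kill_one.sh) from the C program via system() instead of the current kill.sh would leave the other instances running.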

Batch development

I run a program in a batch file and use the /bg command to run it in the background. My question is: how can I see the process in Task Manager? I am asking because /bg also hides the process.
In Task Manager you may only be seeing processes running under your user; click "Show processes from all users" to see all processes.

How to display information from processes which are running in session 0?

For those who don't know what SCCM is, here is a little background so you can better understand what I want to achieve. SCCM is an application with which you can deploy software packages. It is also possible to create a so-called "Task Sequence". A Task Sequence can contain multiple packages, which are installed one after the other.
The Task Sequence executes in session 0. Of course the packages check whether certain processes are running; if they are, a window pops up asking the user to close the application.
Here is the problem: if an administrator deploys packages using task sequences (and they do), the users won't see that window and won't close the required process. If the process is not closed, the script execution aborts.
I found this link and created a simple exe according to its description. This exe is able to start a process from session 0 in session 1 (or above), where the user is logged on (I know the security risks). So far so good, but how do I get the packages to display their windows? Obviously I could change the command line so that my exe starts the installation of every package, but that is not an option; it would be too much work.
The ideal solution would be for my exe to run first in the task sequence and do "something" so that the windows become visible.
And that's where I am stuck.
Does anyone have any idea how I could achieve this?
Thanks in advance!

Debugging an app launched through scheduled tasks

How do I debug an application which is launched via scheduled tasks?
I have a simple application which works fine when double clicked to launch, but it doesn't work when launched through scheduled tasks.
I know how to debug projects on a local computer, but this application has no issues running on a local computer or on a different computer when launched manually by double-clicking the executable file.
I need a way to debug the application when it's being launched by scheduled tasks. Is this possible?
I would primarily suggest adding some decent logging so that you can diagnose problems without resorting to the debugger. However, to launch the debugger, you can either attach it to an existing process in Visual Studio (via the Debug > Attach to Process... menu) or change the code to call Debugger.Launch(), which will launch the debugger and attach it to the process. Of course, all of this depends on your program actually being executed by the scheduler; if the scheduler doesn't execute the program, the debugger obviously can't attach to it.

Building an "odometer" for time spent on a server

I want to build an odometer to keep track of how long I've been on a server since I last reset the counter.
Recently I've been logging quite a bit of time working on one of my school's Unix servers and began wondering just how much time I had racked up over the last couple of days. I started thinking about how I could write either a Bash script or a C program that runs when my .bash_profile is loaded (i.e. when I ssh into the server), backgrounds itself, and saves the elapsed time to a file when I close the session.
I know how to make a program run when I log in (through .bash_profile) and how to background a C program (by forking?), but I am unsure how to detect that the ssh session has been terminated (perhaps by watching the sshd process?).
I hope this is the right Stack Exchange site to ask how you would go about something like this, and I appreciate any input.
Depending on your shell, you may be able to just spawn a process in the background when you log in and then handle the kill signal when the parent process (the shell) exits. It would consume essentially no resources, wouldn't need root privileges, and should give a fairly accurate report of your logged-in time.
You may need to use POSIX semaphores to handle the case of multiple shells logged in simultaneously.
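If the login shell is bash, one lightweight sketch along those lines is an EXIT trap in ~/.bash_profile, with no separate background process at all. The log file name below is just a placeholder, and you may also need to handle SIGHUP if the EXIT trap doesn't fire when the connection drops:

# appended to ~/.bash_profile -- minimal sketch, log file name is arbitrary
ODOMETER_LOG="$HOME/.server_odometer"
_session_start=$(date +%s)

_record_session() {
    # append the number of seconds this login shell was alive
    echo $(( $(date +%s) - _session_start )) >> "$ODOMETER_LOG"
}
trap _record_session EXIT

The total since the last reset is then just the sum of the file, e.g. awk '{s += $1} END {print s}' ~/.server_odometer, and truncating the file resets the odometer. Note that simultaneous sessions each append their own line, so overlapping time is counted more than once unless you correct for it.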
Have you considered writing a script that can be run by cron every minute, running "who", looking at its output for lines with your uid in them, and bumping a counter if it finds any? (Use "crontab -e" to edit your crontab.)
Even just a line in crontab like this:
* * * * * (date; who | grep $LOGNAME)>>$HOME/.whodata
...would create a log you could process later at your leisure.
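As a rough way to turn that log back into a number later: in the default who output each line starts with the user name, and the cron job runs once a minute, so counting your lines approximates minutes of logged-in time (it over-counts if several sessions are open at once):

# minutes logged in ~= number of "who" lines recorded for this user
grep -c "^$LOGNAME " "$HOME/.whodata"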
