I have a script I want to run with launchd over and over. In other words, as soon as it finishes a run, I just want it to run again, whether it errors out or finishes with exit code 0.
How do I do that?
Thanks for any help!
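For what it's worth, the mechanism launchd provides for this is the KeepAlive key: with KeepAlive set to true, launchd relaunches the job whenever it exits, regardless of exit code. A minimal LaunchAgent plist sketch (the label and script path below are placeholders, not from your setup):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and ProgramArguments are placeholders; point them at your script -->
    <key>Label</key>
    <string>com.example.repeating-script</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/myscript.sh</string>
    </array>
    <!-- Start the job as soon as it is loaded -->
    <key>RunAtLoad</key>
    <true/>
    <!-- Relaunch the job whenever it exits, whether it fails or finishes with 0 -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

Saved under ~/Library/LaunchAgents (conventionally named after the Label) and loaded with launchctl load, this keeps the script cycling. Note that launchd throttles restarts, so by default the job will not be relaunched more often than roughly every 10 seconds.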
I am trying to create a scheduled task to run a batch file. I know that my batch file runs fine, because I have no problem running it manually. However, when the task calls it, it says that it's running, but it's not. The reason I know it's not running is that it calls a Python script, and the Python script sends an email saying that the process has started, and I'm not receiving that email. The Python process doesn't take too long (maybe 5 minutes at most), yet the task keeps saying that it's "Running" after an hour.
I currently have the task set to "Run whether user is logged on or not" (it doesn't seem to work at all with "Run only when user is logged on", because the status never changes from "Ready" even if I tell it to run). I also have "Run with highest privileges" enabled, with just the name of the batch file under "Program/script" and the path to the batch file under "Start in". I also want to note that the user account is "DOMAIN\Administrator".
However, I've tried other ways of calling it. I've tried putting the full path to the batch file under "Program/script" (G:\GOM3_Update\FeatureServices\copies\test.bat), or putting the path to the Python executable there and the path to the Python script as an argument, but that doesn't seem to work either.
I'm not sure if this issue is caused by some major security setting in Windows 10, or something minor in the Task Scheduler settings.
Here are my current settings:
Full path of the Start in is: "G:\GOM3_Update\FeatureServices\copies\"
The batch file:
"C:\Users\Administrator.DOMAIN\AppData\Local\ESRI\conda\envs\arcgispro-py3-clone\python.exe" "G:\GOM3_Update\FeatureServices\copies\database.py"
I would include the full path to the batch file here:
From my personal experience it is usually an environment problem, i.e. things like the current working directory, etc.
Also, make sure to click the refresh button in the right pane of Task Scheduler, because it is known for not updating the status of a task unless you refresh manually. You may see it change from 'Running' to 'Ready' sooner if you do that.
The reason I suspect your screen is just not refreshing is that, normally, Task Scheduler does not allow a scheduled task to execute for longer than 1 hour, and you said yours was still saying 'Running' after an hour.
What do the exit code and messages say under the task's history?
Exit code 0 means no errors captured by Task Scheduler.
Another idea is to log the start of the batch script (inside the batch file) to a log file before you do anything, and do the same for the Python file. This will help you narrow down the problem.
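As a rough sketch, something along these lines at the top of the batch file (the log file name is arbitrary) would show whether the task ever reaches the script, which account and directory it runs under, and what exit code Python returns:

@echo off
rem Log that the batch file actually started, plus the account and working directory it runs under
rem (%~dp0 expands to the folder the batch file lives in; run.log is just a placeholder name)
echo [%DATE% %TIME%] batch started as %USERNAME% in %CD% >> "%~dp0run.log"

rem Same call as before, but with output and errors captured in the log
"C:\Users\Administrator.DOMAIN\AppData\Local\ESRI\conda\envs\arcgispro-py3-clone\python.exe" "G:\GOM3_Update\FeatureServices\copies\database.py" >> "%~dp0run.log" 2>&1

echo [%DATE% %TIME%] python exited with code %ERRORLEVEL% >> "%~dp0run.log"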
I struggled with this one a lot as well. Changing the User Account AND Group is what fixed it; changing the User Account alone was not enough for me.
This is what helped for me:
Then, in the account picker, after clicking Locations... you'll have to choose "WINSTADM".
I tried to run a script in Terminal that led to the following recurring output:
When I press Ctrl+C, it stops the command from running on the screen, but if I open Terminal again, the command recurs over and over again. I've tried various scripts to kill the process, but nothing is working. Any advice?
Thanks.
I have a PowerShell script for which I've done a GUI.
Every time a do or for loop is running, the GUI becomes unresponsive and I have to wait until it's done before I can close the GUI or press any other button.
The problem is annoying when, for any number of reasons, the loop runs infinitely and the only way to stop it is to kill PowerShell from Task Manager.
I've tried using break, but this causes some sort of Java error, although it does stop and kill the GUI... but I don't want to have the error.
If I use break in the script without the GUI, the script stops.
Is there a way to keep the GUI responsive while a loop is running, and to stop the script whenever I want it to stop?
You could run the PowerShell script as a job and have the GUI poll the status of that job (in a loop, so you can still react to user input).
Cancelling a script is then also possible by just cancelling the job.
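A rough sketch of that approach, assuming a WinForms GUI and a Cancel button whose click handler sets a $script:cancelRequested flag (both of those names are assumptions, not something from your script):

Add-Type -AssemblyName System.Windows.Forms

# Run the long-running work as a background job (the script block is a stand-in for your loop)
$job = Start-Job -ScriptBlock {
    foreach ($i in 1..100) {
        Start-Sleep -Seconds 1
    }
}

# Poll the job so the form stays responsive while the work runs elsewhere
while ($job.State -eq 'Running') {
    [System.Windows.Forms.Application]::DoEvents()   # let the GUI handle clicks and redraws
    if ($script:cancelRequested) {                   # e.g. set by a Cancel button's Click handler
        Stop-Job -Job $job                           # cancelling the script = cancelling the job
        break
    }
    Start-Sleep -Milliseconds 200
}

Receive-Job -Job $job    # collect any output the job produced
Remove-Job -Job $job -Force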
OK, so I'm trying to end Comodo (the free firewall service) so that I can install NVidia driver updates, and since I'm super lazy I want this all to happen automatically whenever NVidia tries to update. Afterwards I want to reopen Comodo.
However, I'm stuck on the first step since I have no idea how to properly end Comodo. When I try taskkill it says:
ERROR: The process "cis.exe" with PID 6204 could not be terminated.
Reason: Element not found.
I'm new to batch and have no idea what this error message means, since I can clearly see that "cis.exe" (which is Comodo) is running in the background, and I can close it from the tray icon if I want to. It spits out the same message when I try "CisTray.exe".
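For reference, this is the kind of invocation I assume you're using; run from an elevated Command Prompt it is the standard way to force-kill by image name, though Comodo's self-protection may well be what's blocking it:

rem Check whether the process is visible to this session at all
tasklist /FI "IMAGENAME eq cis.exe"

rem Force-kill every instance of the image (needs an elevated prompt)
taskkill /F /IM cis.exe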
Also, any advice on where I should go for the next few parts would be greatly appreciated; I'm thinking of using Task Scheduler to run the batch file when some kind of log or something is created by NVidia.
I've coded a program in C for an embedded system (Devkit8000, which is a clone of the well-known BeagleBoard) running Angstrom Linux.
The program creates a couple of threads; one of them is responsible for taking pictures with a camera connected to the board, and right now the second thread only moves those images to another path. The program should run during the whole day, and the only way to stop it is by sending a signal.
I edited the crontab to launch the program at a specific hour and to send a signal when it has to stop. The issue is that launching the program this way causes the process to be killed after some time running, but if I launch the program manually (through the command line), it works perfectly and doesn't get stopped.
I have no idea about the reason for this different behaviour between crontab and the command line. I've checked the system logs but didn't find anything useful. I've also been reading a little and found that the OS can kill a process if it is using too many resources, but it doesn't make sense that this happens in only one scenario (crontab vs. manually)...
Any clue about what is happening?
Thank you in advance!
The main difference is that running a job through cron invokes a non-interactive non-login shell. The effect of that depends on the default shell for your user. For example, if you are using Korn shell or Bash then your .profile will not be executed, as it would on an interactive login shell. Korn shell 88 will execute .kshrc (the $ENV file) but ksh93 will not.
So, a good start might be to call your program from a script, after first "sourcing" your .profile file:
. $HOME/.profile
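A fuller sketch of such a wrapper script (the program path and log file name below are placeholders), which your crontab entry would then point at instead of the program itself:

#!/bin/sh
# Reproduce the login-shell environment the program sees when you start it by hand
. "$HOME/.profile"

# Give the program a known working directory and capture its output,
# so you can see where and why it dies when cron runs it
cd "$HOME" || exit 1
exec /home/user/bin/camera-program >> "$HOME/camera-program.log" 2>&1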
Failing that... When you say that the process is "killed", do you get such a message? If so, then that sounds like someone sending SIGKILL, i.e. kill -9. If not, then maybe you could run strace or ltrace to find out at what point it dies.