I am making a C++ program that creates a systemd service file dynamically. This service executes a script on reboot. The script is also dynamically created; it deletes certain files, disables and deletes the service, and then deletes itself.
My service is something like this:
[Unit]
Description = Run script on Reboot
Before =
[Service]
Type=simple
ExecStart=/var/tmp/myscript
[Install]
WantedBy=multi-user.target
And my script is:
#!/bin/sh
rm -rf files
systemctl disable myservice.service
rm -f /etc/systemd/system/myservice.service
rm -f /var/tmp/myscript
After creating the script and the service, the program reboots the system using the reboot API, passing LINUX_REBOOT_CMD_RESTART as the parameter. However, my service fails after reboot and the files don't get deleted. I have even added a call to the sync API before the reboot.
I have tested the service on its own, creating it manually and then rebooting the system manually; in that case it works as intended. I even tried replacing the reboot API with
system("init 6")
and that too gives the expected results. So why does the reboot API in particular cause a problem? What does that API do differently that can affect the service? I have also observed that there is some delay between the API call and the start of the reboot, while system("init 6") has very little delay and starts the reboot almost instantly.
What does reboot do that could interfere with the service, and would passing some other parameter make it behave the same way as init 6?
Also, I am executing the program with sudo, hence the init command doesn't need an extra sudo.
Edit: I also tested other reboot commands using the system C API. I tried system("reboot") and system("systemctl reboot"), and both of them work; the service runs fine when using them. So the issue only occurs with the reboot C API. Are there differences between the API and those commands? Does the API do something extra that could interfere with the service?
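For reference, here is a minimal sketch of the two code paths being compared, assuming a program running as root; the orderly variant simply shells out to systemctl, as in the edit above, and the function names are illustrative:
#include <unistd.h>
#include <sys/reboot.h>
#include <linux/reboot.h>
#include <cstdlib>

// Raw kernel reboot: the kernel restarts immediately, without asking
// PID 1 (systemd) to stop units first. Per reboot(2), data is lost if
// sync() is not called beforehand.
void hard_reboot() {
    sync();                           // flush dirty pages to disk
    reboot(LINUX_REBOOT_CMD_RESTART); // does not return on success
}

// Orderly reboot: ask systemd to walk its shutdown sequence, stopping
// units and syncing filesystems before the kernel restarts.
void orderly_reboot() {
    std::system("systemctl reboot");
}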
On Ubuntu 18.04 I have Unattended Upgrades updating apps regularly, including a 3rd-party PPA that installs a binary /usr/bin/some_app. My systemd unit file runs that service via ExecStart=/usr/bin/some_app. I can verify in /var/log/apt/history.log that the updates run on schedule.
However, even when the binary is updated via Unattended Upgrades, systemd doesn't restart the app, I assume because some_app is started via a custom unit file unrelated to that PPA. So from the CLI, some_app --version shows v2.0.0, but systemd is still running v1.0.0.
Does systemd have a way to watch a file, i.e. to detect that the binary referenced in ExecStart has changed on disk and the service should restart? A fallback hack for me would be RuntimeMaxSec=, which would get the job done, but I was hoping something more elegant existed.
You could try adding a .path unit (man systemd.path) to watch for a close() after write() change to the file, which then restarts your app service. Not tested:
/etc/systemd/system/myappwatch.path
[Unit]
Description=watch for changed file
[Path]
PathChanged=/usr/bin/some_app
#Unit=myappwatch.service
/etc/systemd/system/myappwatch.service
[Service]
ExecStart=/bin/systemctl restart myapp.service
You might be able to replace the systemctl restart with something magic like Conflicts=myapp, but I'll let you experiment. You also need to enable the .path unit as usual with an appropriate WantedBy=. I'm not sure what happens if the path is a symbolic link, so perhaps you should resolve the path to the real file if that is the case.
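Concretely, once the .path unit has an [Install] section (e.g. WantedBy=multi-user.target), enabling the watcher would look something like this (untested, like the units above):
systemctl daemon-reload
systemctl enable --now myappwatch.path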
I have a use case to deploy a compiled native C executable as a Microservice on PCF (Pivotal Cloud Foundry):
The compiled executable is called like so:
"mycbinary inputfile outputfile"
and terminates after the operation. The Microservice is thus not an LRP (long-running process).
It is possibly a Task in PCF parlance, but it does not rely on the existence of other Microservices.
It must be a standalone Microservice, but not a long-running one.
How can I achieve this use case with PCF, i.e. what possibilities do I have?
The Microservice terminates when the binary is done with its work until it is needed again.
To test the feasibility of what I could do, I tried pushing some compiled C code to PCF-DEV.
I am using cf push, since that's my understanding of a standalone app on PCF:
cf push HelloServiceAgain -c './helloworld' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
The push crashed with the following message:
Waiting for app to start...
Start unsuccessful
TIP: use 'cf.exe logs HelloService --recent' for more information
In the log file there was this entry:
OUT Process has crashed with type: "web"
Then I pushed another command which expects parameters. This started without a problem, but the same message appeared in the log file:
cf push HelloServiceGCC -c 'gcc -o ./hellogcc ./hello1.c' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
I have the following additional questions please:
1) Is the message 'Process has crashed with type: "web"' an error? And why is the command called multiple times?
2) The second push, which succeeded, is supposed to create a compiled hellogcc executable, which I expect to see in the same root directory. Where is the output file created, and how can I access it from the local file system?
Sorry for asking so many questions but I am a newbie in the PCF business.
The Microservice is thus not an LRP. It is possibly a Task in PCF parlance, but it does not rely on the existence of other Microservices. It must be a standalone Microservice but not a long-running one.
It's definitely a task. It's fine that it does not rely on other services. A task is simply a process that runs for a finite amount of time and exits 0 for success or something else for error.
cf push HelloServiceAgain -c './helloworld' -b https://github.com/cloudfoundry/binary-buildpack.git -u process --no-route
I would recommend using this slight variation: cf push HelloServiceAgain -c 'sleep 5000' -b binary_buildpack -u process --no-route.
This will:
Assume that the compiled binary is in the current directory.
Use the system-provided binary buildpack, which should be fine.
Set the health check to be based on the process, and set no route.
Run sleep, which is merely there to pass the health check.
The purpose of this is so that your application will upload and stage. We set the command to sleep because we just need the app to stage and start (this is a workaround to make sure that staging occurs; at least at the moment, you have to run the app once to trigger staging). Once the app starts, just run cf stop to stop the app. Since all you have is the task, you don't need the app to continue running. If you update your app, though, you do need to follow this process again, as that will restage your changes.
At this point, you can cf run-task and execute your program. Ex: cf run-task app-name './helloworld'.
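Applied to the binary from the question, that would look something like this (note that newer versions of the cf CLI expect the command via a --command flag instead):
cf run-task HelloServiceAgain './mycbinary inputfile outputfile'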
Hope that helps!
The bundled XPC service in my macOS application needs to do some post-processing work with files dumped by the parent app, which most probably can't be completed within the usage time of the application. So, is there a way to make the XPC service keep running even after the user quits the main app?
You could install it as a launch daemon (running in the root context as long as the computer is switched on) or as a launch agent (running in the user context as long as the user is logged in).
It sounds like you should be using the WatchPaths or QueueDirectories feature of launchd.
WatchPaths starts the job whenever any of the paths being watched have changed
or
QueueDirectories starts your job whenever the given directories are non-empty, and it keeps your job running as long as those directories are not empty
Both of these are covered by Apple's launchd documentation.
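For the QueueDirectories route, a minimal plist sketch; the label, helper binary, and queue directory below are placeholders, and the file would go under /Library/LaunchDaemons (daemon) or ~/Library/LaunchAgents (agent):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.postprocess</string>
    <key>ProgramArguments</key>
    <array>
        <!-- hypothetical helper that drains the queue directory -->
        <string>/usr/local/bin/postprocess</string>
    </array>
    <key>QueueDirectories</key>
    <array>
        <!-- launchd keeps the job running while this directory is non-empty -->
        <string>/Users/Shared/postprocess-queue</string>
    </array>
</dict>
</plist>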
I'm trying to run an exe on multiple PCs in sync.
I'm using psexec; this is what I have so far:
I have a batch file with this:
start psexec \\pc01 -i -s -d c:\videos360\video360.exe
start psexec \\pc02 -i -s -d c:\videos360\video360.exe
With this I can start the exe on the two PCs, but never fully in sync.
Does anyone have an idea of how I can make them run more closely in sync?
Thanks in advance.
Sorry for my bad English...
First, sync the clocks on both machines. You can run a script on one of them to sync to the other, or have them both sync to a central time source. Then add a task to Task Scheduler on each machine to start the application at the same time. That's about as close as you're going to get without resorting to some sort of IPC mechanism between the processes (which would require source-code access to video360.exe).
See
schtasks.exe
Windows time service tools
You won't need psexec, because schtasks can be used to manage tasks on the remote machines. It would be up to your script to change the next time to fire the task, or you could set up a repetitive task that fires every minute or two and just enable/disable the task. I believe there's a one-shot option as well; a sketch follows below.
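A minimal sketch of that approach (task name, time, and host names are placeholders; schtasks may also need /U and /P for credentials on the remote machines):
rem resync each machine's clock against its configured time source
w32tm /resync
rem create a one-shot task on each PC firing at the same wall-clock time
schtasks /Create /S pc01 /TN Video360 /TR "c:\videos360\video360.exe" /SC ONCE /ST 14:30
schtasks /Create /S pc02 /TN Video360 /TR "c:\videos360\video360.exe" /SC ONCE /ST 14:30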
I am having trouble starting a batch file as a service. The batch file runs fine when started manually, but it doesn't start as a service, and no output is observed. I have used the NSSM service manager to start the service.
Below are the commands I have used:
D:\nssm-2.24\win32>nssm install call
D:\nssm-2.24\win32>nssm start call
While installing, I provided the path of the batch file.
The batch file contains a Windows script to start a few programs automatically.
You cannot just install any old executable as a service, and certainly not a batch file. A service is a program with a specific API that makes it react to service manager calls. (Ignore that; I just read up on NSSM.) Still, there are probably better ways.
Your use case sounds rather like you might want to put your batch file in the Startup folder of the Start menu, to run it at login/startup.
Or use a scheduled task, if you want to restart it regularly; a sketch follows below.
One thing to consider, too, is the user under which the script is executed.
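For the scheduled-task route, a minimal sketch (task name, path, and user are placeholders):
rem run the batch file at every logon of the given user
schtasks /Create /TN RunMyBatch /TR "D:\scripts\mybatch.bat" /SC ONLOGON /RU myuser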