How to automate JCL to run a COBOL program on a mainframe

We have a COBOL batch program that we are able to execute manually from JCL. We want to automate this process so that it can execute every 15 minutes.
Is there a way to automate the execution of a batch program on the mainframe?
I'm a PC guy, and I know that in Windows I can create a .BAT file and set it up in Task Scheduler to run every 15 minutes. I'm essentially trying to do the same thing on the mainframe.

Is there a way to automate the execution of a batch program on the mainframe?
Yes.
Many mainframe shops have job schedulers. Control-M from BMC is one, ASG has Zeke, there are others.
Having said that, it sounds like the application in question is written to periodically poll for some event. Mainframes typically have better ways of accomplishing tasks people normally solve via polling. Event monitoring, for example.

Mainframe scheduling software such as Control-M from BMC, Zeke from ASG, CA7 from CA, or IBM TWS for z/OS (formerly OPC/A) can be used to schedule a job every 15 minutes.
You could add a job for every 15-minute period, or have the first step of each job add the one that will run in the next 15 minutes.
Pros
Operators will be notified if the job fails
Cons
You will end up with a lot of near-identical jobs in the schedule
In TWS for z/OS (the one I know), you would need to add nearly 96 jobs and set the corresponding time for each
The option I would recommend is using an automation product such as System Automation from IBM, Control-O from BMC, or OPS from CA.
With any of the above automation products you could set up a started task and have them start it every 15 minutes. It is much easier; for example, in System Automation a single panel is enough to set up a started task to run every 15 minutes.
If you wanted to know when it fails, you could also use the automation product to schedule it in any of the above schedulers.

There are so many solutions to this; it really depends on what you are monitoring. The standard answer is "use a job scheduler like CA7", but that has the disadvantage of cluttering the schedule with a large number of jobs that run during the day.
You could define an address space (started task) that invokes your COBOL code, and within your COBOL code have it sleep (i.e. wait on a timer) for 15 minutes, wake up, check whatever it needs to check, and go back to sleep. Alternatively, run the job under JES2, but you might have to do a little extra work so that JES keeps the job active all day.
If this code finds a problem, it can also issue a console message (you might have to write a little assembler code to issue a WTO or WTOR), so that the operator either just sees it (WTO, write to operator) or sees it and has to reply (WTOR, write to operator with reply).
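The poll-and-sleep pattern described above, sketched in shell purely for illustration (on the mainframe this would be COBOL in a started task; the `check_once` body and the flag-file path are placeholders for whatever the program actually checks):

```shell
#!/bin/sh
# Sketch of the poll-and-sleep loop: wake every INTERVAL seconds, run a
# check, report a problem (the shell stand-in for a WTO console message),
# then go back to sleep.
INTERVAL="${INTERVAL:-900}"   # 15 minutes

check_once() {
    # Placeholder for the real check: return non-zero when there is a problem.
    [ ! -f /tmp/problem.flag ]
}

poll_loop() {
    rounds="$1"                # bounded here; a started task would loop forever
    while [ "$rounds" -gt 0 ]; do
        if ! check_once; then
            echo "operator alert: problem detected" >&2
        fi
        rounds=$((rounds - 1))
        if [ "$rounds" -gt 0 ]; then
            sleep "$INTERVAL"
        fi
    done
}
```

The interval and alert mechanism are the only moving parts; everything else is the same loop the answer describes.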

Related

I want to schedule a batch file without Windows Task Scheduler

I have to log into the SAPBW application via SAPGUI at specific times, and I want that to happen without Windows Task Scheduler. If it's possible to code something within the bat file, please suggest something. Open to other suggestions as well.
I don't know why you don't want to use Task Scheduler (it is the easiest way to schedule your batch file), but you will need at least some third-party software as a scheduler, and Task Scheduler happens to be the most convenient one. If the issue is that you don't want any third-party software: I once saw someone put the batch file in HKEY_LOCAL_MACHINE with the programs that are executed on start, and he simply restarted his server every day so the batch file ran at startup. You will need to be creative to find good ways around third-party software.

Can a 4D database function be started from a unix shell script (Mac)?

We have large data sets in a 4D database. Recently we needed to automate certain tasks to export, import, or process data, which requires some connection between 4D and the unix shell (bash and zsh).
We did fine whenever a 4D function had to call a shell command or shell script, using the LAUNCH EXTERNAL PROCESS instruction. But I am having trouble doing the opposite, starting a 4D function from a shell script. The on-line documentation did not give me any suggestion on how to do this.
How would I run a 4D function at a specific date and time? 4D itself does not seem to provide a way to schedule an action. OS X provides a perfectly good way to start a command at any future time (weekly or monthly); I just do not know 4D well enough to figure out how to call a 4D function from a shell script. If it were MySQL or PostgreSQL I would have no problem, because I could call the mysql or psql client, which can work without a GUI. Can I do something similar with 4D?
There are a couple of ways you could accomplish this in 4D. The way I would typically do it is to use the web server in 4D, and then trigger that service with a GET or POST using whatever scheduler (cron, Windows scheduler, etc.) I like. This does require some web-server licensing from 4D.
This gives you flexibility to trigger the event at given times or from an external process. Say your data has been cleaned and is now ready to be imported, you don't have to wait until the next import time, it could just be triggered.
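For instance, assuming the web-server option above and a project method exposed through 4D's /4DACTION/ URL scheme (the method name RunImport, port, and schedule here are all made up), a cron entry on the Mac could trigger it nightly:

```
# hypothetical: trigger the 4D method "RunImport" every night at 02:00
0 2 * * * curl -s "http://localhost/4DACTION/RunImport" >/dev/null
```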
Alternatively you could create a 4D method that runs in its own process. You launch the process at database startup, and then it loops indefinitely, checking to see whether it needs to trigger whatever action you want.
Take a look at the New process, DELAY PROCESS, and While...End While commands. You could launch the new process to run on the server or on a client, depending on your needs.
Basically your On Startup method calls New Process to run your 'scheduler' method. In that method you have a While (some true condition) loop that checks to see if it should run. Then it sleeps (delay process) for a bit so it doesn't hog resources, wakes up again and runs the loop.
If this is something you want to be able to stop programmatically you can use SET PROCESS VARIABLE to set the value of a variable in the scheduler process from outside. Then have the loop check that value to see if it should exit.
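The scheduler-process pattern above, sketched in shell for illustration (in 4D it would be New process, a While loop, and DELAY PROCESS; the stop-flag file here plays the role of the variable set via SET PROCESS VARIABLE, and all paths are made up):

```shell
#!/bin/sh
# Background "scheduler" loop: wake up, decide whether to run the task,
# sleep a bit, and check a stop flag so it can be shut down from outside.
STOP_FLAG="${STOP_FLAG:-/tmp/scheduler.stop}"
TICK="${TICK:-1}"   # seconds between wake-ups; DELAY PROCESS in 4D

scheduler_loop() {
    runs=0
    while [ ! -f "$STOP_FLAG" ]; do
        # Placeholder for "should it run now?" -- here it always "runs".
        runs=$((runs + 1))
        sleep "$TICK"
    done
    echo "$runs"   # report how many times the task fired before stopping
}
```

Creating the stop-flag file from another process ends the loop cleanly, just as setting the process variable would in 4D.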
To answer your question directly: to call 4D from the shell on a Mac, you could use a plugin from Pluggers (see https://www.pluggers.nl/product/scripting-tools/ ) together with osascript.

Set up priority on Azkaban parallel flows/dependencies

I'm using Azkaban 3.4.1, and one of my flows has more than 30 dependencies. Some dependencies take much longer than others, so I want those flows to be prioritized to start before the other flows (because the number of running threads is limited).
Currently the number of parallel executions is limited by flow.num.job.threads, which is 10 by default. I tried increasing that property to make sure the long jobs started right away, but CPU usage got very high, so I am not sure that is a good option.
Using this fork, https://github.com/hanip-ss/azkaban/releases/tag/3.4.2, I can now add a job.priority value in the job properties file.
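With that fork installed, a job definition could then carry a priority. This is a sketch with made-up file and script names; whether lower or higher numbers win depends on the fork's implementation, so check its release notes:

```
# long_dependency.job -- job.priority is only honored by the forked build above
type=command
command=bash ./run_long_dependency.sh
job.priority=1
```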

Scheduling multiple executions of multiple bat files on different machines

I find myself in need of testing a multi-desktop (between 2 and 4 different PCs on the same network), long-running (each test runs for over an hour, and I cannot predict how long each test will take) program, and I am trying to automate the system as much as possible due to the large number of tests required.
Since I am testing for performance and the system uses collaboration (sharing of data) between nodes, it is vital that all machines start a test at the same time (system-clock accuracy is enough; I am not looking at nanoseconds) and that only a single test runs at a time (a single process runs on each machine, and the next test can only start once the process on each machine is complete). Therefore it would be ideal to detect when the system has completed execution in order to run the next test.
Each process is started from a batch file (which I prepare beforehand). Note also that because each process is rendering an image, all cores of the CPU are in use, so any automation used will need to have low impact on performance.
I am proficient in both C++ and Java (the system in question is in C++); any help is appreciated. Thanks.
(If I left anything unclear, please do ask, I do not have a lot of experience on stackoverflow)
You can remotely invoke tasks using the DOS command SCHTASKS.
So let's say each of your test machines has a scheduled task on it. When you are ready to run them, you could have an application or script invoke each of the tasks practically simultaneously (within a second?):
schtasks /Run /S nameofpctoruntaskon /U usernameofacct /P password /TN nameoftaskonpc /I
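As a sketch of the orchestration around that command: the controller just needs to fire the task on every machine and then block until all of them report completion before starting the next test. Shown here in POSIX shell with ssh as the launcher purely for illustration; the host names are hypothetical, and on Windows the launcher would be the SCHTASKS invocation above (detecting completion would then need a SCHTASKS /Query poll instead of a simple wait):

```shell
#!/bin/sh
# Run one test on every machine in parallel and block until all finish,
# so tests never overlap.
HOSTS="${HOSTS:-pc1 pc2 pc3}"   # hypothetical machine names
LAUNCH="${LAUNCH:-ssh}"         # command used to start the test on one host

run_round() {
    pids=""
    for host in $HOSTS; do
        "$LAUNCH" "$host" "$1" &   # start this machine's test in parallel
        pids="$pids $!"
    done
    for pid in $pids; do           # next round begins only after every
        wait "$pid"                # machine's test has exited
    done
}
```

Calling `run_round test1.bat`, then `run_round test2.bat`, and so on gives the "one test at a time, all machines together" behavior described in the question.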

Building an "odometer" for time spent on a server

I want to build an odometer to keep track of how long I've been on a server since I last reset the counter.
Recently I've been logging quite a bit of time working on one of my school's Unix servers and began wondering just how much time I had racked up in the last couple of days. I started thinking about how I could write either a Bash script or a C program to run when my .bash_profile was loaded (i.e. when I ssh into the server), background itself, and save the time to a file when I closed the session.
I know how to make a program run when I log in (through .bash_profile) and how to background a C program (by way of forking?), but I am unsure how to detect that the ssh session has been terminated (perhaps by watching the sshd process?).
I hope this is the right Stack Exchange site to ask how you would go about something like this, and I appreciate any input.
Depending on your shell, you may be able to just spawn a process in the background when you log in, and then handle the kill signal when the parent process (the shell) exits. It wouldn't consume resources, you wouldn't need root privileges, and it should give a fairly accurate report of your logged in time.
You may need to use POSIX semaphores to handle the case of multiple shells logged in simultaneously.
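The background-process idea above, sketched in shell under some stated assumptions: the watcher is started from .bash_profile, the login shell (or a logout hook) sends it SIGTERM on exit, and the log path is made up:

```shell
#!/bin/sh
# Session timer: note the login time, then wait in the background; when
# told to stop (SIGTERM), append the elapsed seconds to a log file.
LOG="${LOG:-$HOME/.session_time}"

start_session_timer() {
    start=$(date +%s)
    # On SIGTERM, log elapsed time and exit; the single quotes defer
    # evaluation until the signal actually arrives.
    trap 'echo $(( $(date +%s) - start )) >> "$LOG"; exit 0' TERM
    while :; do sleep 1; done   # idle until signaled
}
```

From .bash_profile you would run `start_session_timer &` and arrange for the shell's exit (e.g. a logout script) to kill that PID; summing the numbers in the log gives total seconds since the last reset.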
Have you considered writing a script that can be run by cron every minute, running "who", looking at its output for lines with your uid in them, and bumping a counter if it finds any? (Use "crontab -e" to edit your crontab.)
Even just a line in crontab like this:
* * * * * (date; who | grep $LOGNAME)>>$HOME/.whodata
...would create a log you could process later at your leisure.
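A rough way to total up that log afterwards, assuming the crontab line above: each cron run appends one "who" line per live session, so counting lines that start with your user name approximates logged-in minutes (concurrent sessions will inflate the count; dedupe on the date stamps if that matters):

```shell
#!/bin/sh
# Tally logged-in time from the .whodata log written by the crontab entry.
count_minutes() {
    # $1 = user name, $2 = log file
    awk -v u="$1" '
        $1 == u { mins++ }   # one matching "who" line ~= one logged-in minute
        END { printf "%d minutes (%.1f hours)\n", mins + 0, (mins + 0) / 60 }
    ' "$2"
}

if [ -f "$HOME/.whodata" ]; then
    count_minutes "$LOGNAME" "$HOME/.whodata"
fi
```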
