How to start a SQL Server Agent ETL job after an Oracle process completes - sql-server

I have an Oracle source and a SQL Server target. Every day at 12:00 AM, data is populated into the Oracle database tables; once the Oracle data extraction has completed, I run the SQL Server Agent jobs for the SQL Server databases manually. I want to automate this process: when the Oracle data population completes, the SQL Server Agent jobs should run automatically. How can I do this?

You write a little program that monitors the Oracle process, and when it "sees" that it has completed (probably successfully - you skipped that part) it starts the SQL Server job. Now - how do you do that? How do you know that the process both completed and did so successfully? Figure that out based on what functionality Oracle offers and (perhaps) the characteristics of the output that signify success.
Perhaps you can vastly simplify things and schedule some sort of "do it" job that looks for "good" output, as sketched below. Schedule it for a time that is more than sufficient for the Oracle process to complete. Do some checking, and if things look good, start the other job.
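As an illustration only, here is a minimal T-SQL sketch of such a "do it" job step. The control table (etl.LoadStatus) and job name (LoadFromOracle) are hypothetical; msdb.dbo.sp_start_job is the standard procedure for starting an Agent job.

-- Scheduled comfortably after the Oracle load should be done.
IF EXISTS (SELECT 1
           FROM etl.LoadStatus                     -- hypothetical control table the Oracle load marks
           WHERE LoadDate = CAST(GETDATE() AS date)
             AND Status = 'SUCCESS')
    EXEC msdb.dbo.sp_start_job @job_name = N'LoadFromOracle';  -- hypothetical job name
ELSE
    RAISERROR('Oracle load not finished yet; will check again on the next run.', 10, 1);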

Related

Schedule SQL Job X minutes after SQL Server Agent starts

I have a job in SQL Server which is scheduled to 'Start automatically when SQL Server Agent starts'; however, this is creating some issues in our environment. The issue is not SQL specific, but because of it we need to delay the execution of this particular job. I cannot schedule it as a recurring job at a particular time of day; it needs to run every time SQL Server starts. Is there a way I can schedule the job to run 15 minutes after SQL Server Agent starts?
Sure, just make the first step:
WAITFOR DELAY '00:15:00';
(Or, probably better, you could try to resolve whatever "some issues" are.)
However, note that someone can restart the Agent service without restarting SQL Server; or, they could set SQL Server Agent to not start automatically at startup, so that the next time the SQL Server service is restarted, your job will not run.
If you want to tie some startup activity to SQL Server starting, you could probably look into startup procedures, but if you need it to wait 15 minutes, the first thing in the procedure would be the above WAITFOR. Also startup procedures can't have input parameters (or output parameters, but that is less likely to be an issue for a stored procedure you're calling from a job).
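For reference, a minimal sketch of the startup-procedure route. The procedure and job names are hypothetical; startup procedures must live in master and take no parameters, and sp_procoption is the documented way to flag one:

USE master;
GO
CREATE PROCEDURE dbo.usp_DelayedStartup          -- hypothetical name; no parameters allowed
AS
BEGIN
    WAITFOR DELAY '00:15:00';                    -- the 15-minute delay from above
    EXEC msdb.dbo.sp_start_job @job_name = N'MyDelayedJob';  -- hypothetical job name
END;
GO
EXEC sp_procoption @ProcName = N'dbo.usp_DelayedStartup',
                   @OptionName = 'startup',
                   @OptionValue = 'on';          -- run automatically at every SQL Server start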
Finally, SQL Server 2008 R2? That was barely still in support at the beginning of the last decade.

Deadlock issue after upgrading IBM DataStage and SQL Server

I just upgraded DataStage from 6.x to 11.x and SQL Server from 2008 to 2016.
I don't understand why deadlocks often occur after the upgrade. The upgrade method I used only moved the jobs on DataStage and moved the databases on SQL Server.
[screenshot: SSMS deadlock report]
Based on this article: link
Does that apply to SQL Server too? Is there anything I should pay attention to in this new environment?
Are you getting deadlocks for "Server" jobs or "Parallel" jobs?
In 11.7, the default APT configuration file (/opt/IBM/InformationServer/Server/Configurations/default.apt) is a 2-node configuration file. That means parallel jobs running parallel stages (such as the ODBC Connector stage that could be used to connect to a SQL Server database) run 2 copies in parallel, each processing part of the data. If the job is only doing inserts, it should not deadlock the table, but if the job is doing parallel upserts (inserts and updates at the same time, from 2 different database connections), then a deadlock can occur if you are using the default partition method rather than a hash partition on the key columns to ensure all records with the same key are handled by the same database connection.
In DataStage version 6 (over 10 years old), I am not sure job stages had the option to run in parallel.
In any case, for your current job that is deadlocking, you have two options. You can back up default.apt and then edit it into a one-node configuration file, so all jobs run on one node; this avoids a job deadlocking with itself, but jobs will take longer to complete. A better option is to make a copy of default.apt called one-node.apt, edit it down to a one-node configuration, then add the environment variable APT_CONFIG_FILE to the job and set it to the absolute path of that one-node file, so only that job runs on one node. If the job no longer hangs, that is an indication it was deadlocking with itself, and before running on 2 or more nodes again you should examine the partition method used for records going to the database stage.
If the failing job is a Server job, using one of the old server plug-in stages on the Server canvas, then I would not expect the database to deadlock unless it runs at the same time as another job updating the same database table.
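On the SQL Server side, it can also help to look at the actual deadlock graphs to see which statements and connections are involved. A minimal sketch, assuming SQL Server 2012 or later, reading deadlock reports from the built-in system_health Extended Events session:

SELECT xed.value('@timestamp', 'datetime2') AS deadlock_time,
       xed.query('.')                       AS deadlock_graph
FROM (SELECT CAST(st.target_data AS xml) AS target_data
      FROM sys.dm_xe_session_targets AS st
      JOIN sys.dm_xe_sessions        AS s
           ON s.address = st.event_session_address
      WHERE s.name = 'system_health'
        AND st.target_name = 'ring_buffer') AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);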

Why is SQL Server Agent Job running endlessly?

Currently I'm working on a system that gathers data from different websites/APIs and stores this data in a SQL Server database. A reporting service then generates reports based on this data.
We have a lot of jobs running in SQL Server Agent (each job has several steps, and each step can be of type PowerShell script, T-SQL, and so on).
The version of SQL Server is 2017.
The problem is that almost every day there are jobs that start but never end (the status stays "Executing"). These jobs should not run anywhere near that long.
Does anybody have an idea how to solve this problem, or at least how to investigate it?
CPU usage on the virtual machine is around 20% and memory usage around 50%, so it is not a resource problem.
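One way to start investigating, assuming the stuck steps are T-SQL steps: Agent tags its sessions' program_name, so you can check what each stuck step's session is actually doing or waiting on. A sketch (note that PowerShell steps may not appear as database sessions at all):

-- Agent T-SQL steps set program_name like 'SQLAgent - TSQL JobStep (Job 0x<job_id> : Step N)'.
SELECT s.session_id,
       s.program_name,
       r.status,
       r.command,
       r.wait_type,
       r.blocking_session_id,
       r.total_elapsed_time / 60000 AS elapsed_minutes
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = s.session_id
WHERE s.program_name LIKE N'SQLAgent - TSQL JobStep%';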

SQL Server 2008 R2 Job Launched Step 1 hundreds of times

I have an ETL SSIS package that is scheduled via job to run nightly at 7pm. It is the only step in the job, and the failure action is "quit the job reporting failure". The server is Windows Server 2008 R2, and the SQL Server version is 2008 R2. There is also an instance of SQL Server 2012 installed on this server, but the services are not started for that instance.
I've made no changes to the job, package, or server, and tonight it behaved strangely. When I look at the history of the job and expand tonight's run, it shows step 1 starting over 400 times, all at exactly 7 PM. It looks like it just kept launching the step until the transaction log filled the entire drive and had no more space to grow, then exited the job reporting failure. I shrank the transaction log by setting the recovery model to simple and running DBCC SHRINKFILE. I then restarted all of the SQL services for that instance and re-ran the job. So far, it seems to be running as expected, although I suppose time will tell.
I did a search of Stack Overflow and have seen nothing like this mentioned. We're actually starting a project to virtualize the box and then upgrade to 2012, so this may end up being one of those oddball things that never happens again, but I thought I'd ask in case anyone has any idea why this might have happened.
Open the job step and go to the Advanced tab, and look at the retry attempts. Could it be that it is set to a big number? That would make the step run many times if it fails.
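For what it's worth, the configured retry settings can also be read straight from msdb; the job name below is hypothetical:

SELECT j.name AS job_name,
       s.step_id,
       s.step_name,
       s.retry_attempts,                 -- how many times the step is retried on failure
       s.retry_interval                  -- minutes between retries
FROM msdb.dbo.sysjobs     AS j
JOIN msdb.dbo.sysjobsteps AS s
     ON s.job_id = j.job_id
WHERE j.name = N'NightlyETL';            -- hypothetical job name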

SQL Server Agent Job Running Slow

I am executing a stored procedure via a SQL Server Agent job in SQL Server 2005.
This job ran fast until yesterday; since yesterday it has been taking more than 1 hour instead of about 2 minutes.
When I executed the stored procedure in SSMS, it took less than 1 minute.
I cannot figure out why it takes more than 1 hour when executed as a SQL Server Agent job.
After some back and forth in the comments, and assuming that the SP performs well in SSMS with the same input parameters and data, I finally think I can give one last tip:
Depending on what actions are performed within the SP (e.g. inserting/updating/deleting a lot of data within a loop or cursor), you should set nocount on at the beginning of your code.
set nocount on
If this is not the case or does not help, please add the information already requested in the comments (e.g. all settings of the job and each job step, what has been logged, what is in the job history; check the SQL error logs, event logs, ...).
Also take a look at the SQL Server logs; maybe you can gather some information there. A look into the Application/System event log of the database server is always a good idea as well.
To get a basic overview you can use the Activity Monitor in SSMS: select the database server, choose "Activity Monitor" from the context menu, and search for the SQL Agent session.
My last suggestion would be to run a SQL trace for the Agent: start a trace and filter it, for example, by the account the SQL Agent service runs as. There are many options you can set for traces, so I would recommend reading up on them on MSDN or asking another question here on Stack Overflow.
We have a large proc that runs in 88 seconds in SSMS and 30-45 minutes in SQL Server Agent. I added the dbo. prefix on all the table names and now it runs just as fast as SSMS.
I've noticed that SQL Agent jobs ignore the server's MAXDOP setting and run everything with a MAXDOP of 1. If I run a stored procedure in a query window, it obeys the server settings and uses 4 processors. If I use SQL Agent, any stored procedure I run uses only one.
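If you suspect something like that, one low-risk experiment is to pin the degree of parallelism for the expensive statement itself with a query hint, which overrides session and server defaults either way. A sketch with hypothetical table and column names:

SELECT c.CustomerId,
       SUM(o.Amount) AS total
FROM dbo.Orders    AS o                  -- hypothetical tables
JOIN dbo.Customers AS c
     ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerId
OPTION (MAXDOP 4);                       -- force up to 4-way parallelism for this statement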
I have a similar issue with a script that calls a number of UDFs that I created. The UDFs themselves normally run subsecond under SSMS. Likewise, running the reports I generate with them is bearable under SSMS (30 days of data in 8s, 365 days in 22s). I've always used NOCOUNT ON in my SQL Agent jobs, as they normally generate text files to be picked up by other processes or Excel and I do not want the extra output at the end, so that was not a solution for me.
In this case, when we run the exact same script under SQL Agent as a job, my times grow dramatically: my 8s script takes 2m30s and my 22s script takes 2h20m. This is the same whether I run it midday alongside other user activity and jobs, or after hours with no user activity, jobs, or backups running. Our server is idle, and at best I see one of the 8 cores being utilized during the run. The DB is only about 10GB, running on SSD behind a cached RAID card, with 16 of 32GB RAM free. Since my SQL runs efficiently in SSMS, I am fairly convinced I am hitting a threading limit of some sort. I have researched and tried adjusting MAXDOP just before the scripts in the SQL Agent job, with no luck.
Since this is an activity I want to schedule, it needs to be automated one way or another. I could let these scripts take the hours they need as SQL steps in SQL Agent jobs, but I decided to run them from the command line instead, and I get the same performance I see in SSMS.
sqlcmd -S SQLSRVRHost -i "C:\My Script Loc With Spaces.sql" -v MyVar="VarValue" >"C:\MyOutputFile.txt"
So I created a batch script that runs the SQL scripts via sqlcmd, and I run that batch script from a SQL Agent job, so I still have the same management and control in place. My 4 SQL scripts that collectively took over 3 hours to run now complete in a minute and a few seconds from a single batch script executed by SQL Agent.
I hope this helps...
