SQL full backup job is taking too long to execute - sql-server

I have been facing a strange issue with a SQL Server 2012 (Standard) full backup job for the last two days.
Our database size is 1.3 TB and we take backups of all databases on a daily basis. For the last two days the backups have not been completing, yet there is no error; instead I get a job email alert reporting success after 13 seconds.
I have checked on the SQL Server and the full backup has been running for the last 10 hours.
Please suggest how I should proceed.
I have searched the error log and Event Viewer but have not found any error related to this backup job. I also have not found anything in the job history; it only shows the last successful run.
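One way to check whether that 10-hour backup is actually making progress is to query the DMVs; a minimal sketch, assuming you run it while the backup is active:

    -- Is a BACKUP command still running, and how far along is it?
    SELECT r.session_id, r.command, r.percent_complete,
           r.estimated_completion_time / 60000.0 AS est_minutes_remaining
    FROM sys.dm_exec_requests AS r
    WHERE r.command LIKE 'BACKUP%';

    -- What does msdb say the most recent backups were?
    SELECT TOP (10) database_name, backup_start_date, backup_finish_date, type
    FROM msdb.dbo.backupset
    ORDER BY backup_start_date DESC;

If percent_complete barely moves between runs of the first query, the backup is likely stalled on the destination (disk or network) rather than inside SQL Server; the second query shows whether the job that mailed "success" actually wrote a backup set at all.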

Related

Where is the SQL Server backup job running from?

We run our daily/weekly backups from the CA D-Series job scheduling tool. I perform daily differentials and weekly fulls on our SQL Server 2012 databases. Every few days I find that both the differential and the full backup have run on our largest databases (the maintenance jobs are split into four streams going from smallest to largest databases). The full backup job does not appear to be running from D-Series.
For instance, the last full ran on 7/29, and today, 8/1, a differential ran at 1 AM and a full executed at 3 AM. I am trying to find where the job is running from; it filled up the disk. I looked at the application log, the SQL Server log, and the Agent log and cannot find any trace of it. It is driving me insane!
This is a production server, so I cannot implement auditing due to the high activity. Any advice would be greatly appreciated!
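The backup history in msdb records which login and machine issued each backup, which can point to the rogue scheduler; a minimal sketch:

    -- Who/what issued recent backups, and where did they write to?
    SELECT bs.database_name, bs.backup_start_date,
           bs.type,          -- D = full, I = differential, L = log
           bs.user_name, bs.machine_name,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
    ORDER BY bs.backup_start_date DESC;

An unexpected user_name, or a physical_device_name outside your usual backup path, usually identifies the second scheduler; VM-level backup tools that take VSS snapshot backups show up in this history as well.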

SQL Server 2008 log shipping keeps getting out of sync

I have inherited a SQL Server 2008 live server with a hot-swappable backup server, which has transaction logs shipped to it every 15 minutes from multiple production servers. One of the production servers keeps getting out of sync; when I came in, the last successful log restore on the backup server was over a year old... So clearly my predecessor wasn't watching this. I restored the database and ensured that the logs synced correctly at the next 15-minute interval. However, every couple of days or so (it is random: sometimes an hour, sometimes 3 days) it gets back out of sync and I have to spend 10 minutes of my morning restoring the database.
Basically I am wondering what I need to look at to figure out why these keep getting out of sync. I ran a query I found on sqlauthority.com which shows me the .trn files and their LSNs (log sequence numbers) for the primary database. When I try to restore the transaction logs on the backup database using the file that should contain the next LSN, it says the LSN is too recent; so I try the next file, and it says it is out of sequence and can't be restored.
Any help would be appreciated :)
Upon digging into the job history further, I found that my LS_Restore job was succeeding with errors: the SQL Server Agent service account didn't have access to the folder holding the .trn files it was trying to restore the database from.
I researched what the original developer had set up as the SQL Server Agent service account and then granted that user the necessary permissions; the logs are now restoring as needed.
I hope this information helps someone in the future!
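Two checks that help when chasing a broken restore chain like this; a minimal sketch, where the .trn path is hypothetical:

    -- On the secondary: what did log shipping last copy and restore?
    SELECT secondary_database, last_copied_file, last_copied_date,
           last_restored_file, last_restored_date
    FROM msdb.dbo.log_shipping_monitor_secondary;

    -- Inspect the LSN range inside a candidate .trn file:
    RESTORE HEADERONLY
    FROM DISK = N'\\backupserver\logs\ProdDb_201301010115.trn';

The FirstLSN and LastLSN columns returned by RESTORE HEADERONLY tell you exactly which file continues the chain from the last one restored.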

Transactional replication failing

I am facing an issue with SQL Server transactional replication and am not able to get to the root cause. First, let me say that I am not a DBA, so I may be weak on a few DBA concepts.
I am a .NET developer and I have been given responsibility for setting up the replication.
I have a database at the head office and am replicating a few tables to another server at a retail store.
The first time, I configured the replication with the selected articles.
The replication was continuous and ran fine, but one Sunday night it failed with the error "The process could not execute 'sp_replcmds'".
After spending some time on Google I couldn't find any solution, so I rebuilt the replication, but this time scheduled (every 15 minutes), and configured as pull instead of push. It started, but again the next Sunday night it crashed.
I then realized that on Sunday nights a reindexing job I had configured runs on the database, and since the recovery model was full, it generated a very large transaction log that the replication agent was not able to work through.
The third time, I rebuilt the replication again, still scheduled every 15 minutes but only from 8:00 AM to 11:30 PM, because after 11:30 no store does any transactions. I also added two more steps to the reindexing job: before reindexing it changes the recovery model to simple, and afterwards it changes the recovery model back to full, irrespective of the result of the reindexing step.
This setup worked properly for around two months.
Then, after two months, it failed again one Sunday night, with the same error ("The process could not execute 'sp_replcmds'"). By that point I had scheduled a backup job taking a full backup every day and a log backup every 15 minutes, with no differential backups.
After discovering that I had not configured differential backups, I configured them as well (every 6 hours). But after configuring the differential backups, the replication failed again on Sunday night.
Can anybody please recommend a setup for my scenario?
My setup is:
SQL Server: SQL Server 2008 R2 Enterprise on Windows Server 2008 R2
Distributor and Publisher are on the same machine.
Subscriber is on the retail store server.
sp_replcmds is run by the Log Reader Agent against the published database to get, well, replicated commands. According to the documentation, one needs to be at least db_owner to run that command. Make sure whatever account runs the Log Reader Agent has at least db_owner in the published database.
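A minimal sketch of that grant on SQL Server 2008 R2, where the database and account names are hypothetical stand-ins for the published database and the Log Reader Agent account:

    USE HeadOfficeDb;   -- the published database
    GO
    -- The account must already be a user in this database.
    EXEC sp_addrolemember @rolename = N'db_owner',
                          @membername = N'DOMAIN\ReplLogReader';

On SQL Server 2012 and later, the same grant is written as ALTER ROLE db_owner ADD MEMBER [DOMAIN\ReplLogReader].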

SQL Server 2000 DTS Package Failing with "The number of failing rows exceeds the maximum specified"

I have inherited a SQL Server 2000 DTS package that migrates data from SQL Server to Oracle. This package moves about 20 tables' data to Oracle every night with no transformations, and it is then transformed by a set of SPs and used by a GIS application.
Twice this week, during the migration between SQL Server and Oracle, the package has failed with "The number of failing rows exceeds the maximum specified". It has failed on a different table each time, though.
Each time it's failed, we've rerun the process the next morning and it has worked. Because the process works the second time it's run, it makes me think the data is being changed by someone or something between the initial failure and our successful second run.
I would like to change the DTS package to log the failing rows in a text document so we can compare them later.
Can someone help me with that? I can't seem to figure that part out.
Scott
Never mind. I found the Exceptions File under the Options tab.

SQL Server job (stored proc) trace

I need your suggestions on tracing an issue.
We run data load jobs early in the morning, loading data from an Excel file into a SQL Server 2005 database. When the job runs on the production server, it often takes 2 to 3 hours to complete. We have drilled down to one job step that takes 99% of the total time.
Running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, yet the same step takes hours on the production server when it runs early in the morning as part of the job. The production server always gets stuck at that very job step.
I would like to run a trace on that job step (around 10 stored procs run for each user in a while loop within the job step) and collect information to figure out the issue.
What options are available in SQL Server 2005 to achieve this? I want to trace only these SPs, not a whole time period on the production server: a broad trace produces a lot of information, and it becomes very difficult for me (not being a DBA) to analyze that much trace data and figure out the issue. So I want to collect information about these specific SPs only.
Let me know what you suggest.
Appreciate your time and help.
Thanks.
Use SQL Profiler. It allows you to trace plenty of events, including stored procedures, and even apply filters to the trace.
Create a new trace
Select just stored procedures (RPC:Completed)
Check "TextData" for columns to read
Click the "Column Filters" button
Select "TextData" from the left hand nav
Expand the "Like" tree view, and type in your procedure name
Check "Exclude Rows that Do Not Contain Values"
Click "OK", then "Run"
What else is happening on the server at that time matters when a job is fast on other servers but slow in production. Maybe you are running into the daily backup, or into statistics or index maintenance jobs?
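On a busy production box, a lighter-weight alternative to the Profiler GUI is a server-side trace scripted in T-SQL; a minimal sketch filtered to a single procedure, where the file path and procedure name are assumptions:

    DECLARE @TraceID int, @maxsize bigint, @on bit;
    SET @maxsize = 50;   -- max trace file size in MB
    SET @on = 1;

    -- Create a stopped trace writing to D:\Traces\sp_trace.trc
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\sp_trace', @maxsize;

    -- Event 10 = RPC:Completed; columns 1 = TextData, 12 = SPID, 13 = Duration
    EXEC sp_trace_setevent @TraceID, 10, 1, @on;
    EXEC sp_trace_setevent @TraceID, 10, 12, @on;
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;

    -- Keep only rows whose TextData matches the procedure name
    -- (column 1 = TextData, 0 = AND, 6 = LIKE)
    EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%usp_LoadUserData%';

    -- Start the trace (1 = start; 0 = stop, 2 = close and delete)
    EXEC sp_trace_setstatus @TraceID, 1;

The captured file can be read back with SELECT * FROM fn_trace_gettable('D:\Traces\sp_trace.trc', DEFAULT). If the procedures are invoked as ad hoc batches from the Agent job rather than as RPCs, add event 12 (SQL:BatchCompleted) the same way.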
