There are several packages in my Oracle DBMS that sometimes fail to complete, or take more than 6 to 7 hours to finish, even though the procedures and packages themselves are error free.
This only happens when multiple clients run different packages at the same time against the server PC that holds the database, as in the picture shown.
There are 20 client systems and a single server PC in my office.
My question: if I use 2 or more PCs to cluster my Oracle DB, will that help performance, i.e. will it reduce package run time and reduce the chance of a package run failing?
Currently I'm working on a system that gathers data from different websites/APIs and stores this data in a SQL Server database. A reporting service then generates reports based on this data.
We have a lot of jobs running in SQL Server agent (each job has some steps, each step can be of type - PowerShell script or SQL...).
Version of SQL Server is 2017.
We have a problem that almost every day there are jobs that start but never end (the status stays "Executing"). These jobs should not run anywhere near that long.
Does anybody have an idea how to solve this problem, or at least how to investigate it?
CPU usage on the virtual machine is around 20% and memory usage around 50%, so it does not look like a resource problem.
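A hedged sketch of one way to start investigating: list what the Agent sessions are currently doing inside SQL Server. If a "stuck" job has no row here, the step is hanging outside the engine (e.g. a PowerShell step waiting on an external resource).

    -- Active requests coming from SQL Server Agent job steps, with their waits.
    SELECT s.session_id,
           s.program_name,               -- contains the job id and step number
           r.command,
           r.status,
           r.wait_type,
           r.wait_time,
           r.blocking_session_id,
           r.total_elapsed_time / 1000 AS elapsed_seconds
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
    WHERE s.program_name LIKE 'SQLAgent%'
    ORDER BY r.total_elapsed_time DESC;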
I have an SSIS package that moves data from SQL Server to a Postgres 9.5 database. I'm using the latest Postgres ODBC drivers to connect to the Postgres server. The bulk of the operations are inserts, updates and upserts. When I run this package from my development machine (Win 10 64-bit) in Visual Studio 2012 it's quite speedy. It can transfer about 80k rows in around 40 seconds.
When I deploy it to the server (a SQL Server 2012 instance) and run it using SQL Server Management Studio it executes painfully slowly. It took around 20 seconds to transfer fewer than 10 rows, and it takes forever to work through the full data set. I never let it finish because it just takes too long.
Both my development machine and the server have the exact same postgres driver installed and identically configured ODBC sources.
Edit: I should also note that I have other packages deployed to that server that run just fine, though these packages don't touch postgres or use ODBC for anything.
If all else is equal:
Same drivers
Same configurations - ODBC (i.e. ssl, same boxes ticked), package parameters
Same tables - with same amount of data in them and indexes
Same bit-ness (x64)
I would look toward resource differences, i.e.:
IO - is the IO the same? Assuming this is writing to the same database (and files), this should not be an issue.
Memory - is RAM constrained on the server? If the dataflow has large buffers this could really slow things down.
CPU - is the server more CPU-limited than your dev machine?
Network - is the server sitting in a different subnet with differing QoS? I would guess this is not an issue since the other packages are not affected, unless there is something odd about connecting to Postgres.
Another way to zero in on the issue would be to run different versions of the package, stripping out parts of the dataflow. E.g. remove the ODBC / ADO.NET destination and run the package. If it finishes quickly, you know the issue is there. If it is still slow, keep moving upstream until you've identified the component that's slow.
Lastly, I would consider using psql over ODBC. psql is a Postgres command-line utility, comparable to SQL Server's bcp, which lets you bulk copy data into Postgres. The ODBC driver only allows row-by-row inserts, which tend to be sluggish. Writing the data to a tab-delimited file and then using psql to bulk copy it into Postgres is significantly faster (80k rows might take 5 seconds all told); see the sketch below.
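A minimal sketch of that approach, assuming the data has already been written out to a tab-delimited file and the target table is called target_table (the path and table name here are placeholders):

    -- Run inside psql. \copy streams the client-side file to the server,
    -- so the file does not have to live on the Postgres host.
    -- FORMAT text expects tab-delimited rows by default.
    \copy target_table FROM 'C:/exports/rows.tsv' WITH (FORMAT text)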
OK… I’ve been tasked to figure out why an intranet site is running slow for a small to medium sized company (fewer than 200 people). After three days of looking on the web, I’ve decided to post what I’m looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running Hyper-V
RAM: 32GB
Server 2012 was built to run at most 2 to 3 VMs (only one VM is running at the moment)
16GB of RAM dedicated for the VHD (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS 7 intranet.
Here is what’s happening:
A page in the intranet is set to run a couple of stored procedures that do some checking against data in other tables as well as insert data (some sort of attendance db) after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 minute 50 seconds to run on the newer server. I was able to get hold of the old server and run the same test: 14 seconds.
I’m at a loss… a lot of sites say to alter the code. However, it was running quickly before.
The old server is a 32-bit Server 2003 box running SQL Server 2000… the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from.
The bottleneck could be in SQL Server, in IIS, in the code, or on the network.
Find the SQL statements that are executed and run them directly in SQL Server (a sketch of one way to pull the most expensive statements follows this list).
Run the code outside of the IIS web pages.
Run the code from a different server.
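For the first point, a hedged sketch of one way to find candidate statements straight from the plan cache (works on SQL Server 2008 R2; the TOP value is arbitrary):

    -- Top cached statements by total elapsed time, with their text and plan.
    SELECT TOP (10)
           qs.total_elapsed_time / 1000 AS total_elapsed_ms,
           qs.execution_count,
           qs.total_logical_reads,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.total_elapsed_time DESC;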
Solved my own issue... just took a while for me to get back to this. Hopefully this will help others.
Turned on SQL Activity Monitor under Tools > Options > At startup > "Open Object Explorer and Activity Monitor".
Opened Recent Expensive Queries, right-clicked the top queries and selected Show Execution Plan. This showed a missing index for the db. Added the index by clicking the missing-index hint at the top of the plan.
Hope this helps!
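If anyone prefers to script this instead of clicking through the plan, the optimizer records the same suggestions in the missing-index DMVs; a hedged sketch (the suggestions reset on every instance restart, so treat them as hints rather than gospel):

    -- Missing-index suggestions collected by the optimizer since the last restart.
    SELECT mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
         ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
         ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_user_impact DESC;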
Currently running SQL Server 2008 R2 SP1 on 64-bit Windows Server 2008 R2 Enterprise, on an Intel dual 8-core processor server with 128 GB RAM and a 1 TB internal SCSI drive.
Server has been running our Data Warehouse and Analysis Services packages since 2011. This server and SQL instance is not used for OLTP.
Suddenly and without warning, all of the jobs that call SSIS packages that build the data warehouse tables (using Stored Procedures) are failing with "Deadlock on communication buffer" errors. The SP that generates the error within the package is different each time the process is run.
However, the jobs will run fine if SQL Server Profiler is running to trace at the time that the jobs are initiated.
This initially occurred on our Development server (same configuration) in June. Contact with Microsoft identified disk I/O issues and suggested setting MaxDOP = 8, which has mitigated the deadlock issue but introduced an issue where the processes can take up to 3 times longer at random intervals.
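(For reference, the instance-level change is applied like this; a minimal sketch, where the value 8 is simply what Microsoft suggested for that server.)

    -- Cap parallelism for the whole instance at 8 schedulers.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;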
This just occurred today on our Production server. MaxDOP is currently set to zero. There have been no changes to OS, SQL Server or the SSIS packages in the past month. The jobs ran fine overnight on September 5th, but failed with the errors overnight last night (September 6th) and continue to fail on any retry.
The length of time that any one job runs before failing is not consistent, nor is there consistency between jobs. Jobs that previously took 2 minutes to run to completion will fail in seconds, while jobs that normally take 2 hours may run anywhere from 30 to 90 minutes before failing.
Have you considered changing the isolation level of the database? This can help when parallel reads and writes are happening against the same tables.
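If you go down that road, a minimal sketch of switching the warehouse database to read committed snapshot isolation (the database name DW is a placeholder, and the ALTER needs a quiet moment because it has to be the only active session or roll the others back):

    -- Readers use row versions instead of shared locks, which can reduce
    -- reader/writer blocking. WITH ROLLBACK IMMEDIATE kicks out other
    -- sessions, so schedule this outside the load window.
    ALTER DATABASE DW SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;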
I've written a .NET application that has a SQL Server 2008 R2 database with a relatively small number of tables, but some of those tables might hold some 100,000,000 records! To improve the performance of SELECTs, I've created the necessary indexes and it works well. But, as everyone knows, indexes need to be rebuilt when they become fragmented.
We have installed SQL Server 2008 R2 Express on one of the customer's PCs, plus my WinForms application. Three more PCs connect to this database over a regular LAN, and everything seems fine.
Now the problem is that I want to rebuild the indexes, for example every time a user starts my program on ANY of the machines. Well, I can execute several ALTER INDEX statements, but as stated in the MS docs, OFFLINE index rebuilds lock the tables for the duration of the operation. That means other users will lose access to the tables when a user starts the program! I know there is an ONLINE option, but it isn't available in the Express edition of SQL Server.
In other environments, with a real server running all the time, I would create an Agent job that rebuilds the indexes overnight.
How can I solve this problem?
Without a normal 24/7 server running, it's difficult to do such maintenance automatically without disturbing users. I don't think putting that job at application startup is a good idea, as it can easily be triggered many times in a row for no real reason, and it also slows down startup significantly if the tables are big, in addition to keeping everyone else out, as you say.
I would opt for one of two choices:
Set up a job on the "server" to do the rebuild on either SQL Server startup or computer startup. It will slow down the initialization of that PC when the user first powers it on, but once done it should work OK, most likely with results similar to a nightly job.
Add an option in the application to launch the reindexing job manually when the user wants to run it, warning that it will take some time and that during the process nobody else can use the database. While this provides maximum flexibility, it relies on the user doing it when they start noticing delays. A sketch of the rebuild script either option could call is below.
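A minimal sketch of the rebuild pass either option could call (the 5% / 30% thresholds are the usual rules of thumb, not hard rules; REORGANIZE stays online even on Express, while REBUILD will lock the table for the duration):

    -- Build and run ALTER INDEX statements for fragmented indexes in the
    -- current database. Heaps (indexes without a name) are skipped.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
                 + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + CASE WHEN ips.avg_fragmentation_in_percent >= 30
                        THEN N' REBUILD;'       -- offline on Express
                        ELSE N' REORGANIZE;'    -- always online
                   END + CHAR(10)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.tables  AS t ON t.object_id = i.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE ips.avg_fragmentation_in_percent >= 5
      AND i.name IS NOT NULL;

    EXEC sys.sp_executesql @sql;

For the first choice, a wrapper stored procedure created in master and flagged with sys.sp_procoption @OptionName = 'startup' will run automatically each time the instance starts.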