I have tried to find an answer to this problem, but so far without success.
We migrated a customer's database server from SQL Server 2012 to SQL Server 2019.
There is a linked server fetching data from an IBM DB2 database.
With SQL Server 2012 it takes less than 1 second to get 10 rows from the linked server. With SQL Server 2019 it takes 20 seconds or more!
If I increase the number of rows to 100 or 1,000, the result is displayed immediately, but the query stays in status "executing" for 20 seconds.
Any help or hints on which direction I should look in would be very much appreciated.
Best regards
Guenter
Update: The customer has confirmed that after downgrading to the OLE DB Provider for DB2 v5.0, performance is as good as on the old system.
https://www.microsoft.com/en-us/download/details.aspx?id=55183
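To narrow down where the time goes, it can help to compare a pass-through OPENQUERY call (the whole statement is executed by DB2) with a four-part-name query (the local optimizer and the OLE DB provider split the work). Linked server, catalog, schema, and table names below are placeholders:

```sql
-- DB2LINK, MYDB, LIB and SALES are placeholder names for the linked server,
-- remote catalog, remote schema and remote table.

-- Pass-through: the row limit is applied on the DB2 side.
SELECT *
FROM OPENQUERY(DB2LINK, 'SELECT * FROM LIB.SALES FETCH FIRST 10 ROWS ONLY');

-- Four-part name: how much work is pushed down to DB2 depends on the
-- OLE DB provider's reported capabilities, which can change between versions.
SELECT TOP (10) *
FROM DB2LINK.MYDB.LIB.SALES;
```

If the pass-through form is fast and the four-part form is slow, the regression is likely in how the newer provider negotiates remote execution rather than in SQL Server itself.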
Microsoft® Access® for Microsoft 365 MSO (Version 2202 Build 16.0.14931.20888) 64-bit
Microsoft SQL Server 2019 - 15.0.4261.1 (X64) Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0
System Type 64 bit operating system, x64-based processor
I've created an ODBC 64-bit file DSN connection for a MS Access Pass-Through Query to a SQL Server database. I've got a large query that runs on the client side in around five minutes; the query appears to run and correctly return the requested records. The ODBC Timeout is set to 540 (seconds). The problem is that the server shows that the query ran for over forty-five minutes before I was contacted by a DBA. I terminated Access and that severed the connection.
Would anyone know why this might happen or how I could troubleshoot?
You can trace a query to see when different phases of the query complete.
Typically, when something impossible seems to be going on, a closer look reveals an easy explanation. Is there a transaction left open? Does running the query trigger a statistics update? Why does the DBA think the query keeps running? There is nothing unique about ODBC that would allow a query to keep running without terminating. A first step might be to run the query directly in SQL Server Management Studio and see if you can reproduce the behavior.
https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/troubleshoot-never-ending-query?tabs=2008-2014
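On the server side, a quick first check is whether the Access session is still executing, waiting, or merely sitting on an open transaction; a sketch (requires the VIEW SERVER STATE permission):

```sql
-- Is the session still running, blocked on a wait, or idle with an
-- open transaction (which can look like a "running" query to a DBA)?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.total_elapsed_time / 1000 AS elapsed_s,
       s.open_transaction_count,
       s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.session_id
WHERE s.is_user_process = 1;
```

If the Access session shows up in sys.dm_exec_sessions but not in sys.dm_exec_requests, no query is actually executing; the session is just holding its connection (and possibly a transaction) open.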
I have had a SQL Server 2008 instance running for years, with several interfaces exchanging data daily with an Oracle 11 database; everything has been running correctly.
A few days ago we upgraded from SQL Server 2008 R2 to 2016 and imported the whole SQL Server 2008 database into our new SQL Server 2016 instance.
We re-established our interfaces and everything looked good. I had created the linked server with an Oracle 11 client connecting to the Oracle 11 database, and the connection worked fine.
But since then, the stored procedures used by my interfaces take much longer to run. On SQL Server 2008 R2, the linked server updates to Oracle took a few minutes. The exact same procedure now takes hours for the same number of rows.
My interface procedures look roughly like this:
collect local data using a cursor
iterate through the cursor, checking via the linked server whether each record exists in the remote database
insert the missing records into the Oracle database via the linked server
close everything and complete
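A per-row existence check over a linked server costs one network round trip per record, which is often what turns minutes into hours after a migration changes provider or network behavior. A set-based rewrite issues a single remote read and a single insert instead; all object names below are made up for illustration:

```sql
-- Hypothetical sketch: replace the per-row cursor lookup with one
-- set-based anti-join. ORA11 is the linked server name; the schema,
-- table and column names are placeholders.
INSERT INTO ORA11..SCHEMA1.TARGET_TABLE (ID, PAYLOAD)
SELECT s.ID, s.PAYLOAD
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (
    -- One pass-through read of the remote keys instead of one probe per row.
    SELECT 1
    FROM OPENQUERY(ORA11, 'SELECT ID FROM SCHEMA1.TARGET_TABLE') AS r
    WHERE r.ID = s.ID
);
```

Even if the cursor must stay, moving the existence check out of the loop (for example, pulling the remote keys into a local temp table once) removes the per-row round trips.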
Why is it suddenly so much slower with exactly the same source? Is there anything new in SQL Server 2016 that needs to be configured properly?
What is wrong?
I want to upgrade all SQL Server Analysis Services (SSAS) cubes and dimensions from SQL Server 2005 to 2012, along with the SSAS database data.
I have already upgraded the SSAS project in Visual Studio (SSDT) using the Upgrade Advisor, but I am not sure whether this is the right way or not.
So, please guide me on upgrading an SSAS database from 2005 to 2012.
Also, is it possible to move directly from SQL Server 2005 to SQL Server 2012, or do I need to go step by step (2005 → 2008 → 2012)?
Please guide.
Thank you in advance.
It's OK to move from 2005 directly to 2012.
I usually do it this way:
create XMLA backup on 2005
restore it on 2012
Two possible issues I have faced several times:
if the XMLA does not restore because of the "preserve null" option (I don't remember the exact name) on a Distinct Count measure, change that option to "Automatic" directly in the XMLA and re-run; that has always worked for me
for huge cubes (1 TB+), I also switch off all calculations (just comment them out), because they are validated at the end of processing, and everything is rolled back if any calculation is no longer valid after the upgrade
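The backup and restore steps above can be issued as XMLA commands against the two instances; a sketch, where the database ID, name, and file path are placeholders:

```xml
<!-- Run against the 2005 instance: back up the SSAS database. -->
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyCubeDB</DatabaseID>
  </Object>
  <File>C:\Backup\MyCubeDB.abf</File>
</Backup>

<!-- Run against the 2012 instance: restore the backup file. -->
<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <DatabaseName>MyCubeDB</DatabaseName>
  <File>C:\Backup\MyCubeDB.abf</File>
  <AllowOverwrite>true</AllowOverwrite>
</Restore>
```

Both commands can be run from an XMLA query window in SQL Server Management Studio connected to the respective Analysis Services instance.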
I have an old SQL Server 2000 server. We have an issue (very slow) in our .NET web application, and we suspect it is caused by high memory usage in SQL Server: in Task Manager's process list we found that SQL Server is consuming a lot of memory and becomes very slow.
So my questions are:
1. Do you think the SQL Server issue is the cause of our application issue?
2. How do we check the memory usage of SQL Server over the past few days?
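Two notes on checking this: SQL Server reserving most of the machine's RAM for its buffer pool is normal behavior, so a large number in Task Manager is not by itself proof of a problem; and SQL Server 2000 keeps no built-in memory history, so looking back over past days requires a Perfmon counter log to have been running. For the current state, a sketch of what is available on a 2000 instance:

```sql
-- Current memory breakdown (buffer pool, procedure cache, etc.):
DBCC MEMORYSTATUS;

-- Memory Manager performance counters exposed inside SQL Server 2000
-- (the same counters Perfmon can log over time):
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%';
```

To answer the "past few days" question going forward, set up a Perfmon log of the SQLServer:Memory Manager and SQLServer:Buffer Manager counters.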
Thanks in advance.
Kindly do reply
Sharda
Recently, we migrated our servers from SQL Server 2012 to 2016. Without any other changes, a few of the queries degraded in performance.
If I run a query in SQL Server 2012, it takes 10 seconds, but the same query takes 50+ minutes in SQL Server 2016.
If I set Legacy CE = ON on the SQL Server 2016 database, I get the results as quickly as on 2012. But I believe it's not recommended to run with the legacy CE on in SQL Server 2016.
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;
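Rather than switching the whole database, the legacy estimator can also be forced per query (SQL Server 2016 SP1 and later), which limits the change to the regressed queries while the underlying issue is investigated; the query shape below is a made-up placeholder:

```sql
-- Hypothetical query: force the pre-2014 cardinality estimator
-- for this statement only (table and column names are placeholders).
SELECT t.Id, t.Amount
FROM dbo.SomeLargeTable AS t
WHERE t.Amount > 100
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

Comparing the actual execution plans of the fast (legacy CE) and slow (new CE) runs usually shows which estimate the new cardinality estimator gets wrong.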
I also updated the statistics of all tables, though I did not get any improvement in execution time.
So, is this a database configuration issue, or do I really need to update the query?
I am not sure which configuration details I need to check. Could you please suggest anything?
Thanks in advance.
Solution: Link1 /
Link2
Root cause: CARDINALITY_ESTIMATION
Thanks to all.