cfquery taking much longer - sql-server

A simple query like
SELECT TOP 1 ColName FROM <TABLE> WITH (NOLOCK) WHERE SomeFieldName='xxxxx'
is taking a lot of time in CF. The same query runs without any issues in Management Studio. There is an index on SomeFieldName. I have FusionReactor installed; it shows the query taking 25-35 seconds. The query plan in dev doesn't suggest anything.
What is going wrong here? Could indexes be an issue? (I do not have access to them in prod.)
ColdFusion 2018.
Edit: Same queries run alright on CF2016

We finally found the answer for the slow-running queries, though we still do not know why it happens.
The SomeFieldName column is data type varchar. CF2018 is somehow sending varchar parameters as nvarchar to SQL Server. Because nvarchar has higher data type precedence, SQL Server implicitly converts the varchar column, so the index cannot be used for a seek. We found that in SQL Monitor.
The String Format check box in CF Admin is not enabled, so we are not sure why CF would send varchar as nvarchar.
We reset the CF Admin setting (checked the box, saved, unchecked it again, saved), restarted the instance, and it started working correctly.
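For illustration, a minimal sketch of the difference as SQL Server sees it; the table name and parameter length here are assumptions, not values from our trace.
-- Parameter declared as nvarchar: it outranks the varchar column in data type
-- precedence, so the column is converted row by row and the index seek is lost.
EXEC sp_executesql
    N'SELECT TOP 1 ColName FROM SomeTable WITH (NOLOCK) WHERE SomeFieldName = @p1',
    N'@p1 nvarchar(4000)',
    @p1 = N'xxxxx';
-- Parameter declared as varchar: types match and the index seek works.
EXEC sp_executesql
    N'SELECT TOP 1 ColName FROM SomeTable WITH (NOLOCK) WHERE SomeFieldName = @p1',
    N'@p1 varchar(4000)',
    @p1 = 'xxxxx';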

Related

Sorting columns is very slow in Access after moving back-end to SQL Server

I used the SQL Server Migration Assistant for Access tool to move a database to SQL Server and keep Access as the front end. Everything went pretty smoothly and the data looks right, reports are working well, etc. I'm just having one problem that is making it basically unusable.
When I open a table in Access and try to sort a column, it is very, very slow. When I click the column and choose an option (like Sort A-Z, for example), it says "Calculating..." in the bottom left for about a minute before it actually sorts the column.
Is there any way to speed this up? Did I do something in the migration that might have caused this? It wasn't having any issues with this before.
Without more info about the table (its size, its indexes, the network speed), it's hard to say for sure why you are seeing this delay, but there are some steps you can take to narrow down the source.
In SQL Server Management Studio, write a query that sorts the same column of the same table (e.g. SELECT * FROM Tablename ORDER BY Columnname;).
If this runs quickly, then you can be pretty sure the table and its indexes are fine, and you need to look at the network speed between your computer and the SQL Server. If it runs slowly, then the issue is with the table, its indexes, or maybe an underpowered SQL Server.
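To get hard numbers rather than a feel, a small sketch (table and column names are placeholders):
-- Report compile/execution times and I/O counts for the test query.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT * FROM Tablename ORDER BY Columnname;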

SQL Query Param type affects query speed

We had a slow-loading page, and investigating pointed to the query execution. The page on the dev server was fine, and the query ran fine in MSSQL Management Studio; only on the CF page was it slow. I noticed the ids were being checked with a cfsqltype of cf_sql_numeric instead of cf_sql_integer. For kicks I changed it to cf_sql_integer, and the page now loads as expected.
So, what gives? Why would that make a difference?
As to why it may have suddenly started going slow: our DBA recently added some indexes to speed up the DB in general. But still, why so slow based only on the param type?
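For what it's worth, the usual explanation is data type precedence: numeric outranks int, so the int column is implicitly converted to match the parameter, which can cost you the index seek. A hedged sketch against a hypothetical table:
-- cf_sql_numeric arrives as a decimal/numeric parameter; the int id column is
-- converted to match it, which can prevent a straightforward index seek.
DECLARE @p_numeric NUMERIC(18, 0) = 42;
SELECT * FROM dbo.Orders WHERE id = @p_numeric;
-- cf_sql_integer arrives as int; types match and the index on id is seekable.
DECLARE @p_int INT = 42;
SELECT * FROM dbo.Orders WHERE id = @p_int;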

Why Would Remote Execution of a Query Cause it to be Suspended?

I apologize in advance for not having all of the specifics available, but the machine will probably be building an index for a good while yet and is almost completely unresponsive.
I've got a table on SQL Server 2005 with a good number of columns, maybe 20, but a mammoth number of rows (tens, more likely hundreds of millions). In order to simplify the amount of JPA work I'd need to do to access it, I created a view that contained the bits I was interested in. The view was created as:
CREATE VIEW vwTable AS
SELECT bigtable.ID, bigtable.external_identification, mediumtable.hostname,
CONVERT(VARCHAR, bigtable.datefield, 121) AS datefield
FROM schema.bigtable JOIN schema.mediumtable ON bigtable.joinID = mediumtable.ID;
When I want to select from the view, I do:
SELECT * FROM vwTable WHERE external_identification = 'some string';
This works just fine in SQL Management Studio. The external_identification column has a non-unique, non-clustered index in bigtable. This also worked just fine from our remotely executing Java program in our test environment. Now that we're a day or two away from production, the code has changed a bit (although the fundamental JPA NamedQuery is still straightforward), but we have a new SQL Server installation on new hardware; the test version was on a 32-bit single-core machine, while the new hardware is 64-bit multi-core.
Whenever I try to run the code that uses this view on the new hardware, it either hangs indefinitely on the first call of this query or times out if I have a timeout specified. After doing some digging, something like:
SELECT status, command, wait_type, last_wait_type FROM sys.dm_exec_requests;
confirmed that the query was running, but showed it in the state:
suspended, SELECT, CXPACKET, CXPACKET
for as long as I cared to wait for it. Whenever I ran the exact same query from within Management Studio, it completed immediately. So I did some research and found out this is due to waiting on some kind of concurrent operation to start/finish. In an attempt to circumvent that, I set the server-wide MAXDOP to 1 (disabling parallelism). After that, the query still hangs, but sys.dm_exec_requests would show:
suspended, SELECT, PAGEIOLATCH_SH, PAGEIOLATCH_SH
This indicates that it's some sort of HD/scanning issue. While certainly the machine is less responsive than I'd expect for newer hardware, I wouldn't expect this query (even over the view) to require much scanning, since the column I'm searching by is indexed in the underlying table and it works if I run it locally. But just because I'm out of ideas and under the gun, I'm adding indexes to the view; first I have to add the unique clustered index (over ID) before I can attempt to add the non-unique non-clustered index over external_identification.
I'm the only one using this database; when I select from sys.dm_exec_requests the only two results are the query I'm actively inspecting and the select from sys.dm_exec_requests query. So it's not like it's under legitimately heavy, or even at all concurrent, load.
But I suspect I'm grasping at straws. I'm no DBA, and every time I have to interact with SQL Server beyond just querying it, it baffles my intuition. Does anyone have any ideas why a query executed remotely would immediately go into a suspended state, while the same query executed locally completes immediately?
Wow, this one caught me straight out of left field. It turns out that, by default, the MSSQL JDBC driver sends its String datatypes as Unicode, which the table/view might not be prepared to handle. In our case, the columns and indexes were not, so MSSQL would perform a full table scan for each lookup.
In our test environment, the table was small enough that this didn't matter, so I was tricked into thinking it worked fine. In retrospect, I'm glad it didn't -- I can't stand it when computers give the illusion of inconsistency.
When I added this little parameter to the end of my JDBC connection string:
jdbc:sqlserver://[IP]:1433;databaseName=[db];sendStringParametersAsUnicode=false
things immediately and magically started working. Sorry for the slightly misleading question (I barely even mentioned JPA), but I had no idea what the cause was and really did believe it was something SQL Server side. Task Manager didn't report heavy CPU/Memory usage while the query was suspended, so I just thought it was idling even though it was really under heavy disk usage.
More info about MSSQL JDBC and Unicode can be found where I stumbled across the solution, at http://server.pramati.com/blog/2010/06/02/perfissues-jdbcdrivers-mssqlserver/ . Thanks, Ed, for that detailed shot in the dark -- it may not have been the problem, but I certainly learned a lot (and fast!) about MSSQL's gritty parts!
It is likely that the queries run in SSMS and by your application are using different query plans - from the wait types you're seeing in dm_exec_requests, it sounds like the plan created for the application is doing a table scan, while the plan for SSMS is using an index seek.
This is possible because the SSMS and application database connections likely use different connection options, some of which are used as a key to the database plan cache.
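If you want to check this directly, the plan cache records the SET options each plan was compiled under; a sketch (the LIKE filter is a placeholder):
-- 'set_options' is a bitmask of the SET options in effect at compile time.
-- Two different values for the same statement mean two differently-keyed plans.
SELECT qs.plan_handle, pa.value AS set_options, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%external_identification%';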
You can find out which options your application is using by running a default SQL server profiler trace against the server; the first command after the connection is created will be a number of SET... options:
SET DATEFORMAT dmy
SET ANSI_NULLS ON
...
I suspect this list will be different between your application and your SSMS connection - a common candidate is SET ARITHABORT {ON|OFF}, since that forms part of the key of the cached plan.
If you run the SET... commands in an SSMS window before executing the query, the same (bad) plan as is being used by the application should then be picked up.
Assuming this demonstrates the problem, the next step is to work out how to prevent the bad plan getting into cache. It's difficult to give generic instructions about how to do this, since there are a few possible causes.
It's a bit of a scattergun approach (there are other, more targeted ways to attempt to resolve this problem, but they require a more detailed understanding of the issue than I have now), but one thing to try is to add OPTION (RECOMPILE) to the end of your query - this forces a new plan to be generated for every execution, and should prevent the bad plan from being reused:
SELECT * FROM vwTable WHERE external_identification = 'some string' OPTION (RECOMPILE);
Assuming you can replicate the bad performance in SSMS using the steps above, you should be able to test this there.
Beware that this can have negative performance consequences if the query is executed very frequently (since each recompilation requires CPU) - this depends on the workload of your application and will need testing.
A couple of other thoughts:
Check the schemas between the test and production systems; this might be as simple as a missing index from one of the tables in the production database, although given that SSMS queries perform OK this is unlikely.
You should re-enable parallelism by removing the server-wide MAXDOP=1 setting, since it will limit the performance of your system overall. The problem is almost certainly the query plan, not parallelism.
You also need to beware of the consequences of adding indexes to the view - doing so effectively materialises the view, which will (given the size of the table) require a lot of storage overhead - the indexes will also need to be maintained when INSERT/UPDATE/DELETE statements take place on the base table. Indexing the view is probably unnecessary given that (from SSMS) you know it's possible for the query to perform.

Why would an ODBC query against MSSQL 2008SP2 take 100 times as long as the same query in Studio?

I have a really odd query involving a join to a complex view. I analyzed the heck out of the view, built some indexes, and got the query working in under a second when run from MSSQL Management Studio. However, when run from Perl via ODBC, the exact same query takes around 80 seconds to return.
I've dumped almost 8 hours into this and it continues to baffle me. In that time I've logged the query from Perl and copied it verbatim into Studio, I've wrapped it in a stored procedure (which makes it take a consistent 2.5 minutes from BOTH clients!), I've googled ODBC & MSSQL query caches, I've watched the query run via the Activity Monitor (it spends most of its time in the generic SLEEP_TASK wait state) and Profiler (the select statement gets one line which doesn't show up until it's done running), and I've started reading up on performance bottlenecks.
I haven't noticed this problem with any other queries from Perl and unfortunately we don't have a DBA on site. I'm a programmer who's done some DBA but I feel like I'm groping in the dark with this one. My best guess is that there is some sort of query cache available from Studio that the ODBC client can't access, but restarting Studio does not make the query's first execution take longer so it doesn't look like it's just because each new ODBC connection starts with an empty cache.
Without going into the view definitions, the base query is very simple:
SELECT * FROM VIEW1 LEFT OUTER JOIN VIEW2 WHERE SECTION = ? AND ID = ?
The delay goes away when I drop VIEW2, but I need the data from that view. I've already rewritten the view three times in attempts to simplify and improve efficiency but this feels kinda like a dead end since the query runs fine from Studio. The query only returns a single row but even dropping the ID criteria and selecting all 56k rows for an entire section only takes 40 seconds from Studio. Any other ideas?
Edit 2/8:
The article @Remus Rusanu linked was pretty clear, but I'm afraid it didn't quite apply. It now seems pretty clear that it's not ODBC at all, but that when I hard-code arguments vs. parametrize them, I get different execution plans. I can reproduce this in SSMS:
SELECT * FROM VIEW1 LEFT OUTER JOIN VIEW2 WHERE SECTION = 'a' AND ID = 'b'
is 100 times faster than
DECLARE @p1 VARCHAR(8), @p2 VARCHAR(3)
SET @p1 = 'a'
SET @p2 = 'b'
SELECT * FROM VIEW1 LEFT OUTER JOIN VIEW2 WHERE SECTION = @p1 AND ID = @p2
Unfortunately, I'm still at a loss to explain why the first should get an execution plan that takes two orders of magnitude less time than the parametrized version, for any values of SECTION and ID I can throw at it. It may be a deficiency in SQL Server, but it just seems stupid that the inputs are known in both places yet one takes so much longer. If SQL Server recomputed the parametrized plan from scratch every time, as it must be doing for the different constant values I am supplying, it would be 100 times faster. None of the RECOMPILE options suggested by the article seem to help either.
I think @GSerg called it below. I've yet to prove that it doesn't happen with the window function used externally to the view, but he describes the same thing and the timing discrepancy remains baffling.
If I don't run out of time to work on this, I'll try to adapt some of the article's advice and force the constant execution plan on the parametrized version, but it seems like an awful lot of what should be unnecessary trouble.
As explained by Martin Smith, it's an issue with predicate pushing in SQL Server which hasn't been fixed in full.
As suggested in the answers and comments on the linked question, you can:
Wrap the query in something that appends OPTION (RECOMPILE);
Convert it to a table-valued function; or
Attach a plan guide with OPTION (RECOMPILE).
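For the first option, a hedged sketch; the join condition is hypothetical, since the question elides it:
DECLARE @p1 VARCHAR(8) = 'a', @p2 VARCHAR(3) = 'b';
SELECT *
FROM VIEW1
LEFT OUTER JOIN VIEW2 ON VIEW1.ID = VIEW2.ID  -- hypothetical join condition
WHERE VIEW1.SECTION = @p1 AND VIEW1.ID = @p2
OPTION (RECOMPILE);  -- values known at compile time, so the optimizer can
                     -- build the same plan it builds for hard-coded constants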
Everything you ever wanted to know on the subject: Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.
There is no 'cache' available in SSMS that ODBC cannot access. It's just that you're getting different execution plans in SSMS vs. ODBC, either because of parameter sniffing or because of data type precedence rules. Read the linked article; it has both the means to identify the problem and recommendations on how to fix it.
Compare the plans for the two queries and check the SET settings for each connection (you can do this by looking in sys.dm_exec_sessions). I bet you'll see a difference in quoted_identifier, ansi_nulls or arithabort (or possibly all three). This usually causes vast differences in the execution plan. You should be able to set these settings manually in your ODBC version in order to match the settings that are being used by Management Studio.
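A sketch of that check; the program_name filters are assumptions about how your two clients identify themselves:
-- Each session row exposes its connection-level SET options directly.
SELECT session_id, program_name, quoted_identifier, ansi_nulls, arithabort
FROM sys.dm_exec_sessions
WHERE program_name LIKE '%Management Studio%' OR program_name LIKE '%perl%';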
Some related questions - there could be other obscure circumstances at play that you'll want to check into:
SQL Query slow in .NET application but instantaneous in SQL Server Management Studio
SQL Server Query Slow from PHP, but FAST from SQL Mgt Studio - WHY?
Query times out from web app but runs fine from management studio
SQL Server 2005 stored procedure fast in SSMS slow from VBA

A T-SQL query executes in 15s on sql 2005, but hangs in SSRS (no changes)?

When I execute a T-SQL query, it executes in 15s on SQL 2005.
SSRS was working fine until yesterday; now the same query hangs there, and I had to kill it after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query in SSRS, then look in the Activity Monitor of Management Studio. See if the query is currently blocked, and if so, what it is blocked on.
Alternatively, you can use sys.dm_exec_requests and check the same thing, without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it just executes slowly and it's time to check its execution plan.
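A sketch of that check; the session_id is a placeholder for the session you identify as the SSRS query:
SELECT blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE session_id = 53;  -- placeholder: the SSRS query's session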
Have you tried making the query a stored procedure to see if that helps? That way its execution plan is cached.
Updated: You could also make the query a view to achieve the same effect.
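A hedged sketch of the stored-procedure wrapper, with hypothetical object and parameter names standing in for the real query:
CREATE PROCEDURE dbo.usp_ReportData @section VARCHAR(8)
AS
BEGIN
    SET NOCOUNT ON;
    -- Stand-in for the real 15-second query, now parametrized.
    SELECT * FROM dbo.ReportTable WHERE section = @section;
END;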
Also, SQL Profiler can help you determine what is being executed. This will let you see whether the SQL is the cause of the issue, or whether it is Reporting Services rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
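A hedged illustration with hypothetical names. Creating an index over a persisted computed column requires specific SET options (e.g. ANSI_NULLS, QUOTED_IDENTIFIER and ARITHABORT all ON), and a connection whose options don't match can't use the stored value:
CREATE TABLE dbo.Orders (
    id INT PRIMARY KEY,
    qty INT,
    price DECIMAL(10, 2),
    total AS qty * price PERSISTED  -- stored at write time, indexable
);
CREATE INDEX IX_Orders_total ON dbo.Orders (total);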
Does any of that apply?
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problem like this is to get an execution plan. You can either get this by running a SQL Profiler trace with the Showplan XML event, or, if that isn't possible (you probably shouldn't do this on loaded production servers), you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan includes statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
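A sketch of the DMV route; the LIKE filter is a placeholder for some identifying text from your query:
SELECT st.text, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%your query text%';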
