I have the following query, which takes almost 1 minute to execute.
public static Func<Entities, string, IQueryable<string>> compiledInvoiceQuery =
CompiledQuery.Compile((Entities ctx, string orderNumb) =>
(from order in ctx.SOP10100
where order.ORIGNUMB == orderNumb
select order.SOPNUMBE).Union(
from order in ctx.SOP30200
where order.ORIGNUMB == orderNumb
select order.SOPNUMBE)
);
It filters on the basis of ORIGNUMB, which is not my primary key, and I cannot even put an index on it. Do we have any other way to make it faster? I tested on SQL Server and found that even the plain query
from order in ctx.SOP10100
where order.ORIGNUMB == orderNumb
select order.SOPNUMBE
or
select SOPNUMBE
from SOP10100
where ORIGNUMB = @orderNumb
is taking more than 55 seconds. Please suggest.
If it's taking 55 seconds on the server, then it's nothing to do with LINQ.
Why can't you have an index on it? Because you need one...
The only other option is to rejig your logic to filter out records (using indexed columns) before you start searching for an order-number match.
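If an index does become possible after all, the obvious fix is a nonclustered index on ORIGNUMB that also covers SOPNUMBE, so each branch of the UNION can seek instead of scan. A minimal sketch, with made-up index names and the schema assumed to be dbo:
CREATE NONCLUSTERED INDEX IX_SOP10100_ORIGNUMB
    ON dbo.SOP10100 (ORIGNUMB)
    INCLUDE (SOPNUMBE);

CREATE NONCLUSTERED INDEX IX_SOP30200_ORIGNUMB
    ON dbo.SOP30200 (ORIGNUMB)
    INCLUDE (SOPNUMBE);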
One of the big problems with LINQ to SQL is that you have very little control over the SQL being generated.
Since you are running a union and not a join, it should generate pretty simple SQL, something like this:
SELECT *
FROM SOP10100
WHERE ORIGNUMB = 'some number'
UNION
SELECT *
FROM SOP30200
WHERE ORIGNUMB = 'some number'
You can use SQL Server Profiler to see the SQL statements being run against the database and check whether the SQL looks like this or is something more complicated. You can then run the generated SQL in SQL Server Management Studio with Include Client Statistics and Include Actual Execution Plan turned on to see exactly what is causing the performance issue.
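For reference, the SSMS options mentioned above also have T-SQL equivalents. A minimal sketch, assuming the captured SQL is the simple UNION shown earlier and that @orderNumb is declared with whatever type ORIGNUMB actually has:
DECLARE @orderNumb varchar(21) = 'some number';  -- assumed type; match it to ORIGNUMB

SET STATISTICS IO ON;    -- per-table logical and physical reads
SET STATISTICS TIME ON;  -- parse/compile and execution CPU/elapsed times
SET STATISTICS XML ON;   -- returns the actual execution plan as XML

SELECT SOPNUMBE FROM SOP10100 WHERE ORIGNUMB = @orderNumb
UNION
SELECT SOPNUMBE FROM SOP30200 WHERE ORIGNUMB = @orderNumb;

SET STATISTICS XML OFF;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;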
I need to diagnose some issues in production, but I cannot query the event_log; the query times out.
I was trying to execute the following query against the master database of my Azure server:
select * from sys.event_log where start_time>='2016-02-20 12:00:00' and end_time<='2016-02-20 12:00:00'
The query starts executing, runs for more than 8 minutes, and then the execution is cancelled. I am pretty sure the event log must be very large on this database server. How can I overcome this and query the sys.event_log table?
Even a TOP 10 query times out. I need some help!
The query I ran is below; it might also time out, so just keep trying (it worked for me on the 3rd attempt):
SELECT *
,CAST(event_data AS XML).value('(/event/@timestamp)[1]', 'datetime2') AS TIMESTAMP
,CAST(event_data AS XML).value('(/event/data[@name="error"]/value)[1]', 'INT') AS error
,CAST(event_data AS XML).value('(/event/data[@name="state"]/value)[1]', 'INT') AS STATE
,CAST(event_data AS XML).value('(/event/data[@name="is_success"]/value)[1]', 'bit') AS is_success
,CAST(event_data AS XML).value('(/event/data[@name="database_name"]/value)[1]', 'sysname') AS database_name
FROM sys.fn_xe_telemetry_blob_target_read_file('el', NULL, NULL, NULL)
WHERE object_name = 'database_xml_deadlock_report'
This gives very useful details in the event_data field.
Use an XML viewer to view the details; I used XMLGrid.
It will show the two processes (deadlock victim and winner), and the good news is that it gives you the SQL statements those processes were trying to execute.
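If you prefer to stay inside SSMS instead of an external viewer, a small variation on the query above casts event_data so it renders as clickable XML:
SELECT CAST(event_data AS XML) AS deadlock_report
FROM sys.fn_xe_telemetry_blob_target_read_file('el', NULL, NULL, NULL)
WHERE object_name = 'database_xml_deadlock_report';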
In my case, two processes were trying to update one data table, but two different rows. The winning process was using a SQL MERGE, which creates a table lock for the row update. The solution was to change that MERGE query to use a plain SQL UPDATE.
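Purely as an illustration of that change, with hypothetical table and parameter names rather than the real ones involved:
DECLARE @Id int = 1, @Value int = 42;   -- hypothetical key and new value

-- Before: a MERGE used only to update a single matching row
MERGE dbo.SomeTable AS t
USING (SELECT @Id AS Id, @Value AS Value) AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value;

-- After: a plain single-row UPDATE
UPDATE dbo.SomeTable
SET Value = @Value
WHERE Id = @Id;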
I'm using SQL Server 2008 R2.
I'd like your views on the two SQL statements below with regard to performance and best practice.
select
*
from BcpSample1
where dataloadid = (select MAX(id) from LoadControl_BcpSample1 where Status = 'completed')
And
select
a.*
from CiaBcpSample1 a
inner join (select ActiveDataLoadId = MAX(id) from LoadControl_BcpSample1 where Status = 'completed') as t
on a.DataLoadId = t.ActiveDataLoadId
I checked the query plans in SQL Server Management Studio, but after 2 runs both are showing the same query plan.
Thanks
In "regards to performance and best practice." it depends on many things. What works well one time might not be the best the next. You have to test, measure the performance and then choose.
You say the plan generated by SQL Server is the same, so in this instance there shouldn't be any difference. Choose the query easiest to maintain and move on to the next problem.
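If readability is the deciding factor, one more formulation worth considering (a sketch only, assuming id is an int) is to pull the MAX(id) into a variable first; note that a local variable is not sniffed the way a literal is, so measure it the same way as the other two before adopting it:
DECLARE @ActiveDataLoadId int =
    (SELECT MAX(id) FROM LoadControl_BcpSample1 WHERE Status = 'completed');

SELECT *
FROM BcpSample1
WHERE dataloadid = @ActiveDataLoadId;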
I'm a bit puzzled by a performance problem with our SQL Server when using remote queries and applying a WHERE clause. When I run the query on the local server a clustered index seek is used, but from a remote server this is not the case.
So when running this on the local server it will take 2 seconds:
SELECT * FROM uv_order WHERE order_id > '0000200000'
But running this from a remote database takes 2 minutes:
SELECT * FROM RemoteServer.data.dbo.uv_order WHERE order_id > '0000200000'
Here uv_order is a quite complex view, but since an index seek is used when executing on the local server, I don't see why it can't be used when running a remote query. This only seems to apply to views, since doing the same thing on a table works as expected.
Any ideas why this happens and how to "fix" it?
Well, you can fix it like this:
select *
from openquery(
RemoteServer,
'select * from data.dbo.uv_order WHERE order_id > ''0000200000'''
)
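If the boundary value needs to be a parameter rather than a literal spliced into the string, EXEC ... AT can pass it through, assuming RPC OUT is enabled on the linked server:
EXEC ('SELECT * FROM data.dbo.uv_order WHERE order_id > ?', '0000200000') AT RemoteServer;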
EDIT: I've updated the example code and provided complete table and view implementations for reference, but the essential question remains unchanged.
I have a fairly complex view in a database that I am attempting to query. When I attempt to retrieve a set of rows from the view by hard-coding the WHERE clause to specific foreign key values, the view executes very quickly with an optimal execution plan (indexes are used properly, etc.)
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20
However, when I attempt to add parameters to the query, all of a sudden my execution plan falls apart. When I run the query below, I'm getting index scans instead of seeks all over the place and the query performance is very poor.
DECLARE @ForeignKeyCol int = 20
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = @ForeignKeyCol
I'm using SQL Server 2008 R2. What gives here? What is it about using parameters that is causing a sub-optimal plan? Any help would be greatly appreciated.
For reference, here are the object definitions involved.
CREATE TABLE [dbo].[BaseTable]
(
[PrimaryKeyCol] [uniqueidentifier] PRIMARY KEY,
[ForeignKeyCol] [int] NULL,
[DataCol] [binary](1000) NOT NULL
)
CREATE NONCLUSTERED INDEX [IX_BaseTable_ForeignKeyCol] ON [dbo].[BaseTable]
(
[ForeignKeyCol] ASC
)
CREATE VIEW [dbo].[ViewOnBaseTable]
AS
SELECT
PrimaryKeyCol,
ForeignKeyCol,
DENSE_RANK() OVER (PARTITION BY ForeignKeyCol ORDER BY PrimaryKeyCol) AS ForeignKeyRank,
DataCol
FROM
dbo.BaseTable
I am certain that the window function is the problem, but I am filtering my query by a single value that the window function is partitioning by, so I would expect the optimizer to filter first and then run the window function. It does this in the hard-coded example but not the parameterized example. Below are the two query plans. The top plan is good and the bottom plan is bad.
When using OPTION (RECOMPILE) be sure to look at the post-execution ('actual') plan rather than the pre-execution ('estimated') one. Some optimizations are only applied when execution occurs:
DECLARE @ForeignKeyCol int = 20;
SELECT ForeignKeyCol, ForeignKeyRank
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = @ForeignKeyCol
OPTION (RECOMPILE);
Pre-execution plan:
Post-execution plan:
Tested on SQL Server 2012 build 11.0.3339 and SQL Server 2008 R2 build 10.50.4270
Background & limitations
When windowing functions were added in SQL Server 2005, the optimizer had no way to push selections past these new sequence projections. To address some common scenarios where this caused performance problems, SQL Server 2008 added a new simplification rule, SelOnSeqPrj, which allows suitable selections to be pushed where the value is a constant. This constant may be a literal in the query text, or the sniffed value of a parameter obtained via OPTION (RECOMPILE). There is no particular problem with NULLs, though the query may need ANSI_NULLS OFF to see this.

As far as I know, applying the simplification to constant values only is an implementation limitation; there is no particular reason it could not be extended to work with variables. My recollection is that the SelOnSeqPrj rule addressed the most commonly seen performance problems.
Parameterization
The SelOnSeqPrj rule is not applied when a query is successfully auto-parameterized. There is no reliable way to determine in SSMS whether a query was auto-parameterized; it only indicates that auto-parameterization was attempted. To be clear, the presence of place-holders like [@0] only shows that auto-parameterization was attempted. A reliable way to tell whether a prepared plan was cached for reuse is to inspect the plan cache, where the 'parameterized plan handle' provides the link between the ad-hoc and prepared plans.
For example, the following query appears to be auto-parameterized in SSMS:
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20;
But the plan cache shows otherwise:
WITH XMLNAMESPACES
(
DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'
)
SELECT
parameterized_plan_handle =
deqp.query_plan.value('(//StmtSimple)[1]/@ParameterizedPlanHandle', 'nvarchar(64)'),
parameterized_text =
deqp.query_plan.value('(//StmtSimple)[1]/@ParameterizedText', 'nvarchar(max)'),
decp.cacheobjtype,
decp.objtype,
decp.plan_handle
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(decp.plan_handle) AS deqp
WHERE
dest.[text] LIKE N'%ViewOnBaseTable%'
AND dest.[text] NOT LIKE N'%dm_exec_cached_plans%';
If the database option for forced parameterization is enabled, we get a parameterized result, where the optimization is not applied:
ALTER DATABASE Sandpit SET PARAMETERIZATION FORCED;
DBCC FREEPROCCACHE;
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20;
The plan cache query now shows a parameterized cached plan, linked by the parameterized plan handle:
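As a cleanup sketch, assuming simple parameterization was the original setting for the test database:
ALTER DATABASE Sandpit SET PARAMETERIZATION SIMPLE;
DBCC FREEPROCCACHE;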
Workaround
Where possible, my preference is to rewrite the view as an in-line table-valued function, where the intended position of the selection can be made more explicit (if necessary):
CREATE FUNCTION dbo.ParameterizedViewOnBaseTable
(@ForeignKeyCol integer)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT
bt.PrimaryKeyCol,
bt.ForeignKeyCol,
ForeignKeyRank = DENSE_RANK() OVER (
PARTITION BY bt.ForeignKeyCol
ORDER BY bt.PrimaryKeyCol),
bt.DataCol
FROM dbo.BaseTable AS bt
WHERE
bt.ForeignKeyCol = @ForeignKeyCol;
The query becomes:
DECLARE @ForeignKeyCol integer = 20;
SELECT pvobt.*
FROM dbo.ParameterizedViewOnBaseTable(@ForeignKeyCol) AS pvobt;
With the execution plan:
You could always go the CROSS APPLY way.
ALTER VIEW [dbo].[ViewOnBaseTable]
AS
SELECT
PrimaryKeyCol,
ForeignKeyCol,
ForeignKeyRank,
DataCol
FROM (
SELECT DISTINCT
ForeignKeyCol
FROM dbo.BaseTable
) AS Src
CROSS APPLY (
SELECT
PrimaryKeyCol,
DENSE_RANK() OVER (ORDER BY PrimaryKeyCol) AS ForeignKeyRank,
DataCol
FROM dbo.BaseTable AS B
WHERE B.ForeignKeyCol = Src.ForeignKeyCol
) AS X
I think in this particular case it may be because the data types of your parameter and your table column do not match exactly, so SQL Server has to do an implicit conversion, which is not a sargable operation.
Check your table data types and make your parameters the same type. Or do the cast yourself outside the query.
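As a purely hypothetical illustration of that advice (the names below are invented, and the column in this question is already a plain int): if a varchar(10) column were compared with an nvarchar value, the column side would be implicitly converted, which can prevent an index seek, so cast the value once outside the query and compare matching types:
DECLARE @Input nvarchar(50) = N'ABC';                        -- value as it arrives
DECLARE @Code  varchar(10)  = CAST(@Input AS varchar(10));   -- cast done outside the query

SELECT *
FROM dbo.SomeTable        -- hypothetical table with a varchar(10) Code column
WHERE Code = @Code;       -- like-for-like types, no implicit conversion of the column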
I have been asked to help with a performance issue on a SQL Server installation. I am not a SQL Server expert, but I decided to take a look. We are using a closed-source application that appears to work OK. However, after a SQL Server upgrade from 2000 to 2005, application performance has reportedly suffered considerably.

I ran SQL Profiler and caught the following query (field names changed to protect the innocent) taking about 30 seconds to run. My first thought was that I should optimize the query, but that is not possible, given that the application is closed source and the vendor is not helpful. So I am left trying to figure out how to make this query run fast without changing it. It is also not clear to me how this query ran faster on the older SQL Server 2000 product. Perhaps there was some sort of performance tuning applied on that instance that did not carry over or does not work on the new SQL Server. DBCC PINTABLE comes to mind.
Anyway, here is the offending query:
select min(row_id) from Table1 where calendar_id = 'Test1'
and exists
(select id from Table1 where calendar_id = 'Test1' and
DATEDIFF(day, '12/30/2010 09:21', start_datetime) = 0
)
and exists
(select id from Table1 where calendar_id = 'Test1' and
DATEDIFF(day, end_datetime, '01/17/2011 09:03') = 0
);
Table1 has about 6200 entries and looks like this. I have tried creating various indices to no effect.
id calendar_id start_datetime end_datetime
int, primary key varchar(10) datetime datetime
1 Test1 2005-01-01... 2005-01-01...
2 Test1 2005-01-02... 2005-01-02...
3 Test1 2005-01-03... 2005-01-03...
...
I would be very grateful if somebody could help resolve this mystery.
Thanks in advance.
The one thing that should help is a covering index on calendar_id:
create index <indexname>
on Table1 (calendar_id, id)
include (start_datetime, end_datetime);
This will satisfy the calendar_id = 'Test1' predicates and the min(row_id) sort, and will provide the material to evaluate the non-SARGable DATEDIFF predicates. If there are no other columns in the table, then this is probably the clustered index you need, and the id primary key should be a non-clustered one.
Make sure the indexes made it through the conversion, then update statistics.
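A minimal sketch of that step, assuming Table1 is the table from the query above:
UPDATE STATISTICS Table1 WITH FULLSCAN;   -- refresh statistics for this table
-- or refresh statistics across the whole database:
EXEC sp_updatestats;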
Check the differences between the execution plan on the old sql server and the new one. http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx
About the only other thing you can do, beyond Remus Rusanu's index suggestion, is to upgrade to Enterprise edition, which has a more advanced scan feature (on both SQL Server 2005 and 2008 Enterprise Edition) that allows multiple tasks to share full table scans.
Beyond that, I do not think there is anything you can do if you cannot change the query. The reason is that the query compares against a function result in the WHERE clause, which forces SQL Server to do a table scan on Table1 each time it is executed.
Reading Pages (more info about Advanced Scanning)
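For illustration only, since the vendor query cannot be changed: each DATEDIFF(day, ...) = 0 test is equivalent to a date-range predicate, which is the form an index on start_datetime (or end_datetime) could actually seek on. For the first subquery:
-- Seekable equivalent of: DATEDIFF(day, '12/30/2010 09:21', start_datetime) = 0
SELECT id
FROM Table1
WHERE calendar_id = 'Test1'
  AND start_datetime >= '20101230'
  AND start_datetime <  '20101231';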