PerformanceReview
prID
reviewDate
passed
notes
successStrategy
empID
nextReviewDate
Above is the table I am working with. My goal is to check whether nextReviewDate is within 7 days of the current date (I will do this using DATEDIFF()) and send an email to a specified email address if this condition is true.
My question is: how do I make my SQL job perform this task for each performance review row in the table? I have researched this and found information saying that CURSORs and WHILE loops are slow and inefficient for this task. Any help is appreciated as I am in the final stage of development :)
If you are within a SQL Server context and you want to send the mails using sp_send_dbmail, using a CURSOR to loop through the rows and call sp_send_dbmail is just fine. It may not be the fastest approach, but in this case that won't matter all that much; you are not looking to shave off milliseconds for this sort of process.
It will be a lot more of a hassle to formulate a set-based approach. This would involve creating a dynamic SQL statement to have all the sp_send_dbmail calls in one batch. But the gain would be marginal.
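For what it's worth, a minimal sketch of that cursor approach could look like the following; the Database Mail profile name and the recipient address are placeholders you would substitute with your own:
-- Minimal sketch: mail one message per review due within the next 7 days.
-- 'DBMailProfile' and 'hr@example.com' are placeholders.
DECLARE @empID INT, @nextReviewDate DATE, @body NVARCHAR(MAX);

DECLARE review_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT empID, nextReviewDate
    FROM PerformanceReview
    WHERE DATEDIFF(DAY, GETDATE(), nextReviewDate) BETWEEN 0 AND 7;

OPEN review_cursor;
FETCH NEXT FROM review_cursor INTO @empID, @nextReviewDate;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @body = N'Employee ' + CAST(@empID AS NVARCHAR(20))
              + N' has a review due on ' + CONVERT(NVARCHAR(10), @nextReviewDate, 120) + N'.';

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBMailProfile',
        @recipients   = N'hr@example.com',
        @subject      = N'Performance review due within 7 days',
        @body         = @body;

    FETCH NEXT FROM review_cursor INTO @empID, @nextReviewDate;
END

CLOSE review_cursor;
DEALLOCATE review_cursor;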
I am trying to run this query
$claims = Claim::whereIn('policy_id', $userPolicyIds)->where('claim_settlement_status', 'Accepted')->whereBetween('intimation_date', [$startDate, $endDate])->get();
Here, $userPolicyIds can have thousands of policy ids. Is there any way I can increase the maximum number of parameters in SQL Server? If not, could anyone help me find a way to solve this issue?
The wherein method creates an SQL fragment of the form WHERE policy_id IN (userPolicyIds[0], userPolicyIds[1], userPolicyIds[2]..., userPolicyIds[MAX]). In other words, the entire collection is unwrapped into the SQL statement. The result is a HUGE SQL statement that SQL Server refuses to execute.
This is a well-known limitation of Microsoft SQL Server, and it is a hard limit (2100 parameters per request) with no option for changing it. But SQL Server can hardly be blamed for having this limit, because trying to execute a query with a couple of thousand parameters is an unhealthy situation that you should not have put yourself into in the first place.
So, even if there was a way to change the limit, it would still be advisable to leave the limit as it is, and restructure your code instead, so that this unhealthy situation does not arise.
You have at least a couple of options:
Break your query down to batches of, say, 2000 items each.
Add your IDs to a temporary table and make your query join that table.
Personally, I would go with the second option, since it will perform much better than anything else, and it is arbitrarily scalable.
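A rough sketch of that second option on the SQL Server side is below; the temp table name and date variables are made up, and the step where the application bulk-inserts the IDs is only indicated by a comment:
-- Sketch of the temp-table approach; #UserPolicyIds and the dates are placeholders.
DECLARE @startDate DATE = '2023-01-01', @endDate DATE = '2023-12-31';

CREATE TABLE #UserPolicyIds (policy_id BIGINT PRIMARY KEY);

-- ... the application bulk-inserts the thousands of policy ids here ...

SELECT c.*
FROM claims AS c
INNER JOIN #UserPolicyIds AS ids ON ids.policy_id = c.policy_id
WHERE c.claim_settlement_status = 'Accepted'
  AND c.intimation_date BETWEEN @startDate AND @endDate;

DROP TABLE #UserPolicyIds;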
I solved this problem by running this raw query
SELECT claims.*, policies.* FROM claims INNER JOIN policies ON claims.policy_id = policies.id
WHERE policy_id IN $userPolicyIds AND claim_settlement_status = 'Accepted' AND intimation_date BETWEEN '$startDate' AND '$endDate';
Here, $userPolicyIds is a string like ('123456','654321','456789'). This query is a bit slow, I'll admit that. But the number of policy ids is always going to be a very big number and I wanted a quick fix.
Just use the driver_options argument of prepare (PDO::prepare):
PDO::ATTR_EMULATE_PREPARES => true
https://learn.microsoft.com/en-us/sql/connect/php/pdo-prepare
and split the WHERE IN into pieces (WHERE (column IN [...]) OR column IN [...]).
Apologies if I am enraging the forum with a repetitive question. I couldn't find the right solution in the forum, hence posting it.
I need to fetch 129991763 rows into a cursor, temp table, or staging table quickly and process them into another table, and this destination table is also a huge table.
Currently I am using an INSERT ... SELECT statement (the SELECT is nested 4 levels deep) with hints like OPTION (FAST 1000), MAXDOP 1, RECOMPILE, etc.
The procedure is consuming a lot of time and showing no results, or not getting completed at all.
Previously I used a cursor with the same hints, but as it was also running for more than 22 hours, I switched to INSERT ... SELECT.
Literally, I have to stop the execution of both methods above.
And to be honest, I am a beginner with SQL Server.
Even if I specifically filter out the records in the SELECT based on criteria, the process still needs to be broken into 4 or 5 chunks, and these chunks are each taking more than 4-5 hours to complete.
Please help.
Thanks
Pradyumna
In the past I've used BULK INSERT with reasonable success, but I suspect the suggestion of breaking it into chunks and dropping indexes would still be wise. You can find some details on it here
https://msdn.microsoft.com/en-GB/library/ms188365.aspx
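If you do go the chunking route, a rough T-SQL sketch is below; the table and column names are made up, and it assumes the source has an increasing integer key to range over:
-- Rough sketch of copying in chunks; SourceTable, TargetTable, id and the columns are placeholders.
DECLARE @batchSize BIGINT = 1000000, @minId BIGINT, @maxId BIGINT;

SELECT @minId = MIN(id), @maxId = MAX(id) FROM dbo.SourceTable;

WHILE @minId <= @maxId
BEGIN
    INSERT INTO dbo.TargetTable (col1, col2)
    SELECT col1, col2
    FROM dbo.SourceTable
    WHERE id >= @minId AND id < @minId + @batchSize;

    SET @minId = @minId + @batchSize;
END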
Hope it helps, good luck.
Apologies, you will probably be best off using an SSIS package to pull it across. With this you can also transform the data if needed. I would still recommend keeping indexes off the table you are inserting the data into where possible. You'll need to have a bit of a read, as it is hard to explain on here due to the use of the GUI.
Good luck
I'm making frequent inserts and updates in large batches from C# code, and I need to do it as fast as possible. Please help me find all the ways to speed up this process.
Build command text using StringBuilder, separate statements with ;
Don't use String.Format or StringBuilder.AppendFormat, it's slower than multiple StringBuilder.Append calls
Reuse SqlCommand and SqlConnection
Don't use SqlParameters (limits batch size)
Use the INSERT INTO table VALUES (..),(..),(..) syntax (up to 1000 rows per statement)
Use as few indexes and constraints as possible
Use simple recovery model if possible
?
Here are questions to help update the list above
What is optimal number of statements per command (per one ExecuteNonQuery() call)?
Is it good to have inserts and updates in the same batch, or is it better to execute them separately?
My data is being received by tcp, so please don't suggest any Bulk Insert commands that involve reading data from file or external table.
Insert/Update statements rate is about 10/3.
Use table-valued parameters. They can scale really well when using large numbers of rows, and you can get performance that approaches BCP level. I blogged about a way of making that process pretty simple from the C# side here. Everything else you need to know is on the MSDN site here. You will get far better performance doing things this way rather than making little tweaks around normal SQL batches.
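This isn't the code from the blog post, but roughly, the SQL Server half of a table-valued parameter looks like the sketch below; the type, procedure, table, and column names are made up, and the C# side passes a DataTable or a DbDataReader as the parameter value:
-- Sketch of the SQL side of a table-valued parameter; all names are placeholders.
CREATE TYPE dbo.ItemTableType AS TABLE
(
    ItemId INT NOT NULL,
    Value  NVARCHAR(100) NOT NULL
);
GO

CREATE PROCEDURE dbo.InsertItems
    @Items dbo.ItemTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Items (ItemId, Value)
    SELECT ItemId, Value FROM @Items;
END
GO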
As of SQL Server 2008, table-valued parameters are the way to go. See this article (step four)
http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/
I combined this with parallelizing the insertion process. I think that helped also, but I would have to check ;-)
Use SqlBulkCopy into a temp table and then use the MERGE SQL command to merge the data.
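Roughly, the SQL half of that approach could look like the sketch below; the table and column names are made up, and SqlBulkCopy fills the temp table from the C# side on the same connection before the MERGE runs:
-- Sketch: #Staging is filled by SqlBulkCopy from C#; all names are placeholders.
CREATE TABLE #Staging (ItemId INT NOT NULL PRIMARY KEY, Value NVARCHAR(100) NOT NULL);

-- ... SqlBulkCopy writes the incoming rows into #Staging here ...

MERGE dbo.Items AS target
USING #Staging AS source
    ON target.ItemId = source.ItemId
WHEN MATCHED THEN
    UPDATE SET target.Value = source.Value
WHEN NOT MATCHED THEN
    INSERT (ItemId, Value) VALUES (source.ItemId, source.Value);

DROP TABLE #Staging;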
I have a query that has been running every day for a little over 2 years now and has typically taken less than 30 seconds to complete. All of a sudden, yesterday, the query started taking 3+ hours to complete and was using 100% CPU the entire time.
The SQL is:
SELECT
    @id,
    alpha.A, alpha.B, alpha.C,
    beta.X, beta.Y, beta.Z,
    alpha.P, alpha.Q
FROM
    [DifferentDatabase].dbo.fnGetStuff(@id) beta
    INNER JOIN vwSomeData alpha ON beta.id = alpha.id
alpha.id is a BIGINT type and beta.id is an INT type. dbo.fnGetStuff() is a simple SELECT statement with 2 INNER JOINs on tables in the same DB, using a WHERE id = @id. The function returns approximately 11000 results.
The view vwSomeData is a simple SELECT statement with two INNER JOINs that returns about 590000 results.
Both the view and the function will complete in less than 10 seconds when executed by themselves. Selecting the results of the function into a temporary table first and then joining on that makes the query finish in < 10 seconds.
How do I troubleshoot what's going on? I don't see any locks in Activity Monitor.
Look at the query plan. My guess is that there is a table scan (or more than one) in the execution plan. This will cause huge amounts of I/O for the few records you get in the result.
You could use the SQL Server Profiler tool to monitor what queries are running on SQL Server. It doesn't show the locks, but it can for instance also give you hints on how to improve your query by suggesting indexes.
If you've got a reasonably recent version of SQL Server Management Studio, it has a Database Engine Tuning Advisor as well, under Tools. It takes a trace from the Profiler and makes some, sometimes highly useful, suggestions. Make sure there aren't too many queries in the trace - it takes a long time to build its advice.
I'm not an expert on it, but have had some luck with it in the past.
Do you need to use a function? Can you re-write the entire thing as a stored procedure in which you pass in the @ID as a parameter?
Even if your table has indexes, passing the @ID as a variable into the WHERE clause can potentially greatly increase the amount of time the query takes to run.
The reason the indexes may not be used is that the Query Optimizer does not know the value of the variables when it selects an access method to perform the query. Because this is a batch, only one pass is made over the Transact-SQL code, preventing the Query Optimizer from knowing what it needs to know in order to select an access method that uses the indexes.
You might want to consider an INDEX query hint if you cannot re-write the SQL.
It might also be possible, since this just started happening, that the indexes have become fragmented and might need to be rebuilt.
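If fragmentation does turn out to be the problem, rebuilding is a one-liner; the table and index names below are placeholders:
-- Rebuild every index on a table (heavier), or reorganize a single index (lighter).
ALTER INDEX ALL ON dbo.SomeTable REBUILD;
ALTER INDEX IX_SomeTable_SomeColumn ON dbo.SomeTable REORGANIZE;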
I've had similar problems with joining functions that return large datasets. I had to do what you've already suggested. Put the results in a temp table and join on that.
Look at the estimated plan, this will probably shed some light. Typically when query cost gets orders of magnitude more expensive it is because a loop or merge join is being used where a hash join is more appropriate. If you see a loop or merge join in the estimated plan, look at the number of rows it expects to process - is it far smaller than the number of rows you know will actually be in play? You can also specify a hint to use a hash join and see if it performs much better. If so, try updating statistics and see if it goes back to a hash join without a hint.
SELECT
    @id,
    alpha.A, alpha.B, alpha.C,
    beta.X, beta.Y, beta.Z,
    alpha.P, alpha.Q
FROM
    [DifferentDatabase].dbo.fnGetStuff(@id) beta
    INNER HASH JOIN vwSomeData alpha ON beta.id = alpha.id
-- having no idea what type of schema is in place and just trying to throw out ideas:
Like others have said, use Profiler and find the source of pain, but I'm thinking it is the function on the other database. Since that function might be the source of the pain, have you thought about a little denormalization or anything else on [DifferentDatabase]? I think you'll find a bit more scalability in joining to a flatter, indexed table than in a costly function.
Run this command:
SET SHOWPLAN_ALL ON
Then run your query. It will display the execution plan; look for a "SCAN" on an index or a table. That is most likely what is happening to your query now. If that is the case, try to figure out why it is not using indexes now (refresh statistics, etc.).
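If stale statistics look like the cause, refreshing them is straightforward; the table name below is a placeholder:
-- Refresh statistics for one table, or for every table in the database.
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;
EXEC sp_updatestats;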
We've got a weird problem with joining tables from SQL Server 2005 and MS Access 2003.
There's a big table on the server and a rather small table locally in Access. The tables are joined via 3 fields, one of them a datetime field (containing a day; idea is to fetch additional data (daily) from the big server table to add data to the local table).
Up until the weekend this ran fine every day. Since yesterday we have experienced strange non-time-outs in Access with this query. Non-time-out means that the query runs forever with rather high network transfer, but no timeout occurs; Access doesn't even show the progress bar. A server trace tells us that the same query is executed over and over on the SQL Server without error, but without result either. We've narrowed it down to the problem seemingly being a query against the big server table with either the JOIN or the WHERE containing a date, but we're not really able to narrow it down further. We rebuilt indices already and are currently restoring backup data, but maybe someone here has some pointers on things we could try.
Thanks, Mike.
If you join a local table in Access to a linked table in SQL Server, and the query isn't really trivial according to specific limitations of joins to linked data, it's very likely that Access will pull the whole table from SQL Server and perform the join locally against the entire set. It's a known problem.
This doesn't directly address the question you ask, but how far are you from having all the data in one place (SQL Server)? IMHO you can expect the same type of performance problems to haunt you as long as you have some data in each system.
If it were all in SQL Server a pass-through query would optimize and use available indexes, etc.
Thanks for your quick answer!
The actual query is really huge; you won't be happy with it :)
However, we've narrowed it down to a simple:
SELECT * FROM server_table INNER JOIN access_table ON server_table.date = access_table.date;
The server_table is a big table (hard to say; we've got 1.5 million rows in it; test tables with 10 rows or so have worked) and the access_table is a table with a single cell containing a date. This runs forever. It's not only slow, it just does nothing besides, it seems, causing network traffic, and no timeout occurs (this is what I find so strange; normally you get a timeout, but this just keeps on running).
We've just found KB article 828169; seems to be our problem, we'll look into that. Thanks for your help!
Use the DATEDIFF function to compare the two dates as follows:
-- DATEDIFF returns 0 if dates are identical based on the datepart parameter, in this case d
WHERE DATEDIFF(d, Column, OtherColumn) = 0
DATEDIFF is optimized for use with dates. Comparing the result of the CONVERT function on both sides of the equal (=) sign might result in a table scan if either of the dates is NULL.
Hope this helps,
Bill
Try another syntax? Something like:
SELECT * FROM BigServerTable b WHERE b.DateFld in (SELECT DISTINCT s.DateFld FROM SmallLocalTable s)
The strange thing in your problem description is "Up until the weekend this ran fine every day".
That would mean the problem is really somewhere else.
Did you try creating a new blank Access db and importing everything from the old one?
Or just refreshing all your links ?
Please post the query that is doing this; just because you have indexes doesn't mean that they will be used. If your WHERE or JOIN clause is not sargable, the index will not be used.
Take this, for example:
WHERE CONVERT(varchar(49),Column,113) = CONVERT(varchar(49),OtherColumn,113)
That will not use an index.
Or this:
WHERE YEAR(Column) = 2008
Functions on the left side of the operator (meaning on the column itself) will make the optimizer do an index scan instead of a seek, because it doesn't know the outcome of that function.
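For example, the YEAR(Column) = 2008 predicate above can usually be rewritten as a range against the bare column, which leaves the index usable:
-- Sargable rewrite: the column stays bare on the left, so an index seek is possible.
WHERE Column >= '20080101' AND Column < '20090101'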
We rebuilt indices already and are currently restoring backup data, but maybe someone here has some pointers on things we could try.
Access can kill many good things... Have you looked into blocking at all?
Run:
EXEC sp_who2
Look at the BlkBy column and see who is blocking what.
Just an idea, but in SQL Server you can attach your Access database and use the table there. You could then create a view on the server to do the join all in SQL Server. The solution proposed in the Knowledge Base article seems problematic to me, as it's a kludge (if LIKE works, then = ought to, also).
If my suggestion works, I'd say that it's a more robust solution in terms of maintainability.