How many records can a Cursor hold - cursor

I am using SQL Server 2008.
I want to select a long list of records into a cursor and iterate through those records to accomplish some task, but I do not know the limitations of such a cursor.
Q:
Is there a maximum number of records that the cursor can hold? Or is it unlimited and handled internally using some kind of paging technique that handles the records dynamically?
Code:
DECLARE c1_cursor CURSOR FOR -- note: CURSOR is a reserved word, so the cursor itself needs a different name
SELECT C1
FROM T1

This question of mine has not been answered for a while now, so here is what I have understood. I used a cursor to hold 12,000 records and iterated through it. It worked just fine. So for now I know that the cursor works fine for fewer than 12,000 records.
If you find out more on this, please leave a comment or an answer; I will not hesitate to mark that as the correct answer.
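For reference, here is a minimal sketch of the full cursor pattern the question describes (T1 and C1 are the placeholders from the question; the variable type is an assumption):

-- Minimal cursor loop; assumes C1 is an int, adjust the type as needed.
DECLARE @val int;

DECLARE c1_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT C1 FROM T1;

OPEN c1_cursor;
FETCH NEXT FROM c1_cursor INTO @val;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-row work goes here
    FETCH NEXT FROM c1_cursor INTO @val;
END

CLOSE c1_cursor;
DEALLOCATE c1_cursor;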

Related

SQL Server 2012: quickly INSERT millions of rows from SELECT

Apologies if I am enraging the forum with a repetitive question; I couldn't find the right solution in the forum, hence posting it.
I need to fetch 129991763 rows into a cursor, a temp table, or a staging table quickly and process them into another table, and this destination table is also a huge table.
Currently I am using an INSERT ... SELECT statement (the SELECT is nested 4 levels deep) with hints like OPTION (FAST 1000), MAXDOP 1, RECOMPILE, etc.
The procedure is consuming a lot of time and either shows no results or never completes at all.
Previously I used a cursor with the same hints, but as it was also running for more than 22 hours, I switched to INSERT using SELECT.
In both cases I literally have to stop the execution.
And to be honest, I am a beginner with SQL Server.
Even if I specifically filter the records in the SELECT based on criteria, the process still needs to be broken into 4 or 5 chunks, and these chunks are also taking more than 4-5 hours to complete.
Please help.
Thanks
Pradyumna
In the past I've used BULK INSERT with reasonable success, but I suspect the suggestion of breaking it into chunks and dropping indexes would still be wise. You can find some details on it here
https://msdn.microsoft.com/en-GB/library/ms188365.aspx
Hope it helps, good luck.
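For what it's worth, a minimal sketch of the chunking idea, assuming the source table has a sequential numeric key; all table and column names here are placeholders, not the poster's actual schema:

-- Move rows in fixed-size key ranges so each batch commits separately.
DECLARE @BatchSize int = 1000000;
DECLARE @FromId bigint, @MaxId bigint;

SELECT @FromId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable;

WHILE @FromId <= @MaxId
BEGIN
    INSERT INTO dbo.DestinationTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.SourceTable
    WHERE Id >= @FromId AND Id < @FromId + @BatchSize;

    SET @FromId = @FromId + @BatchSize;  -- advance to the next chunk
END

Keeping the batches in key order, and dropping nonclustered indexes on the destination before the load and recreating them afterwards, tends to keep each chunk's cost predictable.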
Apologies, you will probably be best off using an SSIS package to pull it across. With this you can also transform the data if needed. I would still recommend keeping indexes off the table you are inserting the data into where possible. You'll need to have a bit of a read, but it's hard to explain on here due to the use of the GUI.
Good luck

Count number of times a procedure is executed

Requirement:
To count the number of times a procedure has been executed.
From what I understand so far, sys.dm_exec_procedure_stats can be used for an approximate count, but that only covers the period since the last service restart. I found this link on this website relevant, but I need the count to be precise, and it should not be wiped out by a service restart.
Can I have some pointers on this, please?
Hack: the procedure I need to keep track of runs a SELECT statement, so it returns some rows that are stored in a permanent table called Results. The simplest solution I can think of is to add a column to the Results table to track the procedure executions: select the maximum value from this column before the insert and add one to it to increment the count. This solution seems quite clumsy to me as well, but it is the best I could come up with.
What I thought is that you could create a sequence object, assuming you're on SQL Server 2012 or newer:
CREATE SEQUENCE ProcXXXCounter
    AS int
    START WITH 1
    INCREMENT BY 1;
And then in the procedure fetch a value from it:
DECLARE @CallCount int;
SELECT @CallCount = NEXT VALUE FOR ProcXXXCounter;
There is of course a small overhead with this, but it doesn't cause the kind of blocking issues that can happen with a table, because sequences are handled outside of transactions.
Sequence parameters: https://msdn.microsoft.com/en-us/library/ff878091.aspx
The only way I can think of to keep track of the number of executions even when the service has restarted is to have a table in your database and insert a row into that table inside your procedure every time it is executed.
Maybe add a datetime column as well to collect more info about the executions, and a column for the user who executed it, etc.
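A minimal sketch of that idea (table, column, and procedure names are illustrative):

-- Logging table; one row per execution.
CREATE TABLE dbo.ProcExecutionLog
(
    LogId      int IDENTITY(1,1) PRIMARY KEY,
    ProcName   sysname   NOT NULL,
    ExecutedAt datetime2 NOT NULL DEFAULT SYSDATETIME(),
    ExecutedBy sysname   NOT NULL DEFAULT SUSER_SNAME()
);

-- Inside the procedure being tracked:
INSERT INTO dbo.ProcExecutionLog (ProcName) VALUES (N'dbo.MyProc');

-- Lifetime execution count, surviving service restarts:
SELECT COUNT(*) FROM dbo.ProcExecutionLog WHERE ProcName = N'dbo.MyProc';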
This can be done easily, and without Enterprise Edition, by using Extended Events. The sqlserver.module_end event will fire; set your predicates correctly and use a histogram target.
http://sqlperformance.com/2014/06/extended-events/predicate-order-matters
https://technet.microsoft.com/en-us/library/ff878023(v=sql.110).aspx
To consume the value, query the histogram target (see the reviewing-target-output examples in the links above).
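A sketch of such a session, assuming the procedure is called MyProc (the session name and predicate values are placeholders; check the linked articles for the exact options):

-- Count executions of one module with a histogram target bucketed by object_name.
CREATE EVENT SESSION ProcExecCount ON SERVER
ADD EVENT sqlserver.module_end
(
    WHERE ([object_name] = N'MyProc')  -- predicate: only this module
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.module_end',
        source = N'object_name',  -- bucket on this event field
        source_type = 0           -- 0 = event field, 1 = action
)
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION ProcExecCount ON SERVER STATE = START;

-- Read the counts back out of the in-memory target:
SELECT CAST(t.target_data AS xml) AS histogram_data
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
WHERE s.name = N'ProcExecCount' AND t.target_name = N'histogram';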

Looking for a suggestion on how to go about looping through rows of a table and changing a value based on other values with the same ID

I am looking for some ideas on the best way to go about doing this.
I would like to use something like a for-each loop, but I know that can be difficult and is not good practice in SQL. I have a table full of 'comments', each of which has a unique CommentID. It is associated with a table of 'Deals' by the DealID. Each comment has a DealID associated with it, and since multiple comments can be made on a single deal, several comments may share the same DealID.
I have a CurrentComment attribute in my comments table which is either 0 or 1 (1 being the most recent comment). Because of some issues in our DB, I had to reset every comment to have 0 for the 'current comment' value.
What I want to do is go through the entire table of comments, and for each unique DealID, set the most recently made comment (associated with that DealID) to have a value of 1 for the current comment.
I'm thinking I would want to look at all of the comments associated with a single DealID, and the largest CommentID value would be the most recently made comment, so I would change that CurrentComment Value.
Any input/suggestions on how to go about something like this is much appreciated!
You do not need a for loop to do this. SQL Server is set up to do the exact thing you are asking about; you just have to think about the problem a little differently. SQL Server brings back all of the rows you need to update, assuming your criteria are set correctly. You just need to specify how you want to update your columns. You can use values from other tables, or you can use specific numbers.
I would recommend sorting your data using a date value, if you have one. Updating a comment that was made today (rather than one from yesterday) is a better method because you are certain that you are updating the most recent comment.
UPDATE c
SET CurrentComment = 1
FROM Comments c
WHERE NOT EXISTS ( SELECT 1
                   FROM Comments cc
                   WHERE cc.DealId = c.DealId
                     AND cc.CommentId > c.CommentId )
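Another set-based pattern for the same job uses ROW_NUMBER() to flag the latest comment per deal (a sketch against the Comments table described in the question):

-- Rank comments within each deal, newest CommentId first, and flag the top one.
WITH Ranked AS
(
    SELECT CurrentComment,
           ROW_NUMBER() OVER (PARTITION BY DealId
                              ORDER BY CommentId DESC) AS rn
    FROM Comments
)
UPDATE Ranked
SET CurrentComment = 1
WHERE rn = 1;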

Sybase search result with offset

I am working on Sybase and want to implement pagination for a result set. I can get the first few records by stating SET ROWCOUNT 100, but is there any way to set a start point as well? The result is ordered on the basis of a text value.
I tried searching Stack Overflow as well as the Sybase documentation, but could not find a way. I tried LIMIT, ROWNUM(), etc., but they are not supported. I also tried putting it in an inner query, but somehow it is not working.
One solution I found was to create a temp table with an identity column and select from that, but the application does not have CREATE TABLE permission.
Can someone please help me with this?
You should use START AT. Try:
SELECT TOP 25 START AT 50 * FROM TABLE1 ORDER BY Id
LIMIT and OFFSET are supported in ASE 16, see https://help.sap.com/viewer/cbed2190ee2d4486b0bbe0e75bf4b636/16.0.3.7/en-US/c1881eb182ee4b899f54c577d9dc0ecb.html
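Assuming the clause takes the common LIMIT/OFFSET form (an assumption worth verifying against the linked page, since ASE's exact syntax may differ by version), a page of 25 rows starting after the 50th would look something like:

SELECT * FROM TABLE1 ORDER BY Id LIMIT 25 OFFSET 50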

Query hangs with INNER JOIN on datetime field

We've got a weird problem with joining tables from SQL Server 2005 and MS Access 2003.
There's a big table on the server and a rather small table locally in Access. The tables are joined via 3 fields, one of them a datetime field (containing a day; the idea is to fetch additional data daily from the big server table to add to the local table).
Up until the weekend this ran fine every day. Since yesterday we have experienced strange non-time-outs in Access with this query. Non-time-out means that the query runs forever with rather high network transfer, but no timeout occurs; Access doesn't even show the progress bar. A server trace tells us that the same query is executed over and over on the SQL server without error, but without a result either. We've narrowed it down to the problem seemingly involving access to the server table when it is big and either the JOIN or the WHERE contains a date, but we're not really able to pin it down further. We rebuilt indices already and are currently restoring backup data, but maybe someone here has pointers on things we could try.
Thanks, Mike.
If you join a local table in Access to a linked table in SQL Server and the query isn't really trivial (given the specific limitations on joins against linked data), it's very likely that Access will pull the whole table from SQL Server and perform the join locally against the entire set. It's a known problem.
This doesn't directly address the question you ask, but how far are you from having all the data in one place (SQL Server)? IMHO you can expect the same type of performance problems to haunt you as long as you have some data in each system.
If it were all in SQL Server a pass-through query would optimize and use available indexes, etc.
Thanks for your quick answer!
The actual query is really huge; you won't be happy with it :)
However, we've narrowed it down to a simple:
SELECT * FROM server_table INNER JOIN local_table ON server_table.date = local_table.date;
Here server_table is a big table (hard to say how big; we've got 1.5 million rows in it; test tables with 10 rows or so have worked) and local_table is a table with a single cell containing a date. This runs forever. It's not only slow, it just does nothing besides, it seems, causing network traffic, and no timeout occurs (this is what I find so strange; normally you get a timeout, but this just keeps on running).
We've just found KB article 828169; seems to be our problem, we'll look into that. Thanks for your help!
Use the DATEDIFF function to compare the two dates as follows:
-- DATEDIFF returns 0 if the dates are identical based on the datepart parameter, in this case d
WHERE DATEDIFF(d, Column, OtherColumn) = 0
DATEDIFF is optimized for use with dates. Comparing the result of the CONVERT function on both sides of the equal (=) sign might result in a table scan if either of the dates is NULL.
Hope this helps,
Bill
Try another syntax? Something like:
SELECT * FROM BigServerTable b WHERE b.DateFld in (SELECT DISTINCT s.DateFld FROM SmallLocalTable s)
The strange thing in your problem description is "Up until the weekend this ran fine every day".
That would mean the problem is really somewhere else.
Did you try creating a new blank Access db and importing everything from the old one?
Or just refreshing all your links?
Please post the query that is doing this; just because you have indexes doesn't mean that they will be used. If your WHERE or JOIN clause is not sargable, then the index will not be used.
Take this for example:
WHERE CONVERT(varchar(49), Column, 113) = CONVERT(varchar(49), OtherColumn, 113)
That will not use an index. Nor will this:
WHERE YEAR(Column) = 2008
Functions on the left side of the operator (meaning on the column itself) will make the optimizer do an index scan instead of a seek, because it doesn't know the outcome of that function.
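For example, a sargable rewrite of the YEAR() predicate keeps the column bare so the optimizer can seek (the table name is a placeholder):

-- Equivalent to WHERE YEAR([Column]) = 2008, but can use an index seek.
SELECT *
FROM dbo.SomeTable
WHERE [Column] >= '20080101'
  AND [Column] <  '20090101';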
"We rebuilt indices already and are currently restoring backup data, but maybe someone here has any pointers of things we could try."
Access can kill many good things... have you looked into blocking at all?
Run
exec sp_who2
and look at the BlkBy column to see who is blocking what.
Just an idea, but in SQL Server you can link your Access database and use the table there. You could then create a view on the server to do the join all in SQL Server. The solution proposed in the Knowledge Base article seems problematic to me, as it's a kludge (if LIKE works, then = ought to work, too).
If my suggestion works, I'd say that it's a more robust solution in terms of maintainability.
