I am generating XML from 60 tables and storing this XML in a table.
Table Name: Final_XML_Table
PK | FK | XML_Content (type xml)
1  | 1  | "XML that I am generating from 60 tables"
When I run the query below, it gives a memory exception:
Select * from Final_XML_Table
Things I have tried:
1. Results to Text: I get only a few lines of the XML as text in the output window.
2. Results to File: I get only a few lines of the XML in the file.
Please suggest a fix, and also tell me whether, if any change is needed, I will have to make it on the server's SQL Server as well at deployment.
I have also set the SSMS "XML data" option to Unlimited.
This is not an answer, but too much for a comment...
The fact that you are able to store the XML shows clearly that the XML is not too big for the database.
The fact that you get an out-of-memory exception with Select * from Final_XML_Table shows clearly that SSMS has a problem reading/displaying your XML.
You might check the length like this:
DECLARE @tbl TABLE (x XML);
INSERT INTO @tbl VALUES('<root><test>blah</test><test /><test2><x/></test2></root>');
SELECT * FROM @tbl; --This does not work for you
SELECT DATALENGTH(x) FROM @tbl; --This returns just "82" in this case
It might be that, due to a logical error in your XML's creation (a wrong join?), the XML contains multiple/repeated elements. You might try a query like this to get a count of nodes, in order to check whether that number is realistic:
SELECT x.value('count(//*)','int') FROM @tbl
For the example above this returns "5".
You might do the same with your original XML.
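For instance, a sketch assuming the table and column names from the question (Final_XML_Table, XML_Content):
-- Node count per stored document; a wildly unrealistic number points to a bad join
SELECT PK, XML_Content.value('count(//*)', 'int') AS node_count
FROM Final_XML_Table;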
With a query like the following you can retrieve all node names of the first level, the second level and so on. You can check if this looks okay:
SELECT firstLevel.value('local-name(.)','varchar(max)') AS l1_node
,SecondLevel.value('local-name(.)','varchar(max)') AS l2_node
--add more
FROM @tbl
OUTER APPLY x.nodes('/*') AS A(firstLevel)
OUTER APPLY A.firstLevel.nodes('*') AS B(SecondLevel)
--add more
And - of course - you might open the Resource Monitor to look at the actual usage of memory...
Come back with more details...
That error isn't a SQL Server error; it's from SSMS. It means that SSMS has run out of memory.
SSMS is a 32-bit application, so it can only address 2GB of RAM. If it tries to address more than that, the error will occur. If you've had SSMS open and have returned some very large datasets, that RAM is going to get used up.
In all honesty, if you're running a query like SELECT * FROM Final_XML_Table then I would hazard a guess that the dataset is huge. Add a WHERE clause, or don't return the dataset on screen. If you really need to view the data (all of it), export it to something else. But I very much doubt you need to look at every row if you're returning around 2GB of data.
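As a quick sanity check before deciding what to export, this sketch (assuming the question's table and column names) shows how much data each row actually holds:
-- Size of each XML document in MB; anything approaching SSMS's 2GB address
-- space will never display in the grid.
SELECT PK, DATALENGTH(XML_Content) / 1048576.0 AS size_mb
FROM Final_XML_Table
ORDER BY size_mb DESC;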
Related
I have a query which generates a fairly large XML document (~30k) as a query column for each record of a large table, of the form...
SELECT recordKey, lastUpdatedDate, ( SELECT ... FOR XML PATH( 'elemName' ), TYPE )
FROM largeTable
ORDER BY lastUpdatedDate
If I run this query from SQL Server Management Studio, the query returns almost instantly, showing the first rows as I would expect, and continues to run in the background.
However, when I run this query from the Camel JDBC component in StreamList mode, it appears to cache the entire resultset at the point of querying, which means I run out of memory.
I've checked the JDBC driver properties and explicitly set the responseBuffering property to adaptive, and have also tried setting the selectMethod to cursor, neither of which appears to make any difference to my query.
Is this a characteristic of querying XML with JDBC, or are there some parameters I need to set differently?
However, when I run this query from the Camel JDBC component in
StreamList mode, it appears to cache the entire resultset at the point
of querying, which means I run out of memory.
camel-sql introduced the 'StreamList' output type in v2.18.x. Since you are using v2.17.6, your configuration may be falling back to 'SelectList' (the default value), which loads the whole result set into memory as a list. Having an XML type in your query/result set does not have any influence on this behavior.
You can see this in the code at org.apache.camel.component.sql.SqlConsumer.poll().
I suggest you upgrade camel-sql to v2.18.x (or the latest version).
Hope this helps.
I am not sure whether it is possible or desirable in your application, but in any case: loading the entire contents of a table is something we have to prevent whenever possible.
So I will propose an alternative: obtaining the data in pages.
DECLARE @RowsPerPage AS INT = 5;
DECLARE @CurrentPage AS INT = 3;
SELECT recordKey, lastUpdatedDate, ( SELECT ... FOR XML PATH( 'elemName' ), TYPE )
FROM largeTable
ORDER BY lastUpdatedDate
OFFSET (@CurrentPage - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY;
Just change the parameters to set your desired rows per page and current page. Note that OFFSET ... FETCH requires SQL Server 2012 or later. I hope it helps.
Consider testTable, a table with six fields: one a UNIQUEIDENTIFIER, one a TIMESTAMP, and four of them VARCHARs. The field Filename is one of the VARCHARs.
This first query takes 1 minute 38 seconds:
Select top 1 * from testTable WHERE Filename = 'any.string.1512.b'
Either of these queries takes 1-3 seconds:
Select top 1 * from testTable WHERE Filename = 'any.string.1512'
Select top 1 * from testTable WHERE Filename like 'cusip.realloc.1412.b%'
I have looked at the execution plan for all three, and the only difference is that the last query (the LIKE statement) used a 46% index seek / 54% key lookup, versus a 50/50 index seek / key lookup for the first two. As far as I can tell, as soon as I stop using the .b part of this search criterion, the queries go back to normal speed.
Filename has been indexed; the table has been dropped and recreated just in case. We have added indexes, removed indexes, checked the table, checked the database, restarted services, restarted the server, and recreated the table. This field used to be VARCHAR(MAX), and I changed it to VARCHAR(100) so that I could index it, but the problem was occurring before that change.
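For reference, a covering index of the following shape would remove the key lookup stage from those plans entirely. This is only a sketch; the column names other than Filename are placeholders, since the question doesn't list them:
-- Hypothetical covering index: Filename as the key, the remaining five
-- fields included so a seek never has to jump back to the clustered index.
CREATE NONCLUSTERED INDEX IX_testTable_Filename
ON dbo.testTable (Filename)
INCLUDE (Id, RowVer, Field3, Field4, Field5); -- placeholder column names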
Something else I believe may be happening: there might be something wrong with the end of the table. It will never complete a full:
Select * from testTable
I hoped it was a corrupted table, but that wasn't the case. However, when we attempt to generate a script in SSMS, it fails to generate (no error given). I was able to recreate the table by generating the structure from SSMS and copying the data with another SQL client.
We are pretty stumped.
Note: I'm running under SQL Server 2008 R2...
I've taken the time to read dozens of posts on this site and other sites about how to execute dynamic SQL where the query is more than 4000 characters. I've tried more than a dozen of the proposed solutions. The consensus seems to be to split the query into 4000-character variables and then do:
EXEC (@SQLQuery1 + @SQLQuery2)
This doesn't work for me: the query is truncated at the end of @SQLQuery1.
Now, I've seen samples of how people "force" a long query by using REPLICATE with a bunch of spaces, etc., but this is a real query, and it gets a little more sophisticated than that.
I have a SQL view with the name "Company_A_ItemView".
I have 10 companies for which I want to create the same exact view, with different names, e.g.
"Company_B_ItemView"
"Company_C_ItemView"
..etc.
If you offer help, please don't ask why there are multiple views - just accept that I need to do it this way, OK?
Each company has its own set of tables, and the CREATE VIEW statement references several tables by name. Here's a BRIEF sample, but remember, the total length of the query is around 6000 characters:
CREATE view [dbo].[Company_A_ItemView] as
select
WE.[Item No_],
WE.[Location Code],
LOC.[Bin Number],
[..more fields, etc.]
from
[Company_A_Warehouse_Entry] WE
left join
[Company_A_Location] LOC
...you get the idea
So, what I am currently doing is:
a. Pulling the contents of the CREATE VIEW statement into two declared variables, e.g.
Set @SQLQuery1 = (select text
from syscomments
where ID = 1382894081 and colid = 1)
Set @SQLQuery2 = (select text
from syscomments
where ID = 1382894081 and colid = 2)
Note that this is how SQL stores long definitions: when you create the view, it stores the text in multiple syscomments records. In my case, the view is split into a chunk of 3591 characters in the first syscomments record, with the rest of the text in the second record. I have no idea why SQL doesn't use all 4000 characters of the syscomments field. And the statement is broken in the middle of a word.
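For reference, the chunking can be inspected directly; a quick sketch, using OBJECT_ID rather than the hard-coded id:
-- One row per chunk; colid orders the chunks, DATALENGTH shows where each one ends
SELECT colid, DATALENGTH(text) AS chunk_bytes
FROM syscomments
WHERE id = OBJECT_ID('dbo.Company_A_ItemView')
ORDER BY colid;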
Please note that in all my examples, all @SQLQueryxxx variables are declared as varchar(max). I've also tried declaring them as nvarchar(max), varchar(8000), and nvarchar(8000), with the same results.
b. I then do a "Search and Replace" for "Company_A" and replace it with "Company_B". In the code below, the variable @CompanyID is first set to 'Company_B':
SET @SQLQueryNew1 = @SQLQuery1
SET @SQLQueryNew1 = REPLACE(@SQLQueryNew1, 'Company_A', @CompanyID)
SET @SQLQueryNew2 = @SQLQuery2
SET @SQLQueryNew2 = REPLACE(@SQLQueryNew2, 'Company_A', @CompanyID)
c. I then try:
EXEC (@SQLQueryNew1 + @SQLQueryNew2)
The message returned indicates that it's trying to execute the statement truncated at the end of @SQLQueryNew1, i.e. about 80% of the query's text.
I've tried CAST'ing the final result into a new varchar(max) and nvarchar(max) - no luck.
I've tried CAST'ing the original query into a new varchar(max) and nvarchar(max) - no luck.
I've looked at the result of retrieving the original CREATE VIEW statement, and it's fine.
I've tried various other ways of retrieving the original CREATE VIEW statement, such as:
Set @SQLQuery1 = (select VIEW_DEFINITION
FROM [MY_DATABASE].[INFORMATION_SCHEMA].[VIEWS]
where TABLE_NAME = 'Company_A_ItemView')
This one returns only the first 4000 characters of the CREATE VIEW.
Set @SQLQuery1 = (SELECT OBJECT_DEFINITION(@ObjectID))
If I do a
SELECT LEN(OBJECT_DEFINITION(@ObjectID))
it returns the correct length of the query (e.g. 5191), but if I look at @SQLQuery1, or try to
EXEC(@SQLQuery1), the statement is still truncated.
d. There are some references stating that since I'm manipulating the text of the query after retrieving it, the resulting variables are truncated to 4000 characters. I've tried CAST'ing the result as I do the REPLACE, e.g.
SET @SQLQueryNew1 = CAST(REPLACE(@SQLQueryNew1,
    'Company_A',
    @CompanyID) AS varchar(max))
Same result.
I know there are other methods, such as creating stored procedures for creating the views. But the views are being developed and are somewhat "in flux", so placing the text of the CREATE VIEW inside a stored proc is cumbersome. My goal is to be able to take Company_A's view and replicate it exactly - multiple times, except reference Company_B's view name and table names, Company_C's view name and table names, etc.
I'm wondering whether there is anyone out there who has done this type of manipulation of a long SQL CREATE VIEW statement and tried to execute it.
Just use VARCHAR(MAX) or NVARCHAR(MAX). They work fine for EXEC(string).
FYI,
Note that this is how SQL stores long definitions - when you create
the view, it stores the text into multiple syscomments records.
This is not correct. This is how it used to be done on SQL Server 2000. Since SQL Server 2005 and higher they are saved as NVARCHAR(MAX) in a single entry in sys.sql_modules.
syscomments is still around, but it is retained read-only solely for compatibility.
So all you should need to do is change your @SQLQuery1, 2, etc. variables to a single NVARCHAR(MAX) variable and pull your view code from the [definition] column of sys.sql_modules instead.
Note that you should be careful with your string manipulations as there are certain functions that will revert to (N)VARCHAR(4000) output if all of their input arguments are not (N)VARCHAR(MAX). (Sorry, I do not know which ones, but REPLACE() may be one). In fact, this may be what has been causing so much confusion in your tests.
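Putting that together, a minimal sketch of the whole round trip. The view name comes from the question; @CompanyID is an assumed variable holding the replacement prefix:
DECLARE @SQLQuery NVARCHAR(MAX), @CompanyID NVARCHAR(128) = N'Company_B';

-- The definition arrives in one piece here, no 4000-character chunks
SELECT @SQLQuery = [definition]
FROM sys.sql_modules
WHERE [object_id] = OBJECT_ID(N'dbo.Company_A_ItemView');

-- 'Company_A' appears in the view name and in every table name, so one
-- REPLACE rewrites all of them; the first argument is NVARCHAR(MAX), so
-- the result stays NVARCHAR(MAX) as well.
SET @SQLQuery = REPLACE(@SQLQuery, N'Company_A', @CompanyID);

-- Runs in its own batch; fails if Company_B_ItemView already exists,
-- in which case DROP it first.
EXEC (@SQLQuery);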
Declare your SQL variables (@SQLQuery1, ...) as nvarchar(4000).
Be sure each SQL part doesn't exceed 4000 bytes (copy each part to a text file and check the file size in bytes).
I want to insert some data from the local server into a remote server, and used the following SQL:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error:
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports 4-part names.
You would need to create the table first, then use INSERT..SELECT to populate it.
(See note in Arguments section on MSDN: reference)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
SELECT INTO is minimally logged.
SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a Profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first to prevent it, then move the data into your target table from the temp table.
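As a sketch of that temp-table workaround, using the same placeholder names as the example below:
-- Pull across the link into a local temp table first; SELECT INTO a temp
-- table avoids the implicit distributed transaction.
SELECT *
INTO #staging
FROM [server_a].[database].[schema].[table];

-- The move into the real target is then a purely local operation.
INSERT INTO [database].[schema].[table]
SELECT * FROM #staging;

DROP TABLE #staging;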
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
I've experienced the same issue and I've performed the following workaround:
If you are able to log on to the remote server where you want to insert the data (with SSMS or sqlcmd), you can rebuild your query the other way around:
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
@2Toad: For sure INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible, because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating the table (identity options and so on) before you carry out your insert operation.
I may be late to the party, but this was the first post I saw when I searched for the 4-part table name insert issue with a linked server. After reading this and a few more posts, I was able to accomplish this by using EXEC with the "AT" argument (for SQL 2008+) so that the query is run on the linked server. For example, I had to insert 4M records to a pseudo-temp table on another server, and doing an INSERT...SELECT FROM statement took 10+ minutes. But changing it to the following SELECT...INTO statement, which allows the 4-part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query is run on DestinationServer, changes to the right database, ensures the table does not already exist, and selects from the SourceServer. It is minimally logged, with no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.
Somehow some records in my table are getting updated with a value of 'xyz' in a certain column. Out of hundreds of stored procedures, functions, and triggers, how can I determine which code is doing this? Is there a way to search through each and every script of code in the database?
Please help.
One approach is to check syscomments
Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.
e.g. select text from syscomments
If you are having trouble finding that literal string, the values could be coming from a table, or they could be being concatenated within a routine.
Try this
Select text from syscomments
where CharIndex('x', text) > 0
and CharIndex('y', text) > 0
and CharIndex('z', text) > 0
That might help you either find the right routine, or further indicate that the values are coming from a table.
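To see which routine a match belongs to, the same search can be joined to sysobjects; a sketch, still using the question's 'xyz' literal:
-- Returns the routine name alongside each matching definition chunk
Select o.name, c.text
from syscomments c
join sysobjects o on o.id = c.id
where CharIndex('xyz', c.text) > 0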
This is going to be nearly impossible to do in SQL Server 2000, because the update might very well come from a variable that holds that value, or from a join to another table that contains that value, rather than being hard-coded into the stored proc, trigger, etc. The update could also be coming from a DTS package, a job, a piece of dynamic code run by the app, or even from Query Analyzer, so the code itself may not be recorded in the database anywhere.
Perhaps a better approach would be to create an audit table for the table in question and have it record the user and the code from the spid that generated the change, as well as the old and new values. You'll have to wait until it happens again, but then you would know exactly what changed the value, and what value to put it back to if need be.
Alternatively, you could run Profiler on the system until it happens, but Profiler tends to hurt performance and is not usually a good idea to run on a production system. If the update is happening very often, it might be an acceptable alternative.
Here's a hint as to how you might get some of the info you want for the eventual trigger code you write:
create table #temp (eventtype nvarchar (1000), parameters int, eventinfo nvarchar (4000), myspid int)
declare @myspid int
select @myspid = @@spid
insert #temp (eventtype, parameters, eventinfo)
exec ('dbcc inputbuffer (@@spid)')
update #temp
set myspid = @myspid
select hostname, program_name, eventinfo
from #temp t
join sysprocesses s on t.myspid = s.spid
WHERE spid = @myspid
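Building on that hint, a minimal sketch of the audit trigger itself; the table, key, and column names here are placeholders, not names from the question:
-- Records who changed MyColumn, from which host/program, and the old/new values
create trigger trg_MyTable_AuditUpdate on dbo.MyTable
after update
as
begin
    insert into dbo.MyTable_Audit (MyKey, OldValue, NewValue, HostName, ProgramName, ChangedAt)
    select d.MyKey, d.MyColumn, i.MyColumn, s.hostname, s.program_name, getdate()
    from deleted d
    join inserted i on i.MyKey = d.MyKey
    join master..sysprocesses s on s.spid = @@spid
    where isnull(d.MyColumn, '') <> isnull(i.MyColumn, '')
end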
You might use SQL Profiler to trace updates of a given table/column.