I'm converting a client's DTS packages to SSIS. In one of their packages they have an Execute SQL Task that has a query similar to this:
SELECT * FROM [SOME_TABLE] AS ReturnValues
ORDER BY IDNumber
FOR XML AUTO, ELEMENTS
This query returned in a decent amount of time on the old system, but on the new box it takes up to 18 minutes to run in SSMS. Sometimes when I run it, it generates an XML link, and if I click on it to view the XML it throws a 'System.OutOfMemoryException' and suggests increasing the number of characters retrieved from the server for XML data. I increased that option to unlimited and I'm still getting the error.
The table itself contains 220,500 rows, but the query shows 129,810 rows returned before it stops. Is this simply a matter of not having enough memory available to the system? The box has 48 GB (Win 2008 R2 EE x64), with the instance capped at 18 GB because it's a shared dev environment. Any help/insight would be greatly appreciated, as I don't really know XML!
When you use SSMS to run FOR XML queries, it generates all the XML, puts it into the grid, and lets you click on it. There are limits to how much data it will bring back, and with 220,000 rows the result, depending on how wide the table is, is huge and produces a lot of text.
The out-of-memory error comes from trying to parse all of that XML, which is a lot of memory consumption for SSMS.
You can try executing to a file and see what size you get. But the major reason for running out of memory is that this is a lot of XML, and when returning it to the grid you will not always get the full results with a result set of this size.
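As a rough way to check the size without involving the grid at all (this swaps the file for an XML variable, but it's the same size check, using the query from the question):
DECLARE @x XML =
(
    SELECT * FROM [SOME_TABLE] AS ReturnValues
    ORDER BY IDNumber
    FOR XML AUTO, ELEMENTS, TYPE
);
-- DATALENGTH reports the internal storage size of the XML value, a reasonable rough gauge
SELECT DATALENGTH(@x) / 1048576.0 AS ApproxSizeMB;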
DBADuck (Ben)
The out-of-memory exception you're hitting is due to the amount of text a .NET grid control can handle. 220k rows is huge! The setting in SSMS to show unlimited data is only as good as the .NET control's memory cap.
You could look at removing the ELEMENTS option and viewing the data in attribute format. That will decrease the amount of XML "string space" returned. Personally, I prefer attributes over elements for that reason alone. Context is king, so it depends on what you're trying to accomplish (look at the data or use the data). Could you pipe the data into an XML variable? When all is said and done, DBADuck is 100% correct in his statement.
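For instance, a minimal sketch of both suggestions combined (attribute-centric output piped into a variable), based on the query from the question:
DECLARE @result XML =
(
    -- no ELEMENTS directive: columns become attributes, which shrinks the generated XML
    SELECT * FROM [SOME_TABLE] AS ReturnValues
    ORDER BY IDNumber
    FOR XML AUTO, TYPE
);
From there you can inspect pieces of it with @result.query('...') or hand it to an application, rather than rendering the whole document in the SSMS grid.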
SqlNightOwl
Related
I created a linked server connection to our SAP B1 HANA server in my SSMS environment and have been using it for various queries for a while - never had any issues. This week I typed this simple query into SSMS:
SELECT *
FROM OPENQUERY([LINKEDSERVERCONNECTION], 'SELECT T0."ItemCode", T0."UserText" FROM DATABASE.OITM T0;')
What I got was 99% blank cells, with a seemingly random few cells containing actual ItemCodes. The row count matches the number of ItemCodes we have in our HANA database, but where the ItemCodes should be I see blanks. See the following screenshot.
The column UserText IS showing properly, though - when I scroll through I can recognize values, but the corresponding ItemCode is blank when it should be the correct part number.
Now when I remove T0."UserText" from the query, all the ItemCodes show up fine. See screenshot.
So it seems that adding T0."UserText" to the query causes some strange problem returning data.
Only a few ItemCodes have anything in UserText compared to all the others, but those that do can have quite lengthy strings (sometimes 100+ characters). However, I don't believe there are enough instances of this to cause what I would assume to be a shortage of resources on the HANA server - but then I'm not a HANA systems expert.
If I query ItemCode and UserText in SAP B1's Query Wizard, I can get everything to display correctly. I need it to work in SSMS however.
Does anyone have any idea what could be causing SSMS' troubles displaying the query with UserText?
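(One thing I have not tried yet is casting UserText to a plain string inside the remote query, in case the linked-server provider struggles with the LOB type that column presumably uses on the HANA side - the CAST and the length of 1000 are just guesses:)
SELECT *
FROM OPENQUERY([LINKEDSERVERCONNECTION], 'SELECT T0."ItemCode", CAST(T0."UserText" AS NVARCHAR(1000)) AS "UserText" FROM DATABASE.OITM T0;')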
Can anybody tell me what the N at the beginning of the value in the WHERE clause from Crystal Reports' Show SQL Query is doing (example below)? When I plug it in as-is into SQL Server, the query returns much more slowly because it appears to be going through way more records than it needs to. When I remove the N, I get much faster results, and it doesn't seem to hit as many records. Is there a way to prevent Crystal from adding this when running reports? Any help would be greatly appreciated.
Example: ...WHERE "usr_MasterBill"."car_move_id"=N'M090036749'
The N prefix marks the literal as a Unicode string, which uses 2 bytes (16 bits) per character. This kind of string is meant for the nvarchar, nchar, ntext etc. data types.
https://dba.stackexchange.com/questions/36081/write-differences-between-varchar-and-nvarchar
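The likely reason for the slowdown (assuming car_move_id is a varchar column, which the question doesn't show): comparing a varchar column to an N'...' nvarchar literal forces SQL Server to implicitly convert the column side, which, depending on the collation, can turn an index seek into a scan. Roughly:
-- nvarchar literal against a varchar column: the column may be implicitly converted,
-- which can prevent an index seek
SELECT * FROM "usr_MasterBill" WHERE "car_move_id" = N'M090036749'
-- literal matching the column's varchar type: a straightforward index seek is possible
SELECT * FROM "usr_MasterBill" WHERE "car_move_id" = 'M090036749'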
The problem is not the N in the query but how CR works: in some cases the CR core decides it is better to retrieve all the records and do the filtering when the report is rendered.
Check the "Select Expert" for additional filters, and check the "Specified Order" option in the change group options. You also need to check the formula fields.
Part of an SSIS package is a data import from an external database via a SQL command embedded in an ADO.NET Source data flow component. Whenever I make even the slightest adjustment to the query (such as changing a column name), it takes ages (in this case 1-2 hours) until the program has finished validation. The query itself returns around 30,000 rows with 20 columns each.
Is there any way to cut these long intervals or is this something I have to live with?
I usually store the source queries in a table, and the first part of my package executes a select and stores the query returned from the table in a package variable, which is then used by the ADO.NET Source. So in my package, for the default value of the variable, I use the query that is stored in the database with a "where 1=2" appended to the end. Hence during design time it does execute the query but just returns the column metadata. Let me know if you have any questions.
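For illustration, the design-time default value would look something like this (table and column names are made up):
SELECT OrderID, CustomerName, OrderAmount
FROM dbo.SourceOrders
WHERE 1 = 2   -- returns no rows, so validation only has to read the column metadata
At run time, the variable gets overwritten with the real query from the metadata table (the same statement without the 1=2 filter).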
OK, I'm using SQL Server 2008 and have a table field of type VARCHAR(MAX). The problem is that when saving information using Hibernate, the contents of the VARCHAR(MAX) field are getting truncated. I don't see any error messages on either the app server or the database server.
The content of this field is just a plain text file. The size of this text file is 383KB.
This is what I have done so far to troubleshoot this problem:
Changed the database field from VARCHAR(MAX) to TEXT, and the same problem occurs.
Used the SQL Server Profiler and noticed that the full text content is being received by the database server, but for some reason the profiler freezes when trying to view the SQL statement with the truncation problem. Like I said, just before it freezes, I did notice that the full text file content (383KB) is being received, so it seems that it might be a database problem.
Has anyone encountered this problem before? Any ideas what causes this truncation?
NOTE: I'm going into SQL Server Management Studio, copying the TEXT field content, and pasting it into TextPad. That's how I noticed it's getting truncated.
Thanks in advance.
Your problem is that you think Management Studio is going to present you with all of the data. It doesn't. Go to Tools > Options > Query Results > SQL Server. If you are using Results to Grid, change "Maximum Characters Retrieved" for "Non XML data" (just note that Results to Grid will eliminate any CR/LF). If you are using Results to Text, change "Maximum number of characters displayed in each column."
You may be tempted to enter more, but the maximum you can return within Management Studio is:
65535 for Results to Grid
8192 for Results to Text
If you really want to see all the data in Management Studio, you can try converting it to XML, but this has issues also. First set Results To Grid > XML data to 5 MB or unlimited, then do:
SELECT CONVERT(XML, column) FROM dbo.table WHERE...
Now this will produce a grid result where the link is actually clickable. This will open a new editor window (it won't be a query window, so it won't have execute buttons, IntelliSense, etc.) with your data converted to XML. This means it will replace > with &gt; etc. Here's a quick example:
SELECT CONVERT(XML, 'bob > sally');
Result:
When you click on the grid, you get this new window:
(It does kind of have IntelliSense, validating XML format, which is why you see the squigglies.)
BACK AT THE RANCH
If you just want to sanity check and don't really want to copy all 383K elsewhere, then don't! Just check using:
SELECT DATALENGTH(column) FROM dbo.table WHERE...
This should show you that your data was captured by the database, and the problem is the tool and your method of verification.
(I've since written a tip about this here.)
Try using SELECT * FROM dbo.table FOR XML PATH.
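For example (hypothetical table and column names), wrapping the long value in FOR XML PATH makes SSMS return it as a clickable XML link instead of a truncated grid cell - with the same entitization caveat mentioned above:
SELECT [TextContent]
FROM dbo.Documents
WHERE DocumentID = 1
FOR XML PATH('');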
I had a similar situation. I have an Excel sheet in which a couple of columns may have more than 255 characters, sometimes even 500. A simple workaround was to sort the rows of data so that the rows with the most characters are at the top; you actually need just one such row. When SQL imports the data, it then recognizes the field as being more than 255 characters and imports the entire value :)
Otherwise, they suggested using regedit to change a specific value. Didn't want to do that.
Hope this helps
The environment I am working with is CF8 and SQL Server 2005, and the CLOB data type is disabled in the CF Administrator. My concern is whether there will be a performance ramification from enabling the CLOB data type in the CF Administrator.
The reason I want/need to enable it is that SQL Server is building the AJAX XML response. When the response is large, the result is either truncated or returned in multiple rows (depending on how the SQL developer created the stored proc). Enabling CLOB allows the entire result to be returned. The other option I have is to have SQL Server always return the XML result in multiple rows and have CF join the strings from the result rows.
Anyone with some experience with this idea or have any thoughts?
Thanks!
I really think that returning CLOB data is likely to be less expensive than concatenating multiple rows of data into an XML string and then parsing it (ick!). What you are trying to do is what CLOB is designed for. JDBC handles it pretty well. The performance hit is probably negligible. After all, you have to return the same amount of character data either way, whether in multiple rows or a single field. And having to "break it up" on the SQL side and then "reassemble" it on the CF side seems like reinventing the wheel, to be sure.
I would add that questions like this sometimes mystify me. A modest amount of testing would seem to be able to answer this question to your own satisfaction - no?
I would just have the StoredProc return the data set, or multiple data sets, and just build the XML the way you need it via CF.
I've never needed to use CLOB. I almost always stick to the varchar datatype, and it seems to do the job just fine.
There is also the option of calling a stored proc that triggers MSSQL to generate an actual XML file (not just a string) and simply returns the file name to you. Then you can use CFFILE action="read" to grab the XML string and parse it accordingly. This assumes your web server and DB server have a common file storage area.