'N' in Crystal reports sql query where clause - sql-server

Can anybody tell me what the N at the beginning of the value in the WHERE clause from Crystal Reports' Show SQL Query is doing (example below)? When I plug it in as-is into SQL Server, the query returns much more slowly because it appears to go through far more records than it needs to. When I remove the N, I get much faster results, and it doesn't seem to hit as many records. Is there a way to prevent Crystal from adding this when running reports? Any help would be greatly appreciated.
Example: ...WHERE "usr_MasterBill"."car_move_id"=N'M090036749'

It marks the literal as a Unicode (national character) string, stored at 2 bytes per character, allowing Unicode strings to be created. This kind of string should be compared against nvarchar, nchar, ntext, etc. data types. If your car_move_id column is plain varchar, the comparison forces SQL Server to implicitly convert the column to nvarchar (nvarchar has higher data type precedence), which can prevent an index seek - that is the likely cause of the slowdown you're seeing.
https://dba.stackexchange.com/questions/36081/write-differences-between-varchar-and-nvarchar
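To make the performance point concrete, a minimal sketch using the question's own query (assuming car_move_id is varchar with an index on it):

-- Unicode literal: the varchar column is implicitly converted to nvarchar
-- (nvarchar has higher data-type precedence), which can force a scan:
SELECT * FROM "usr_MasterBill" WHERE "car_move_id" = N'M090036749'

-- Literal type matches the column type, so an ordinary index seek is possible:
SELECT * FROM "usr_MasterBill" WHERE "car_move_id" = 'M090036749'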

The problem may not be the N in the query but how CR works: in some cases the Crystal Reports core decides it is better to retrieve all records and filter them at presentation time.
Verify the Select Expert for additional filters, check the group options for "Specified Order" grouping, and review your formula fields.

Related

SQL - Compare 2 text fields

I'm using software called FME Desktop. In this software we can issue SQL commands through an item called a transformer. I'm using a transformer called a SQLExecutor that runs a very simple query to make a comparison. Below is an explanation of what I'm trying to do with this SQL query and how it fails when comparing 2 text fields.
I believe my issue is a limitation of SQL when used in the SQLExecutor. Let's say I have a layer of data called TEST.LEASE and I want to compare it to a layer called EDIT.LEASE based on one unique ID field. Both of these layers are in the same database. We use SQL Server for our stored data. There is a TEXT field in both layers called GIS_ID. This is a unique ID field. So what happens is we get updates on our LEASE layer and they start off being loaded to TEST.LEASE. When we have done our QA/QC of the data and we are satisfied that they are ready to be uploaded to EDIT.LEASE we then run an FME job that serves as our promotion tool. What this promotion tool does is that it checks various fields in TEST.LEASE to make sure they qualify for being uploaded (this part works 100% without issue).
Right before they are promoted to EDIT.LEASE we need to know if this will be a completely new record, in which case we will do an INSERT with FME. If by chance the GIS_ID already exists then we need to do an UPDATE to those records. The tool we have works perfectly for determining if it is an INSERT or UPDATE, except for one seemingly small thing … IT ONLY WORKS IF THE TEXT FIELD CONTAINS A NUMBER THAT DOESN’T HAVE A LETTER IN IT.
FYI: Someone at our company decided to make the GIS_ID field a text field. In my opinion it should have been an integer field because comparisons would have been super easy. But I can't change that now, it has already been decided by people who make way more money than I do that it will be a text field.
As mentioned … The GIS_ID is a text field (in both layers and they are both the same size, there is no difference in the field in both layers). As you may know, SQL doesn't care if it is a TEXT field or an INTEGER field when all that is contained in that field is a number. It can still compare 202 to 202 to see if they are equal to each other. For my example let's say I have a record in both TEST.LEASE and EDIT.LEASE where both of their GIS_ID fields equal 09198760. When I run the query below it runs perfectly.
select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = #Value(GIS_ID)
It runs perfectly, as I’ve mentioned, on the data if both GIS_ID text fields have only numbers in them. But if just one record contains an actual alpha, the SQL query will error out.
So if GIS_ID has 09198760a01, a SQL error is returned once the query reaches the "a" in GIS_ID. I'm not looking for a way for the job to continue and ignore those records, because I need ALL OF THE RECORDS to load. I need to know if anyone knows how to add to or rewrite the query above so that it loads both "number only" text fields and "numbers containing a letter" fields.
I hope that long explanation is clear. Please let me know if it isn’t. Thanks for any help you might be able to provide for me
Sincerely,
Tex
I am assuming that the #Value function is what is causing you problems. I briefly checked their docs; it looks like you need to encapsulate it in single quotes, like so: '#Value(GIS_ID)'.
http://fmepedia.safe.com/articles/How_To/Executing-a-Stored-Procedure-on-Microsoft-SQL-Server-with-FME
Jeff is right, and as a generic answer for regular SQL users, and even for people using SQL in their application code: if you are comparing text like the OP mentioned, then you need to use single ' quotes '.
Where avalue = 'myvalue'
Otherwise SQL Server thinks it is an int, which is why it works when the value he's passing in is only numbers. It's not always easy to tell what the problem is when you're passing in parameters.
Where avalue = #myvalue
So you'll need to pay attention to that. Just wanted to mention this so maybe it helps someone else with a similar issue. I figured this out when we were getting errors from a field that had concatenated an ID field, i.e. it worked when the value = 2, but not 2,3 etc. Wrapping the parameter in single quotes easily fixed that, as we were truly only concerned with value = '2' in our case.
Hope this makes sense.
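To make the fix concrete, here is what the substituted query would look like with the quotes in place (illustrative value only):

select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = '09198760a01'
-- Without the quotes the same value substitutes as a bare token
-- (where GIS_ID = 09198760a01), which SQL Server rejects at the 'a'.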

ColdFusion 8 + MSSQL 2005 and CLOB datatype on resultset

The environment I am working with is CF8 and SQL 2005, and the CLOB datatype is disabled in the CF Administrator. My concern is whether there will be a performance ramification from enabling the CLOB datatype in the CF Administrator.
The reason I want/need to enable it is that SQL is building the AJAX XML response. When the response is large, the result is either truncated or returned in multiple rows (depending on how the SQL developer created the stored proc). Enabling CLOB allows the entire result to be returned. The other option I have is to have SQL always return the XML result in multiple rows and have CF join the strings from each result row.
Anyone with some experience with this idea or have any thoughts?
Thanks!
I really think that returning CLOB data is likely to be less expensive than concatenating multiple rows of data into an XML string and then parsing it (ick!). What you are trying to do is what CLOB is designed for. JDBC handles it pretty well. The performance hit is probably negligible. After all, you have to return the same amount of character data either way, whether in multiple rows or a single field. And having to "break it up" on the SQL side and then "reassemble" it on the CF side seems like reinventing the wheel, to be sure.
I would add that questions like this sometimes mystify me. A modest amount of testing would seem to be able to answer this question to your own satisfaction - no?
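For what it's worth, a rough sketch of the single-value approach on the SQL side (hypothetical table and column names; NVARCHAR(MAX) needs SQL 2005 or later):

-- Cast the FOR XML result to NVARCHAR(MAX) so the whole document comes
-- back as one value in one row; with CLOB enabled, CF8 can read it whole.
SELECT CAST((SELECT OrderId, CustomerName
             FROM dbo.Orders
             FOR XML AUTO) AS NVARCHAR(MAX)) AS XmlResult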
I would just have the StoredProc return the data set, or multiple data sets, and just build the XML the way you need it via CF.
I've never needed to use CLOB. I almost always stick to the varchar datatype, and it seems to do the job just fine.
There are also options where you could call the stored proc, have MSSQL generate an actual XML file (not just a string), and simply return the file name. Then you can use CFFILE action="read" to grab the XML string and parse it accordingly. This assumes your web server and db have a common file storage area.

Simple results formatting from SQL Server

I have been doing SQL for a while and I've always been satisfied to use the Results to Grid found in SSMS.
Now I have a series of queries that I am running and I would like some very simple formatting of the results. Currently neither Results to Grid nor Results to Text does quite what I would like.
A few things I would like to do to make it easier for me to read are:
Remove the text that says '# row(s) affected' (found in the Results to Text)
Make the columns not so wide in the column aligned Output Format (part of the problem is that the Maximum Number of Characters does not appear to go below 30 - is this my data that forces this?)
If I cannot format the output (even to a text file), what other options do I have?
I spent some time looking at SQL Server -> PHP -> HTML as well as SQL Server -> Reporting Services -> MS Report Builder but quite frankly it seems like overkill to put a few spacers and pretty up the headings a bit.
I feel like I am missing something here ... I would rather not go through the hassle of all that installation of PHP and what not just to be able to look at my data a little bit prettier.
Remove the text that says '# row(s) affected' (found in the Results to Text)
Put SET NOCOUNT ON at the top of your SQL.
Make the columns not so wide in the column aligned Output Format (part of the problem is that the Maximum Number of Characters does not appear to go below 30 - is this my data that forces this?)
Yes, it's the size of the field that does this. You can cast it, e.g. cast(field as varchar(20)), to make it smaller if you know you won't lose data.
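Putting both tips together, a minimal sketch (hypothetical table and column names):

SET NOCOUNT ON;  -- suppresses the '# row(s) affected' message

SELECT CAST(CustomerName AS varchar(20)) AS CustomerName,  -- narrows the column in text output
       OrderDate
FROM dbo.Orders;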
It all depends on what you want to do with the formatted results.
For quickly reading / formatting a result that isn't great when viewed directly in Management Studio, I use Results to Grid, select all with headers (by right-clicking on the upper-left corner of the grid), and copy/paste into Excel. From there it's easy to do basic tinkering with column widths and formatting. The biggest downside for me is dates are never quite right out-of-the-box, but it's always a quick fix.
Excel also makes a good interim stop for basic formatting when I'm pasting query results into an email.
It might be overkill in some cases, but I suspect much less so than using PHP -> HTML or Reporting Services -> MS Report Builder.

SQLBulkCopy and Dates (1/1/1753)

I've got an application which has been working fine for quite a while, but there is an annoying item that continues to get in the way on occasion.
Let's say that I use an object such as OracleDataReader or MySqlDataReader to pass the data to the SqlBulkCopy object for insert. Let's assume that all the columns map just fine and, for the most part, it all works well.
Granted, I don't have control over the source application or database (which is either MySQL or Oracle). So some goof goes into a different application and puts a date of 5/31/0210 on the invoice table. He really meant to put in 5/31/2010, but the application he's using is not validating the data very tightly, and the Oracle database accepts it. For all intents and purposes, the date 5/31/0210 is a valid date for the Oracle db. It might be stupid in terms of data entry, but it is what it is at this point.
Now our OracleDataReader comes along and is transferring this invoice table over to SQL Server via SqlBulkCopy. It is passing the data to a perfectly matched table with the right column names and data types. You can see what is going to happen. This date of 05/31/0210 from Oracle is not accepted by the SQL Server db engine, as the DATETIME type only allows dates from 1/1/1753 to 12/31/9999.
When it encounters this record, it simply fails and gives an overflow error. It doesn't skip the record, it kills the feed. So if it happens a thousand records in on a million record table, you don't get the remaining 999,000 records.
Is there anyway to get around this issue so that the feed will continue?
Ideally, I'd like to move the receiving SQL Server DB to 2008 and use DATETIME2, which would allow for these goofy dates, but unfortunately not all my clients are ready to move to this version yet, so I'm stuck with DATETIME in SQL 2000/2005/2008.
Any ideas on how to get around this without changing the SQL? Ideally, I wouldn't mind if it just skipped the record. I know that I could do this in the SQL for the data reader, but this would be extremely complicated when you have twenty date fields in a single query. It would be a maintenance nightmare.
Any thoughts would be appreciated.
One option would be to change the datetime column type to varchar, then add a derived column that converts the string to datetime. The trick would be to use a function in the derived column to validate the date and substitute an arbitrary datetime if the conversion would fail. If you do heavy date comparisons, persist the computed column and/or index it.
I say all of this under the impression that SqlBulkCopy is not able to do transforms. Maybe it can. Hopefully, someone will chime in with a way to do it.
SSIS would be great in this situation, as you could do the transform and also get the performance benefits of the bulk update lock.
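A minimal sketch of the varchar-plus-derived-column idea, with hypothetical table and column names (ISDATE returns 0 for values DATETIME cannot hold, such as 5/31/0210, though its result depends on session date settings):

CREATE TABLE dbo.InvoiceStaging (
    InvoiceNumber  varchar(20) NOT NULL,
    InvoiceDateRaw varchar(30) NULL,  -- bulk copy lands the raw text here untouched
    -- Derived column: convert only when the value fits DATETIME, else NULL
    -- (persisting/indexing it would require a deterministic CONVERT style):
    InvoiceDate AS CASE WHEN ISDATE(InvoiceDateRaw) = 1
                        THEN CONVERT(datetime, InvoiceDateRaw)
                        ELSE NULL END
);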

SQL Query FOR XML runs fine in 2000, slow in 2008 R2

I'm converting a client's DTS packages to SSIS. In one of their packages they have an Execute SQL task with a query similar to this:
SELECT * FROM [SOME_TABLE] AS ReturnValues
ORDER BY IDNumber
FOR XML AUTO, ELEMENTS
This query seems to return in a decent amount of time on the old system, but on the new box it takes up to 18 minutes to run in SSMS. Sometimes when I run it, it will generate an XML link, and if I click on it to view the XML it throws a 'System.OutOfMemoryException' and suggests increasing the number of characters retrieved from the server for XML data. I increased the option to unlimited and am still getting the error.
The table itself contains 220,500 rows, but the query shows 129,810 rows returned before it stops. Is this simply a matter of not having enough memory available to the system? This box has 48 GB (Win 2008 R2 EE x64), with the instance capped at 18 GB because it's a shared dev environment. Any help/insight would be greatly appreciated, as I don't really know XML!
When you use SSMS to run FOR XML queries, it generates all the XML, puts it into the grid, and lets you click on it. There are limits to how much data it brings back, and 220,000 rows, depending on how wide the table is, is huge and produces a lot of text.
The out of memory error comes from SSMS trying to parse all of it, and that is a lot of memory consumption for SSMS.
You can try executing to a file and see what size you get. But the major reason for running out of memory is that this is a lot of XML, and when returning it to the grid you will not always get all the results with a result set of this size.
DBADuck (Ben)
The out of memory exception you're hitting is due to the amount of text a .NET grid control can handle. 220k lines is huge! The setting in SSMS to show unlimited data is only as good as the .NET control's memory cap.
You could look at removing the ELEMENTS option and viewing the data in attribute format. That will decrease the amount of XML "string space" returned. Personally, I prefer attributes over elements for that reason alone. Context is king, so it depends on what you're trying to accomplish (look at the data or use the data). Could you pipe the data into an XML variable? When all is said and done, DBADuck is 100% correct in his statement.
SqlNightOwl
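For comparison, the attribute-centric form is just the question's query without the ELEMENTS directive:

SELECT * FROM [SOME_TABLE] AS ReturnValues
ORDER BY IDNumber
FOR XML AUTO
-- Each column becomes an attribute on <ReturnValues ... /> instead of its own
-- child element, so each column name appears once per row rather than as an
-- open and close tag, noticeably shrinking the XML for wide tables.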
