SQL - Compare 2 text fields - sql-server

I’m using software called FME Desktop. In this software we can issue SQL commands through items called transformers. I’m using a transformer called SQLExecutor, which runs a very simple comparison query. Below is an explanation of what I’m trying to do with this SQL query, and of the fact that it does not work when comparing two text fields.
I believe my issue is a limitation of SQL as used in the SQLExecutor. Let's say I have a layer of data called TEST.LEASE and I want to compare it to a layer called EDIT.LEASE based on one unique ID field. Both of these layers are in the same database; we use SQL Server for our stored data. There is a TEXT field in both layers called GIS_ID, which is a unique ID field. When we get updates to our LEASE layer, they start off being loaded to TEST.LEASE. Once we have done our QA/QC of the data and are satisfied that the records are ready to be uploaded to EDIT.LEASE, we run an FME job that serves as our promotion tool. The promotion tool checks various fields in TEST.LEASE to make sure the records qualify for upload (this part works 100% without issue).
Right before records are promoted to EDIT.LEASE, we need to know whether each one is a completely new record, in which case we do an INSERT with FME. If the GIS_ID already exists, we instead do an UPDATE of those records. The tool we have works perfectly for determining whether it is an INSERT or an UPDATE, except for one seemingly small thing … IT ONLY WORKS IF THE TEXT FIELD CONTAINS A NUMBER THAT DOESN’T HAVE A LETTER IN IT.
FYI: Someone at our company decided to make the GIS_ID field a text field. In my opinion it should have been an integer field, because comparisons would have been super easy. But I can't change that now; it has already been decided by people who make way more money than I do that it will be a text field.
As mentioned, the GIS_ID is a text field (it is the same size in both layers; there is no difference between the two fields). As you may know, SQL doesn't care whether it is a TEXT field or an INTEGER field when all the field contains is a number: it can still compare 202 to 202 to see if they are equal. For my example, let's say I have a record in both TEST.LEASE and EDIT.LEASE where both GIS_ID fields equal 09198760. When I run the query below, it runs perfectly.
select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = #Value(GIS_ID)
As I’ve mentioned, it runs perfectly on the data if both GIS_ID text fields contain only numbers. But if just one record contains an actual alpha character, the SQL query errors out.
So if GIS_ID holds 09198760a01, a SQL error is returned once the query reaches the “a”. I’m not looking for a way for the job to continue and ignore those records, because I need ALL OF THE RECORDS to load. I need to know if anyone can tell me how to add to or rewrite the query above so that it handles both number-only values and values containing a letter.
I hope that long explanation is clear. Please let me know if it isn’t. Thanks for any help you might be able to provide.
Sincerely,
Tex

I am assuming that #Value() is the function that is causing you problems. I briefly checked their docs; it looks like you need to encapsulate it in single quotes, like so: '#Value(GIS_ID)'
http://fmepedia.safe.com/articles/How_To/Executing-a-Stored-Procedure-on-Microsoft-SQL-Server-with-FME
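Applied to the query from the question, that would look something like the following. This is a sketch, assuming FME expands the #Value() macro before the SQL is sent, so the quotes make SQL Server treat the expanded value as a string literal rather than a number:
select OBJECTID
from TEST.LEASE_UPDATE_INSERT_WRITER
where GIS_ID = '#Value(GIS_ID)'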

Jeff is right. As a generic answer for regular SQL users, and even for people using SQL in their application code: if you are comparing text like the OP mentioned, then you need to use single ' quotes '.
Where avalue = 'myvalue'
Otherwise SQL Server thinks it is an int, which is why it works when the value being passed in is only numbers. It's not always easy to tell what the problem is when you're passing in parameters.
Where avalue = #myvalue
So you'll need to pay attention to that. I just wanted to mention this in case it helps someone else with a similar issue. I figured this out when we were getting errors from a field that had a concatenated ID in it, i.e. it worked when the value was 2, but not 2,3 etc. Wrapping the parameter in single quotes easily fixed that, as we were truly only concerned with value = '2' in our case.
Hope this makes sense.

Related

Linked SQL Server's table shows all fields as #Deleted, but when converted to local, all information is there

My company has a really old Access 2003 .ADP front-end connected to an on-premise SQL Server. I was trying to update the front-end to MS Access 2016, which is what we're transitioning to, but when linking the tables I get all the fields in this specific table as #Deleted. I've looked around and tried changing some settings, but I'm really not into SQL Server enough to know what I'm doing, hence asking for help.
When converting the table to local, all the info is correctly displayed, which raises the question. Also, skipping to the last record will reveal the info on that record, and sorting/filtering reveals some of the records, but most of the table stays "#Deleted"...
Since I know you're going to ask: yes, I need to edit the records. Although the snapshot method would work for people trying to view the info, some of us need to edit it.
I'm hoping someone can shed some light on this,
Thanks in advance, Rafael.
There are 3 common reasons for this:
You have bit fields in SQL server, but they are null. They should be assigned a default of 0.
The table in question does NOT have a PK (primary key).
Last but not least, you need (want) to add a timestamp column. Keep in mind that this is really what we call a “row version” column (it is not a date/time column, but a timestamp column). Adding this column helps Access determine whether a record has been changed, and this is especially the case for any table/form in Access that allows editing of “real” number data types (single, double). If Access does not find a timestamp column, it reverts to a column-by-column comparison to determine table changes, and because of how computers handle “real” numbers (with rounding), such comparisons often fail.
So, check for the above 3 issues. You likely should re-run the linked table manager after making any changes.
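For reference, here is a rough T-SQL sketch of those three fixes. The table and column names (dbo.MyTable, MyBitField, ID, RowVer) are hypothetical; substitute your own:
-- 1. Give nullable bit fields a default of 0 and backfill the NULLs
ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_MyBitField DEFAULT 0 FOR MyBitField;
UPDATE dbo.MyTable SET MyBitField = 0 WHERE MyBitField IS NULL;
-- 2. Make sure the table has a primary key
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID);
-- 3. Add a rowversion ("timestamp") column so Access can detect changed rows
ALTER TABLE dbo.MyTable ADD RowVer rowversion;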

How to avoid SQL Server error on ORDER BY with duplicate columns

Although this question references PHP, it is not actually PHP-specific, so I have not flagged it as such.
We have a PHP framework which supports multiple DB back-ends.
There is a generic function in our data object class, which allows you to get records from the underlying table, with a specified criteria and sort order.
It looks something like this:
function GetAll($Criteria, $OrderBy = "") {
    ...
    // Add primary key (column 1) to end of order by list,
    // so that returned order is predictable.
    if ($OrderBy != "") {
        $OrderBy .= ", ";
    }
    $OrderBy .= "1";
    ...
    // Build and run query, returning the result as an array.
}
If you specify an $OrderBy argument of StaffID on a Staff object, the resulting SQL looks something like the following:
SELECT * FROM adminStaff ORDER BY StaffID, 1;
This works fine on a MySQL back-end, and from my searching of the web it should also be fine on most other DB back-ends. However, when using SQL Server, we get the following error message:
A column has been specified more than once in the order by list.
Columns in the order by list must be unique.
This arises because SQL Server disallows the same column appearing multiple times in the ORDER BY clause. In this case StaffID is column 1 and therefore we have multiple instances of the same column.
Is there a way to disable this check in SQL Server? MySQL provides a lot of options to enable/disable strictness checks and incompatible features - does SQL Server provide anything of that nature that would allow the above query to run without errors?
If not, do you have any suggestions for how we could resolve this in our data-object layer? Bear in mind we need to maintain compatibility with existing projects which expect this behaviour, so it is not sufficient to only include the first column when $OrderBy is blank.
The situation is also slightly complicated in the fact that the field list is customisable elsewhere in the data object configuration, so we can't rely on * being used as the field list - it could contain pretty much anything that is valid in a normal SQL field list. However, if that is asking too much, a solution to the simpler case (as outlined above) would be a good start!
In SQL Server you are able to sort either by column name or by the ordinal position of the column in the SELECT list.
In your case the column StaffID is at ordinal position 1, so SQL Server refuses to sort the same result set by the same column twice.
If you remove the 1 from your query, the problem will be solved.
Avoid using the ordinal position of the column for sorting.
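That is, for the example in the question, the generated SQL would simply be:
SELECT * FROM adminStaff ORDER BY StaffID;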
The basic question - is it possible to suppress this SQL Server restriction on ORDER BY column duplication - was answered by Venu: No it is not.
There are various suggestions (mostly from me) about how you could possibly code around this limitation in a generic manner. For any future readers, those answers are probably the most helpful if you are adapting an existing system. (If you are starting from scratch, just try and avoid this situation altogether.)
However, the actual solution that I came to was to add versioning to our internal API for our DBAL. The API version is now 2 but you can call setApiVersion(1) to instruct the back-end to use the old version of the API instead.
v2 is identical to v1* except that it no longer automatically adds column 1 to the ORDER BY unless the clause is completely blank. Therefore, the SQL Server issue is resolved for new (v2) projects, whilst existing projects can be set to use the v1 API and continue to work correctly (but without SQL Server compatibility).
(* Actually, I've taken this opportunity to make some other breaking changes in v2, but that is not relevant to this answer.)
I've come up with a couple of potential solutions at the framework level. All of them have performance implications which would need to be profiled, and in practice that may rule some or all of them out. However, in theory at least, these are ways that a generic solution could be implemented.
Omit the ORDER BY altogether, and do the sorting in code. Would involve parsing the provided ORDER BY string. Would be problematic if ORDER BY contained expressions, but I can't remember ever seeing that in our projects, so can probably be ignored. Probably the slowest solution.
Perform the query without the ORDER BY, limiting the results set to a single row. Use resulting column list to work out whether column 1 is already in the ORDER BY clause, and therefore whether to add it. Then run the full query. Would require parsing the provided ORDER BY string. Query caching may mean this won't add as much overhead as it appears.
Parse the field list to get the first column name and see if this appears in the ORDER BY clause. If field list contains * or table.* would require a schema lookup. May be too difficult if we need to deal with table aliases and wildcards in combination.
Parse ORDER BY string and see if it contains any primary key. If so it is already uniquely ordered and doesn't require the addition of an extra field. Would require a schema look-up.
Use a sub-select to give us a new instance of the column that we can sort on instead; a rough sketch follows this list. Not sure whether SQL Server would still complain that this is the 'same' column, though.
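For what it's worth, a minimal sketch of that sub-select idea, using the example table from the question. The t and _SortKey names are hypothetical, and I have not verified that SQL Server accepts the aliased duplicate:
SELECT *
FROM (SELECT *, StaffID AS _SortKey FROM adminStaff) AS t
ORDER BY StaffID, _SortKey;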
Could you just append '--' to your OrderBy parameter when working with SQL Server and just explicitly define the Order By fields where necessary?
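To illustrate: since the framework appends ", 1" to whatever is passed in, an $OrderBy of StaffID -- would generate something like the following, with the line comment swallowing the appended ordinal. This assumes the ORDER BY is the final clause and the query is built on a single line:
SELECT * FROM adminStaff ORDER BY StaffID --, 1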

Access report field pulling from NOT NULL column errors without NZ() function

I am working on converting an Access database to a SQL Server backend. I've got most of it working, but one thing I haven't been able to figure out yet is that on one of the reports that we run, a few fields show up as #Error! The field's Control source is:
=DSum("[CustomerMinutes]","QryOutageSummaryByDateRange","NZ([CityRelated])= 0")
It works fine as shown, but it makes the report take a lot longer to load, and the CityRelated field is a NOT NULL field, so I feel I shouldn't need the NZ() function. I have opened the query in datasheet view and there appropriately aren't any NULLs. I would be happy to provide more detail; I just don't know what other information would help. Any help or general direction would be greatly appreciated!
The domain functions (DSum, etc.) are fussy about the use of brackets. Try this.
=DSum("IIF([CustomerMinutes] Is Null,0,[CustomerMinutes])","[QryOutageSummaryByDateRange]","[CityRelated] Is Null Or [CityRelated]=0")
If CustomerMinutes is never NULL then you can just use CustomerMinutes as the first argument.
Notice that the square brackets are around the table or query name; they are not necessarily required for a single field name. (This is the opposite of how the examples appear in the Help system.)
I always prefer to avoid NZ; in my experience it can cause problems with aggregate functions, or when used in a sequence of queries.

‘String or binary data would be truncated’

I am using Entity Framework to insert details into the database, where I have a column of type varchar(50).
When I try to insert a value longer than 50 characters, it gives me the error "String or binary data would be truncated". So, to be on the safe side, I changed the column to varchar(100).
Can someone let me know whether changing the column size in the DB is the only solution, or whether there are alternatives?
I read an article: http://sqlblogcasts.com/blogs/danny/archive/2008/01/12/scuffling-with-string-or-binary-data-would-be-truncated.aspx
But how can I use such techniques in C#? I appreciate any suggestions.
Thanks
Depending on what type of input field you're dealing with, using varchar(max) could be an option.
But as previously pointed out, it really boils down to what your business requirements are.
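If widening the column is the route you take, the change on the database side is a one-liner. A sketch with hypothetical table/column names:
ALTER TABLE dbo.Details ALTER COLUMN Description varchar(max) NOT NULL;  -- restate the column's original NULL/NOT NULL setting; omitting it defaults the column to nullable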
Well, first of all, you obviously cannot insert a string longer than 50 characters into a varchar(50).
So you have two options depending on your requirement:
change the database (as you have found out) and make sure that all code 'upstream' will be able to tackle the longer data
add some validations or restrict user input so that you will never get a string that is longer
Well, there is a third option: cut off the string without telling the user. But I would not do that.
So what to do depends on your business requirement. Either way, I would not use any 'tricks' like those in the article you linked.
You should set the field length in the DB to whatever is a reasonable length, and you should prevent users from entering values that do not fit within that length.
For me this issue appeared after the table had been altered. First back up the table's schema and data, then delete the table, recreate it, and run the code again; that worked fine. (See the sketch below.)
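If I read that right, the workaround is to rebuild the table rather than ALTER it in place. A rough T-SQL sketch with hypothetical names:
SELECT * INTO dbo.Details_Backup FROM dbo.Details;   -- back up schema and data
DROP TABLE dbo.Details;
-- recreate dbo.Details with the corrected column definitions, then reload:
INSERT INTO dbo.Details SELECT * FROM dbo.Details_Backup;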

Can't change data type on MS Access 2007

I have a huge database (800MB) which contains a field called 'Date Last Modified'. At the moment this field is stored as a Text data type, but I need to change it to Date/Time in order to carry out some queries.
I have another database with exactly the same structure but only 35MB of data in it, and when I change the data type there it works fine. But when I try to change the data type in the big database it gives me an error:
Microsoft Office Access can't change the data type.
There isn't enough disk space or memory
After doing some research, some sites mentioned changing a registry setting (MaxLocksPerFile). I tried that as well, but no luck :-(
Can anyone help please?
As John W. Vinson says here, the problem you're running into is that Access wants to hold a copy of the table while it makes the changes, and that causes it to exceed the maximum allowable size of an Access file. Compacting and repairing might help get the file under the size limit, but it didn't work for me.
If, like me, you have a lot of complex relationships and reports on the old table that you don't want to have to redo, try this variation on #user292452's solution instead:
Copy the table (i.e. 'YourTable'), then paste Structure Only back into your database with a different name (i.e. 'YourTable_new').
Copy YourTable again, and paste-append the data to YourTable_new. (To paste-append, first paste, and select Append Data to Existing Table.)
You may want to make a copy of your Access database at this point, just in case something goes wrong with the next part.
Delete all data in YourTable using a delete query: select all fields, using the asterisk, and then run with default settings.
Now you can change the fields in YourTable as needed and save again.
Paste-append the data from YourTable_new to YourTable, and check that there were no errors from type conversion, length, etc.
Delete YourTable_new.
One relatively tedious (but straightforward) solution would be to break the big database up into smaller databases, do the conversion on the smaller databases, and then recombine them.
This has an added benefit that if, by some chance, the text is an invalid date in one chunk, it will be easier to find (because of the smaller chunk sizes).
Assuming you have some kind of integer key on the table that ranges from 1 to (say) 10000000, you can just do queries like
SELECT *
INTO newTable1
FROM yourtable
WHERE yourkey >= 0 AND yourkey < 1000000
SELECT *
INTO newTable2
FROM yourtable
WHERE yourkey >= 1000000 AND yourkey < 2000000
etc.
Make sure to enter and run these queries separately, since it seems that Access will give you a syntax error if you try to run more than one at a time.
If your keys are something else, you can do the same kind of thing, but you'll have to be a bit more tricky about your WHERE clauses.
Of course, a final thing to consider, if you can swing it, is to migrate to a different database that has a little more power. I'm guessing you have reasons that this isn't easy, but with the amount of data you're talking about, you'll probably be running into other problems as well as you continue to use Access.
EDIT
Since you are still having some troubles, here is some more detail in the hopes that you'll see something that I didn't describe well enough before:
Here, you can see that I've created a table "OutputIDrive" similar to what you're describing. I have an ID tag, though I only have three entries.
Here, I've created a query, gone into SQL mode, and entered the appropriate SQL statement. In my case, because my query only grabs value >= 0 and < 2, we'll just get one row...the one with ID = 1.
When I click the run button, I get a popup that tells/warns me what's going to happen...it's going to put a row into a new table. That's good...that's what we're looking for. I click "OK".
Now our new table has been created, and when I click on it, we can see that our one line of data with ID = 1 has been copied over to this new table.
Now you should be able to just modify the table name and the number values in your SQL query, and run it again.
Hopefully this will help you with whatever tripped you up.
EDIT 2:
Aha! This is the trick. You have to enter and run the SQL statements one at a time in Access. If you try to put multiple statements in and run them, you'll get that error. So run the first one, then erase it and run the second one, etc. and you should be fine. I think that will do it! I've edited the above to make it clearer.
Adapted from Karl Donaubauer's answer on an MSDN post:
Switch to the Immediate window (Ctrl + G)
Execute the following statement:
DBEngine.SetOption dbMaxLocksPerFile, 200000
Microsoft has a KnowledgeBase article that addresses this problem directly and describes the cause:
The page locks required for the transaction exceed the MaxLocksPerFile value, which defaults to 9500 locks. The MaxLocksPerFile setting is stored in the Windows registry.
The KnowledgeBase article says it applies to Access 2002 and 2003, but it worked for me when changing a field in an .mdb from Access 2013.
It's entirely possible that in a database of that size, you've got text data that won't convert to a valid Date/Time.
I would suggest (and you may hate me for this) that you export all those prospective date values from "Big" and go through them (perhaps in Excel) to see which ones are not formatted the way you'd expect.
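A quick way to find the offending values before exporting (Access SQL; BigTable is a hypothetical table name, the field name is from the question) is to filter with IsDate():
SELECT [Date Last Modified]
FROM BigTable
WHERE Not IsDate([Date Last Modified]);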
Assuming that the error message is accurate, you're running up against a disk or memory limitation. Assuming that you have more than a couple of gigabytes free on your disk drive, my best guess is that rebuilding the table would put the database (including work space) over the 2 gigabyte per file limit in Access.
If that's the case you'll need to:
Unload the data into some convenient format and load it back into an empty database with an already-existing table definition.
Move a subset of the data into a smaller table, change the data type in the smaller table, compact and repair the database, and repeat until all the data is converted.
If the error message is NOT correct (which is possible), the most likely cause is a bad or out-of-range date in your text-date column.
Copy the table (i.e. 'YourTable') then paste just its structure back into your database with a different name (i.e. 'YourTable_new').
Change the fields in the new table to what you want and save it.
Create an append query and copy all the data from your old table into the new one.
Hopefully Access will automatically convert the old text field directly to the correct value for the new Date/Time field. If not, you might have to clear out the old table and re-append all the data and use a string to date function to convert that one field when you do the append.
Also, if there is an autonumber field in the old table this might not work because there is no way to ensure that the old autonumber values will line up with the new autonumber values that get assigned.
You've been offered a bunch of different ways to get around the disk space error message.
Have you tried adding a new field to your existing table using the Date data type and then updating that field with the value of the existing string date field? If that works, you can delete the old field and rename the new one to the old name. That would probably take up less temp space than a direct conversion from string to date on a single field.
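That suggestion might look something like this in Access SQL (BigTable and DateLastModified2 are hypothetical names; CDate() will error on values that are not valid dates, so weed those out first):
ALTER TABLE BigTable ADD COLUMN DateLastModified2 DATETIME;
UPDATE BigTable SET DateLastModified2 = CDate([Date Last Modified]);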
If it still doesn't work, you may be able to do it with a second table with two columns: the first a long integer (make it the primary key), the second a date. Append the PK and string date field to this empty table. Then add a new date field to the existing table and, using a join, update the new field with the values from the two-column table.
This may run into the same problem. It depends on number of things internal to the Jet/ACE database engine over which we have no real control.
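For reference, the final join-update in that two-table approach might be sketched like this (Access SQL, all names hypothetical):
UPDATE BigTable INNER JOIN DateMap ON BigTable.ID = DateMap.ID
SET BigTable.NewDateField = DateMap.DateValue;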