Is there any option to use the words "file", "key", and "trigger" as field names in MS SQL Server?
We have a pretty big application written with web2py, with PostgreSQL as the DB, and some of the fields have those names. One customer wishes to use MS SQL as the DB server, and I'm trying not to break compatibility within the DB structure.
Using square brackets (a suggestion found on Google) didn't help; MS SQL rejected '[file]'.
The documentation is pretty clear on this:
Although it is syntactically possible to use SQL Server reserved keywords as identifiers and object names in Transact-SQL scripts, you can do this only by using delimited identifiers.
The delimiters used in SQL Server are either double quotes or []. So, you can define them as:
[file]
or
"file"
Note that you need to use the delimiters wherever they appear.
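For example, here is a minimal sketch (the table name Documents is purely illustrative); the delimiters have to appear in every statement that references the columns:
CREATE TABLE dbo.Documents
(
    [file]    VARCHAR(255),  -- reserved word, so it must stay delimited
    [key]     VARCHAR(50),
    [trigger] VARCHAR(50)
);
SELECT [file], [key], [trigger]
FROM dbo.Documents
WHERE [key] = 'some-key';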
The use of reserved words for such columns is discouraged. However, you might actually have a compatibility use case between different databases where this capability is useful.
I don't know why the square brackets would fail. They work on SQL Fiddle.
I stumbled into this same issue when trying to use a legacy SQL Server table with fields that don't conform to the identifier naming rules, such as 'My File'.
Reading the source code, I found that an rname field attribute exists for these cases:
Field('my_file', 'string', rname='[My File]'),
With this, web2py works with the my_file field name, while the actual SQL/DML generated to interact with the database uses [My File] instead.
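As a rough illustration only (not the literal statement web2py generates), a query against a table defined this way would reference the delimited rname in the SQL sent to the server, along these lines:
SELECT mytable.id, mytable.[My File]   -- [My File] is the rname; my_file exists only on the web2py side
FROM mytable
WHERE mytable.[My File] = 'report.pdf';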
Related
I have multiple tables in my database. All of my tables return output from a SELECT query except one table named "case". It also has data and columns, but when I use it in my query it shows a syntax error. I have also attached a picture with the list of tables and a simple query. This code was not developed by me, so I am not sure why it is showing an error. Is there some kind of restriction that can be set so that it cannot be used in queries?
CASE is a reserved keyword in SQL Server. Therefore, you must escape it with square brackets:
SELECT * FROM dbo.[Case];
But best naming practice dictates that we should avoid naming database objects using reserved keywords. So, don't name your tables CASE.
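If renaming the table is an option, sp_rename can do it, after which no delimiters are needed (a sketch; the new name Cases is just an example):
EXEC sp_rename 'dbo.[Case]', 'Cases';  -- rename the table away from the reserved word
SELECT * FROM dbo.Cases;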
Reserved words are not recommended for use as database, table, column, variable, or other object names. If a reserved word is used as an object name in ANSI standard syntax, it must be enclosed in double quotes or "[]" so that the relational engine knows the word is being used as an object name and not as a keyword in the given context. Here is the sample code.
SELECT * FROM dbo."Case"
Or
SELECT * FROM dbo.[Case]
I have some text data in an SQL Server 2014 table in which I want to detect complex patterns and extract certain portions of the text if the text matches the pattern. Because of this, I need capturing groups.
E.g.
From the text
"Some title, Some Journal name, vol. 5, p. 20-22"
I want to grab the volume number
, vol\. ([0-9]+), p\. [0-9]+
Note that I have simplified this use case to improve readability; the above use case could be solved without capturing groups. The actual use case handles a lot more exceptions, such as:
The journal/title containing "vol.".
Volume numbers/pages containing letters
"vol" being followed by ":" or ";" instead of "."
...
The actual regex I use is the following (though this is not a question about regex structure; I'm just elaborating on why I need capturing groups).
(^|§|[^a-z0-9])vol[^a-z0-9]*([a-z]?[0-9]+[a-z]?)
As far as I know, there are two ways of getting Regex functionality into SQL Server.
Through CLR: https://www.simple-talk.com/sql/t-sql-programming/clr-assembly-regex-functions-for-sql-server-by-example/ . Yet, this example (from 2009) does not support groups. Are there any commonly used solutions out there that do?
By installing Master Data Services
Since installing and setting up the entire Master Data Services package felt like overkill to get some Regex functionality, I was hoping there'd be an easy, common way out...
I have found a CLR implementation that is super easy to install, and includes Regex capturing group functions.
http://www.sqlsharp.com/
I have installed this in a separate database called 'SQL#' (simply by using the provided installation .sql script), and the functions are located inside a schema with the same name. As a result I can use the function as follows:
select SQL#.SQL#.RegEx_CaptureGroup( 'test (2005) test', '\((20[012][0-9]|19[5-9][0-9])\)', 1, NULL, 1, -1, '');
Would be nice if this was included by default in SQL Server...
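Applied to the simplified volume example from the question, the call would look roughly like this (same argument pattern as above; an untested sketch):
SELECT SQL#.SQL#.RegEx_CaptureGroup(
    'Some title, Some Journal name, vol. 5, p. 20-22',
    ', vol\. ([0-9]+), p\. [0-9]+',   -- group 1 is the volume number
    1, NULL, 1, -1, '');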
The DB I use has French_CI_AS collation (CI should stand for Case-Insensitive) but is case-sensitive anyway. I'm trying to understand why.
The reason I assert this is that bulk inserts with a 'GIVEN' case setup fail, but they succeed with another 'Given' case setup.
For example:
INSERT INTO SomeTable([GIVEN],[COLNAME]) VALUES ("value1", "value2") fails, but
INSERT INTO SomeTable([Given],[ColName]) VALUES ("value1", "value2") works.
EDIT
Just saw this:
http://msdn.microsoft.com/en-us/library/ms190920.aspx
so that means it should be possible to change a column's collation without emptying all the data and recreating the related table?
Given this critical piece of information (that is in a comment on the question and not in the actual question):
In fact I use Microsoft .Net's bulk insert method, so I don't really know the exact query it sends to the DB server.
it makes sense that the column names are being treated as case-sensitive, even in a case-insensitive DB, since that is how the SqlBulkCopy Class works. Please see Column mappings in SqlBulkCopy are case sensitive.
ADDITIONAL NOTES
When asking about an error, please always include the actual, and full, error message in the question. Simply saying that there was an error leads to a lot of guessing and wild-goose chases that in turn lead to off-topic answers.
When asking a question, please do not change the circumstances that you are dealing with. For example, the question states (emphasis added):
bulk inserts with a 'GIVEN' case setup fail, but they succeed with another 'Given' case setup.
Yet the example statements are single INSERTs. Also, a comment on the question states:
In fact I use Microsoft .Net's bulk insert method, so I don't really know the exact query it sends to the DB server.
Using .NET and SqlBulkCopy is waaaay different than using BULK INSERT or INSERT, making the current question misleading, making it difficult (or even impossible) to answer correctly. This new bit of info also leads to more questions because when using SqlBulkCopy, you don't write any INSERT statements: you just write a SELECT statement and specify the name of the destination Table. If you specify column names at all for the destination Table, it is in the optional column mappings. Is that where the issue is?
Regarding the "EDIT" section of the question:
No, changing the Collation of the column won't help at all, even if you weren't using SqlBulkCopy. The Collation of a column determines how data stored in the column behaves, not how the column names (i.e. meta-data of the Table) behaves. It is the Collation of the Database itself that determines how Database-level object meta-data behaves. And in this case, you claim that the DB is using a case-insensitive Collation (correct, the _CI_ portion of the Collation name does mean "Case Insensitive").
Regarding the following statements made by Jonathan Leffler on the question:
that gets into a very delicate area of the interaction between delimited identifiers (normally case-sensitive) and collations (this one is case-insensitive).
No, delimited identifiers are not normally case-sensitive. The sensitivities (case, accent, kana type, width, and, starting in SQL Server 2017, variation selector) of delimited identifiers are the same as for non-delimited identifiers at that same level. "Same level" means that Instance-level names (Databases, Logins, etc) are controlled by the Instance-level Collation, while Database-level names (Schemas, Objects--Tables, Views, Functions, Stored Procedures, etc--, Users, etc) are controlled by the Database-level Collation. And these two levels can have different Collations.
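A quick way to see which Collations are actually in effect at each level (a sketch; run it in the database in question):
SELECT SERVERPROPERTY('Collation')                 AS InstanceCollation,  -- governs Instance-level names
       DATABASEPROPERTYEX(DB_NAME(), 'Collation')  AS DatabaseCollation;  -- governs Database-level names (Tables, columns, etc.)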
you need to research whether the SQL column names in a database are case-sensitive when delimited. It may also depend on how the CREATE TABLE statement is written (were the names delimited in that?). Normally, SQL is case-insensitive on column and table names; you could write INSERT INTO SoMeTaBlE(GiVeN, cOlNaMe) VALUES("v1", "v2") and if the names were never delimited, it'd be OK.
It does not matter if the column names were delimited or not when creating the Table, at least not in terms of how their resolution is handled. Column names are Database-level meta-data, and that is controlled by the default Collation of the Database. And it is the same for all Database-level meta-data within each Database. You cannot have some column names being case-sensitive while others are case-insensitive.
Also, there is nothing special about Table and column names. They are Database-level meta-data just like User names, Schema names, Index names, etc. All of this meta-data is controlled by the Database's default Collation.
Meta-data (both Instance-level and Database-level) is only "normally" case-insensitive due to the default Collation suggested during installation being a case-insensitive Collation.
a 'delimited identifier' is a column name, table name, or something similar enclosed in double quotes, such as CREATE TABLE "table"(...)
It is more accurate to say that a delimited identifier is an identifier enclosed in whatever character(s) the DBMS in question has defined as its delimiters. And which particular characters are used for delimiters varies between the different DBMSs.
In SQL Server, delimited identifiers are enclosed in square brackets: [GIVEN]
While square brackets always work as delimiters for identifiers, it is possible to use double-quotes as delimiters IF you have the session-level property of QUOTED_IDENTIFIER set to ON (which is best to always do anyway).
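For example (a sketch, assuming a table dbo.SomeTable with a column named GIVEN):
SET QUOTED_IDENTIFIER ON;
SELECT [GIVEN] FROM dbo.SomeTable;   -- square brackets always delimit
SELECT "GIVEN" FROM dbo.SomeTable;   -- double quotes delimit only while QUOTED_IDENTIFIER is ON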
There are arcane parts to SQL (and delimited identifier handling is one of them)
Well, delimited identifiers are actually quite simple. The whole point of delimiting an identifier is to effectively ignore the rules of regular (i.e. non-delimited) identifiers. But, in terms of regular identifiers, yes, those rules are rather arcane (mainly due to the official documentation being incomplete and incorrect). So, in order to take the mystery out of how identifiers in SQL Server actually work, I did a bunch of research and published the results here (which includes links to the research itself):
Completely Complete List of Rules for T-SQL Identifiers
For more info on Collations / Encodings / Unicode / ASCII, especially as they relate to Microsoft SQL Server, please visit:
Collations.Info
The fact that the column names are case-sensitive means that the master database has been created using a case-sensitive collation.
In the case I just had that led me to investigate this, someone entered Latin1_CS_AI instead of Latin1_CI_AS when setting up SQL Server.
Check the collation of the columns in your table definition, and the collation of the tempdb database (i.e. the server collation). They may differ from your database collation.
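For example (a sketch; substitute your own table name):
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.SomeTable');           -- column-level collations
SELECT SERVERPROPERTY('Collation') AS ServerCollation;  -- the server collation, which tempdb also uses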
I have a field [Product/Services] in my table in SQL Server 2005. I want to create a stored procedure for that table, but it keeps giving an error. When I name the field just Product, the stored procedure works fine. I want to keep [Product/Services] in my table, so how can I do that?
It is always a bad idea to include special characters in column/variable/parameter/table/view/procedure/etc. names. All of your code will have to dance around this bad decision forever.
Without any detail on your particular code and/or error message, all I can provide are these links on the rules for naming things in SQL Server:
Identifiers
Delimited Identifiers
from the second link:
Microsoft SQL Server does not recognize variable names and stored procedure parameters that are delimited. These types of identifiers must comply with the rules for regular identifiers.
Your best bet is to just name the column something like Product_Services or ProductServices, and you can have local variables and parameters named @Product_Services or @ProductServices.
Your next best bet is to leave the table alone and just name the local variables @Product_Services or @ProductServices, even though the table column is named [Product/Services].
Make sure you are properly quoting the name Product/Services as [Product/Services] whenever you are referring to it, using it, or redefining it. [] are the quote characters in MS SQL Server.
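For example, a sketch of how the delimited column and a plainly named parameter can live together (the table, procedure, and parameter names are only illustrative):
CREATE TABLE dbo.Items
(
    ItemID             INT,
    [Product/Services] VARCHAR(100)   -- must be delimited everywhere it is referenced
);
GO
CREATE PROCEDURE dbo.GetItems
    @Product_Services VARCHAR(100)    -- parameters cannot be delimited, so use a plain name
AS
    SELECT ItemID, [Product/Services]
    FROM dbo.Items
    WHERE [Product/Services] = @Product_Services;
GO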
I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax standards (i.e. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
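As an illustration of the kind of edit involved (an invented table, just to show the shape of the change):
-- SQL Server definition from the generated script
CREATE TABLE Addresses (AddressLine NVARCHAR(200));
-- Oracle equivalent after editing
CREATE TABLE Addresses (AddressLine VARCHAR2(200));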
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQL Developer, and at manually trying to export/import, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, change nvarchars to varchar2, and INTEGER to NUMBER
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
Use the SQL Developer migration tools
I think quoting column names in Oracle is something you should avoid. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in these columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I found that other characters usually work better, as the need to "escape" quotes becomes another issue). If you're exporting to CSV, that should be an option... but then I'm not familiar with the export wizard.
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and its Kettle component. It offers a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.
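If the export is driven by a query, one possible workaround (a sketch; the table and column names are only examples) is to add the double quotes yourself and double any embedded quotes:
SELECT CustomerID,
       '"' + REPLACE(Address, '"', '""') + '"' AS Address   -- quote the value, escape embedded quotes
FROM dbo.Customers;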