I was trying to create a table in TDengine to record server load information using the following SQL:
taos> create table server-load-1 (ts timestamp, oneline_status bool, host-name binary(30), location-info binary(45));
DB error: syntax error near "-load-1 (ts timestamp, oneline_status bool, (0.000201s)
taos>
Looks like TDengine does not support '-' in table and column names. Is there any way or setting to make TDengine support special characters like '*', '-', '.' in table/column names? The script I use for importing data doesn't convert any special characters, so it would be quite useful if TDengine supported this and I didn't need to make any changes to my script.
It turns out that using backquotes (`) to enclose the names works for me.
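For reference, this is a sketch of the original statement with the problem names enclosed in backquotes (assuming a TDengine version that supports backquoted identifiers):

create table `server-load-1` (ts timestamp, oneline_status bool, `host-name` binary(30), `location-info` binary(45));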
Related
I am new to Snowflake, so please bear with me.
I am trying to do a very simple thing - specify a column name by literal - but I am getting a SQL compilation error:
insert into MYDB.MYSCHEMA.MYTABLE (identifier('MYCOLUMN')) values (10);
The SQL compiler points to an unexpected parenthesis before MYCOLUMN. Skipping the word identifier and the single quotes works fine.
Just got a response from Snowflake support. Currently "identifier" is only supported for select statements. It is not implemented for inserts for identifying columns. It does work for identifying tables in both select and insert.
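For illustration, that means identifier can still wrap the table name while the column is named directly; a sketch using the names from the question:

insert into identifier('MYDB.MYSCHEMA.MYTABLE') (MYCOLUMN) values (10);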
I have quite a few tables and I'm using SSIS to bring the data from Oracle to SQL Server; in the process I'd like to convert all varchar fields to nvarchar. I know I can use the Data Conversion transformation, but it seems the only way to do this is to set each field one by one, and then I'll have to manually set the mapping in the destination component to map to the "Copy of" field. I've got thousands of fields and it would be tedious to set each one... is there a way to say "if a field is DT_STR, convert it to DT_WSTR"?
What you can do, instead of replacing varchar with nvarchar manually, is copy and save all the CREATE TABLE scripts generated by SSIS to a document. Then you can do a global replace of varchar with nvarchar in the document.
Then use the amended script as a step in your SSIS package to create the tables before populating them with the data from Oracle.
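For example, with a hypothetical generated script, the global replace would turn

CREATE TABLE dbo.MyTable (name varchar(50), city varchar(30));

into

CREATE TABLE dbo.MyTable (name nvarchar(50), city nvarchar(30));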
The proper way is to use the data conversion step...
That said, it appears that if you disable external metadata validation in SSIS, you can bypass this error. SQL Server will then use an implicit conversion to the destination type.
See this SO post for a quick explanation.
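The implicit conversion is safe in that direction, since varchar widens to nvarchar; a minimal illustration with a hypothetical destination table:

CREATE TABLE dbo.Dest (name nvarchar(50));
INSERT INTO dbo.Dest (name) VALUES ('hello'); -- the varchar literal is implicitly converted to nvarchar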
Is there any option to use the words "file", "key" and "trigger" as field names in MS SQL Server?
We have a pretty big application written with web2py, with PostgreSQL as the DB, and some of the fields have those names. One customer wishes to use MS SQL as the DB server, and I'm trying not to break compatibility within the DB structure.
Using square brackets (found via Google) didn't help (could not use '[file]'); MS SQL rejected it.
The documentation is pretty clear on this:
Although it is syntactically possible to use SQL Server reserved keywords as identifiers and object names in Transact-SQL scripts, you can do this only by using delimited identifiers.
The delimiters used in SQL Server are either double quotes or []. So, you can define them as:
[file]
or
"file"
Note that you need to use the delimiters wherever these names appear.
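For example, a minimal sketch using the column names from the question (the table name is hypothetical):

CREATE TABLE dbo.Events (
    [file] varchar(100),
    [key] int,
    [trigger] varchar(50)
);

SELECT [file], [key], [trigger] FROM dbo.Events;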
The use of reserved words for such columns is discouraged. However, you might actually have a use case of compatibility between different databases where this capability will be useful.
I don't know why square brackets would fail. It works on SQL Fiddle.
I stumbled into this same issue when trying to use a legacy SQL Server table with fields that don't conform to the identifier name rules, such as 'My File'.
Reading the source code, I found that an rname field attribute exists for these cases:
Field('my_file', 'string', rname='[My File]'),
With this, web2py works with the my_file field name, while the actual SQL/DML generated to interact with the database uses [My File] instead.
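So your application code keeps referring to my_file, while the emitted DML would look roughly like this (a sketch, assuming a table named mytable):

SELECT mytable.id, mytable.[My File] FROM mytable;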
I have a SQL Server table whose name is like Vers-xxx_yyy.
As you can see, it contains a "-" character.
I don't know why this table was made this way, but I have to load it from a DataStage job.
When I run my job, I get the error "table doesn't exist".
I use the ODBC stage.
Directly on SQL Server it is possible to use the syntax [Vers-xxx_yyy], but not in DataStage.
This db already exists and it is used by other applications.
Is there a way to avoid/resolve the problem?
Try using double quotes around the table name. Also, it is good practice not to use a hyphen; you can use an underscore instead.
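A minimal example, assuming QUOTED_IDENTIFIER is ON (the default for ODBC connections to SQL Server):

SELECT * FROM "Vers-xxx_yyy";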
Try using a backslash \ to escape the - character - Vers\-xxx_yyy.
You should be able to put the table name in this form on the ODBC Connector too: [Vers-xxx_yyy]
Another solution would be to write the SQL for this table yourself: SELECT * FROM [Vers-xxx_yyy]
We have a WinForms app which stores data in SQL Server (2000, we are working on porting it to 2008) through ADO.NET (1.1, working on porting to 4.0). Everything works fine if I read data previously written in a Western European locale (e.g. "test", "test ù"), but now we have to be able to mix Western and non-Western alphabets as well (e.g. "test - ۓےۑ" - these are just random Arabic chars).
On the SQL Server side, the database has been set with the Latin1_General collation and the field is an nvarchar(80). If I run a SQL SELECT statement (e.g. "SELECT * FROM MyTable WHERE field = 'test - ۓےۑ'"; don't mind the "*" or the actual names) from Query Analyzer, I get no results; the same happens if I pass the SQL statement to an ADO.NET DataAdapter to fill a DataTable. My guess is that it has something to do with collation, but I don't know how to correct it: do I have to change the collation (SQL Server) to a different one? Or do I have to set the locale on the DataAdapter/DataTable (ADO.NET)?
Thanks in advance to anyone who can help.
Shouldn't you use N when comparing nvarchar with an extended character set?
SELECT * From TestTable WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
Yes, the problem is most likely the collation. The Latin1_General collation does not include the rules to sort and compare non-Latin characters.
MSDN claims:
If you must store character data that reflects multiple languages, you can minimize collation compatibility issues by always using the Unicode nchar, nvarchar, and ntext data types instead of the char, varchar, text data types. Using the Unicode data types eliminates code page conversion issues.
Since you have already complied with this, you should read further on the info about Mixed Collation Environments here.
Additionally, I want to add that just changing a collation is not something done easily; check the MSDN notes for SQL 2000:
When you set up SQL Server 2000, it is important to use the correct collation settings. You can change collation settings after running Setup, but you must rebuild the databases and reload the data. It is recommended that you develop a standard within your organization for these options. Many server-to-server activities can fail if the collation settings are not consistent across servers.
You can specify a collation on a per-column basis, however:
CREATE TABLE TestTable (
id int,
GreekColCaseInsensitive nvarchar(10) collate greek_ci_as,
LatinColCaseSensitive nvarchar(10) collate latin1_general_cs_as
)
Have a look at the different binary multilingual collations here. Depending on the charset you use, you should find one that fits your purpose.
If you are not able or willing to change the collation of a column you can also just specify the collation to be used in the query like:
SELECT * From TestTable
WHERE GreekColCaseInsensitive = N'test - ۓےۑ'
COLLATE latin1_general_cs_as
As jfrobishow pointed out, the use of N in front of the string you are comparing against is essential. What does it do?
It denotes that the subsequent string is in Unicode (the N actually stands for National language character set). Which means that you are passing an NCHAR, NVARCHAR or NTEXT value, as opposed to CHAR, VARCHAR or TEXT. See Article #2354 for a comparison of these data types.
You can find a quick rundown here.
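A quick illustration of the difference (the exact output depends on the database's default code page):

SELECT 'test - ۓےۑ';   -- non-Unicode literal: Arabic chars outside the code page degrade to '?'
SELECT N'test - ۓےۑ';  -- Unicode literal: the characters are preserved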