I am new to Snowflake, so please bear with me.
I am trying to do a very simple thing: specify a column name via a literal. But I am getting a SQL compilation error.
insert into MYDB.MYSCHEMA.MYTABLE (identifier('MYCOLUMN')) values (10);
The SQL compiler points to an unexpected parenthesis before MYCOLUMN. Dropping the word identifier and the single quotes works fine.
Just got a response from Snowflake support: currently, identifier is only supported in select statements. It is not implemented for identifying columns in inserts. It does work for identifying tables in both select and insert.
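Based on that response, the table-name form should still work in an INSERT; a hedged sketch using the same names as the question:

```sql
-- Works: IDENTIFIER resolving a table name in an INSERT
insert into identifier('MYDB.MYSCHEMA.MYTABLE') (MYCOLUMN) values (10);

-- Not supported: IDENTIFIER for a column name in an INSERT
-- insert into MYDB.MYSCHEMA.MYTABLE (identifier('MYCOLUMN')) values (10);
```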
Related
I have multiple tables in my database. All of my tables return output from a select query except one table, "case". That table also has data and columns, but when I use it in my query I get a syntax error. I have attached a picture showing the list of tables and a simple query. This code was not developed by me, so I am not sure why it shows an error. Is there some kind of restriction that can be set so a table cannot be used in queries?
CASE is a reserved keyword in SQL Server. Therefore, you must escape the table name in square brackets:
SELECT * FROM dbo.[Case];
But best naming practice dictates that we should avoid naming database objects using reserved keywords. So, don't name your tables CASE.
Reserved words are not recommended for use as database, table, column, variable, or other object names. If a reserved word is used as an object name, ANSI standard syntax requires it to be enclosed in double quotes or square brackets so the relational engine knows the word is being used as an object name and not as a keyword in the given context. Here is the sample code.
SELECT * FROM dbo."Case"
Or
SELECT * FROM dbo.[Case]
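A minimal end-to-end demonstration (table name taken from the question; note the double-quote form relies on SET QUOTED_IDENTIFIER ON, which is the default):

```sql
-- Reserved keyword as a table name: must be quoted wherever it is referenced
CREATE TABLE dbo.[Case] (Id int);

SELECT * FROM dbo.[Case];   -- bracket quoting always works in SQL Server
SELECT * FROM dbo."Case";   -- double quotes require QUOTED_IDENTIFIER ON (the default)
```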
I am trying to add multiple rows to a table in Snowflake using an "insert into ... values" statement.
Here is create table statement :
create table table1(col1 float);
I am inserting multiple rows using following command :
insert into table1(col1) values (-3.4E20),(3.4E-20);
I am getting an error like:
"Numeric value '-340000000000000000000' is not recognized"
On the other hand, if I insert the two rows separately, both statements succeed.
Insertion commands :
insert into table1(col1) values (-3.4E20);
insert into table1(col1) values (3.4E-20);
Can you please help me identify the issue with the multi-row insert command?
Any suggestions or help would be appreciated.
Expanding on Marcel's answer, this query produces the same error:
select $1
from values (-3.4E20), (3.4E-20);
And this one fixes it:
select $1
from values (-3.4E20::float), (3.4E-20);
As seen in the query, the solution is to add type information to the literal so Snowflake doesn't have to guess between possible types.
I'm not sure whether this is related to your problem, but maybe it helps in finding an answer.
According to the docs, you have to make sure that the data types of the inserted values are consistent across the rows, because the server uses the data type of the first row as a guide. So if the data types of your first and second values clauses differ (perhaps due to automatic conversion of one of the values above), the combined query will fail, even if both values match the data type of the table's column!
Link: https://docs.snowflake.com/en/sql-reference/sql/insert.html#multi-row-insert-using-explicitly-specified-values
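Applying the same fix to the original statement, casting the first literal should pin the column type for the whole row set:

```sql
-- Cast the first value so Snowflake infers FLOAT for every row
insert into table1(col1) values (-3.4E20::float),(3.4E-20);
```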
I'm experiencing a problem with my full-text search on a client's database. The issue happens only on the client's database, and I cannot reproduce it locally, so I think it should be related to the existing data in the database.
In the stored procedure, I validate the keyword for different symbols, then manipulate it and assign the result to @SearchString. If my keyword doesn't contain any blank spaces or quotation marks, I run the following query on a table with a full-text index on all columns:
SELECT [KEY] FROM CONTAINSTABLE([TableName], *, @SearchString, 1000)
The keyword I'm passing to the stored procedure is I-MU-MUMB-ISC-2909, and after manipulation I assign "I-MU-MUMB-ISC-2909*" to @SearchString. Even though the exact string exists in one of the columns, the query doesn't return any results. I used STOPLIST = OFF when I created this full-text index, so my problem should not be about stopwords. Also, if I search for I-MU-MUMB-ISC-2908 (a string ending with 8 instead of 9), another row that has that string in the same column shows up in the result.
I even modified the select and used the specific column name that contains the keyword, but again there was no result.
If it helps, I noticed that these two rows, along with a few hundred more, have I-MU-MUMB in one of their columns.
I don't have direct access to the server, so I would appreciate any advice that I can test later when I get access to the server.
Thank you!
I found my answer after a few days of testing every scenario that came to my mind.
While checking the properties of the full-text catalog, I noticed that the last population day is NULL. I checked the log of the full-text search and saw the following error:
A fatal error occurred during a full-text population and caused the population to be cancelled.
Then I checked the database's error log and realized that the database had run out of space a few days earlier, which caused the full-text indexing to stop.
The solution was already in the log file: I resumed the population with the following command, and everything works fine now:
ALTER FULLTEXT INDEX ON table_name RESUME POPULATION
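If you want to verify the population state before and after resuming, FULLTEXTCATALOGPROPERTY can report it (the catalog and table names here are placeholders):

```sql
-- 0 = idle; other values indicate a population in progress or a fault
SELECT FULLTEXTCATALOGPROPERTY('YourCatalogName', 'PopulateStatus') AS PopulateStatus;

ALTER FULLTEXT INDEX ON dbo.YourTable RESUME POPULATION;
```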
Hope it helps someone!
I am trying to see if there is an easy answer for this. I have done something similar using multi-select dropdown parameters in SSRS, but this appears to be different.
My scenario is this; maybe there is an even better answer.
I have a production server that I do not want to make any changes to, including temp tables or functions. The production server has a table of clients with about 1600 records. I have set up an SSIS package that transfers data from production to dev based on a ClientId, so my sources have a query similar to Select Field From Table Where ClientId = ?
This works fine. Now I want to load more than one client, based on data in the Clients table. The query might be Select ClientId From Clients Where Field = A, which returns multiple ClientIds.
I am able to populate a comma-delimited list from an Execute SQL Task into an SSIS variable, so it may be 1,4,8.
If I change my source query to use ClientId IN (?), I get a conversion error.
I have looked at many posts that advocate a temp table or a function, both of which I want to avoid: Select IN using varchar string with comma delimited values
I have contemplated building the entire SQL statement in a variable, but this doesn't seem like the right path, as I have many tables to query and transfer, and ClientId = ? works well without having to build each individual SQL statement in a variable.
Is there an easy fix I am missing? I will now turn my research to finding out how I did this in SSRS, but I thought I should post here to see if someone has accomplished this before.
I appreciate any info on this, thank you.
EDIT: A key note is that the column on Clients exists only on the dev server, so I cannot just use a subquery in the WHERE clause, because the column does not exist on the production server.
EDIT: I did not mention that I am specifically looking at OLE DB sources, mapping a parameter to ? in the SQL statement.
EDIT: Narrowing down on this, but I'm having trouble relating the SSRS and SSIS functionality. In SSRS it is called a multi-value parameter; in the following link the key line is
WHERE Production.ProductInventory.ProductID IN (@ProductID)
https://msdn.microsoft.com/en-us/library/dn385719(v=sql.110).aspx
This one looks good as well
https://sqlblogcasts.com/blogs/simons/archive/2007/11/22/RS-HowTo---Pass-a-multivalue-parameter-to-a-query-using-IN.aspx
I will keep researching and thank you for the help so far.
I think this sums it up best
This functionality is limited to strictly using embedded SQL.
What SSRS does is transform your SQL column IN (@value) to column IN (@selectedvalue1, @selectedvalue2) etc.
You need to forget everything you know about the other ways of passing lists to SQL (i.e. building strings etc.) and make sure the data types you declare are correct for the value of your parameter.
You do not need to use the Join(parameters!,",") trick UNLESS you are passing the list to a stored procedure, in which case you then need to use some function to turn the delimited list into a rowset, as you have done.
I hope that helps
The core question is whether I can get the same functionality in SSIS as in SSRS. It reminds me of macro substitution.
If you don't want to create a function, you can use the following in your T-SQL statement.
Declare @ClientIds nvarchar(50) = '123,456'; -- Comma-delimited list of client IDs

Select Field
From Table
Where ClientId In (
    Select Cast(RTrim(LTrim(Split.a.value('.', 'VARCHAR(100)'))) As Int) As ClientIDs
    From (
        Select Cast('<X>' + Replace(@ClientIds, ',', '</X><X>') + '</X>' As XML) As Data
    ) As t
    Cross Apply Data.nodes('/X') As Split(a)
)
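To sanity-check what the XML trick above produces for a given variable value, the same split can be reproduced outside SQL; a minimal Python sketch (illustrative only, not part of the package):

```python
def split_ids(delimited: str) -> list[int]:
    """Split a comma-delimited ID list into ints, trimming whitespace,
    mirroring what the CROSS APPLY over the XML nodes computes."""
    return [int(part.strip()) for part in delimited.split(",") if part.strip()]

print(split_ids("123,456"))
```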
I'm trying to use a BULK INSERT statement to populate a large (17-million-row) table in SQL Server from a text file. One column, of type nchar(17), has a UNIQUE constraint on it. I've checked (using some Python code) that the file contains no duplicates, but when I execute the query I get this error message from SQL Server:
Cannot insert duplicate key row in object 'dbo.tbl_Name' with unique index 'IX_tbl_Name'.
Could SQL Server be transforming the text in some way as it executes BULK INSERT? Do SQL Server databases forbid any punctuation marks in nchar columns, or require them to be escaped? Is there any way I can find out which row is causing the trouble? Should I switch to some other method for inserting the data?
Your collation settings on the column could be causing data to be seen as duplicate even though your code sees it as unique. Things like accents and capitalization can cause issues under certain collation settings.
Another thought: empty or NULL values count as duplicates under a SQL Server unique index, so your Python code may not have found any text duplicates, but what about the empties?
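To mirror these collation and empty-value effects in the Python pre-check, compare keys the way a case-insensitive, trailing-space-padding collation would; a minimal sketch (the normalization rules are assumptions, so match them to the column's actual collation, which may also fold accents):

```python
def find_collation_dupes(keys):
    """Report pairs of values that collide once trailing spaces are stripped
    and case is folded, roughly how a CI collation compares nchar values.
    None and empty strings collide with each other as well."""
    seen = {}
    dupes = []
    for key in keys:
        norm = (key or "").rstrip().casefold()
        if norm in seen:
            dupes.append((seen[norm], key))
        else:
            seen[norm] = key
    return dupes

print(find_collation_dupes(["ABC-1", "abc-1 ", "XYZ"]))
```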