Oracle - PreparedStatement Command not properly Ended - prepared-statement

I am using Oracle 11g, and when I try to run a SELECT statement whose value contains an apostrophe, I get the following error.
Is the prepared statement supposed to take care of apostrophes, or should I handle them myself?
Is there a way to see the query just before execution (with the actual values substituted for the '?' placeholders)?

You are missing a space between the table alias (T) and WHERE. The SQL that actually gets executed looks something like this: SELECT * FROM TABLE TWHERE T.COL1 = '<somevalue>' AND T.COL2 = '<somevalue>' AND T.COL3 = '<somevalue>' - note the TWHERE.
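And yes, the prepared statement does take care of apostrophes for you, as long as the value is bound through a placeholder. A minimal sketch, using Python's sqlite3 as a stand-in for a JDBC PreparedStatement (an assumption for illustration; the binding rules are analogous):

```python
import sqlite3

# Values bound via '?' placeholders travel to the driver separately
# from the SQL text, so apostrophes in the data need no hand-escaping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO t VALUES (?, ?)", ("O'Brien", "x"))

# Note the space before WHERE: dropping it fuses the alias into the
# keyword (producing text like "t TWHERE"), which is exactly the kind
# of SQL that triggers "command not properly ended".
rows = conn.execute(
    "SELECT * FROM t WHERE t.col1 = ? AND t.col2 = ?",
    ("O'Brien", "x"),  # apostrophe handled by the driver
).fetchall()
print(rows)
```

As for seeing the query "with true values": most drivers, Oracle's included, typically never build such a string at all; the statement and the bound values are sent separately, which is also why the quoting is automatic.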

Related

In T-SQL, why is the ISJSON function not filtering out bad data when used in a CTE?

I have a table with a varchar(max) column that stores JSON data, and one of the records has an extra comma at the end of an array in the JSON. The following statement results in an error:
Unexpected character ',' is found...
Code:
SELECT JSON_QUERY(JsonField, '$.SomeProperty') AS JsonData
FROM MyTable
But the following code works because of the WHERE clause:
SELECT JSON_QUERY(JsonField, '$.SomeProperty') AS JsonData
FROM MyTable
WHERE ISJSON(JsonField) = 1
I can use that statement as a CTE as follows:
WITH cte AS
(
SELECT JSON_QUERY(JsonField, '$.SomeProperty') AS JsonData
FROM MyTable
WHERE ISJSON(JsonField) = 1
)
SELECT JsonData
FROM cte
But when I try to filter on the cte, I get the same error:
WITH cte AS
(
SELECT JSON_QUERY(JsonField, '$.SomeProperty') AS JsonData
FROM MyTable
WHERE ISJSON(JsonField) = 1
)
SELECT JsonData
FROM cte
WHERE LEN(JsonData) > 500
Obviously, I can move the 2nd WHERE clause up into the definition of the CTE, but there are other filters I want to use, and this is the most straightforward filter I could use for this example.
Am I missing something or is this a bug? My workaround is to use a temp table instead.
You can use the following expression so that the JSON_QUERY is only evaluated if the ISJSON condition is met.
SELECT CASE WHEN ISJSON(JsonField) = 1 THEN JSON_QUERY(JsonField, '$.SomeProperty') END AS JsonData
FROM MyTable
WHERE ISJSON(JsonField) = 1
This is mostly reliable. The documentation says
You should only depend on order of evaluation of the WHEN conditions
for scalar expressions (including non-correlated subqueries that
return scalars), not for aggregate expressions.
And this is a scalar expression, not an aggregate. (One other case to be aware of: if you test this with a single literal value, you can also encounter the error at plan-compilation time, as SQL Server tries to constant-fold it. This shouldn't affect you, though, since you are selecting from a table.)
But really you need to be using SQL Server 2022 so you can test ISJSON(JsonField, OBJECT).
Otherwise the string '1' will pass the ISJSON check (as ISJSON(JsonField, VALUE) is 1) but still barf at the JSON_QUERY.
I suppose on earlier versions you could also add an additional condition (to both the CASE and the WHERE) that JsonField LIKE '{%'.
This is a common issue. You'll see the same kind of thing with people trying to use IsNumeric() or IsDate() to pre-filter a column for numeric or date values, and the answer is the same: the database server may decide it's more efficient to evaluate the function call before the WHERE clause, and there's really nothing you can do about it except design the schema and validation so that mixed or broken data never gets in in the first place.
So you can use a function to select only the records where IsJSON() is 1, but if you actually try to parse into the JSON data all bets are off. It's more useful to select records where IsJSON() is 0, without parsing the JSON on the DB server, so you can fix them in client code.
There's also Try_Cast() now, but I don't recall whether it supports JSON.
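Following the select-the-bad-rows advice above, here is a small client-side sketch (Python, with illustrative names) that runs a real JSON parser over the suspect values so they can be diagnosed and fixed outside the database:

```python
import json

def find_bad_json(rows):
    """Yield (row_id, error) for rows whose payload does not parse as JSON."""
    for row_id, payload in rows:
        try:
            json.loads(payload)
        except json.JSONDecodeError as e:
            yield row_id, str(e)

# Sample rows as they might come back from the DB; the second one has
# the trailing comma described in the question.
sample = [
    (1, '{"SomeProperty": [1, 2, 3]}'),
    (2, '{"SomeProperty": [1, 2, 3,]}'),
]
for row_id, err in find_bad_json(sample):
    print(row_id, err)  # only row 2 is reported
```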

IF condition in where clause in SQL Server

I need to write an IF condition in the WHERE clause of a SQL query: if one column is null, then search in another column of the row. I have used UNION, but it makes the query slow to execute, so please help me write this statement in the proper way.
This is the code I have right now:
SELECT *
FROM ACCOUNT
WHERE (IF ACCOUNTID IS NULL THEN REF_ACC_ID = 12 ELSE ACCOUNTID = 12)
You can use the built-in ISNULL function in SQL Server to solve your problem:
SELECT *
FROM ACCOUNT
WHERE ISNULL(ACCOUNTID, REF_ACC_ID) = 12
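To double-check that this rewrite matches the intended IF logic, here's a self-contained sketch using Python's sqlite3 and COALESCE, the portable equivalent of SQL Server's ISNULL (the table and values are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (accountid INTEGER, ref_acc_id INTEGER)")
conn.executemany(
    "INSERT INTO account VALUES (?, ?)",
    [
        (12, 99),    # matches: accountid itself is 12
        (None, 12),  # matches: accountid null, fallback column is 12
        (7, 12),     # no match: accountid present but not 12
        (None, 5),   # no match: fallback column not 12
    ],
)
# COALESCE(accountid, ref_acc_id) checks ref_acc_id only when
# accountid is null -- exactly the IF...THEN...ELSE the question wants.
rows = conn.execute(
    "SELECT * FROM account "
    "WHERE COALESCE(accountid, ref_acc_id) = 12 ORDER BY rowid"
).fetchall()
print(rows)  # [(12, 99), (None, 12)]
```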

How can I pass a table name as a variable in SQL - Python 3.4

I am trying to write an SQL statement in Python which passes a table name as a variable. However, I get the following error: Must declare the table variable "@P1".
pypyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 10.0][SQL Server]Must declare the table variable "@P1"
The code yielding the ERROR is:
query = cursor.execute('''SELECT * FROM ?''', (table_variable,))
I have other code where I pass variables to the SQL statement using the same syntax which works fine (code below works as intended).
query = cursor.execute('''SELECT column_name FROM information_schema.columns WHERE table_name = ?''', (table_variable,))
The error seems to occur when I am using a variable to pass a table name.
Any help resolving this error would be much appreciated.
With new comments from the OP this has changed rather significantly. If all you are trying to do is get a few rows of sample from each table you can easily leverage the sys.tables catalog view. This will create a select statement for every table in your database. If you have multiple schemas you could extend this to add the schema name too.
select 'select top 10 * from ' + QUOTENAME(t.name)
from sys.tables t
What you're trying to do is impossible. You can only pass values into queries as parameters - so
SELECT * FROM @Table
is banned, but
SELECT * FROM TableName WHERE Column = @Value
is perfectly legal.
Now, as to why it's banned. From a logical point of view, the database layer can't cache a query plan for what you're trying to do at all - the parameter would completely and utterly change where the query goes and what it returns - and it can't guarantee in advance what it can or can't do. It's like trying to load an abstract source file at runtime and execute it: messy, unpredictable, unreliable, and a potential security hole.
From a reliability point of view, please don't do
SELECT * FROM Table
either. It makes your code less readable because you can't see what's coming back where, but also less reliable because it could change without warning and break your application.
I know it can seem a long way round at first, but honestly - writing individual SELECT statements which specify the fields they actually want to bring back is a better way to do it. It'll also make your application run faster :-)
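The value-versus-identifier rule above holds across drivers; a quick sketch with Python's sqlite3 showing both sides of it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

# Legal: a placeholder in a value position.
print(conn.execute("SELECT * FROM t WHERE x = ?", (1,)).fetchall())

# Banned: a placeholder in an identifier position. The statement fails
# at prepare time, before any value is ever bound.
try:
    conn.execute("SELECT * FROM ?", ("t",))
except sqlite3.OperationalError as e:
    print("rejected:", e)
```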
You can define a string variable:
table_var_str = 'Table_name'
st = 'SELECT * FROM ' + table_var_str
query = cursor.execute(st)
It will solve the problem.
You can also build table_var_str as a list:
table_var_str = []
st = []
for i in range(N):
    table_var_str.append('Table_name' + str(i))
    st.append('SELECT * FROM ' + table_var_str[i])
for j in range(len(st)):
    query = cursor.execute(st[j])
If a query is very long, write it on a single line rather than splitting it across multiple lines.
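One caution about the concatenation approach: interpolating a raw table name into SQL invites injection. A safer sketch validates the requested name against the catalog first and interpolates only the vetted name (sqlite3's sqlite_master plays the role of SQL Server's catalog views here; names are illustrative):

```python
import sqlite3

def select_all(conn, table_name):
    # Whitelist: the name must match an actual table in the catalog.
    known = {
        row[0]
        for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        )
    }
    if table_name not in known:
        raise ValueError(f"unknown table: {table_name!r}")
    # Safe to interpolate now: table_name is a vetted identifier,
    # not arbitrary user-supplied SQL.
    return conn.execute(f'SELECT * FROM "{table_name}"').fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t0 (x INTEGER)")
conn.execute("INSERT INTO t0 VALUES (1)")
print(select_all(conn, "t0"))  # [(1,)]
```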

Where clause with varbinary doesn't work

I have a table MyTable with a field MyField, which is varbinary(max). I have the following query:
SELECT COUNT(*) FROM MyTable WHERE MyField IS NOT NULL
The SELECT clause can contain anything; it does not matter. Because of the varbinary MyField in the WHERE clause, the execution never ends.
I tried even this:
SELECT Size
FROM
(
SELECT ISNULL(DATALENGTH(MyField), 0) AS Size FROM MyTable
) AS A
WHERE A.Size > 0
The inner query works fine, the whole query without the Where clause works fine, but with that Where clause it is stuck. Could anybody please explain this to me?
Thanks.
Don't think or assume that it couldn't possibly be blocking, just because a different query returns immediately. With a different where clause, and a different plan, and possibly different locking escalation, you could certainly have cases where one query is blocked and another isn't, even against the same table.
The query is obviously being blocked, if your definition of "stuck" is what I think it is. Now you just need to determine by who.
In one query window, do this:
SELECT @@SPID;
Make note of that number. Now in that same query window, run your query that gets "stuck" (in other words, don't select a spid in one window and expect it to have anything to do with your query that is already running in another window).
Then, in a different query window, do this:
SELECT blocking_session_id, status, command, wait_type, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id = <spid from above>;
Here is a visualization that I suspect might help (note that my "stuck" query is different from yours):
If you get a non-zero number in the first column, then in that different query window, do this:
DBCC INPUTBUFFER(<that blocking session id>);
If you aren't blocked, I'd be curious to know what the other columns show.
As an aside, changing the WHERE clause to use slightly different predicates to identify the same rows isn't going to magically eliminate blocking. Also, there is no real benefit to doing this:
SELECT Size
FROM
(
SELECT ISNULL(DATALENGTH(MyField), 0) AS Size FROM MyTable
) AS A
WHERE A.Size > 0
When you can just do this:
SELECT ISNULL(DATALENGTH(MyField), 0) AS Size
FROM dbo.MyTable -- always use schema prefix
WHERE DATALENGTH(MyField) > 0; -- always use semi-colon
SQL Server is smart enough to not calculate the DATALENGTH twice, if that is your concern.
Just for fun and giggles I decided to toss this one out to you. I'm taking some things for granted here, but you can look at tasks that had to wait. Look for wait types that begin with LCK_, as these will be your blocked tasks. Please note that on a busy server the ring buffer may have rolled over after a while. Also note that this is intended to supplement @AaronBertrand's excellent answer and in no way meant to supplant it. His is more comprehensive and is the correct way to identify the issue while it's happening.
SELECT
td.r.value('@name','sysname') event_name,
td.r.value('@timestamp','datetime2(0)') event_timestamp,
td.r.value('(data[@name="wait_type"]/text)[1]','sysname') wait_type,
td.r.value('(data[@name="duration"]/value)[1]','bigint') wait_duration,
td.r.value('(action[@name="sql_text"]/value)[1]','nvarchar(max)') sql_text,
td.r.query('.') event_data
FROM (
SELECT
CAST(target_data AS XML) target_data
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t
ON s.address = t.event_session_address
WHERE s.name = N'system_health'
and target_name = 'ring_buffer'
) base
CROSS APPLY target_data.nodes('/RingBufferTarget/*') td(r)
where td.r.value('@name','sysname') = 'wait_info';

How to create UPDATE from exported data?

I'd like to get data from one database..table into an UPDATE script that can be used with another database..table. Rather than doing export from DB1 into DB2, I need to run an UPDATE script against DB2.
If the above is not possible, is there a way to generate the UPDATE syntax from DB1..table1:
col1 = value1,
col2 = value2,
col3 = value3,
...
--- EDIT ---
Looking through the answers, there's an assumption that DB1 is available at the same time that DB2 is available. This is not the case. Each database will know nothing of the other. The two servers/databases will not be available/accessible at the same time.
Is it possible to script the table data into a flat file? Not sure how easy that will be to then get into an UPDATE statement.
Using a linked server and an update statement will really be your easiest solution as stated above, but I do understand that sometimes that isn't possible. The following is an example of dynamically building update statements. I am assuming there is no chance of SQL Injection from the "SourceData" table. If there is that possibility then you will need to use the same technique to build statements that use sp_executesql and parameters.
SELECT 'UPDATE UpdateTable ' +
' SET FieldToUpdate1 = ''' + SourceData.DataToUpdate1 + '''' +
' , FieldToUpdate2 = ' + CAST(SourceData.DataToUpdate2 AS varchar) +
' WHERE UpdateTable.PrimaryKeyField1 = ' + CAST(SourceData.PrimaryKey1 AS varchar) +
' AND UpdateTable.PrimaryKeyField2 = ''' + SourceData.PrimaryKey2 + ''''
FROM SourceData
Also here is a link to a blog I wrote on Generating multiple SQL statements from a query. It's a bit more simplistic than the type of statement you are trying to create, but it should give you an idea. Also here is an article I wrote on using Single Quotation Marks in SQL. Other than that you can go onto Google and search for "SQL Server Dynamic SQL" and you will get hundreds of blogs, articles, forum entries etc on the subject.
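For the offline case raised in the edit (DB1 and DB2 never reachable at the same time), the same statement-generation idea works from a flat file: export from DB1, then have a script emit one UPDATE per row to run against DB2 later. A sketch, with illustrative table and column names; text values get their single quotes doubled, and for untrusted data you'd want to generate parameterized statements instead:

```python
import csv
import io

def quote(value):
    # Double embedded single quotes, the standard SQL escaping rule.
    return "'" + str(value).replace("'", "''") + "'"

def make_updates(csv_text, table, key, columns):
    """Turn exported CSV rows into UPDATE statements for the target DB."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        sets = ", ".join(f"{c} = {quote(row[c])}" for c in columns)
        out.append(
            f"UPDATE {table} SET {sets} WHERE {key} = {quote(row[key])};"
        )
    return out

# A tiny stand-in for the exported flat file.
export = "id,col1,col2\n1,value1,value2\n2,O'Brien,value3\n"
for stmt in make_updates(export, "MyTable", "id", ["col1", "col2"]):
    print(stmt)
```

(The key value is quoted as a string here for simplicity; drop the quoting for numeric keys.)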
Your question needs a little more clarification to completely understand what you are trying to accomplish, but assuming the databases are on the same server, you should be able to do something like this using UPDATE and JOIN:
UPDATE a
SET col1 = value1, col2 = value2
FROM database1.schema.table a
JOIN database2.schema.table b
ON a.primaryKey = b.primaryKey
Alternatively, if they are on different servers, you could setup a linked server and it should work similarly.
I think you still want to INSERT from one table into another table in another database. You can use INSERT INTO ... SELECT:
INSERT INTO DB2.dbo.TableName(Col1, Col2, Col3) -- specify columns
SELECT Col1, Col2, Col3
FROM DB1.dbo.TableName
Assuming dbo is the schema used.
Are both databases on the same SQL-Server? In that case, use fully-qualified table names. I.e.:
Update Database1.Schema.Table
SET ...
FROM
Database2.Schema.Table
If they're not on the same server, then you can use linked servers.
I'm not sure of the SQL Server syntax, but you can do something like this to generate the update statements.
SELECT 'UPDATE mytable SET col1=' || col1 || ' WHERE pk=' || primaryKey || ';' FROM mytable;
Obviously you'll need to escape quotes, etc. depending on the value types.
I assume this is because you can't do a normal UPDATE from a SELECT?
