Due to a security policy I need to use a stored procedure (MS SQL Server) as the source of a ClickHouse external dictionary via ODBC.
According to the ClickHouse documentation, only a table (or view) can be used as a source, although ODBC itself allows calling stored procedures.
<odbc>
    <db>DatabaseName</db>
    <table>TableName</table>
    <connection_string>DSN=some_parameters</connection_string>
    <invalidate_query>SQL_QUERY</invalidate_query>
</odbc>
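Here SQL_QUERY stands for a cheap staleness check. A typical (illustrative) value, assuming the source table has an update_time column, would be:
SELECT max(update_time) FROM TableName
ClickHouse re-runs this query periodically and reloads the dictionary only when the result changes.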
When I tried "{CALL my_procedure_name}" as the <table> value, I got this error:
Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = ODBC handle exception: Failed to get number of columns: Connection:NetConn: 000000001760E400
Server:OMEGA_DSN
===========================
ODBC Diagnostic record #1:
===========================
SQLSTATE = 42S02
Native Error Code = 208
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name "{CALL my_procedure_name}".

===========================
ODBC Diagnostic record
Can anybody suggest a solution or workaround?
This is most likely an incompatibility in the ODBC driver. You can test the DSN outside ClickHouse with unixODBC's isql (https://www.mankier.com/1/isql), e.g. isql -v <DSN> <UID> <PWD>.
As for a workaround, I would start by wrapping the stored procedure in a view.
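SQL Server cannot reference EXEC directly in a view, but a loopback linked server plus OPENQUERY can expose the procedure's result set as a table. A minimal sketch, assuming a parameterless procedure and a pre-configured loopback linked server named LOOPBACK (both names are illustrative):
-- LOOPBACK is a hypothetical linked server pointing back at this instance
CREATE VIEW dbo.my_procedure_view AS
SELECT * FROM OPENQUERY(LOOPBACK, 'EXEC DatabaseName.dbo.my_procedure_name');
The dictionary's <table> element can then point at my_procedure_view like any ordinary view.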
Firstly, I have seen many answers specific to the Invalid Object Name error in SQL Server, but none of them seem to solve my problem. I don't know much about the SQL Server dialect, but here is the current setup required on the project.
SQL Server 2017
SQLAlchemy (pyodbc+mssql)
Python 3.9
I'm trying to insert a database row using the SQLAlchemy ORM, but it fails to resolve the schema and table, giving me an error of this type:
(pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC Driver
17 for SQL Server][SQL Server]Invalid object name 'agent.dbo.tasks'.
(208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL
Server][SQL Server]Statement(s) could not be prepared. (8180)")
I'm creating a session with the following code.
engine = create_engine(connection_string, pool_size=10, max_overflow=5,
                       pool_use_lifo=False, pool_pre_ping=True, echo=True, echo_pool=True)
db_session_manager = sessionmaker()
db_session_manager.configure(bind=engine)
session = db_session_manager()
I have a task object defined like
class Task(BaseModel):
    __tablename__ = "tasks"
    __table_args__ = {"schema": "agent.dbo"}
    # .. field defs
I'm trying to fill the object fields and then do the insert as usual:
task = Task()
task.tid = 1
...
session.add(task)
session.commit()
But this fails with the error mentioned before. I tried to execute a direct query like
session.execute("SELECT * FROM agent.dbo.tasks")
And it returned a result set.
The connection string is a URL object, which prints like
mssql+pyodbc://task-user:******@CRD4E0050L/agent?driver=ODBC+Driver+17+for+SQL+Server
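For reference, a URL like this can be built with SQLAlchemy's URL helper (1.4+) instead of string concatenation; a minimal sketch using the placeholder values from above:
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

# placeholder credentials and host from above
connection_string = URL.create(
    "mssql+pyodbc",
    username="task-user",
    password="<password>",  # masked above
    host="CRD4E0050L",
    database="agent",
    query={"driver": "ODBC Driver 17 for SQL Server"},
)
engine = create_engine(connection_string)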
I used SQL Server Management Studio to insert a row manually and check; there the generated SQL used [] as identifier delimiters, like
INSERT INTO [dbo].[tasks]
([tid]..
but SQLAlchemy's echo did not show that; instead it used the unbracketed form I know from MySQL, like
INSERT INTO agent.dbo.tasks (onbase_handle_id,..
What am I doing wrong? I thought SQLAlchemy, configured with a supported dialect, should work fine (I use it with MySQL quite well). Am I missing any configuration? Any help appreciated.
I have a table stored in SQL Server (Management Studio version 15.0.18369.0) and a working, established ODBC connection to the SAS-language program World Programming Software version 3.4.
Previous imports/reads of this data were successful, but the operator of the SQL server may recently have converted the data type to NVARCHAR(max). It may also be due to a driver change (I got a new laptop and reinstalled what I thought was the exact same ODBC for SQL Server driver as I had before, but who knows); I have the 64-bit ODBC Driver 17 for SQL Server.
The NVARCHAR(max) type in SQL causes the data to come through only 1 character long in every column.
I have tried to fix it by:
Adding the DB max text option to libname
libname odbclib odbc dsn="xxx" user="xxx" password="xxx" DBMAX_TEXT=8000;
This did nothing, so I also tried to add the DBTYPE data set option:
data mydata (dbtype=(mycol='char(25)'));
set odbclib.'sql data'n;
run;
And I get ERROR:
The option dbtype is not a valid output data set option.
I have also tried DBSASTYPE, and putting both options in the set statement and this yields the same error.
I also tried with proc SQL:
proc sql noprint;
CONNECT TO ODBC(dsn="xxx" user="xxx" password="xxx");
create table extract(compress=no dbsastype=(mycol='CHAR(20)')) as
select * from connection to odbc
( select * from dbo.'sql data'n );
disconnect from odbc;
quit;
And I get
NOTE: Connected to DB: BB64 (Microsoft SQL Server version 12.00.2148)
NOTE: Successfully connected to database ODBC as alias ODBC.
3915  create table extract(compress=no dbsastype=(mycol='CHAR(20)')) as
3916  select * from connection to odbc
3917  (
3918  select * from dbo.'sql data'n
ERROR: The option dbsastype is not a valid output data set option
3919
3920  );
3921  disconnect from odbc;
NOTE: ERRORSTOP was specified. The statement will not be executed due to an earlier error
NOTE: Statements not executed because of errors detected
3922  quit;
One thing to try would be putting the DBTYPE or DBSASTYPE in the right place.
data mydata;
set odbclib.'sql data'n(dbtype=(mycol='char(25)'));
run;
It is an option that goes on the "set" line, not the "data" line.
I would also encourage you to contact the developers, as it seems like this is a bug in their implementation; or perhaps try a different ODBC SQL driver.
A workaround in the meanwhile might be to use CONVERT in the pass-through SQL:
proc sql noprint;
CONNECT TO ODBC(dsn="xxx" user="xxx" password="xxx");
create table extract as
select * from connection to odbc
( select a, b, c, convert(NCHAR(20), mycol) as mycol from dbo.'sql data'n );
disconnect from odbc;
quit;
Oracle version 12.1.0.2
max_string_size=extended
I am using the SQL Server ODBC driver to connect to a SQL Server database via the Oracle gateway to SQL Server; the connection works fine and I am able to access SQL Server tables.
However, per the Oracle documentation, starting with 12c and its extended limit on the VARCHAR2 data type, the conversion of SQL Server varchar(max) to Oracle LONG should only happen when the length of the SQL Server data exceeds 32k.
My SQL Server table has a few columns defined as varchar(max), and I see all of them converted to LONG when I describe the table over the dblink.
I need to load the data from SQL Server into Oracle, and the above makes that very difficult, as more than one LONG column cannot be copied over a dblink.
Any help will be deeply appreciated.
I created a view on the SQL Server side that uses substring(column, 1, 4000) to fit within the old Oracle maximum length of 4000 characters. This worked quite well with Oracle 11.
I am in the process of migrating to a new Oracle 18 instance that uses character set AL32UTF8 instead of WE8MSWIN1252. The exact same SQL is now getting:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Microsoft][ODBC Driver Manager] Program type out of range {HY003}
ORA-02063: preceding 2 lines from CEAV195
Fortunately I don't have a tight deadline for working this out.
Comment: I am now getting
[Error] Execution (8: 17): ORA-00997: illegal use of LONG datatype
despite using the following in the view on the SQL Server side:
cast(substring(cr.response,1,2000) as varchar(2000)) response
As I said earlier, this worked perfectly fine with Oracle 11 and the WE8MSWIN1252 character set.
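For context, the view on the SQL Server side is shaped roughly like this (the table name and key column are illustrative; cr.response is the real column from the cast above):
CREATE VIEW dbo.v_responses AS
SELECT cr.response_id,  -- illustrative key column
       CAST(SUBSTRING(cr.response, 1, 2000) AS varchar(2000)) AS response
FROM dbo.client_responses cr;  -- illustrative table name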
I hit the same issue and found this solution elsewhere:
set serveroutput on
DECLARE
  l_cursor BINARY_INTEGER;
  l_id     VARCHAR2(60);
  l_notes  VARCHAR2(32767);
BEGIN
  l_cursor := DBMS_HS_PASSTHROUGH.open_cursor@remotedb;
  DBMS_HS_PASSTHROUGH.parse@remotedb(
    l_cursor,
    'select "RecId","Notes" from "MySqlServerTable"'
  );
  -- fetch_row returns the number of rows fetched; 0 means no more rows
  WHILE DBMS_HS_PASSTHROUGH.fetch_row@remotedb(l_cursor) > 0 LOOP
    -- get_value reads each column of the current row into a PL/SQL variable
    DBMS_HS_PASSTHROUGH.get_value@remotedb(l_cursor, 1, l_id);
    DBMS_HS_PASSTHROUGH.get_value@remotedb(l_cursor, 2, l_notes);
    DBMS_OUTPUT.put_line(l_id || ' ' || l_notes);
  END LOOP;
  DBMS_HS_PASSTHROUGH.close_cursor@remotedb(l_cursor);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_HS_PASSTHROUGH.close_cursor@remotedb(l_cursor);
    RAISE;
END;
/
I have a huge (26 GB) SQLite database that I want to import into SQL Server with SSIS.
I have everything set up correctly, and some of the data flows are working and importing their data.
The data flows are simple; they consist of just a source and a destination.
But when it comes to a table with 80 million rows, the data flow fails with this unhelpful message:
Code: 0xC0047062
Source: Data Flow Task Source 9 - nibrs_bias_motivation [55]
Description: System.Data.Odbc.OdbcException (0x80131937): ERROR [HY000] unknown error (7)
at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)
at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Odbc.OdbcCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.PreExecute()
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPreExecute(IDTSManagedComponentWrapper100 wrapper)
Before the task fails, memory usage climbs to 99%, which made me think it's a memory issue, but I don't know how to solve it.
I tried setting DelayValidation to true on all data flow tasks. Nothing changed.
I played with the buffer sizes. Nothing.
What can I do?
Step-by-step guide
Since the error is thrown when reading from a large dataset, try reading the data in chunks. To achieve that you can follow these steps:
Declare 2 variables of type Int32 (@[User::RowCount] and @[User::IncrementValue])
Add an Execute SQL Task that executes a SELECT COUNT(*) command and stores the result set in the variable @[User::RowCount]
Add a For Loop container; a typical configuration (assuming 500000-row chunks, to match the LIMIT below) is InitExpression @[User::IncrementValue] = 0, EvalExpression @[User::IncrementValue] < @[User::RowCount], and AssignExpression @[User::IncrementValue] = @[User::IncrementValue] + 500000
Inside the For Loop container add a Data Flow Task
Inside the Data Flow Task add an ODBC Source and an OLE DB Destination
In the ODBC Source select the SQL Command option and write a SELECT * FROM MYTABLE query (to retrieve metadata only)
Map the columns between source and destination
Go back to the Control Flow, click on the Data Flow Task, and hit F4 to open the Properties window
In the Properties window go to Expressions and assign the following expression to the [ODBC Source].[SQLCommand] property (for more info refer to How to pass SSIS variables in ODBC SQLCommand expression?):
"SELECT * FROM MYTABLE ORDER BY ID_COLUMN
LIMIT 500000
OFFSET " + (DT_WSTR,50)@[User::IncrementValue]
where MYTABLE is the source table name and ID_COLUMN is your primary key or identity column.
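For example, on the third loop iteration, when @[User::IncrementValue] has been advanced to 1000000, the expression evaluates to the plain SQLite query:
SELECT * FROM MYTABLE ORDER BY ID_COLUMN
LIMIT 500000
OFFSET 1000000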
References
ODBC Source - SQL Server
How to pass SSIS variables in ODBC SQLCommand expression?
HOW TO USE SSIS ODBC SOURCE AND DIFFERENCE BETWEEN OLE DB AND ODBC?
SQLite Limit
I'm working on a Shiny app for updating entries maintained in a remote SQL Server 2008 DB. I've been connecting to the DB using RODBC, and I'm attempting to use parameterized queries through RODBCext to support mass updates of information.
I can get the parameterized queries to work from my Windows 7 machine (RStudio, R 3.2.3), but for some reason, when I run the same code from a Linux machine running the same version of R and connecting with the same version of the driver, I get the following error:
Error in sqlExecute(Connection, data = dat) :
42000 402 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The data types char and text are incompatible in the equal to operator.
42000 8180 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared.
[RODBCext] Error: SQLExecute failed
In addition: Warning messages:
1: In sqlExecute(Connection, data = dat) :
42000 402 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The data types char and text are incompatible in the equal to operator.
2: In sqlExecute(Connection, data = dat) :
42000 8180 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared.
Here is the simple example code that works properly on my Windows machine but not on the Linux machine (connection string information removed):
library(RODBCext)
Connection <- odbcDriverConnect(paste('Driver=ODBC Driver 13 for SQL Server',
                                      'Server=<Server IP>', 'Port=<Port>',
                                      'Database=<Database>', 'UID = <UserID>',
                                      'PWD=<Password>', sep = ';'))
dat <- data.frame(Node_ID = "999", NodeGUID = "AF213171-201B-489B-B648-F7D289B735B1")
query <- "UPDATE dbo.Nodes SET Node_ID = ? WHERE NodeGUID = ?"
sqlPrepare(Connection, query)
sqlExecute(Connection, data = dat)
In this example the data frame is created with the columns as factors. I've tried explicitly casting the columns to character first, as that seemed to work for users having trouble with dates, but it still results in the same SQL error. I've also tried casting Node_ID to numeric to match the SQL table, and I get the same error. The columns in the Nodes table in SQL are defined as:
NodeGUID (PK, char(36), not null)
Node_ID (int, null)
I've tried combining the sqlPrepare and sqlExecute calls by supplying the query argument to sqlExecute (i.e., sqlExecute(Connection, query, data = dat)); from what I understand that's a trivial difference, and it results in the same error.
I suspect there must be a difference in the drivers and how they implement whatever SQL calls sqlExecute() makes. I also suspect sqlExecute() must handle the data types, as my results don't change regardless of the column types.
Thank you for any help you can provide!
Thanks to everyone who took a look at my question.
One of the SQL Server folks at my job was able to solve the issue. They suggested explicitly casting the arguments in the SQL query written for sqlExecute(). Here's the code that works; note that I know the GUID will always be 36 characters, and I'm confident the rest of the arguments I use this query for will be under 1000 characters when converted to strings:
my_query <- "UPDATE dbo.Nodes SET Node_ID = CAST(? As varchar(1000)) WHERE NodeGUID = CAST(? As varchar(36))"
sqlExecute(Connection, data = dat, query = my_query)
I'm guessing the driver for Windows somehow handles the cast from text to varchar, but the Linux driver does not.
I hope this helps others working with RODBCext. Thanks to Mateusz Zoltak and the team of contributors for a great package!