I'm working on a Shiny app for updating entries maintained in a remote SQL Server 2008 database. I've been connecting to the database using RODBC, and I'm attempting to use parameterized queries through RODBCext to support mass updates of information.
I'm able to get the parameterized queries to work from my Windows 7 machine (RStudio, R 3.2.3), but for some reason, when I run the same code on a Linux machine running the same version of R and connecting with the same version of the driver, I get the following error:
Error in sqlExecute(Connection, data = dat) :
42000 402 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The data types char and text are incompatible in the equal to operator.
42000 8180 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared.
[RODBCext] Error: SQLExecute failed
In addition: Warning messages:
1: In sqlExecute(Connection, data = dat) :
42000 402 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]The data types char and text are incompatible in the equal to operator.
2: In sqlExecute(Connection, data = dat) :
42000 8180 [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Statement(s) could not be prepared.
Here is the simple example code that works properly on my Windows machine but not on the Linux machine (I removed the connection string information):
library(RODBCext)
Connection <- odbcDriverConnect(paste('Driver=ODBC Driver 13 for SQL Server',
                                      'Server=<Server IP>', 'Port=<Port>',
                                      'Database=<Database>', 'UID = <UserID>',
                                      'PWD=<Password>', sep = ';'))
dat <- data.frame(Node_ID = "999", NodeGUID = "AF213171-201B-489B-B648-F7D289B735B1")
query <- "UPDATE dbo.Nodes SET Node_ID = ? WHERE NodeGUID = ?"
sqlPrepare(Connection, query)
sqlExecute(Connection, data = dat)
In this example, the data frame is created with the columns as factors. I've tried explicitly casting the columns to character first, since that seemed to work for users who had trouble with dates, but it still results in the same SQL error. I've also tried casting Node_ID to numeric to match the SQL table, with the same result (a rough sketch of those casts follows the column definitions below). The columns in the Nodes table in SQL are defined as:
NodeGUID (PK, char(36), not null)
Node_ID (int, null)
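For reference, the casts I tried looked roughly like this:

# explicit character casts before calling sqlExecute()
dat$Node_ID  <- as.character(dat$Node_ID)
dat$NodeGUID <- as.character(dat$NodeGUID)

# and, separately, casting Node_ID to numeric to match the int column in SQL
dat$Node_ID <- as.numeric(dat$Node_ID)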
I've also tried combining the sqlPrepare and sqlExecute calls by supplying the query argument to sqlExecute(); from what I understand that is a trivial difference, and it results in the same error.
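That is, roughly:

# prepare and execute in a single call by passing the query directly
sqlExecute(Connection, query = query, data = dat)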
I suspect there must be a difference in the drivers and how they implement whatever SQL calls sqlExecute() makes. I also suspect sqlExecute() must handle the data types, as my results don't change regardless of the column types.
Thank you for any help you can provide!
Thanks to everyone who took a look at my question.
One of the SQL Server folks at my job was able to solve the issue. They suggested explicitly casting the arguments in the SQL query written for sqlExecute(). Here's the code that works. Note that I know the GUID will always be 36 characters, and I'm confident the rest of the arguments I use this query for will be shorter than 1000 characters when converted to strings:
my_query <- "UPDATE dbo.Nodes SET Node_ID = CAST(? As varchar(1000)) WHERE NodeGUID = CAST(? As varchar(36))"
sqlExecute(Connection, data = dat, query = my_query)
I'm guessing the Windows driver somehow handles the cast from text to varchar, but the Linux driver does not.
I hope this helps others working with RODBCext. Thanks to Mateusz Zoltak and the team of contributors for a great package!
I am trying to append records from a dataframe in R to an established SQL data table using the odbc::dbWriteTable() function. This is a function I use for many workflows to append records to various database tables.
Specifically:
odbc::dbWriteTable(connection, DBI::SQL(glue("{database}.{schema}.{table}")), value = dataframe, append = TRUE)
The dataframe and the target SQL table share the same column names and variable types.
However, when I attempt to run the function and append the data records, I receive the following error:
Error in result_insert_dataframe(rs@ptr, values, batch_rows) :
nanodbc/nanodbc.cpp:####: ######: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column name 'row_names'. [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared.
The dataframe contains no row names. Why is the column name "row_names" being generated, and is there a way to ensure this column name is not generated? Many thanks in advance for any suggestions!
For anyone having similar issues, the answer was very simple. I just needed to add
row.names=FALSE
to the import function:
odbc::dbWriteTable(connection, DBI::SQL(glue("{database}.{schema}.{table}")), value = dataframe, append = TRUE, row.names=FALSE)
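As an aside (I haven't tested it against this particular table), DBI::dbAppendTable() should also sidestep the issue, since it appends rows and never writes a row_names column:

# alternative: dbAppendTable() appends rows and does not add row names
DBI::dbAppendTable(connection,
                   DBI::SQL(glue("{database}.{schema}.{table}")),
                   value = dataframe)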
First, I have seen many answers specific to the "Invalid object name" error when working with SQL Server, but none of them seems to solve my problem. I don't know much about the SQL Server dialect, but here is my current setup for the project:
SQL Server 2017
SQLAlchemy (pyodbc+mssql)
Python 3.9
I'm trying to insert a database row using the SQLAlchemy ORM, but it fails to resolve the schema and table, giving me an error of this type:
(pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC Driver
17 for SQL Server][SQL Server]Invalid object name 'agent.dbo.tasks'.
(208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL
Server][SQL Server]Statement(s) could not be prepared. (8180)")
I'm creating a session with the following code.
engine = create_engine(connection_string, pool_size=10, max_overflow=5,
                       pool_use_lifo=False, pool_pre_ping=True,
                       echo=True, echo_pool=True)
db_session_manager = sessionmaker()
db_session_manager.configure(bind=engine)
session = db_session_manager()
I have a task object defined like
class Task(BaseModel):
    __tablename__ = "tasks"
    __table_args__ = {"schema": "agent.dbo"}
    # .. field defs
I'm trying to fill in the object fields and then do the insert as usual:
task = Task()
task.tid = 1
...
session.add(task)
session.commit()
But this fails with the error mentioned above. I tried executing a direct query like
session.execute("SELECT * FROM agent.dbo.tasks")
And it returned a result set.
The connection string is a URL object, which prints as
mssql+pyodbc://task-user:******@CRD4E0050L/agent?driver=ODBC+Driver+17+for+SQL+Server
I tried using SQL Server Management Studio to insert a row manually and check; there it showed me SQL with [] as identifier delimiters, like
INSERT INTO [dbo].[tasks]
([tid]..
but SQLAlchemy's echo output did not show that; instead it used the style I see with MySQL, like
INSERT INTO agent.dbo.tasks (onbase_handle_id,..
What am I doing wrong? I thought SQLAlchemy, configured with a supported dialect, should work fine (I use it with MySQL quite well). Am I missing some configuration? Any help is appreciated.
I configured Oracle Heterogeneous Services to access SQL Server using the ODBC driver from Microsoft.
It works, but some queries on a specific table return the following error message, for example:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Attempt to access a column 'UtilizaMetrica_DescontoComerci'. {42S22,NativeErr = 207}[Microsoft][ODBC Driver 11 for SQL Server][SQL Server]
The actual column name, 'UtilizaMetrica_DescontoComercial', has 32 characters, but it is truncated to 30 characters in the returned message.
It seems that OHS has a limitation on the length of a column name (30 characters).
The workaround is to shorten the name to an acceptable length, either by defining a shorter alias for that column or by using a view to do the same thing.
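For example (the table name below is made up; the column name comes from the error message):

-- shorter alias in the query sent through the gateway
SELECT UtilizaMetrica_DescontoComercial AS UtilizaMetrica_DescComl
FROM dbo.SomeTable;

-- or a view on the SQL Server side that only exposes names of 30 characters or fewer
CREATE VIEW dbo.SomeTable_ForOracle AS
SELECT UtilizaMetrica_DescontoComercial AS UtilizaMetrica_DescComl
FROM dbo.SomeTable;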
Due to security policy, I need to use a stored procedure (MS SQL Server) as the source of a ClickHouse external dictionary via ODBC.
According to the ClickHouse documentation, only a table (or view) can be used, although ODBC itself allows calling stored procedures.
<odbc>
    <db>DatabaseName</db>
    <table>TableName</table>
    <connection_string>DSN=some_parameters</connection_string>
    <invalidate_query>SQL_QUERY</invalidate_query>
</odbc>
When I tried "{CALL my_procedure_name}" in <table>, I got this error:
Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = ODBC handle exception: Failed to get number of columns: Connection:NetConn: 000000001760E400
Server:OMEGA_DSN
===========================
ODBC Diagnostic record #1:
===========================
SQLSTATE = 42S02
Native Error Code = 208
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name "{CALL my_procedure_name}".

===========================
ODBC Diagnostic record
Can anybody suggest a solution or workaround?
This is most likely due to an incompatibility in the ODBC driver. You could try testing the DSN with isql: https://www.mankier.com/1/isql
As for a workaround, I would start by wrapping the stored procedure in a view.
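SQL Server views cannot call a stored procedure directly, so in practice that means either rewriting the procedure's final SELECT as a view, or selecting from the procedure through OPENQUERY on a loopback linked server. A rough sketch, where LOOPBACK and MyDb.dbo.my_procedure_name are placeholders for your own linked server and procedure:

-- LOOPBACK is an assumed loopback linked server pointing back at this instance
CREATE VIEW dbo.my_procedure_as_view AS
SELECT *
FROM OPENQUERY(LOOPBACK, 'EXEC MyDb.dbo.my_procedure_name');

The ClickHouse <table> element can then point at dbo.my_procedure_as_view.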
I have a connection using the DBLIB PDO driver, and I am not getting any errors on connecting, but when I run a query an exception is thrown with the error below. I have played around with the syntax of the query as well. I am connecting to a MS SQL Server:
SQLSTATE[HY000]: General error: 208 General SQL Server error: Check messages from the SQL Server [208] (severity 16) [SELECT PCO_INBOUNDLOG.PHONE FROM PCO_INBOUNDLOG]
The code:
$sql = "SELECT PCO_INBOUNDLOG.PHONE FROM PCO_INBOUNDLOG";
foreach($this->mssql->query($sql) as $row) {
    print_r($row);
}
This is the first time I have ever run a query against a MS SQL Server, so my syntax may be wrong. Any ideas?
First, find out what error 208 means:
select * from sys.messages where message_id = 208
Second, check the FROM syntax (including the examples!) and object identifier rules.
Third, write the query correctly:
SELECT PHONE FROM PCO_INBOUNDLOG
Or, probably better (because it's good practice to include the schema name):
SELECT PHONE FROM dbo.PCO_INBOUNDLOG
Or even:
SELECT p.PHONE FROM dbo.PCO_INBOUNDLOG p