I am trying to migrate data from an on-prem SQL Server database to Aurora PostgreSQL using AWS DMS. The data itself migrates just fine, but for tables that have a boolean column in the primary key, validation fails with the following error:
[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared.
Upon checking the logs on the Postgres side, I find this:
ERROR: invalid input syntax for type boolean: "" at character <number>
STATEMENT: SELECT cast ("pcode" as varchar(6)) , "other_columns" , "boolean_column" FROM "db"."table" WHERE ((("boolean_column" = '' AND "pcode" > 'L7L3V9') AND ("boolean_column" = '' AND "pcode" <= 'L8L4E8'))) ORDER BY "boolean_column" ASC , "pcode" ASC
During validation, DMS fetches the records from Postgres in batches, and for each batch it uses the wrong value for "boolean_column", comparing it to '' (a blank string). I am not sure why it does this, or how to influence this behaviour so that validation completes successfully.
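For context on the Postgres error itself: an empty string is not a valid boolean literal in PostgreSQL, while the 0/1 values a SQL Server bit column actually holds map onto literals Postgres does accept. A quick demonstration against the Aurora PostgreSQL target:

-- PostgreSQL accepts 't'/'f', 'true'/'false', '1'/'0', etc. as booleans
SELECT '0'::boolean;  -- false
SELECT '1'::boolean;  -- true
-- but not the empty string, which is exactly the logged failure
SELECT ''::boolean;   -- ERROR: invalid input syntax for type boolean: ""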
The boolean column is a:
bit field on the SQL Server side
on the Postgres side, I have tried converting to (all with the same validation error as above):
numeric with precision 1
boolean
bit (after T N's comment below)
Firstly, I have seen many answers specific to the Invalid Object Name error when working with SQL Server, but none of them seem to solve my problem. I don't have much familiarity with the SQL Server dialect, but here is the current setup required on the project:
SQL Server 2017
SQLAlchemy (pyodbc+mssql)
Python 3.9
I'm trying to insert a database row using the SQLAlchemy ORM, but it fails to resolve the schema and table, giving me an error of this type:
(pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'agent.dbo.tasks'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
I'm creating a session with the following code.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine(connection_string, pool_size=10, max_overflow=5,
                       pool_use_lifo=False, pool_pre_ping=True,
                       echo=True, echo_pool=True)
db_session_manager = sessionmaker()
db_session_manager.configure(bind=engine)
session = db_session_manager()
I have a task object defined like
class Task(BaseModel):
__tablename__ = "tasks"
__table_args__ = {"schema": "agent.dbo"}
# .. field defs
I'm trying to fill the object fields and then do the insert the usual way
task = Task()
task.tid = 1
...
session.add(task)
session.commit()
But this fails, with the error mentioned before. I tried to execute a direct query like
session.execute("SELECT * FROM agent.dbo.tasks")
And it returned a result set.
The connection string is a URL object, which prints like
mssql+pyodbc://task-user:******@CRD4E0050L/agent?driver=ODBC+Driver+17+for+SQL+Server
I tried to use SQL Server Management Studio to insert manually and check; there it showed me SQL with [] as identifier delimiters, like
INSERT INTO [dbo].[tasks]
([tid]..
, but SQLAlchemy's echo did not show that; instead it used the unbracketed form I see with MySQL, like
INSERT INTO agent.dbo.tasks (onbase_handle_id,..
What is it that I'm doing wrong? I thought SQLAlchemy, if configured with a supported dialect, should work fine (I use it with MySQL quite happily). Am I missing any configuration? Any help is appreciated.
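One thing worth checking, purely as an assumption on my part: the connection URL already selects the agent database, so the schema argument may only need the owner part; with the dotted "agent.dbo" form, the whole string can end up quoted as a single identifier. A minimal sketch of that variant (assuming SQLAlchemy 1.4+ for the import path):

from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

BaseModel = declarative_base()

class Task(BaseModel):
    __tablename__ = "tasks"
    # The URL path (/agent) already selects the database, so pass only
    # the owner here instead of the dotted "agent.dbo" form.
    __table_args__ = {"schema": "dbo"}
    tid = Column(Integer, primary_key=True)  # remaining field defs as before

If the database name really must travel in the schema argument, the SQL Server dialect also documents a bracketed multipart form, schema="[agent].[dbo]", which it splits into database and owner components rather than quoting it as one name.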
Getting an error when trying to run a SQL command in an SSIS package.
Task: DataFlow Task
Connection: ADO.NET
Data access mode: SQL command
SQL text:
select * from table where field1 = ? and field2 = ?
Error:" No value given for one or more required parameters"
More Information:
Execute Sql task in package:
(General tab)
- Connection: ADO.NET
- SQL Statement: exec storedprocedureX ?,?
(Parameter Mapping tab)
User::field1 , Input , String , 0 , -1
User::field2, Input, String, 1, -1
Variables set in package
field1 value 12C
field2 value 15A
What am I missing that is causing the variable values to not be read at Data flow level? I have no problem at the Execute SQL task level.
An OLE DB Command in the data flow is different from the Execute SQL Task in the control flow. You seem to be describing the Execute SQL Task correctly.
To use a variable in the data flow, you need to add it to the data flow itself; the easiest way is a Derived Column with an expression. Add a Derived Column to your Data Flow before the OLE DB Command and configure it as follows:
Derived Column Name: field1
Derived Column: add as new column
Expression: @[User::field1]
Then in the OLE DB Command, under Column Mappings, map the columns as Input Column: field1, Destination Column: Param_0, and so on.
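Concretely, the command text in the OLE DB Command stays parameterized just like the control-flow version; only the source of the parameter values changes. A sketch using this question's names:

-- SqlCommand property of the OLE DB Command
exec storedprocedureX ?, ?
-- Column Mappings tab: field1 -> Param_0, field2 -> Param_1
-- (both fed by the Derived Column output, not directly by variables)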
In my project I am rebuilding my Access database as a SQL Server database, so I am transferring the Access data to the SQL database. I made sure they both have the same structure and that the Access fields are mapped correctly in the SQL database.
For most of the data this works, except for one table, which gives me the following weird error message:
OLE DB provider 'Microsoft.ACE.OLEDB.12.0' for linked server 'OPS_JMD_UPDATE' returned data that does not match expected data length for column '[OPS_JMD_UPDATE]...[OrderStatus].Omschrijving'. The (maximum) expected data length is 100, while the returned data length is 21.
So here is some more information about both the Access and SQL field/column:
Access type: Short text
SQL type: nvarchar(MAX)
Access column data: normal letters, with & - % é + € . , : being the 'not normal' ones
A few empty Access records (which is allowed)
A total of 135314 records in the Access table
I've set the SQL data type to nvarchar(MAX) so that the field can never be too small; this didn't seem to help though.
*The OPS_JMD_UPDATE is the linked Access database
What causes this problem? Is it because some characters aren't allowed or..?
There was one record that generated the error. I pinned down the exact record with a TOP select and a DESC select, and then used SELECT with ASCII and REPLACE to remove the error! Solved thanks to xQbert, once more thank you!
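For anyone hitting the same thing, a rough sketch of that bisection approach; the object name is the one from the error message, while the TOP size and the CHAR(0) culprit are assumptions to adapt:

-- Bisect: raise or lower N until the error appears, then repeat the
-- scan from the other end with ORDER BY ... DESC to pin the record.
SELECT TOP (1000) Omschrijving
FROM OPS_JMD_UPDATE...[OrderStatus]
ORDER BY Omschrijving ASC;

-- Inspect the leading character code of the suspect value and strip
-- the offender (CHAR(0) is only an example of a likely culprit).
SELECT ASCII(LEFT(Omschrijving, 1)) AS FirstCharCode,
       REPLACE(Omschrijving, CHAR(0), '') AS Cleaned
FROM OPS_JMD_UPDATE...[OrderStatus];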
I have tried some examples, but so far nothing is working.
I have a linked server (SQL Server 2014) to an Oracle 12c database.
The table contains a TIMESTAMP column with data like this:
22-MAR-15 04.18.24.144789000 PM
When attempting to query this table in SQL Server 2014 via the linked server, I get the following error using this code:
SELECT CAST(OracleTimeStampColumn AS DATETIME2(7)) FROM linkServerTable
Error:
Msg 7354, Level 16, State 1, Line 8
The OLE DB provider "OraOLEDB.Oracle" for linked server "MyLinkServer" supplied invalid metadata for column "MyDateColumn". The data type is not supported.
While the error is self-explanatory, I am not certain how to resolve it.
I need to convert the timestamp to datetime2. Is this possible?
You can work around this problem by using OPENQUERY. For me, connecting to Oracle 12 from SQL 2008 over a linked server, this query fails:
SELECT TOP 10 TimestampField
FROM ORACLE..Schema.TableName
...with this error:
The OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLE" supplied invalid metadata for column "TimestampField". The data type is not supported.
This occurs even if I do not include the offending column (which is of type TIMESTAMP(6)). Explicitly casting it to DATETIME does not help either.
However, this works:
SELECT * FROM OPENQUERY(ORACLE, 'SELECT "TimestampField" FROM SchemaName.TableName WHERE ROWNUM <= 10')
...and the data returned flows nicely into a DATETIME2() field.
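For example, the OPENQUERY result can be loaded straight into a local table (dbo.LocalCopy here is a hypothetical target with a DATETIME2(7) column):

INSERT INTO dbo.LocalCopy (TimestampField)
SELECT TimestampField
FROM OPENQUERY(ORACLE, 'SELECT "TimestampField" FROM SchemaName.TableName WHERE ROWNUM <= 10');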
One way to solve the problem is to create a view on the Oracle server that converts OracleTimeStampColumn into something compatible with SQL Server's datetime2 data type. In the view you can format the timestamp as a 24-hour string, which Oracle returns as varchar2. Then you can cast that varchar2 column to datetime2 when selecting it in SQL Server.
In Oracle Server
Create or Replace View VW_YourTableName As
select to_char(OracleTimeStampColumn , 'DD/MM/YYYY HH24:MI:SS.FF') OracleTimeStampColumn from YourTableName
In SQL Server
SELECT CAST(OracleTimeStampColumn AS DATETIME2(7)) FROM linkServerView
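Spelled out with four-part naming against the linked server from the question (the Oracle schema name is a placeholder):

SELECT CAST(OracleTimeStampColumn AS DATETIME2(7))
FROM MyLinkServer..SCHEMANAME.VW_YourTableName;

Note that CAST on a 'DD/MM/YYYY ...' string depends on the session's date format; CONVERT(DATETIME2(7), OracleTimeStampColumn, 103) pins the day-first style explicitly.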
I am working in SQL Server 2008 and BIDS (SSIS). I am trying to generate a "load ID" for when a package is executed and store that ID in a load history table (which then populates subsequent tables).
My basic SSIS control flow is the following:
Execute SQL Task, Data Flow Task
The load table is created via the following:
CREATE TABLE dbo.LoadHistory
(
LoadHistoryId int identity(1,1) NOT NULL PRIMARY KEY,
LoadDate datetime NOT NULL
);
The editor for the Execute SQL Task is as follows:
General:
ResultSet = None
ConnectionType = OLE DB
SQLStatement:
INSERT INTO dbo.LoadHistory (LoadDate) VALUES(@[System::StartTime]);
SELECT ? = SCOPE_IDENTITY()
Parameter Mapping:
Variable Name = User::LoadID
Direction = Output
Data Type = LONG
Parameter Name = 0
Parameter Size = -1
SSIS is throwing the following error:
[Execute SQL Task] Error: Executing the query "INSERT INTO dbo.LoadHistory
..." failed with the following error: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
This error message doesn't really help me find the problem. My best guess is that it's due to the parameter mapping, but I don't see my mistake. Can anybody point out my problem and provide the fix?
I figured out my problem. System::StartTime needs to have DATE as its data type, not DBTIMESTAMP.
I was passing three parameters.
In the Parameter Name property I had:
0
1
3
Corrected it to:
0
1
2
It works now; no more "Multiple-step OLE DB operation generated errors" message.