When I export my tables from MS Access 2007 to SQL Server 2014 via the ODBC driver, all tables transfer normally except those that include a date/time field, which generate code like the following:
... "tt" datetime2(⪎Ѱ�撵)
These characters don't survive copy-and-paste into this message (screenshot from Profiler: http://i.stack.imgur.com/XwxVH.png).
When I convert the date/time fields to strings in Access, everything exports normally.
How can I fix this?
If your fields in SQL Server are of data type DateTime2, change these to DateTime.
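For example, a minimal T-SQL sketch of that change, assuming the "tt" column from the generated DDL above; the table name dbo.MyTable is hypothetical:
-- Change the column from datetime2 to datetime so the ODBC export maps cleanly
ALTER TABLE dbo.MyTable ALTER COLUMN tt datetime NULL;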
I have an Access 2019 front-end database that links to a SQL Server 2017 Express database.
I'd like to export a table or query from VBA code in the front-end into an Access (Jet) format database (as a portable data format to use for updating a remote site).
The code I've tried (for a table called FileLocation) is:
Access.DBEngine.CreateDatabase "C:\Temp\ExportTest3.mdb", DB_LANG_GENERAL
DoCmd.TransferDatabase TransferType:=acExport, DatabaseType:="Microsoft Access", DatabaseName:="C:\Temp\ExportTest3.mdb", ObjectType:=acTable, Source:="FileLocation", Destination:="FileLocation", StructureOnly:=False
This "works" but the table created in the ExportTest3 database is a link to the SQL database (with the Connect property set in MSysObjects) so is dependant on the SQL Server connection, but I'm looking for an independant portable .mdb file that can be read on any PC.
Edit: I've discovered that I can use
DoCmd.RunSQL "SELECT * INTO FileLocationLocal FROM FileLocation"
and then use TransferDatabase to export the FileLocationLocal table as a non-linked table
But is there a way to do this as a single step, or is there a better approach?
Consider:
Access.DBEngine.CreateDatabase "C:\Temp\ExportTest3.mdb", DB_LANG_GENERAL
CurrentDb.Execute "SELECT FileLocation.* INTO FileLocation IN 'C:\Temp\ExportTest3.mdb' FROM FileLocation"
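This performs the export in a single step: CreateDatabase creates the empty .mdb, and the SELECT ... INTO ... IN statement writes the FileLocation data into it as a local, non-linked table, so the resulting file is independent of the SQL Server connection.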
I've migrated a database formerly in Access to SQL Server and am now rebuilding my Access front end to work with that SQL Server back end using a DSN-less link. I'm running into issues with new data entry in my time field. The error I get is: ODBC--update on a linked table...failed. [Microsoft][ODBC Driver 17 for SQL Server]Invalid character value for cast specification (#0). I'm assuming this has to do with the way Access converts the SQL Server time(0) data type into Short Text.
My question is what is the best way to handle "time" data to work in both Access and SQL Server? Ideally users would enter data in Access simply as something like "0130" where this means "1 hour and 30 minutes" (we never record seconds). And ideally the data in SQL Server would be formatted in some sort of time or datetime/datetime2 format.
I'm in a position to modify the formatting or code of the Access front end or the SQL Server back end (or both)--what's the cleanest way to go about this?
The best way is to use data type DateTime in SQL Server. Any ODBC driver will read and write that from Access as the native DateTime of VBA.
If you must use DateTime2 in SQL Server, you must install and use one of the newer ODBC drivers, not the legacy "SQL Server" ODBC driver that comes with Windows, as it cannot read the microsecond resolution of DateTime2.
You should never use the other date/time data types of SQL Server: Date and Time.
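As a minimal sketch of this recommendation, assuming a hypothetical table dbo.WorkLog: store the entered duration in a plain datetime column, which Access binds as its native Date/Time type (VBA's zero date is 1899-12-30, so a time-only value sits on that date):
-- Hypothetical table: "0130" entered in Access (1 hour 30 minutes) arrives
-- as 1899-12-30 01:30:00, a plain VBA Date/Time, with no ODBC cast errors
CREATE TABLE dbo.WorkLog (
    ID int IDENTITY(1,1) PRIMARY KEY,
    Duration datetime NOT NULL
);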
In SSMS I'm connected to an Intersystems Cache database using an ODBC driver and a linked server. When I fetch data using a SQL query like
SELECT Text FROM OPENQUERY([ODBC_CACHE_DB],'SELECT TOP 100 Text FROM cls.Actions')
in SSMS it returns results, but Arabic characters come back as ?, like
"18:29:00 [Mohamad] ????? ??? ?? ??? ??? ?????? ????? ? 18:30:30 [Customer] Hi Sirius is jai"
How can I get the Arabic text?
Note: I can read and write Arabic text when using the nvarchar data type.
I had a similar issue. My setup was a linked server between a MSSQL 2012 cluster and Intersystems Cache 2009.x using the Microsoft OLE DB provider for ODBC.
My observations below:
Convert/Cast on the column with the nvarchar data type did not work -- it still showed the ???? (this was in SSMS).
When using 3rd Party DB management tools such as Database.net and WinSQL, I was able to see the correct characters.
Playing around with the ODBC driver's Unicode SQL Types setting only intermittently helped show the correct characters.
The solution:
Enable the Unicode SQL Types setting on the ODBC driver
Make a change to the test SQL query that is being executed against the Intersystems Cache db, as shown below. If you keep executing the same query, the output is cached for some time (not sure how long exactly).
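For example, a trivial variation of the query from the question should be enough to bypass the cached output (same linked server and table as above):
-- Changing anything in the pass-through text (here TOP 100 -> TOP 101)
-- makes the Cache side treat it as a new query instead of serving the cached result
SELECT Text FROM OPENQUERY([ODBC_CACHE_DB], 'SELECT TOP 101 Text FROM cls.Actions')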
In my case, the SQL Server cluster was not under my control, and it took a few days to play around with the different variations.
We get data delivered to us in a flat file. A date column we want to store in a destination column called DWValidFrom has the following format:
2017-02-06T22:07:09Z
In SSIS, using a Flat File Connection Manager, I set the data type of said column to DT_DBTIMESTAMPOFFSET. The data displays correctly when I check it in the Columns and Preview pages of the Connection Manager.
In SQL Server, I created the destination table, and defined the DWValidFrom column as datetimeoffset(0):
[DWValidFrom] [datetimeoffset](0) NOT NULL,
When I attempt to set the mappings in the OLE DB Destination object, which has been set to the SQL Server table in question, SSIS won't have it, and throws the following error:
The OLE DB provider used by the OLE DB adapter cannot convert between types "DT_DBTIMESTAMPOFFSET" and "DT_WSTR" for "DWValidFrom".
Suspecting something off with my regional settings, I issued the following query in Management Studio to ensure the format of the date wouldn't change:
SELECT CAST('2017-02-06T22:07:09Z' AS datetimeoffset(0))
This yielded the following result:
2017-02-06 22:07:09 +00:00
Why is SSIS not recognizing the column's proper data type? I do not have any other conversions or expressions set, so I'm confused as to why SSIS won't allow me to push a valid datetimeoffset.
We're using SQL Server 2014, Visual Studio 2015.
Thanks.
This sounds like the OLE DB source metadata is out of sync with the changes you made on the flat file connection manager. The quickest fix would be to recreate the OLE DB source, but don't do that quite yet.
SSIS is not going to like that standard ISO format for the date. If you remove the "T" in the middle and the "Z" at the end, it will be OK, i.e.
2017-02-06 22:07:09
Because of this conversion issue in SSIS, the connection manager will probably fail to convert the string to datetimeoffset. So you will need to configure the column as a string and then fix its value in a Derived Column transformation:
(DT_DBTIMESTAMPOFFSET, 0) REPLACE(REPLACE( [DWValidFrom] , "T", " " ), "Z", "")
Hope that helps,
m
The issue seemed to be that the OLEDB destination does not recognize datetimeoffset as a valid column format. Despite everything working in SQL Server and SSIS pushing a datetime that would be perfectly valid, the OLEDB destination wouldn't have any of it.
I considered using a SQL Server destination, but because the target server is a different server than the one we develop on, that wasn't an option either.
The fix for us was to instead format the columns using datetime as the data type, which causes us to lose the time zone info, but because all of the dates were UTC, we really don't lose any data.
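Concretely, the destination column definition from earlier becomes the following (the offset is dropped, which is lossless here since every incoming value is UTC):
[DWValidFrom] [datetime] NOT NULL,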
Quick Answer: Set DataTypeCompatibility to 0
I noticed that in the Connection Manager for my SQL Server Native Client 11.0 (OLEDB) connection, after clicking on "All", there's a value under the SQLNCLI11.1 section named DataTypeCompatibility, which was set to "80". 80 is the code for SQL Server 2000 compatibility, well before datetimeoffset (or, in my case, the DT_DBDATE and DT_DBTIME2 types) was introduced. I tried setting compatibility to 130, then 100, but "Test Connection" failed.
At https://learn.microsoft.com/en-us/sql/relational-databases/native-client/applications/using-connection-string-keywords-with-sql-server-native-client?view=sql-server-2017 there's a table, specifying information about this value
DataTypeCompatibility (SSPROP_INIT_DATATYPECOMPATIBILITY): Specifies the mode of data type handling to use. Recognized values are "0" for provider data types and "80" for SQL Server 2000 data types.
Changing the value to 0, then refreshing all of my connections using the OLE DB connection manager, seems to have done the trick -- now all my database's types are recognized rather than being forced to nvarchar/DT_WSTR.
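For reference, the same keyword can also be set directly in an OLE DB connection string; a sketch assuming SQL Server Native Client 11.0, with placeholder server and database names:
Provider=SQLNCLI11;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;DataTypeCompatibility=0;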
I'm running queries from Oracle across a database link to a SQL Server 2012 instance. All the results are padded with spaces out to the maximum length of the field, and I can't figure out why. The data type in the SQL Server database is varchar. In all_tab_columns@mydblink, Oracle reports the column type as VARCHAR2.
Is there some initialization parameter in the ODBC or SQL Server driver that I'm missing?
I'm using Oracle Generic Connectivity (ODBC) and the Microsoft ODBC Driver 11 for SQL Server on Linux.
Edit:
Fields are varchar on the SQL Server db, according to information_schema.columns.
They are not padded. At least, when I run the query SELECT first_name, len(first_name) FROM mytable on the SQL Server side, I get "John" and "4". Running SELECT first_name, length(first_name) FROM mytable from the Oracle side, I get "John" and "50".
Whelp. It looks like the data actually is blank-padded. The LEN() function ignores trailing spaces. Using cast( first_name as varbinary(max) ) shows the field has a large number of 0x20 characters at the end.
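For anyone diagnosing the same thing, a minimal T-SQL sketch on the SQL Server side (table and column names taken from the question):
-- LEN() ignores trailing spaces, but DATALENGTH() counts every byte,
-- so a gap between the two reveals the blank padding
SELECT first_name,
       LEN(first_name)        AS visible_length,
       DATALENGTH(first_name) AS stored_bytes,
       CAST(first_name AS varbinary(max)) AS raw_bytes
FROM mytable;
-- One possible cleanup, if the padding is unwanted in the source data:
UPDATE mytable SET first_name = RTRIM(first_name);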