I've managed to connect from SBCL running on Debian to an SQL Server 2000 instance over the network using FreeTDS/unixODBC.
I can actually get data back from the server, so all is working.
However, many of the columns trigger what seem to be unsupported-data-type errors, such as:
The value 2147483647 is not of type FIXNUM.
and
-11 fell through ECASE expression.
Wanted one of (-7 -6 -2 -3 -4 93 92 91 11 10 ...).
Does anyone with experience using CLSQL against SQL Server know how to fix this?
This (error with 2147483647) occurs because the FreeTDS driver doesn't handle OLEDB BLOBs so well.
You have to issue the following SQL command to make it work:
set textsize 102400
You can see the FreeTDS FAQ entry here. Excerpt:
The text data type is different from char and varchar types. The maximum data length of a text column is governed by the textsize connection option. Microsoft claims in their documentation to use a default textsize of 4000 characters, but in fact their implementation is inconsistent. Sometimes text columns are returned with a size of 4 GB!
The best solution is to make sure you set the textsize option to a reasonable value when establishing a connection.
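For illustration, here's a minimal sketch of doing that right after connecting, using Python/pyodbc rather than CLSQL (the DSN and credentials are placeholders; the same idea applies from any ODBC client):
import pyodbc

# Placeholder DSN/credentials pointing at the FreeTDS/unixODBC data source.
conn = pyodbc.connect("DSN=freetds_dsn;UID=user;PWD=secret")
cursor = conn.cursor()

# Raise the text size limit before any query that touches text columns.
cursor.execute("SET TEXTSIZE 102400")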
As for the ECASE error, I haven't really solved it, but I've worked around it by converting timestamp values to binary and uniqueidentifier values to varchar(36) in the query itself. (Fittingly, -11, the value that fell through the ECASE, is the ODBC type code for SQL_GUID, i.e. uniqueidentifier.) A sketch of that kind of conversion follows.
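Assuming a pyodbc-style cursor as in the sketch above, and hypothetical table/column names, the conversion looks something like this:
# Hypothetical table/column names; the casts mirror the workaround above.
# A timestamp (rowversion) is 8 bytes; a uniqueidentifier formats to 36 characters.
stmt = """
    SELECT CONVERT(binary(8), row_ts)     AS row_ts_bin,
           CONVERT(varchar(36), row_guid) AS row_guid_str
    FROM some_table
"""
cursor.execute(stmt)
rows = cursor.fetchall()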
First off, this is my setup:
Windows 7
MS SQL Server 2008
Python 3.6 Anaconda Distribution
I am working in a Jupyter notebook, trying to import a column of data from an MS SQL Server database using SQLAlchemy. The column in question contains cells which store long strings of text (the datatype is nvarchar(max)). This is my code:
engine = create_engine('mssql+pyodbc://user:password@server:port/db_name?driver=SQL+Server+Native+Client+11.0')
stmt = 'SELECT componenttext FROM TranscriptComponent WHERE transcriptId=1265293'
connection = engine.connect()
results = connection.execute(stmt).fetchall()
This executes fine and imports a list of strings. However, when I examine the strings, they are truncated, and in the middle of each string the following message seems to have been inserted:
... (8326 characters truncated) ...
The number of characters varies from string to string. I checked the lengths of the imported strings, and the truncated ones are all capped at either 339 or 340 characters.
Is this a limitation in SQLAlchemy, Python or something else entirely?
Any help appreciated!
Same problem here!
Setup:
Windows Server 2012
MS SQL Server 2016/PostgreSQL 10.1
Python 3.6 Anaconda Distribution
I've tested everything I could, but can't get past this 33x-character limit on field length. Both varchar and text columns are affected, and the DBMS/driver doesn't seem to make any difference.
EDIT:
Found the source of the "problem": https://bitbucket.org/zzzeek/sqlalchemy/issues/2837
Seems like fetchall() is affected by this feature.
The only workaround I found was:
empty_list = []
connection = engine.connect()
results = connection.execute(stmt)
for row in results:
    empty_list.append(row['componenttext'])
This way I haven't seen any truncation in my long string fields (>3000 characters).
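If the linked issue is indeed the cause, the fetched data itself is intact and only the printed representation of a row shortens long values, which would explain why pulling the column out explicitly avoids the "truncation". A quick check, assuming the same stmt and connection as above:
results = connection.execute(stmt).fetchall()
row = results[0]
print(len(row['componenttext']))  # full length of the stored string
print(row)                        # repr() shortens long values for display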
I'm using SQL Server 2012. I have a column declared as varchar(max); I am using log4net, and the parameter size is set to -1.
However, the string being sent in is being truncated to about 43000 characters.
I have seen the question "log4net AdoNetAppender truncating message text", but it has no answer, and the linked answer simply says that using -1 works, which it doesn't.
Any ideas?
Sorry, found the answer :)
The server's "max text repl size" option is 65536, so I figure that with whitespace my ~43000 characters come to about 65K.
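If trimming the message isn't an option, the server-side limit itself can be raised with sp_configure; -1 means unlimited on SQL Server 2008 and later, and the change requires sysadmin rights. A hedged sketch via pyodbc (connection details are placeholders):
import pyodbc

# Placeholder connection string; needs sysadmin rights on the server.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()
cur.execute("EXEC sp_configure 'max text repl size', -1")
cur.execute("RECONFIGURE")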
Since migrating the Access data to a SQL server I am having multiple problems with decimal values. In my SQL tables on the SQL Server 2012 instance I am using the decimal data type for multiple fields. A while ago I first tried to set the decimal fields to 18,2, but Access acted weird on this by truncating all the values (55,55 became 50 and so on).
So after multiple changes it seemed that Access accepted the 30,2 decimal setting on the SQL server (now the values showed up correctly in the linked Access tables).
A few days ago I stumbled back onto this problem because a user had trouble editing a number in the Access form. So I checked the linked table's data type, and there it seemed that Access converts the decimal 30,2 value to a Short Text data type, which is obviously wrong. I did a bit of research and found out that Access cannot handle a 30,2 decimal, so it is converted to text by the ODBC driver. (See my previous post: Access 2013 form field value gets cut off on changing the number before the point)
So to fix this latter error I tried, once again (forgetting that I had already messed around with it), changing the decimal to 17,2 / 18,2 and some other values, but with all of these changes I am back at the truncating problem...
I found some posts about it but nothing concrete or answers on how to solve it.
Some additional information:
Using a SQL 2012 server
Using Access 2013
I have SQL Server Native Client 10 and 11 installed.
Looking in the registry I found out that I am using ODBC driver version 02.50. SQL Server Native Client 11 has/uses DriverODBC ver 03.80 and Native Client 10 uses DriverODBC ver 10.00 (not sure this is relevant, though).
UPDATE WITH IMAGES
In an Access form I have multiple lines that use a linked (SQL) table as their record source. These lines get populated with the data from the SQL server.
Below you can see a line with a specific example; the eenh. prijs (unit price) field is loaded from the linked SQL table. Now when I change the 5 in front of the decimal point (making it 2555,00 instead of 5555,00), the value gets cut off (screenshots omitted).
So I did some research and understand that my SQL decimal 30,2 isn't accepted by Access. I looked in my Access linked table to see what data type the field has: the specific column (CorStukPrijs) is a decimal 30,2 in SQL Server, but here it is Short Text (sorry for the Dutch words in the screenshot).
The other numerics (which are OK) are just normal integers by the way.
(Screenshots: the values in the Access linked table in datasheet view, a decimal value as it appears in the linked table, and the same data in SQL Server.)
However, because of the problem with the changing number before the decimal point (back in the form, in the first images), I changed the decimal type in the server from 30,2 to 18,2.
This is the result in the linked table for that same 5555 value: it gives #Error, with the error message:
Scaling of decimal values has resulted in truncated values
(translated, so it probably won't be worded exactly like that in English).
The previous 0,71 value gives a similar result with decimal 18,2 (screenshot omitted).
Hope it's a bit clearer now!
P.S. I have changed just one decimal field to 18,2 for now.
Recently I found a solution for this problem! It all had to do with language settings after all (and the decimal 30,2, which is not accepted as a decimal by Access 2013).
I changed the Native Client from 10 to 11, and in my connection string I added one vital value: regional=no (which stops the driver from using the client's regional settings when converting currency, date, and time values to character data). This fixed the problem!
So now my connection string is:
szSQLConnectionString = "DRIVER=SQL Server Native Client 11.0;SERVER=" & szSQLServer & ";DATABASE=" & szSQLDatabase & ";UID=" & szSQLUsername & ";PWD=" & szSQLPassword & ";regional=no;Application Name=OPS-FE;MARS_Connection=yes;"
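For reference, the same driver and regional=no options expressed as a pyodbc connection string (a sketch; server, database, and credentials are placeholders):
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=myserver;"
    "DATABASE=mydb;UID=user;PWD=secret;Regional=no;MARS_Connection=yes;"
)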
A few things:
There is no real good reason to use a 30-digit decimal value.
Access only supports 28 digits for a packed decimal column. So going to 30 will force Access to see that value as a string.
If you keep the total digits below 28, then you should be ok.
You also left out which driver you are using (legacy, Native 10, or Native 11). However, all three should have no trouble with decimal.
As a few people noted here, after ANY change to the SQL table you have to refresh the linked table, or else the changes will not show up.
There is NO need to run re-link code every time on startup. And it's not clear how your re-link code works: if it makes a copy of the tabledef object and then re-instates the same tabledef, changes to the back end may well not show up.
I would suggest that during testing you do NOT use your re-link routines, but simply right-click on the given linked table and choose the Linked Table Manager. Then check the one table and click OK to refresh.
Also, during this testing, remove any formatting you have in the Access table settings (the Format property).
I suggest you start over, and take the original tables and re-up-size them again.
Access should and can handle decimal types with ease, but it's not clear what your original settings were. If the values never require more than 4 significant digits beyond the decimal point, then I would consider using currency, but decimal should also work.
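As a concrete illustration, a column change that stays inside that 28-digit limit might look like this (hypothetical connection and table name, shown via pyodbc; remember to refresh the linked table afterwards):
import pyodbc

# Placeholder connection string and table name.
conn = pyodbc.connect("DSN=sqlserver_dsn;UID=user;PWD=secret", autocommit=True)
cur = conn.cursor()
# decimal(18,2) stays well under Access's 28-digit precision limit.
cur.execute("ALTER TABLE dbo.SomeTable ALTER COLUMN CorStukPrijs decimal(18,2)")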
I have a big problem when I try to save an object that's bigger than 400KB in a varbinary(max) column, calling ODBC from C++.
Here's my basic workflow of calling SQLPrepare, SQLBindParameter, SQLExecute, and SQLPutData (the last one multiple times):
SQLPrepare:
StatementHandle 0x019141f0
StatementText "UPDATE DT460 SET DI024543 = ?, DI024541 = ?, DI024542 = ? WHERE DI006397 = ? AND DI008098 = ?"
TextLength 93
Binding of first parameter (BLOB field):
SQLBindParameter:
StatementHandle 0x019141f0
ParameterNumber 1
InputOutputType 1
ValueType -2 (SQL_C_BINARY)
ParameterType -4 (SQL_LONGVARBINARY)
ColumnSize 427078
DecimalDigits 0
ParameterValPtr 1
BufferLength 4
StrLenOrIndPtr -427178 (result of SQL_LEN_DATA_AT_EXEC(427078))
SQLExecute:
StatementHandle 0x019141f0
Attempt to save the blob in 32K chunks by calling SQLPutData a number of times:
SQLPutData:
StatementHandle 0x019141f0
DataPtr address of a std::vector with 32768 chars
StrLen_or_Ind 32768
During the very first SQLPutData-operation with the first 32KB of data, I get the following SQL Server error:
[HY000][Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed.
This always happens when I try to save an object larger than 400KB. Saving anything smaller than 400KB works just fine.
I found out the critical parameter is ColumnSize in SQLBindParameter. The StrLenOrIndPtr parameter passed to SQLBindParameter can have lower values (like 32K), but it still results in the same error.
According to the ODBC API documentation, though, I don't see why this should be a problem as long as I call SQLPutData with chunks of data smaller than 32KB.
Does anyone have an idea what the problem could be?
Any help would be greatly appreciated.
OK, I just found out this was actually an SQL driver problem!
After installing the newest version of the Microsoft SQL Server 2012 Native Client (from http://www.microsoft.com/de-de/download/details.aspx?id=29065), saving bigger BLOBs works with exactly the parameters shown above.
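For what it's worth, with a driver that includes the fix, a higher-level client can pass a large varbinary parameter in a single call, without manual SQLPutData chunking. A hedged pyodbc sketch reusing the UPDATE from the question (connection details and key values are placeholders):
import pyodbc

# Placeholder connection string and key values; the UPDATE is the one from the question.
conn = pyodbc.connect("DSN=sqlserver_dsn;UID=user;PWD=secret")
cur = conn.cursor()

with open("payload.bin", "rb") as f:  # the ~400KB+ object to store
    blob = f.read()

cur.execute(
    "UPDATE DT460 SET DI024543 = ? WHERE DI006397 = ? AND DI008098 = ?",
    pyodbc.Binary(blob), 1, 1,
)
conn.commit()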
I'm using Perl with DBI / DBD::ODBC to retrieve data from an SQL Server database, and have some issues with character encoding.
The database has a default collation of SQL_Latin1_General_CP1_CI_AS, so data in varchar columns is encoded in Microsoft's version of Latin-1, AKA windows-1252.
There doesn't seem to be a way to handle this transparently in DBI/DBD::ODBC. I get data back still encoded as windows-1252; for instance, €, “ and ” come back as bytes 0x80, 0x93 and 0x94. When I write those to a UTF-8 encoded XML file without decoding them first, they are written as Unicode characters U+0080, U+0093 and U+0094 instead of U+20AC, U+201C and U+201D, which is obviously not correct.
My current workaround is to call $val = Encode::decode('windows-1252', $val) on every column after every fetch. This works, but hardly seems like the proper way to do this.
Isn't there a way to tell DBI or DBD::ODBC to do this conversion for me?
I'm using ActivePerl (5.12.2 Build 1202), with DBI (1.616) and DBD::ODBC (1.29) provided by ActivePerl and updated with ppm; running on the same server that hosts the database (SQL Server 2008 R2).
My connection string is:
dbi:ODBC:Driver={SQL Server Native Client 10.0};Server=localhost;Database=$DB_NAME;Trusted_Connection=yes;
Thanks in advance.
DBD::ODBC (and the ODBC API) does not know the character set of the underlying column, so DBD::ODBC cannot do anything with the 8-bit data returned; it can only return it as-is, and you need to know what it is and decode it. If you bind the columns as SQL_WCHAR/SQL_WVARCHAR, the driver/SQL Server should translate the characters to UCS-2, and DBD::ODBC should see the columns as SQL_WCHAR/SQL_WVARCHAR. When DBD::ODBC is built in Unicode mode, SQL_WCHAR columns are treated as UCS-2, decoded, and re-encoded as UTF-8, so Perl sees them as Unicode characters.
You need to set SQL_WCHAR as the bind type after bind_columns, as bind types are not sticky the way parameter types are.
If you want to continue reading your varchar data, which is windows-1252, as bytes, then currently you have no choice but to decode it yourself. I'm not in a rush to add something to DBD::ODBC to do this for you, since this is the first time anyone has mentioned it to me. You might want to look at DBI callbacks, as decoding the returned data might be more easily done in one of those (say, on the fetch method).
You might also want to investigate the "Perform Translation for character data" setting in newer SQL Server ODBC drivers, although I have little experience with it myself.