Access linked tables truncating my Decimal values from SQL Server - sql-server

Since migrating the Access data to a SQL server I have been having multiple problems with decimal values. In my tables on the SQL Server 2012 instance I use the Decimal data type for several fields. A while ago I first tried setting those fields to decimal 18,2, but Access acted strangely and truncated the values (55,55 became 50, and so on).
After several changes it seemed that Access accepted a 30,2 decimal setting on the SQL Server side (the values then showed up correctly in the linked Access tables).
A few days ago, however, I stumbled back onto this problem because a user had trouble editing a number in the Access form. I checked the data type in the linked table, and it turned out that Access converts the 30,2 decimal to a Short Text data type, which is obviously wrong. A bit of research showed that Access cannot handle a 30,2 decimal, so the ODBC driver converts it to text. (See my previous post: Access 2013 form field value gets cut off on changing the number before the point.)
To fix this latter error I tried, once again (forgetting that I had already messed around with it), changing the decimal to 17,2, 18,2 and some other precisions, but with all of these changes I am back to the truncation problem...
I found some posts about it, but nothing concrete and no answers on how to solve it.
Some additional information:
Using SQL Server 2012
Using Access 2013
SQL Server Native Client 10 and 11 are installed.
Looking in the registry key, I found out that I am using ODBC driver version 02.50.
SQL Server Native Client 11 has/uses DriverODBC ver 03.80 and Native Client 10 uses DriverODBC ver 10.00 (not sure this is relevant, though).
UPDATE WITH IMAGES
In an Access form I have multiple lines that use a linked (SQL) table as their record source. These lines get populated with data from the SQL server.
Below you can see a line with a specific example; the eenh. prijs (unit price) is loaded from the linked (SQL) table.
Now when I change the 5 in front of the decimal point (making it 2555,00 instead of 5555,00), the value gets cut off:
So I did some research and understood that my SQL decimal 30,2 isn't accepted by Access. I then looked at the field's data type in my Access linked table:
The specific column (CorStukPrijs) is a decimal 30,2 in the SQL server, but here it is a Short Text (sorry for the Dutch words).
The other numeric fields (which are fine) are just normal integers, by the way.
In my linked table in Access (datasheet view) the values look like this:
I also added a decimal value to show how it looks in my linked table:
In my SQL server the (same) data looks like this:
However, because of the problem with changing the number before the decimal point (back in the form, see the first images), I changed the 30,2 decimal type on the server to 18,2.
This is the result in the linked table for that same 5555 value:
It gives #Error values and the error message:
Scaling of decimal values has resulted in truncated values
(I translated it, so it probably won't read exactly like that in English.)
With the 18,2 decimal, the previous 0,71 value results in:
Hope it's a bit clearer now!
P.S. For now I have changed just one decimal field to 18,2.

Recently I found a solution for this problem! It all had to do with language settings after all (and the 30,2 decimal, which is not accepted as a decimal by Access 2013).
I changed the Native Client from 10 to 11, and in my connection string I added one vital value: regional=no. This fixed the problem!
So now my connection string is:
szSQLConnectionString = "DRIVER=SQL Server Native Client 11.0;SERVER=" & szSQLServer & ";DATABASE=" & szSQLDatabase & ";UID=" & szSQLUsername & ";PWD=" & szSQLPassword & ";regional=no;Application Name=OPS-FE;MARS_Connection=yes;"

A few things:
Is there any real good reason to use a decimal with 30 digits of precision?
Access only supports 28 digits for a packed decimal column, so going to 30 forces Access to see that value as a string.
If you keep the total number of digits below 28, then you should be OK (see the sketch after these notes).
You also left out which driver you are using (legacy, Native 10 or Native 11). However, all three should have no trouble with decimal.
As a few noted here, after ANY change to the SQL table you have to refresh the linked table, or the change will not show up.
There is NO need to run re-link code on every startup, and it is not clear how your re-link code works. If it makes a copy of the TableDef object and then re-instates that same TableDef, changes to the back end may well not show up.
I would suggest that during testing you do NOT use your re-link routines, but simply right-click the linked table, choose the Linked Table Manager, tick that one table, and click OK to refresh it.
Also, during this testing, remove any formatting you have applied in the Access table design (the Format property).
I suggest you start over and up-size the original tables again.
Access can and should handle the decimal types with ease, but it is not clear what your original settings were. If the values never require more than 4 significant digits after the decimal point, then I would consider using Currency, but Decimal should also work.
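For example, a minimal sketch (not your exact schema; the table name is a placeholder and you should keep the column's existing NULL/NOT NULL setting) that brings the CorStukPrijs column back under that 28-digit limit:
-- dbo.YourTable is a placeholder; match the column's current nullability.
ALTER TABLE dbo.YourTable
    ALTER COLUMN CorStukPrijs decimal(18, 2) NULL;
After running this, refresh the linked table in Access (Linked Table Manager) so the new type is picked up.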

Related

Encoding (?) issues fetching binary data (image type column) from SQL Server via pyodbc/Python3 [duplicate]

I'm executing this query:
SELECT CMDB_ID FROM DB1.[dbo].[CDMID]
When I run this in SSMS 18 I get this:
I'm aware these are HEX values, although I'm not an expert on the topic.
I need to execute this exact query in Python so I can process that information through a script; this script needs as input the HEX values without any manipulation (as you see in the SSMS output).
So, through the pyodbc library with a regular connection:
SQLserver_Connection("Driver={SQL Server Native Client 11.0};"
"Server=INSTANCE;"
"Database=DB1;"
"UID=USER;"
"PWD=PASS;")
I get this:
0 b'#\x12\x90\xb2\xbb\x92\xbbe\xa3\xf9:\xe2\x97#...
1 b'#"\xaf\x13\x18\xc9}\xc6\xb0\xd4\x87\xbf\x9e\...
2 b'#G\xc5rLh5\x1c\xb8h\xe0\xf0\xe4t\x08\xbb'
3 b'#\x9f\xe65\xf8tR\xda\x85S\xdcu\xd3\xf6*\xa2'
4 b'#\xa4\xcb^T\x06\xb2\xd0\x91S\x9e\xc0\xa7\xe543'
... ...
122 b'O\xa6\xe1\xd8\tA\xe9E\xa0\xf7\x96\x7f!"\xa3\...
123 b'O\xa9j,\x02\x89pF\xb9\xb4:G]y\xc4\xb6'
124 b'O\xab\xb6gy\xa2\x17\x1b\xadd\xc3\r\xa6\xee50'
125 b'O\xd7ogpWj\xee\xb0\xd8!y\xec\x08\xc7\xfa'
126 b"O\xf0u\x14\xcd\x8cT\x06\x9bm\xea\xddY\x08'\xef"
I have three questions:
How can this data be interpreted, and why am I getting it?
Is there a way to turn this data back into the original HEX value? And if not...
What can I do to receive the original HEX value?
I've been looking for a solution but haven't found anything yet. As you can see, I'm not an expert on these kinds of topics, so if you cannot provide a solution I would also really appreciate pointers to documents with the background knowledge I need in order to work out a solution myself.
I think your issue is simply due to the fact that SSMS and Python produce different hexadecimal representations of binary data. Your column is apparently a binary or varbinary column, and when you query it in SSMS you see its fairly standard hex representation of the binary values, e.g., 0x01476F726400.
When you retrieve the value using pyodbc you get a <class 'bytes'> object which is represented as b'hex_representation' with one twist: Instead of simply displaying b'\x01\x47\x6F\x72\x64\x00', Python will render any byte that corresponds to a printable ASCII character as that character, so we get b'\x01Gord\x00' instead.
That minor annoyance (IMO) aside, the good news is that you already have the correct bytes in a <class 'bytes'> object, ready to pass along to any Python function that expects to receive binary data.
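If you do want the exact 0x... string that SSMS shows, one option, sketched here on the assumption that you are free to adjust the query, is to have SQL Server return the hex representation as text (CONVERT style 1 keeps the 0x prefix, style 2 drops it); in Python 3 you could equally rebuild it from the bytes object with its hex() method:
-- Adjust the varchar length to your column size (a 16-byte value needs 34 characters including the 0x prefix).
SELECT CONVERT(varchar(34), CMDB_ID, 1) AS CMDB_ID_HEX
FROM DB1.[dbo].[CDMID];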

ADODB Datatype Decimal MS Access and SQL Server Decimal doesn't match?

I have some VB code which loads data from a SQL Server into a local table with this code snippet (both databases are connected via ADODB):
adorec_local.Fields(str_array_fields(int_i, 1)) = adorec_server.Fields(str_array_fields(int_i, 2))
For example, I have a decimal value on the server like "100.50". If this value is transferred to the local table, the value shown in the table is "10050", without the separator.
When I look at the value in the Immediate window, it is converted from "100.50" to "100,50", which seems correct to me. If I put this value directly into the local table, it works without any issue.
Any ideas what the problem is here?
Both fields (local and server) are defined as decimal(8,2).
Thank you in advance!
Edit: I tried the "Double" data type in Access and it works with the correct values. But I want to keep the data type consistently decimal.

SQL Server Management Studio 2014 setting only returns whole numbers

Does anyone know if there is a setting within the app itself that would cause it to only return whole numbers?
Example: a query is set up to return 123456789.26, but the value is being rounded to a whole number, 123456789.
I cannot find any settings or options in the program. I was able to get the expected results by using the STR function, but I shouldn't have to. My colleagues use other versions of SQL Server, and some get the decimals back while others don't.
The short answer is no: there is no global setting that tells SQL Server to round all numeric values.
There is only one setting that can cause anything like this, and it would do so through truncation rather than by forcing rounding.
Under Tools > Options > Query Results > Results to Text, there is a property called "Maximum number of characters displayed in each column".
Based on your description, I have a feeling this is not the cause, mostly because the default value is 256.
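As a quick sanity check, here is a sketch (assuming the decimals are being lost in the query itself rather than in SSMS) that inspects the type and scale of what a query actually returns; integer expressions, for example from integer division, drop the decimals before SSMS ever displays anything:
SELECT
    SQL_VARIANT_PROPERTY(123456789.26, 'BaseType') AS literal_type,   -- numeric
    SQL_VARIANT_PROPERTY(123456789.26, 'Scale')    AS literal_scale,  -- 2
    1 / 2   AS integer_division,   -- 0, because both operands are int
    1.0 / 2 AS decimal_division;   -- 0.500000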

SSIS - How to convert real values for Oracle?

I'm facing a problem in a package that imports data from a MySQL table into an Oracle table and a MS SQL Server table. It works well from MySQL to SQL Server, but I get an error when importing into Oracle.
The table I want to import contains an attribute (unitPrice) of data type DT_R8.
The destination data type for Oracle is DT_NUMERIC, as you can see in the capture.
I added a conversion step to convert the unitPrice data from DT_R8 to DT_NUMERIC.
It doesn't work; I get the following error.
I found the details of the error:
An ORA-01722 ("invalid number") error occurs when an attempt is made to convert a character string into a number, and the string cannot be converted into a valid number. Valid numbers contain the digits '0' through '9', with possibly one decimal point, a sign (+ or -) at the beginning or end of the string, or an 'E' or 'e' (if it is a floating point number in scientific notation). All other characters are forbidden.
However, I don't know how to fix it.
EDIT: I added a component to redirect error rows to an Excel file.
The following screenshot shows the result of the process, including the errors:
Browsing the roughly 3000 rows recorded, it seems the process accepts only integer values, not real ones. So if the price is 10 it's OK, but if it's 10,5 it fails.
Any idea how to solve this issue?
Your NLS environment does not match the expected one. By default, Oracle assumes that "," is the grouping character and "." is the decimal separator. Make sure that your session uses the correct value for the NLS_NUMERIC_CHARACTERS parameter.
See Setting Up a Globalization Support Environment in the documentation.
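A minimal sketch, assuming the session used by the Oracle destination is allowed to run ALTER SESSION and that your source data really uses "," as the decimal separator (the first character of the value is the decimal separator, the second the group separator):
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = ',.';
With that setting, a value such as 10,5 is read as ten and a half instead of raising ORA-01722.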

Some questions about HierarchyId (SQL Server 2008)

I am a newbie with SQL Server 2008 and have just been introduced to HierarchyIds.
I am learning from SQL Server 2008 - HIERARCHYID - PART I. Basically I am following the article line by line, and while practicing in SSMS I found that for every ChildId some hexadecimal values are generated, like 0x, 0x58, 0x5AC0, etc.
My questions are:
What are these hexadecimal values?
Why are they generated and what is their use? I mean, where can I use those hex values?
Do we have any control over those hex values? I mean, can we update them, etc.?
How can I determine the hierarchy by looking at those hex values? I mean, how can I tell which is the parent and which is the child?
Those hex values are simply the binary representation of a node's position in the hierarchy. In general, you should not use them directly.
You may want to check out the following example, which I think is self-explanatory. I hope it gets you going in the right direction.
Create a table with a hierarchyid field:
CREATE TABLE groups (
group_name nvarchar(100) NOT NULL,
group_hierarchy hierarchyid NOT NULL
);
Insert some values:
INSERT INTO groups (group_name, group_hierarchy)
VALUES
('root', hierarchyid::Parse('/')),
('domain-a', hierarchyid::Parse('/1/')),
('domain-b', hierarchyid::Parse('/2/')),
('sub-a-1', hierarchyid::Parse('/1/1/')),
('sub-a-2', hierarchyid::Parse('/1/2/'));
Query the table:
SELECT
group_name,
group_hierarchy.ToString()
FROM
groups
WHERE
(group_hierarchy.IsDescendantOf(hierarchyid::Parse('/1/')) = 1);
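To see how those hex values relate to the hierarchy, here is a small follow-up sketch on the same table that puts the raw value next to its readable path and level:
SELECT
    group_name,
    group_hierarchy,                          -- shown by SSMS as a hex value such as 0x58
    group_hierarchy.ToString() AS node_path,  -- readable path such as /1/
    group_hierarchy.GetLevel() AS node_level  -- depth in the tree
FROM groups;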
Adam Milazzo wrote a great article about the innards of hierarchyid here:
http://www.adammil.net/blog/view.php?id=100
In a nutshell, it's not meaningful to work with these values as straight hex; rather, convert them out to binary. The reason is that the encoding is not cut up on even byte boundaries. Representing a single node can take as little as 5 bits if it is one of the first four children; it gets longer as more children are added: 6 bits each for the next 4, 7 bits each for the next 8, then it jumps to 12 bits each for the next 64, and up to 18 bits each for the next 1024.
I needed to convert a database to Postgres and wrote a script which parses these hex values. You can check out a version I made for AdventureWorks here (search for "hierarchyid"):
https://github.com/lorint/AdventureWorks-for-Postgres/blob/master/install.sql
I'll let others address your specific questions, but I will tell you that, IMO, the HierarchyId in SQL Server 2008 isn't one of Microsoft's greatest contributions to SQL Server. It is complex and somewhat awkward. I think you will find that for many hierarchical needs, common table expressions (CTEs) work great.
Randy
