We are observing an issue with the latest Snowflake JDBC driver where even basic SELECT queries are failing with the exception below. It used to work with the earlier version 3.10.3. It seems to be an issue with "Arrow". Are there any plans to fix this issue?
Caused by: java.lang.IllegalStateException: Arrow only runs on LittleEndian systems.
    at net.snowflake.client.jdbc.internal.io.netty.buffer.UnsafeDirectLittleEndian.<init>(UnsafeDirectLittleEndian.java:65)
    at net.snowflake.client.jdbc.internal.io.netty.buffer.UnsafeDirectLittleEndian.<init>(UnsafeDirectLittleEndian.java:50)
    at net.snowflake.client.jdbc.internal.io.netty.buffer.PooledByteBufAllocatorL.<init>(PooledByteBufAllocatorL.java:50)
    at net.snowflake.client.jdbc.internal.apache.arrow.memory.AllocationManager.<clinit>(AllocationManager.java:53)
There is no way to disable Arrow for Snowflake clients. I would suggest using an earlier version of the JDBC driver (e.g. 3.9.x) to work around it for the moment and contacting Snowflake Support to explore your options moving forward.
Have you tried running the following ALTER SESSION command with the latest Snowflake driver on the AIX environment?
ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'
Reference: https://community.snowflake.com/s/article/SAP-BW-Java-lang-NoClassDefFoundError-for-Apache-arrow
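For what it's worth, here is a minimal sketch of applying that workaround from Java right after connecting; the account URL, user name, and password are placeholders, and the Snowflake JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ForceJsonResultFormat {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your own account URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com",
                "<user>", "<password>");
             Statement stmt = conn.createStatement()) {
            // Switch this session away from Arrow-encoded result sets.
            stmt.execute("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");
            // Subsequent queries on this connection return JSON-formatted results.
            try (ResultSet rs = stmt.executeQuery("SELECT CURRENT_TIMESTAMP")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}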
I have a DB2/zOS database that I need to access. When I create the connection in DBeaver it tests fine, and when I open the connection it lets me browse schemas and tables. However, as soon as I run a query it fails.
The top stack trace gives a generic SQLError, but I think the culprit is a java.nio.charset.UnsupportedCharsetException: Cp280.
I'm pretty sure it's a query problem (as opposed to a ResultSet problem) because even if I type gibberish in the query it throws the same exception.
I think it's a problem with my particular machine, because trying it on my coworker's PC it goes through without a hitch. We use the same version of DBeaver, the same connection, the same VPN, and the same drivers. I've already tried updating my db2jcc4.jar to the latest 10.1 release, remaking the connection (albeit with the same settings), and rebooting my PC.
Would really appreciate some help. Thanks.
My thanks to Gord for trying to help; unfortunately it turned out to be a DBeaver bug that was fixed in version 6.1.2, which came out yesterday. It works fine now.
I am seeing strange effects when retrieving columns of type DATE from SQL Server 2008 using the Microsoft JDBC driver version 3.0, running under the official Oracle JDK 1.7.0. The host OS is Windows Server 2003.
All Date columns are retrieved as two days in the past with respect to the value actually stored in the column.
I cooked up a minimal code example to test this out (test table and data):
CREATE TABLE Java7DateTest (
dateColumn DATE
);
INSERT INTO Java7DateTest VALUES('2011-10-10');
Code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class Java7SQLDateTest {
    public static void main(final String[] argv) {
        try {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            Connection connection = DriverManager.getConnection(
                    "jdbc:sqlserver://192.168.0.1:1433;databaseName=dbNameHere",
                    "user", "password");
            PreparedStatement statement = connection.prepareStatement("SELECT * FROM Java7DateTest");
            ResultSet resultSet = statement.executeQuery();
            while (resultSet.next()) {
                // Print both the driver's Date conversion and the raw string value.
                final java.sql.Date date = resultSet.getDate("dateColumn");
                final String str = resultSet.getString("dateColumn");
                System.out.println(date + " (raw: " + str + ")");
            }
            resultSet.close();
            statement.close();
            connection.close();
        } catch (final Throwable t) {
            throw new RuntimeException(t.getMessage(), t);
        }
    }
}
Running this code on the above configuration prints: "2011-10-08 (raw: 2011-10-08)".
Under JRE 1.6.0_27 it prints: "2011-10-10 (raw: 2011-10-10)".
I could not find anything that seems to relate to my problem with Google, so I'm assuming that it's either something stupid I overlooked or nobody is using Java 7 yet.
Can anybody confirm this problem? What are my alternatives if I still want to use Java7?
Edit: The problem occurs even when running with -Xint, so it's not caused by HotSpot bugs.
Edit2: Old drivers (Microsoft 1.28) work properly with JDK1.7.0 (we were using that driver until maybe two years ago, I think).
jTDS also works perfectly fine with the example. I am considering switching to jTDS, but I am reluctant to do so because I have not the faintest idea what the effects on our production environment may be. Ideally it should just work, but that is what I believed when I switched my dev box to Java 7, too.
There is one pretty fat database in the production environment that is too big to create a copy of for testing (or rather, our server has so little disk left). So setting up a test environment for that one app is not straightforward; I would have to stitch up a shrunken database for that.
Edit3: jTDS has its own set of catches attached. I found a behavioral difference that breaks one of our applications: ResultSet.getObject() returns different object types for SmallInt columns depending on the driver (Short vs. Integer). Also, jTDS does not implement the JDBC4 Connection interface, so Connection.isValid() is not supported.
Edit4: I noticed last week that MSSQL-JDBC 3.0 refuses to connect to any DB after I updated to JDK 1.6.0_29. jTDS it is, then... we switched the production server yesterday (I fixed two places where the application was relying on peculiarities of the driver), and so far we have had no problems.
Thank you for your feedback. The Microsoft JDBC Driver for SQL Server does not yet support JRE 1.7.
We are aware of the getDate issue between our JDBC driver & JRE 1.7 and we are looking into publishing a hotfix to enable customers to move forward with non-production testing of our driver with JRE 1.7.
We will publish a link to the hotfix on our blog once available.
http://blogs.msdn.com/b/jdbcteam/
The hotfix is now available. http://blogs.msdn.com/b/jdbcteam/archive/2012/01/20/hotfix-available-for-date-issue-when-using-jre-1-7.aspx
Our blog also contains information on the known issues with JRE 1.6u29 & 1.6u30.
Shamitha Reddy
Program Manager - Microsoft JDBC Driver for SQL Server
I don't quite have an answer for you, but I've recreated your situation as you described. It is the same with the JDBC driver v3.101, v3.202, and v4 CTP3 when run under JDK 1.7. However, the v2 driver from MS gives your expected answer both under JDK 1.6 and JDK 1.7. If you need a quick fix and can move to an older JDBC driver, that may work for you.
Other thoughts are on how the MS jdbc driver handles dates and conversion of Date objects between SQL Server and the jvm. Since the storage of the date is without a time zone, the interpretation of the Date object by the driver is based on the default time zone for the machine running the jdbc driver. For instance, if you store a smalldate of '2011-10-11 12:00' and retrieve it from a machine with the default time zone set to GMT-7 then the resulting UTC time of the Date object would be '2011-10-11 19:00'. It could be that there is some change in jdk1.7 that impacts this conversion process in the driver resulting in a wild offset. You might experiment with the ResultSet.getDate(column, Calendar) method to see if a Calendar with a specific time zone gets you the result you want or helps make sense of why you are seeing the strange offset in the conversion.
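For example, the retrieval loop from the question above could be extended to compare the driver's default conversion with one pinned to UTC; this is only a sketch reusing the question's table and column names, and it additionally needs java.util.Calendar and java.util.TimeZone imports:

// Compare the default time-zone conversion with an explicit UTC calendar.
Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
while (resultSet.next()) {
    final java.sql.Date defaultDate = resultSet.getDate("dateColumn");    // JVM default time zone
    final java.sql.Date utcDate = resultSet.getDate("dateColumn", utc);   // interpreted against UTC
    System.out.println("default: " + defaultDate + ", utc: " + utcDate);
}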
I don't have a SQL Server setup, but I can't reproduce your problem with PostgreSQL 9.0 and MySQL 5.1 on Windows 7 x64 with JDK 1.7.0. So JDK 1.7.0 can be excluded from being suspect. I have the impression that the SQL Server JDBC driver is to blame here. I'd suggest to use the jTDS JDBC driver instead. It has always been praised for its better performance and stability as opposed to the MS-provided SQL Server JDBC driver.
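If you do decide to try jTDS against the example from the question, the switch is essentially just the driver class and the URL format; a sketch using the same placeholder host, database, user, and password as above:

// Sketch: connect with jTDS instead of the Microsoft driver.
Class.forName("net.sourceforge.jtds.jdbc.Driver");
Connection connection = DriverManager.getConnection(
        "jdbc:jtds:sqlserver://192.168.0.1:1433/dbNameHere",
        "user", "password");
// The rest of the test (prepareStatement, executeQuery, getDate) stays the same.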
Information and download link for the hotpatch from Microsoft Support can be found here:
http://support.microsoft.com/kb/2652061
I was experiencing the same issue, where the date would be off by two days, and this hotpatch fixed it.
This is also an issue in OpenJDK 1.6.0_20. However, the MSSQL driver works fine with Sun's JRE 1.6.0_16.
I have written a job server that runs 1 or more jobs concurrently (or simultaneously depending on the number of CPUs on the system). A lot of the jobs created connect to a SQL Server database, perform a query, fetch the results and write the results to a CSV file. For these types of jobs I use pyodbc and Microsoft SQL Server ODBC Driver 1.0 for Linux to connect, run the query, then disconnect.
Each job runs as a separate process using the python multiprocessing module. The job server itself is kicked off as a double forked background process.
This all ran fine until I noticed today that the first SQL Server job ran fine but the second seemed to hang (i.e. it looked as though it was running forever).
On further investigation I noticed the process for this second job had become zombified so I ran a manual test as follows:
[root@myserver jobserver]# python
Python 2.6.6 (r266:84292, Dec 7 2011, 20:48:22)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
>>> c = conn.cursor()
>>> c.execute('select * from my_table')
<pyodbc.Cursor object at 0x1d373f0>
>>> r = c.fetchall()
>>> len(r)
19012
>>> c.close()
>>> conn.close()
>>> conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
Segmentation fault
So as you can see the first connection to the database works fine but any subsequent attempts to connect fail with a segmentation fault.
I cannot for the life of me figure out why this has started happening or what the solution is; everything worked fine before today and no code has been changed.
Any help on this issue would be much appreciated.
I had a very similar problem and in my case the solution was to upgrade the ODBC driver on the machine I was trying to make the connection from. I'm afraid I don't know much about why that fixed the problem. I suspect something was changed or upgraded on the database server I was trying to connect to.
This answer might be too late for the OP but I wanted to share it anyway since I found this question while I was troubleshooting the problem and was a little discouraged when I didn't see any answers.
I cannot detail the specifics of the underlying mechanics behind this problem. I can, however, say that the problem was being caused by using the Queue class in Python's multiprocessing module. Whether I was implementing this Queue correctly remains unanswered, but it appears the queue was not terminating the subprocess (and the underlying database connection) properly after each job completed, which led to the segmentation faults.
To solve this I implemented my own queuing system which was basically a list of Process objects executed in the order they were put into the list. A loop then made periodic checks on the status of those processes until all had completed. The next batch of jobs would then be retrieved and executed.
I also encountered this problem recently. My config includes unixODBC-2.3.0 plus MS ODBC Driver 1.0 for Linux. After some experiments, we speculate that the problem may arise from a database upgrade (to SQL Server 2008 SP1 in our case) triggering some bugs in the MS ODBC driver. The problem also occurs in this thread:
http://social.technet.microsoft.com/Forums/sqlserver/en-US/23fafa84-d333-45ac-8bd0-4b76151e8bcc/sql-server-driver-for-linux-causes-segmentation-fault?forum=sqldataaccess
I also tried upgrading my driver manager to unixODBC-2.3.2, but with no luck. My final solution was to use FreeTDS 0.82.6+ with unixODBC-2.3.2. This version of the FreeTDS driver does not get along with unixODBC-2.3.0, as the driver manager keeps complaining about functions the driver does not support. Everything goes smoothly once unixODBC is upgraded.
Here is the full error: SqlException: A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 1 - I/O Error detected in read/write operation)
I've started seeing this message intermittently for a few of the unit tests in my application (there are over 1100 unit & system tests). I'm using the test runner in ReSharper 4.1.
One other thing: my development machine is a VMWare virtual machine.
I ran into this many moons ago. Bottom line is you are running out of available ports.
First make sure your calling application has connection pooling on.
If it does, then check the number of ports available to SQL Server.
What is happening is that if pooling is off, every call takes a port, by default it takes 4 minutes for that port to expire, and you run out of ports.
If pooling is on, then you need to profile all of SQL Server's ports, make sure you have enough, and expand them if necessary.
When I came across this error, connection pooling was off and it caused this issue whenever a decent load was put on the website. We did not see it in development because the load was 2 or 3 people at max, but once the number grew over 10 we kept seeing this error. We turned pooling on, and it fixed it.
I ran into this many moons ago as well. However, not to discount Longhorn213's explanation, but we had the exact opposite behavior. We received the error in development and testing, but not in production, where obviously the load was much greater. We ended up tolerating the issue in development as it was sporadic and didn't materially slow down progress. I think there could be several reasons for this error, but I was never able to pinpoint the cause myself.
We've also run across this error and figured out that we were killing a SQL Server connection from the database server. The client application is under the impression that the connection is still active and tries to make use of that connection, but fails because it has been terminated.
We saw this in our environment, and traced part of it down to the "NOLOCK" hint in our queries. We removed the NOLOCK hint and set our servers to use Snapshot Isolation mode, and the frequency of these errors was reduced quite a bit.
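In case it helps anyone attempting the same change, switching a database to snapshot-based reads is a one-time setting; below is a rough sketch of the statements involved, run over JDBC since that is what most of this page uses. "MyDatabase", url, user, and password are placeholders, and you still have to remove the NOLOCK hints from the queries themselves:

// Sketch: enable snapshot isolation / read-committed snapshot for one database.
// Note: SET READ_COMMITTED_SNAPSHOT ON needs the database to have no other
// active connections (or add WITH ROLLBACK IMMEDIATE).
try (Connection conn = DriverManager.getConnection(url, user, password);
     Statement stmt = conn.createStatement()) {
    stmt.execute("ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON");
    stmt.execute("ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON");
}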
We have seen this error a few times and tried different resolutions with varying success. One common underlying theme has been that the system giving the error was running low on memory. This is especially true if the server hosting SQL Server is running ANY other non-OS process. By default SQL Server will grab any memory it can, leaving little for other processes/drivers. This can cause erratic behavior and intermittent messages. It is good practice to configure your SQL Server with a maximum memory that leaves some headroom if there are other processes that might need it. Example: Visual Studio on a dev machine that is running a copy of SQL Server Developer Edition on the same machine.
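As an illustration of leaving headroom, one way to cap the instance is sp_configure; a rough sketch run over JDBC, where the 4096 MB figure is purely an example and url, user, and password are placeholders:

// Sketch: cap SQL Server's memory so the OS and other processes keep headroom.
// 'max server memory (MB)' is an advanced option, hence the first sp_configure call.
try (Connection conn = DriverManager.getConnection(url, user, password);
     Statement stmt = conn.createStatement()) {
    stmt.execute("EXEC sp_configure 'show advanced options', 1");
    stmt.execute("RECONFIGURE");
    stmt.execute("EXEC sp_configure 'max server memory (MB)', 4096");  // pick a value that fits the machine
    stmt.execute("RECONFIGURE");
}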