I tried the sample from https://mariadb.com/kb/en/mariadb/bulk-insert-column-wise-binding with Server 10.2.6 and Connector/C 2.3.3, and with Server 10.3 and Connector/C 3.0.1, on Windows 64-bit (InnoDB), but I always get this error message:
Server doesn't support function 'Bulk operation'
That's a clear error message, but all the docs show that MariaDB supports bulk operations. Is that a Windows problem? Or can I just enable that with a setting?
Turns out this is only possible with 10.2.3, but this fact is not mentioned in the docs.
Just heard today from the developers that this feature will be in 10.2.7; it was too late to get it merged into 10.2.6.
SQL syntax - check the manual that corresponds to your MySQL server version for the right syntax to use near 'RECURSIVE __tree
Error log
mysql -V
mysql Ver 14.14 Distrib 5.7.23, for Linux (x86_64) using EditLine wrapper
I get this error when I create a test plan. Can you please help me take a look?
I'm using the latest kiwi-tcms Docker image.
The error logs I posted are linked below.
Your MySQL version seems to be supported. However, make sure to disable the ONLY_FULL_GROUP_BY SQL mode, which is enabled by default in MySQL 5.7.
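If you want to do that programmatically, here is a rough sketch of the equivalent statements run over JDBC (hypothetical host, credentials, and privileges); the same SELECT / SET GLOBAL statements can be run from any MySQL client, and for a permanent fix you would put the adjusted sql_mode into the server configuration instead:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DisableOnlyFullGroupBy {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; point this at the MySQL instance your Kiwi TCMS container uses.
        // The account needs privileges to change global variables (e.g. SUPER).
        String url = "jdbc:mysql://localhost:3306/";
        try (Connection con = DriverManager.getConnection(url, "root", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT @@GLOBAL.sql_mode")) {
            rs.next();
            String mode = rs.getString(1);
            // Remove ONLY_FULL_GROUP_BY from the comma-separated mode list and tidy up commas.
            String newMode = mode.replace("ONLY_FULL_GROUP_BY", "")
                                 .replace(",,", ",")
                                 .replaceAll("^,|,$", "");
            // Apply the new mode globally; note that this does not persist across a server restart.
            st.execute("SET GLOBAL sql_mode = '" + newMode + "'");
            System.out.println("sql_mode is now: " + newMode);
        }
    }
}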
Using Snowflake JDBC driver version 3.11.1, we get the error below on big-endian platforms.
We are observing an issue with the latest Snowflake JDBC driver where even basic SELECT queries fail with the exception below. It used to work with the earlier version, 3.10.3. It seems to be an issue with Arrow. Are there any plans to fix this?
Caused by: java.lang.IllegalStateException: Arrow only runs on LittleEndian systems.
at net.snowflake.client.jdbc.internal.io.netty.buffer.UnsafeDirectLittleEndian.(UnsafeDirectLittleEndian.java:65)
at net.snowflake.client.jdbc.internal.io.netty.buffer.UnsafeDirectLittleEndian.(UnsafeDirectLittleEndian.java:50)
at net.snowflake.client.jdbc.internal.io.netty.buffer.PooledByteBufAllocatorL.(PooledByteBufAllocatorL.java:50)
at net.snowflake.client.jdbc.internal.apache.arrow.memory.AllocationManager.(AllocationManager.java:53)
There is no way to disable Arrow for Snowflake clients. I would suggest using an earlier version of the JDBC driver (e.g. 3.9.x) as a workaround for the moment and contacting Snowflake Support to explore your options moving forward.
Have you tried running the following ALTER SESSION command with the latest Snowflake driver in your AIX environment?
ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'
Reference: https://community.snowflake.com/s/article/SAP-BW-Java-lang-NoClassDefFoundError-for-Apache-arrow
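In JDBC code you can issue that command right after connecting, before the first query. A minimal sketch, assuming a hypothetical account name and credentials (replace them with your own):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class JsonResultFormatExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical account and credentials; replace with your own connection details.
        Properties props = new Properties();
        props.put("user", "MY_USER");
        props.put("password", "MY_PASSWORD");
        String url = "jdbc:snowflake://myaccount.snowflakecomputing.com/";
        try (Connection con = DriverManager.getConnection(url, props);
             Statement st = con.createStatement()) {
            // Ask for JSON result sets in this session so later queries avoid the Arrow reader.
            st.execute("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");
            try (ResultSet rs = st.executeQuery("SELECT CURRENT_VERSION()")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}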
I'm on PhpStorm 2017.1.3, but I think this error is present in any JetBrains IDE with database support.
When I choose to synchronize an Oracle schema, some objects like triggers are not shown in the database view, and I found an error in the log.
I could not find any reason for it, and it was working in older PhpStorm / DataGrip versions (before 2016.1).
In the Options tab I've added an object filter; without it there are 5000+ tables. Even after removing the regular expression from the object filter, I still get the same error.
Screenshots of the Options and Advanced Options are included. The Oracle client used is Thin.
In your connection properties, check whether you are using the Thin driver and change it to OCI.
First, my config is:
Language: ColdFusion 10 (with Update 11 installed)
DB is MS SQL Server 2012
Using the jTDS JDBC driver (tried versions 1.2.6, 1.2.8, and 1.3.0)
I'm currently having a problem running queries where I use cfqueryparam with a cfsqltype of cf_sql_nvarchar. The problem is that the page just hangs. If I look at the ColdFusion application log, I see this error:
"net.sourceforge.jtds.jdbc.JtdsPreparedStatement.setNString(ILjava/lang/String;)V The specific sequence of files included or processed is:" followed by the test filename.
I'm running a very basic select query on an nvarchar column, but the page doesn't load and that error is logged.
I know it has to be something to do with the jTDS JDBC driver, because if I connect through the regular SQL Server driver it works perfectly.
So has anybody experienced this before? If so, what was your resolution?
Thanks
I did a quick search and the results suggest jTDS does not support setNString(). I checked the driver source for 1.3.1, and as mentioned in the comments here, the method is not implemented:
"..while getNString() is implemented, the code just consists of // TODO Auto-generated method stub and throw new AbstractMethodError();.."
So it sounds like you may need to use cf_sql_varchar, combined with the "String Format" setting, like in previous versions. Obviously, the other option is to use a different driver (one that does support setNString(), such as Adobe's driver or the MS SQL Server driver).
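If you want to verify this outside of ColdFusion, here is a rough standalone JDBC sketch (hypothetical connection URL, table, and credentials) that reproduces the failure and then falls back to setString(), which is roughly what the cf_sql_varchar workaround does:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NVarcharParamTest {
    public static void main(String[] args) throws Exception {
        // Load the jTDS driver explicitly (older jTDS versions are not auto-registered).
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        // Hypothetical jTDS URL; adjust host, database, table, and credentials for your setup.
        String url = "jdbc:jtds:sqlserver://localhost:1433/testdb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id FROM some_table WHERE nvarchar_col = ?")) {
            try {
                // This is the call cf_sql_nvarchar ends up making; jTDS throws AbstractMethodError here.
                ps.setNString(1, "some value");
            } catch (AbstractMethodError e) {
                System.out.println("setNString() not supported by this driver: " + e);
                // Fallback comparable to using cf_sql_varchar: a plain setString().
                ps.setString(1, "some value");
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id"));
                }
            }
        }
    }
}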
Try using cf_sql_varchar. cf_sql_nvarchar is not a valid option according to the documentation.
We are in the process of upgrading WSO2 DSS from version 2.5.1 to 2.6.3. In version 2.5.1, we were able to execute stored procedures from our SQL Server 2005 database via services with no problems whatsoever. However, in this new version, that is not the case. When trying to execute a stored procedure in the TryIt window, an error is logged stating
ERROR {org.apache.axis2.transport.http.AxisServlet} - {org.apache.axis2.transport.http.AxisServlet} java.lang.AbstractMethodError
followed by a complete stack trace
If I change the query to a select statement, it works just fine.
Maybe there is some setting now that is needed prior to running stored procedures? Maybe it's another configuration issue? Hopefully someone can assist with this problem. I like the enhancements offered by this new version, but if we can't run stored procedures, it's not a viable option for us. Thanks in advance!
Jason
You have not actually posted the method that the AbstractMethodError is complaining about, but I'm guessing this is something to do with a JDBC driver that is not JDBC4 compliant. With Java 6, we are using JDBC4 features in WSO2 DSS, so you would have to upgrade to a proper JDBC4 driver, which, in MSSQL's case, would be the SQLJDBC4 driver. Hope this helps.
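If you want to confirm which driver and JDBC level DSS is actually loading, a quick standalone check (hypothetical URL and credentials) could look like this:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverJdbc4Check {
    public static void main(String[] args) throws Exception {
        // Load the Microsoft driver explicitly; sqljdbc4.jar is the JDBC4-compliant build.
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        // Hypothetical connection details; point this at the data source DSS uses.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=testdb";
        Connection con = DriverManager.getConnection(url, "user", "password");
        try {
            DatabaseMetaData md = con.getMetaData();
            System.out.println("Driver: " + md.getDriverName() + " " + md.getDriverVersion());
            // A reported JDBC major version of 4 or higher means the driver claims JDBC4 support,
            // which is what DSS needs when running on Java 6.
            System.out.println("JDBC level: " + md.getJDBCMajorVersion() + "." + md.getJDBCMinorVersion());
        } finally {
            con.close();
        }
    }
}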
Cheers,
Anjana.