Source code compatibility between Java 7 & 8 for overloaded functions

I have created a single JAR for Java 7 & 8 for a JDBC driver (using the -source/-target compile options). However, I am having difficulty compiling applications that use the new/overloaded methods in the ResultSet interface:
//New in Java 8
updateObject(int columnIndex, Object x, SQLType targetSqlType)
// Available in Java 7
updateObject(int columnIndex, Object x, int targetSqlType)
Note that SQLType is a new interface introduced in Java 8.
I have compiled the driver using Java 8, which worked fine. However, when an application using the driver is compiled under Java 7 and calls updateObject(int, Object, int), it gets a compilation error saying “class file for java.sql.SQLType not found”, although the application does not use SQLType. I think this is because Java looks at all the overloaded methods to determine the most specific one, and in doing so it cannot resolve the new updateObject method added in Java 8 (as SQLType is not defined in Java 7). Any idea how I can resolve this issue?
Note that the updateObject method has a default implementation in the ResultSet interface in Java 8, so I cannot even use a more generic type in place of SQLType in the new method; in that case, any application that uses the new method gets a compilation error saying updateObject is ambiguous.

You can't use something compiled for Java 8 in a lower version (say, Java 7); you will get an error like Unsupported major.minor version.... You need two JARs, one for version 1.7 and the other for version 1.8. Naturally, the 1.7 JAR can't have that SQLType overload if the type isn't supported on that JDK; in the 1.8 JAR, on the other hand, you are encouraged to keep both overloads.
Note that this has nothing to do with backwards compatibility.

In this case, I would call it the application’s fault. After all, your class is implementing the ResultSet interface and applications using JDBC should be compiled against that interface instead of your implementation class.
If a Java 7 application is compiled under Java 7 (where SQLType does not exist) against the Java 7 version of the ResultSet interface, there should be no problems as that interface doesn’t have that offending updateObject method and it doesn’t matter which additional methods an implementation class has. If done correctly, the compiler shouldn’t even know that the implementation type will be your specific class.
You may enforce correct usage by declaring the methods of your Statement implementation class to return ResultSet instead of a more specific type. The same applies to the Connection returned by the Driver and the Statement returned by the Connection. It’s tempting to use covariant return types to declare your specific implementation classes, but whenever your methods declare the interface type instead, you guide application programmers toward an interface-based usage, avoiding the problems described in your question.
The application programmer still may use a type cast, or better unwrap, to access custom features of your implementation if required. But then it’s explicit and the programmer knows what potential problems may occur (and how to avoid them).
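The unwrap route mentioned above can be sketched in plain Java without a real driver. AcmeFeature and AcmeHandle below are hypothetical names standing in for a driver-specific interface and its implementation; the only real API used is java.sql.Wrapper, which JDBC objects implement:

```java
import java.sql.SQLException;
import java.sql.Wrapper;

public class UnwrapDemo {
    // Hypothetical driver-specific feature interface.
    interface AcmeFeature {
        String vendorInfo();
    }

    // Hypothetical driver class exposing the feature via java.sql.Wrapper.
    static class AcmeHandle implements Wrapper, AcmeFeature {
        public String vendorInfo() { return "acme"; }

        @Override
        public <T> T unwrap(Class<T> iface) throws SQLException {
            if (iface.isInstance(this)) return iface.cast(this);
            throw new SQLException("not a wrapper for " + iface);
        }

        @Override
        public boolean isWrapperFor(Class<?> iface) {
            return iface.isInstance(this);
        }
    }

    public static void main(String[] args) throws SQLException {
        // The application code only sees the standard interface type...
        Wrapper handle = new AcmeHandle();
        // ...and opts in to vendor-specific features explicitly.
        if (handle.isWrapperFor(AcmeFeature.class)) {
            AcmeFeature f = handle.unwrap(AcmeFeature.class);
            System.out.println(f.vendorInfo());
        }
    }
}
```

Because the opt-in is explicit, application code that never calls unwrap compiles against nothing but the java.sql interfaces of its own JDK.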

Related

Using Flink with thrift

I'm seeing log messages like the following in my Flink app, relating to my Thrift classes:
2020-06-01 14:31:28 INFO TypeExtractor:1885 - Class class com.test.TestStruct contains custom serialization methods we do not call, so it cannot be used as a POJO type and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance.
So I followed the instructions here:
https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html#apache-thrift-via-kryo
And I did that for the Thrift definition of TestStruct along with all the Thrift structs nested within it (I've skipped over named types, though).
Also, the generated Thrift code is Java, whereas the Flink app is written in Scala.
How can I make that message disappear? I ask because I'm also hitting a bug where, if I pass my DataStream through a conversion into TestStruct, some fields end up missing; I suspect this is due to serialization issues.
Actually, as of now, you can't get rid of this warning, but it is also not a problem for the following reason:
The warning basically just says that Flink's type system is not using any of its internal serializers but will instead treat the type as a "generic type" which means, it is serialized via Kryo. If you followed my blog post on this, this is exactly what you want: use Kryo to serialize via Thrift. You could use a debugger to set a breakpoint into TBaseSerializer to verify that Thrift is being used.
As for the missing fields, I would suspect that this happens during the conversion into your TestStruct in your (flat)map operator, and maybe not in the serialization used to pass the struct to the next operator. You should verify where these fields go missing; if you can reproduce it, a breakpoint in the debugger of your favourite IDE should help you find the cause.

Use Storage.writeObject with a Runnable in Codename One

In Codename One, code like the following doesn't compile:
Runnable r = (Runnable & Serializable)() -> Log.p("Serializable!");
I get:
error: cannot find symbol
symbol: method getImplMethodKind()
location: interface SerializedLambda
Is there any way to write a Runnable to the Storage? Thank you
No. Unlike Java serialization, we don't write class data, only the data of the object. Since the bytecode is transpiled to the native platforms, there's no applicable class data to write. We also don't support the Serializable interface, only our own version of Externalizable, which isn't compatible.
You can write something based on that, but it won't be as pretty, as you need to create a regular class. That's because we can't use reflection voodoo to load an oddly structured class dynamically.

The client must not make any assumptions regarding the internal implementation

The full sentence, taken from the EJB 3.2 specification:
When interacting with a reference to the no-interface view, the client
must not make any assumptions regarding the internal implementation of
the reference, such as any instance-specific state that may be
present in the reference
I'm actually trying to understand what that actually means, and I was wondering if someone could kindly provide some examples.
EDIT:
The above sentence is taken from Section 3.4.4, Session Bean’s No-Interface View; maybe this info is helpful.
When generating a no-interface view proxy, the EJB container must create a subclass of the EJB class and override all public methods to provide proxying behavior (like security, transactions).
You can get a reference to the bean with (e.g. for passing it to another EJB):
NoInterfaceBean bean = ejbContext.getBusinessObject(NoInterfaceBean.class);
This returns a reference whose class type is the bean class itself (normally, if the EJB had a business interface, it would return the interface type), but it is not a reference to an instance of NoInterfaceBean (rather, to an instance of that generated proxy subclass). Think of it as a reference to a pimped version of your bean, about which you
must not make any assumptions regarding the internal implementation
It's basically the same with "normal" EJBs. You know that there is some magic around your bean instance, but since you get the interface as the class type, it's already clear that every class implementing the interface can have a different internal implementation.
So the specification emphasizes this difference at that point: even if it looks like a reference to your concrete class, it is not one, as the next paragraph of the specification (JSR 345, Enterprise JavaBeans 3.2 Final Release) says:
Although the reference object is type-compatible with the
corresponding bean class type, there is no prescribed relationship
between the internal implementation of the reference and the
implementation of the bean instance.
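The quoted rule can be illustrated with a plain-Java sketch (no real EJB container involved; GreeterBean and ContainerProxy below are invented names for illustration). The container hands out a subclass proxy, so instance state read through the reference is not the bean's state:

```java
public class NoInterfaceViewDemo {
    static class GreeterBean {
        int callCount; // instance-specific state
        public String greet() {
            callCount++;
            return "hello";
        }
    }

    // Stand-in for the container-generated subclass proxy: it overrides the
    // public methods and delegates to a hidden bean instance.
    static class ContainerProxy extends GreeterBean {
        private final GreeterBean target = new GreeterBean();

        @Override
        public String greet() {
            // security/transaction interception would happen here
            return target.greet();
        }
    }

    public static void main(String[] args) {
        GreeterBean ref = new ContainerProxy(); // type-compatible reference
        ref.greet();
        // The field inherited by the proxy was never touched; the real state
        // lives in the hidden target instance, so this prints 0, not 1.
        System.out.println("callCount seen through reference = " + ref.callCount);
    }
}
```

This is exactly why the spec forbids assumptions about instance-specific state in the reference: the field you can see through the reference belongs to the proxy, not to the bean instance the container actually invokes.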

db transaction in golang

In Java, it's pretty easy to switch between auto-commit and manual commit of a database transaction. When I say easy, I mean it doesn't require changing the connection interface: simply calling setAutoCommit with true or false switches the transaction between auto/manual mode. However, Go uses different connection types: sql.DB for auto mode and sql.Tx for manual mode. That's not a problem for one-off use. The problem is that I have a framework which uses sql.DB to do the DB work, and now I want some of those operations to join my new transaction, which seems hard without modifying the existing framework to accept sql.Tx. Is there really no easy way to do the auto/manual switch in Go?
Without knowing more about the framework you are using, I don't think there is a way to do it without modifying the framework. You should really try to get your modifications included in the framework, because the main problem here is that the framework is designed poorly: when writing for a new language (especially a library or framework), you should get to know its conventions and design your software accordingly.
In Go, it is not too hard to accomplish this; you just have to declare a Queryer (or whatever you want to call it) interface like this:
type Queryer interface {
    Query(string, ...interface{}) (*sql.Rows, error)
    QueryRow(string, ...interface{}) *sql.Row
    Prepare(string) (*sql.Stmt, error)
    Exec(string, ...interface{}) (sql.Result, error)
}
This interface is satisfied implicitly by both *sql.DB and *sql.Tx, so after declaring it, you just have to modify the functions/methods to accept a Queryer instead of a *sql.DB.

Getting types in mscorlib 2.0.5.0 (aka Silverlight mscorlib) via reflection?

I am trying to add Silverlight support to my favorite programming language, Nemerle.
During compilation, Nemerle loads all types via reflection, mainly in two steps:
1) It uses Assembly.LoadFrom to load the assembly.
2) It uses Assembly.GetTypes() to get the types.
Then, at the end of compilation, it emits the resolved types with Reflection.Emit.
This procedure works for all assemblies, including Silverlight ones, except Silverlight's mscorlib.
In C#, this fails:
var a = System.Reflection.Assembly.LoadFrom(@"c:\mscorlib.dll");
but this passes:
var a = System.Reflection.Assembly.ReflectionOnlyLoadFrom(@"c:\mscorlib.dll");
But in the latter case, a.GetTypes() throws an exception saying that System.Object's parent does not exist.
Is there a way out?
Assuming you're trying to reflect over Silverlight's mscorlib from the standard CLR, this won't work, because the CLR doesn't permit loading multiple versions of mscorlib (perhaps because doing so could upset resolution of its core types).
A workaround is to use Mono.Cecil to inspect the types:
http://mono-project.com/Cecil. This library actually performs better than .NET's Reflection and is supposed to be more powerful.
Here's some code to get you started:
AssemblyDefinition asm = AssemblyFactory.GetAssembly(@"C:\mscorlib.dll");
var types =
    from ModuleDefinition m in asm.Modules
    from TypeDefinition t in m.Types
    select t.Name;
You can compile Nemerle against the Silverlight assemblies, and then you have Nemerle working on top of Silverlight :)
