Passing serialized Externalizable object from standard JVM to CodenameOne

I'm trying to deserialize an object, which:
was created and serialized in another standard JVM (server)
implements traditional Java Externalizable interface
was passed over a network
public static void getData() {
    ConnectionRequest req = new ConnectionRequest() {
        @Override
        protected void readResponse(InputStream is) throws IOException {
            DataInputStream dis = new DataInputStream(is);
            Employee recovered = new Employee();
            recovered.internalize(1, dis);
        }
    };
    req.setUrl(BASEURL);
    req.setPost(false);
    NetworkManager.getInstance().addToQueueAndWait(req);
}
From the remote JVM I'm sending the object as a byte array (or ByteArrayInputStream), and in CN1 I get an EOFException.
Is it possible to transfer objects this way, or should I use JSON?
I thought I wouldn't need JSON if I have Java on both sides...

Codename One's externalization interface isn't compatible with Java SE. Serialization and externalization rely on reflection and dynamic invocation, which aren't practical on all of Codename One's targets (even on Android, where the binary is usually obfuscated).
You can pass an object, but you will need to use the Codename One API on both ends. You can effectively take the JavaSE.jar file from the Codename One project and use the API there to write/read the object on the server.
Other than that, your code to read the object is incorrect: you should use Util.readObject/writeObject rather than calling internalize directly. I suggest reading the great tutorial Steve Hannah wrote on the subject.
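For illustration, a hedged sketch of the client side with Util.readObject (this assumes Employee implements Codename One's com.codename1.io.Externalizable, is registered under the same name on both ends, and was written on the server with Util.writeObject from the JavaSE.jar):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

import com.codename1.io.ConnectionRequest;
import com.codename1.io.NetworkManager;
import com.codename1.io.Util;

public static void getData() {
    // Register the class once (e.g. in your init method) so Util can map
    // the serialized name back to the Employee class.
    Util.register("Employee", Employee.class);
    ConnectionRequest req = new ConnectionRequest() {
        @Override
        protected void readResponse(InputStream is) throws IOException {
            DataInputStream dis = new DataInputStream(is);
            // Util.readObject reads the header written by Util.writeObject
            // and dispatches to Employee's internalize for you.
            Employee recovered = (Employee) Util.readObject(dis);
        }
    };
    req.setUrl(BASEURL);
    req.setPost(false);
    NetworkManager.getInstance().addToQueueAndWait(req);
}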

Related

Unit testing Flink Topology without using MiniClusterWithClientResource

I have a Flink topology that consists of multiple Map and FlatMap transformations. The source/sink are from/to Kafka. The Kafka records are of type Envelope (defined by someone else), and are not marked as "serializable". I want to unit test this topology.
I defined a simple SourceFunction that returns a list of Envelope as the source:
public class MySource extends RichParallelSourceFunction<Envelope> {
    private List<Envelope> input;

    public MySource(List<Envelope> input) {
        this.input = input;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
    }

    @Override
    public void run(SourceContext<Envelope> ctx) throws Exception {
        for (Envelope listElement : input) {
            ctx.collect(listElement);
        }
    }

    @Override
    public void cancel() {}
}
I am using MiniClusterWithClientResource to unit test the topology. I ran into two problems:
I need to make MySource serializable, as Flink wants/needs to serialize the source. As a workaround, I made input transient. That allowed the code to compile.
Then I ran into this runtime error:
org.apache.flink.api.common.functions.InvalidTypesException: The return type of function 'Custom Source' could not be determined automatically, due to type erasure. You can give type information hints by using the returns(...) method on the result of the transformation call, or by letting your function implement the 'ResultTypeQueryable' interface.
I am trying to understand why I am getting this error, which I was not getting before when the topology was consuming from a Kafka cluster using a KafkaConsumer. I found a workaround: providing the type info with the following:
.returns(TypeInformation.of(Envelope.class))
However, during runtime, after deserialization, input is set to null (obviously, as there is no deserialization method defined).
Questions:
Can someone please help me understand why I am getting the InvalidTypesException?
Why is MySource being serialized/deserialized? Is there a way I can avoid this while using MiniClusterWithClientResource?
I could hack some writeObject() and readObject() methods into MySource, but I would prefer to avoid that route. Is it possible to use some framework/class to test the topology without providing a Source (and Sink) that is Serializable? It would be great if I could use something like KeyedOneInputStreamOperatorTestHarness, to which I could pass the topology, and avoid the whole serialization/deserialization step at the beginning.
Any ideas / pointers would be greatly appreciated.
Thank you,
Ahmed.
"why I am getting the InvalidTypesException exception?"
Not sure, usually I'd need to see the workflow definition to understand where the type information is getting dropped.
"Why if MySource being deserialized/serialized?"
Because Flink distributes operators to multiple tasks on multiple machines by serializing them, then sending over the network, and then deserializing.
"Is there a way I can void this while using MiniClusterWithClientResource?"
Yes. Since the MiniCluster runs in a single JVM, you can use a static ConcurrentLinkedQueue to hold all of the Envelope records, and your MySource just reads from this queue.
Nit: your MySource should set a transient boolean running flag to true in the open() method, set it to false in the cancel() method, and check it in the run() method's loop.
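Putting the two suggestions together, a minimal sketch could look like this (the class name QueueBackedSource is hypothetical; Envelope is the record type from the question):

import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

public class QueueBackedSource extends RichParallelSourceFunction<Envelope> {
    // Static queue shared within the single MiniCluster JVM; the test
    // populates it before executing the job, so nothing needs to be
    // serialized with the source itself.
    public static final ConcurrentLinkedQueue<Envelope> QUEUE = new ConcurrentLinkedQueue<>();

    private transient volatile boolean running;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        running = true;
    }

    @Override
    public void run(SourceContext<Envelope> ctx) throws Exception {
        while (running) {
            Envelope next = QUEUE.poll();
            if (next == null) {
                break; // queue drained: finish, so the bounded test job can complete
            }
            ctx.collect(next);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}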

Non Serializable object in Apache Flink

I am using Apache Flink to perform analytics on streaming data.
I am using a dependency whose object takes more than 10 secs to create, as it reads several files present in HDFS before initialisation.
If I initialise the object in the open method I get a timeout exception, and if I do it in the constructor of a sink/flatmap, I get a serialisation exception.
Currently I am using a static block to initialise the object in some other class, calling Preconditions.checkNotNull(MGenerator.mGenerator) in the main method, and then it works when used in a flatmap or sink.
Is there a way to create a non-serialisable dependency's object, which might take more than 10 secs to initialise, in Flink's flatmap or sink?
public class DependencyWrap {
    static MGenerator mGenerator;
    static {
        final String configStr = "{}";
        final Config config = new Gson().fromJson(configStr, Config.class);
        mGenerator = new MGenerator(config);
    }
}

public class MyStreaming {
    public static void main(String[] args) throws Exception {
        Preconditions.checkNotNull(MGenerator.mGenerator);
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(parallelism);
        ...
        input.flatMap(new RichFlatMapFunction<Map<String,Object>, List<String>>() {
            @Override
            public void open(Configuration parameters) {
            }

            @Override
            public void flatMap(Map<String,Object> value, Collector<List<String>> out) throws Exception {
                // assuming generateMyResult accepts the input map
                out.collect(MGenerator.mGenerator.generateMyResult(value));
            }
        });
    }
}
Also, please correct me if I am wrong about the question.
Doing it in the open() method is 100% the right way to do it. Is Flink giving you the timeout exception, or is the object itself?
As a last-ditch method, you could wrap your object in a class that contains both the object and its JSON string or Config (is Config serializable?), with the object marked transient, and then override the readObject/writeObject methods to call the constructor, as sketched below. If the mGenerator object itself is stateless (and you'll have other problems if it's not), the serialization code should get called only once, when jobs are distributed to the taskmanagers.
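A rough sketch of that wrapper idea, reusing the MGenerator/Config/Gson names from the question (the holder class itself is hypothetical):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

import com.google.gson.Gson;

public class MGeneratorHolder implements Serializable {
    private final String configJson;        // small, serializable state
    private transient MGenerator generator; // heavy, non-serializable object

    public MGeneratorHolder(String configJson) {
        this.configJson = configJson;
        this.generator = build(configJson);
    }

    private static MGenerator build(String json) {
        return new MGenerator(new Gson().fromJson(json, Config.class));
    }

    public MGenerator get() {
        return generator;
    }

    // Rebuild the transient generator after default deserialization,
    // so each deserialized copy reconstructs it from the config JSON.
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.generator = build(configJson);
    }
}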
Using open is usually the right place to load external lookup sources. The timeout is a bit odd, maybe there is a configuration around it.
However, if it's huge, using a static loader (either a static class, as you did, or a singleton) has the benefit that you only need to load it once for all parallel instances of the task on the same task manager. Hence, you save memory and CPU time. This is especially true for you, as you use the same data structure in two separate tasks. Further, the static loader can be lazily initialised when it's used for the first time, to avoid the timeout in open.
The clear downside of this approach is that the testability of your code suffers. There are some ways around that, which I could expand if there is interest.
I don't see an advantage in using the proxy serializer pattern here. It's unnecessarily complex (custom serialization in Java) and offers little benefit. A lazily initialised variant of your static loader is sketched below.
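For comparison, a lazily initialised static loader along those lines could look like this (again reusing the question's names; the double-checked locking and the get() accessor are my additions):

import com.google.gson.Gson;

public class DependencyWrap {
    private static volatile MGenerator mGenerator;

    public static MGenerator get() {
        if (mGenerator == null) {
            synchronized (DependencyWrap.class) {
                if (mGenerator == null) {
                    // The slow HDFS reads happen once per JVM (i.e. once per
                    // task manager), the first time any task calls get().
                    Config config = new Gson().fromJson("{}", Config.class);
                    mGenerator = new MGenerator(config);
                }
            }
        }
        return mGenerator;
    }
}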

meaning of parameters in parse method OWLAPI (building an AST)

I was looking for a good parser for OWL ontologies - initially in Python, since I have very limited experience with Java. It seems that OWLAPI is the best choice as far as I can tell, and well, it is Java.
So, I am trying to parse an .owl file and build the AST from it. I downloaded owlapi and I'm having problems with it, since it doesn't seem to have much in terms of documentation.
My very basic question is what the first two parameters of - say - OWLXMLParser's parse method stand for:
- document source: is this the .owl file read as a stream (in getDocument below)?
- root ontology: what goes here? Initially I thought that this is where the .owl file goes; that seems not to be the case.
Does the parse method construct the AST or am I barking up the wrong tree?
I'm pasting some of my attempts below - there are more of them, but I'm trying to be less verbose :)
[The error I'm getting is this - if anyone cares - although the question is more fundamental:
java.lang.NullPointerException: stream cannot be null
at org.semanticweb.owlapi.util.OWLAPIPreconditions.checkNotNull(OWLAPIPreconditions.java:102)
at org.semanticweb.owlapi.io.StreamDocumentSourceBase.(StreamDocumentSourceBase.java:107)
at org.semanticweb.owlapi.io.StreamDocumentSource.(StreamDocumentSource.java:35)
at testontology.testparsers.OntologyParser.getDocument(App.java:72)
at testontology.testparsers.OntologyParser.test(App.java:77)
at testontology.testparsers.App.main(App.java:58)]
Thanks a lot for your help.
public class App {
    public static void main(String[] args) {
        OntologyParser o = new OntologyParser();
        try {
            OWLDocumentFormat p = o.test();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

class OntologyParser {
    private OWLOntology rootOntology;
    private OWLOntologyManager manager;

    private OWLOntologyDocumentSource getDocument() {
        System.out.println("access resource stream");
        // Note: getResourceAsStream expects a classpath resource, not a
        // filesystem path -- this returns null, causing the NPE above.
        return new StreamDocumentSource(getClass().getResourceAsStream(
                "/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
    }

    public OWLDocumentFormat test() throws Exception {
        OWLOntologyDocumentSource documentSource = getDocument();
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology rootOntology = manager.loadOntologyFromOntologyDocument(
                new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl")));
        OWLDocumentFormat doc = parseOnto(documentSource, rootOntology);
        return doc;
    }

    private OWLDocumentFormat parseOnto(
            @Nonnull OWLOntologyDocumentSource initialDocumentSource,
            @Nonnull OWLOntology initialOntology) throws IOException {
        OWLParser initialParser = new OWLXMLParser();
        OWLOntologyLoaderConfiguration config = new OntologyConfigurator().buildLoaderConfiguration();
        //// option 1:
        //final OWLOntologyManager managerr = new OWLOntologyManagerImpl(new OWLDataFactoryImpl(), new ReentrantReadWriteLock(true));
        //final IRI iri = IRI.create("testasdf");
        //final IRI version = IRI.create("0.0.1");
        //OWLOntologyDocumentSource source = new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        //final OWLOntology onto = new OWLOntologyImpl(managerr, new OWLOntologyID(iri,version));
        //return initialParser.parse(initialDocumentSource, onto, config);
        ////
        // option 2:
        return initialParser.parse(initialDocumentSource, initialOntology, config);
    }
}
The owlapi parsers are designed for use by the OWLOntologyManager implementations, which are managed (unless you're writing a new owlapi implementation) by the OWLManager singleton. There are plenty of examples of how to use that class in the wiki pages.
All parsers included in the owlapi distribution are meant to create OWLAxiom instances in an OWLOntology, not to create an AST of an OWL file - the syntactic shape of the files depends on the specific format, on the preferences of the writer, and so on, while the purpose of the API is to provide ontology-manipulation functionality to the caller. The details of the output format can be tweaked, but exposing them to the caller is not part of the main design.
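For reference, a minimal sketch of the intended usage - let the manager pick the right parser, then work with the resulting OWLAxiom objects rather than an AST (the file path is taken from the question):

import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class LoadOntology {
    public static void main(String[] args) throws OWLOntologyCreationException {
        // The manager selects a parser based on the document's format.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        // The result is a set of axioms, not a syntax tree.
        for (OWLAxiom axiom : ontology.getAxioms()) {
            System.out.println(axiom);
        }
    }
}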

Objectify with Cloud Endpoints

I am using App Engine Cloud Endpoints and Objectify. I have deployed these endpoints before; now I am updating them and they are not working with Objectify. I have moved to a new machine running the latest App Engine SDK, 1.8.6. I have tried putting Objectify on the classpath, and that did not work. I know this can work - what am I missing?
When running endpoints.sh:
Error: Parameterized type
com.googlecode.objectify.Key<MyClass> not supported.
UPDATE:
I went back to my old computer and ran endpoints.sh on the same endpoint, and it worked fine. The old machine has 1.8.3. I am using Objectify 3.1.
UPDATE 2:
Updated my old machine to 1.8.6 and got the same error as on the other machine. That leaves two possibilities:
1) Endpoints no longer support objectify 3.1
or
2) Endpoints have a bug in most recent version
Most likely #1...I've been meaning to update to 4.0 anyways...
Because of the popularity of Objectify, a workaround was added in prior releases to support the Key type until a more general solution was available. Because the new solution is now available, the workaround has been removed. There are two ways you can now approach the issue with the property.
Add an @ApiResourceProperty annotation that causes the key to be omitted from your object during serialization. Use this approach if you want a simple solution and don't need access to the key in your clients.
Add an @ApiTransformer annotation that provides a compatible mechanism to serialize/deserialize the field. Use this approach if you need access to the key (or a representation of it) in your clients. As this requires writing a transformer class, it is more work than the first option.
I came up with the following solution for my project:
@Entity
public class Car {
    @Id Long id;

    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    Key<Driver> driver;

    public Key<Driver> getDriver() {
        return driver;
    }

    public void setDriver(Key<Driver> driver) {
        this.driver = driver;
    }

    public Long getDriverId() {
        return driver == null ? null : driver.getId();
    }

    public void setDriverId(Long driverId) {
        driver = Key.create(Driver.class, driverId);
    }
}

@Entity
public class Driver {
    @Id Long id;
}
I know, it's a little bit of boilerplate, but hey - it works and adds some handy shortcut methods.
At first, I did not understand the answer given by Flori and how useful it really is. Because others may benefit, I will give a short explanation.
As explained earlier, you can use @ApiTransformer to define a transformer for your class. This would transform an unserializable field, like those of type Key<MyClass>, into something else, like a Long.
It turns out that when a class is processed by Cloud Endpoints, methods called get{fieldName} and set{fieldName} are automatically used to transform the field {fieldName}. I have not been able to find this anywhere in Google's documentation.
Here is how I use it for the Key<Machine> property in my Exercise class:
public class Exercise {
    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    public Key<Machine> machine;
    // ... more properties

    public Long getMachineId() {
        return this.machine.getId();
    }

    public void setMachineId(Long machineId) {
        this.machine = new Key<Machine>(Machine.class, machineId);
    }
    // ...
}
Others have already mentioned how to approach this with @ApiResourceProperty and @ApiTransformer. But I do need the key available on the client side, and I don't want to write a transformer for every entity. I tried replacing the Objectify Key with com.google.appengine.api.datastore.Key, and it looks like it worked just fine in my case, since the problem here is mainly that endpoints do not support parameterized types.
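A sketch of that variant, based on the Car/Driver example above (assumptions: the low-level datastore Key is accepted because it is not parameterized, and the annotation imports are for Objectify 4):

import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class Car {
    @Id Long id;

    // Raw datastore key instead of Key<Driver>; no generics, so the
    // endpoints code generator can handle it.
    Key driver;

    public Long getDriverId() {
        return driver == null ? null : driver.getId();
    }

    public void setDriverId(Long driverId) {
        this.driver = KeyFactory.createKey("Driver", driverId);
    }
}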

Serializer library for Silverlight

I'm developing a modular app using Prism in SL3; one of the modules is responsible for persisting the application settings in isolated storage (so that when you open the app next time, you continue where you left off). It works perfectly, except that I don't like the way the dependencies are wired now.
I want to have a type-agnostic settings manager with a generic store, to which I add custom data from each module, something like this:
AppSettings["OpenForEditEmployees"] = new List<EmployeeDTO>();
AppSettings["ActiveView"] = ViewsEnum.Report;
I have implemented this part, but serialising that dictionary to XML proved to be harder than I expected. I was wondering if there is an easy way to serialise a Dictionary<string, object> into XML.
Since you are using a Dictionary, the regular XmlSerializer won't work; you can serialize using the DataContractSerializer instead.
These two static methods will handle all of your serialization/deserialization needs for XML in Silverlight (and any .NET).
You will need a reference to System.Runtime.Serialization for the DataContractSerializer.
public static void SerializeXml<T>(T obj, Stream strm)
{
    DataContractSerializer ser = new DataContractSerializer(typeof(T));
    ser.WriteObject(strm, obj);
}

public static T DeserializeXml<T>(Stream xml)
{
    DataContractSerializer ser = new DataContractSerializer(typeof(T));
    return (T)ser.ReadObject(xml);
}
If you would rather use JSON, you can add a reference to the System.ServiceModel.Web assembly and use this version instead.
public static void SerializeJson<T>(T obj, Stream strm)
{
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T));
    ser.WriteObject(strm, obj);
}

public static T DeserializeJson<T>(Stream json)
{
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T));
    return (T)ser.ReadObject(json);
}
Have you looked at Json.NET?
http://json.codeplex.com/
It's not XML, but it does a great job with serialization.
And it works great in Silverlight.
