I am trying to convert a byte[] back to an Object using Groovy. The actual Groovy class represented by the byte array implements the Serializable interface and lives in a separate Groovy class file. However, I always get a ClassNotFoundException for this class when calling my toObject function. The code below is written in Java and works in a pure Java project, but not when the class comes from Groovy.
private static byte[] toByteArray(Object obj) {
    byte[] bytes = null;
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try {
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(obj);
        oos.flush();
        oos.close();
        bos.close();
        bytes = bos.toByteArray();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return bytes;
}
private static Object toObject(byte[] bytes) {
    Object obj = null;
    try {
        ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
        ObjectInputStream ois = new ObjectInputStream(bis);
        obj = ois.readObject();
    } catch (Exception ex) {
        ex.printStackTrace(); // ClassNotFoundException is thrown here
    }
    return obj;
}
What is the best way to do this?
Edit: A class that would be used in this context, and for which the ClassNotFoundException occurs, looks like this:
public class MyItem implements Serializable {

    private static final long serialVersionUID = -615551050422222952L;

    public String text

    MyItem() {
        this.text = ""
    }
}
Then testing the whole thing:
void test() {
    MyItem item1 = new MyItem()
    item1.text = "bla"
    byte[] bytes = toByteArray(item1) // works
    Object o = toObject(bytes) // ClassNotFoundException: MyItem
    MyItem item2 = (MyItem) o
    System.out.print(item1.text + " <--> " + item2.text)
}
My guess (and this is what @JustinPiper suggested as well) is that you are not compiling your Groovy class before referencing it. You must compile your Groovy class using groovyc and make sure the compiled class file is placed on the classpath (probably in the same directory as the Java test class).
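If instead the class is compiled at runtime by Groovy (e.g. by a GroovyClassLoader), the default ObjectInputStream will not find it, because it resolves classes against the application class loader. A minimal sketch of a workaround, not taken from the question: make toObject resolve classes through the thread's context class loader, which is typically the loader that knows about Groovy-compiled classes. This replaces the ObjectInputStream construction inside toObject and additionally needs java.io.ObjectStreamClass imported:

// Sketch: an ObjectInputStream that resolves classes through the thread's
// context class loader instead of the default one. This is a common fix when
// the serialized class was loaded by a different (e.g. Groovy) class loader.
ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
ObjectInputStream ois = new ObjectInputStream(bis) {
    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        try {
            // look the class up through the context class loader first
            return Class.forName(desc.getName(), false,
                    Thread.currentThread().getContextClassLoader());
        } catch (ClassNotFoundException e) {
            return super.resolveClass(desc); // fall back to the default lookup
        }
    }
};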
I am using Apache Flink on Kinesis Data Analytics.
Flink version: 1.13.2
Java: 1.11
I am consuming JSON messages from Kafka. Sample input records look like this:
null {"plateNumber":"506b9910-74a7-4c3e-a885-b5e9717efe3a","vignetteStickerId":"9e69df3f-d728-4fc8-9b09-42104588f772","currentTimestamp":"2022/04/07 16:19:55","timestamp":1649362795.444459000,"vehicleType":"TRUCK","vehicleModelType":"TOYOTA"}
null {"plateNumber":"5ffe0326-571e-4b97-8f7b-4f49aebb6993","vignetteStickerId":"6c2e1342-b096-4cc9-a92c-df61571c2c7d","currentTimestamp":"2022/04/07 16:20:00","timestamp":1649362800.638060000,"vehicleType":"CAR","vehicleModelType":"HONDA"}
null {"plateNumber":"d15f49f9-5550-4780-b260-83f3116ba64a","vignetteStickerId":"1366fbfe-7d0a-475f-9249-261ef1dd6de2","currentTimestamp":"2022/04/07 16:20:05","timestamp":1649362805.643749000,"vehicleType":"TRUCK","vehicleModelType":"TOYOTA"}
null {"plateNumber":"803508fb-9701-438e-9028-01bb8d96a804","vignetteStickerId":"b534369f-533e-4c15-ac3f-fc28cf0f3aba","currentTimestamp":"2022/04/07 16:20:10","timestamp":1649362810.648813000,"vehicleType":"CAR","vehicleModelType":"FORD"}
I want to aggregate (sum) these records into 20-second windows, grouped by vehicleType (CAR or TRUCK) and vehicleModelType (TOYOTA, HONDA, or FORD). The SQL analogy would be sum() with GROUP BY vehicleType, vehicleModelType.
I am using an AggregateFunction to achieve this.
import static java.util.Objects.isNull;

import org.apache.flink.api.common.functions.AggregateFunction;
import org.springframework.stereotype.Component;

import com.helecloud.streams.demo.model.Vehicle;
import com.helecloud.streams.demo.model.VehicleStatistics;

@Component
public class VehicleStatisticsAggregator implements AggregateFunction<Vehicle, VehicleStatistics, VehicleStatistics> {

    private static final long serialVersionUID = 1L;

    @Override
    public VehicleStatistics createAccumulator() {
        System.out.println("Creating Accumulator!!");
        return new VehicleStatistics();
    }

    @Override
    public VehicleStatistics add(Vehicle vehicle, VehicleStatistics vehicleStatistics) {
        System.out.println("vehicle in add method : " + vehicle);
        if (isNull(vehicleStatistics.getVehicleType())) {
            vehicleStatistics.setVehicleType(vehicle.getVehicleType());
        }
        if (isNull(vehicleStatistics.getVehicleModelType())) {
            vehicleStatistics.setVehicleModelType(vehicle.getVehicleModelType());
        }
        // if(isNull(vehicleStatistics.getStart())) {
        //     vehicleStatistics.setStart(vehicle.getTimestamp());
        // }
        // if(isNull(vehicleStatistics.getCurrentTimestamp())) {
        //     vehicleStatistics.setCurrentTimestamp(vehicle.getCurrentTimestamp());
        // }
        if (isNull(vehicleStatistics.getCount())) {
            vehicleStatistics.setCount(1);
        } else {
            System.out.println("incrementing count for : vehicleStatistics : " + vehicleStatistics);
            vehicleStatistics.setCount(vehicleStatistics.getCount() + 1);
        }
        vehicleStatistics.setEnd(vehicle.getTimestamp());
        System.out.println("vehicleStatistics in add : " + vehicleStatistics);
        return vehicleStatistics;
    }

    @Override
    public VehicleStatistics getResult(VehicleStatistics vehicleStatistics) {
        System.out.println("vehicleStatistics in getResult : " + vehicleStatistics);
        return vehicleStatistics;
    }

    @Override
    public VehicleStatistics merge(VehicleStatistics vehicleStatistics, VehicleStatistics accumulator) {
        System.out.println("Coming to merge!!");
        VehicleStatistics vs = new VehicleStatistics(
                // vehicleStatistics.getStart(),
                accumulator.getEnd(),
                // vehicleStatistics.getCurrentTimestamp(),
                vehicleStatistics.getVehicleType(), vehicleStatistics.getVehicleModelType(),
                vehicleStatistics.getCount() + accumulator.getCount());
        System.out.println("VehicleStatistics in Merge :" + vs);
        return vs;
    }
}
In the above code I also never see the merge method being called.
Below is the main processing code:
@Service
public class ProcessingService {

    @Value("${kafka.bootstrap-servers}")
    private String kafkaAddress;

    @Value("${kafka.group-id}")
    private String kafkaGroupId;

    public static final String TOPIC = "flink_input";
    public static final String VEHICLE_STATISTICS_TOPIC = "flink_output";

    @Autowired
    private VehicleDeserializationSchema vehicleDeserializationSchema;

    @Autowired
    private VehicleStatisticsSerializationSchema vehicleStatisticsSerializationSchema;

    @PostConstruct
    public void startFlinkStreamProcessing() {
        try {
            processVehicleStatistic();
        } catch (Exception e) {
            // log.error("Cannot process", e);
            e.printStackTrace();
        }
    }

    public void processVehicleStatistic() {
        try {
            StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
            FlinkKafkaConsumer<Vehicle> consumer = createVehicleConsumerForTopic(TOPIC, kafkaAddress, kafkaGroupId);
            consumer.setStartFromLatest();
            System.out.println("Starting to consume!!");
            consumer.assignTimestampsAndWatermarks(WatermarkStrategy.forMonotonousTimestamps());
            FlinkKafkaProducer<VehicleStatistics> producer = createVehicleStatisticsProducer(VEHICLE_STATISTICS_TOPIC, kafkaAddress);
            DataStream<Vehicle> inputMessagesStream = environment.addSource(consumer);
            inputMessagesStream
                .keyBy((vehicle -> vehicle.getVehicleType().ordinal()))
                // .keyBy(vehicle -> vehicle.getVehicleModelType().ordinal())
                // .keyBy(new KeySelector<Vehicle, Tuple2<VehicleType, VehicleModelType>>() {
                //     private static final long serialVersionUID = 1L;
                //
                //     @Override
                //     public Tuple2<VehicleType, VehicleModelType> getKey(Vehicle vehicle) throws Exception {
                //         return Tuple2.of(vehicle.getVehicleType(), vehicle.getVehicleModelType());
                //     }
                // })
                // .filter(v -> CAR.equals(v.getVehicleType()))
                .window(TumblingEventTimeWindows.of(Time.seconds(20)))
                // .windowAll(TumblingEventTimeWindows.of(Time.seconds(10)))
                .aggregate(new VehicleStatisticsAggregator())
                .addSink(producer);
            System.out.println("Adding to Sink!!");
            environment.execute("Car Truck Counts By Model");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private FlinkKafkaConsumer<Vehicle> createVehicleConsumerForTopic(String topic, String kafkaAddress, String kafkaGroup) {
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", kafkaAddress);
        properties.setProperty("group.id", kafkaGroup);
        return new FlinkKafkaConsumer<>(topic, vehicleDeserializationSchema, properties);
    }

    private FlinkKafkaProducer<VehicleStatistics> createVehicleStatisticsProducer(String topic, String kafkaAddress) {
        return new FlinkKafkaProducer<>(kafkaAddress, topic, vehicleStatisticsSerializationSchema);
    }
}
I am getting the result as below.
null {"end":1649362835.665466000,"vehicleType":"TRUCK","vehicleModelType":"HONDA","count":3}
null {"end":1649362825.656024000,"vehicleType":"CAR","vehicleModelType":"TOYOTA","count":1}
null {"end":1649362850.675786000,"vehicleType":"CAR","vehicleModelType":"TOYOTA","count":3}
null {"end":1649362855.677596000,"vehicleType":"TRUCK","vehicleModelType":"TOYOTA","count":1}
But is there a way to validate this?
The other question: since I am trying to aggregate the result based on multiple keys, is AggregateFunction the correct way to do this?
I am asking because I saw this question: How can I sum multiple fields in Flink?
So if I have to sum over multiple fields, can an AggregateFunction (written the way I wrote the code) accomplish that?
Kindly let me know. Thanks in advance.
Merge will only be called if you are using windows that merge -- in other words, if you are using session windows, or a custom merging window.
The correct way to aggregate based on multiple keys is to use keyBy with a composite type, such as Tuple2<VehicleType, VehicleModelType>. Each time you call keyBy, the stream is repartitioned from scratch (not in addition to any previous partitioning).
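A minimal sketch of that composite keyBy, assuming the Vehicle getters and window setup from the question; the explicit TypeInformation is supplied because Flink cannot infer the Tuple2 key type through the lambda's type erasure (and note that merge() would only start firing if you switched to a merging window such as EventTimeSessionWindows):

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;

// Sketch: partition by vehicleType AND vehicleModelType at once.
inputMessagesStream
        .keyBy(vehicle -> Tuple2.of(vehicle.getVehicleType(), vehicle.getVehicleModelType()),
                TypeInformation.of(new TypeHint<Tuple2<VehicleType, VehicleModelType>>() {}))
        .window(TumblingEventTimeWindows.of(Time.seconds(20)))
        .aggregate(new VehicleStatisticsAggregator())
        .addSink(producer);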
This is the code of my class, which receives the JSON response in resp.
public class Caller extends Thread {
    public CallSoap cs;
    public String resp, Blood_Group, City;
    ClassObj object;
    Gson gson = new GsonBuilder().create();

    public void run() {
        try {
            cs = new CallSoap();
            resp = cs.CallGetDonorList(Blood_Group, City);
            object = gson.fromJson(resp, ClassObj.class);
            int a = 0;
        } catch (Exception ex) {
            MainActivity.rslt = ex.toString();
        }
    }
}
resp has value like this:
[
  {
    "Donor_Id": 1,
    "Donor_Name": "Rakesh",
    "Donor_ContactNo": 9044234578,
    "Blood_Group": "A+",
    "City": "Lucknow"
  },
  {....}
]
and I have created a class to serialize these values as:
import com.google.gson.annotations.SerializedName;

public class ClassObj {
    @SerializedName("Donor_Id")
    private int Donor_Id;

    @SerializedName("Donor_Name")
    private String Donor_Name;

    @SerializedName("Donor_ContactNo")
    private String Donor_ContactNo;

    @SerializedName("Blood_Group")
    private String Blood_Group;

    @SerializedName("City")
    private String City;
}
You're trying to deserialize an array of donor objects, not just a single one. As such, you'll need to specify the type correctly:
// declaration
ClassObj[] objects;

// deserialization
objects = gson.fromJson(resp, ClassObj[].class);
I would also strongly consider naming your classes and variables more descriptively, e.g. ClassObj could be Donor and objects could be donors.
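If you prefer collections over arrays, a hedged alternative is to deserialize into a List via Gson's TypeToken; a minimal sketch:

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.List;

// Sketch: deserialize the JSON array into a List instead of an array.
// TypeToken captures the generic type that would otherwise be erased.
Type listType = new TypeToken<List<ClassObj>>() {}.getType();
List<ClassObj> donors = new Gson().fromJson(resp, listType);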
I need to save multiple players' data. I am doing it by making an array of the PlayerInfo class and trying to convert the array into JSON. Here is my code:
PlayerInfo[] allPlayersArray = new PlayerInfo[1];
allPlayersArray[0] = new PlayerInfo();
allPlayersArray[0].playerName = "name 0";
string allPlayersArrayJson = JsonUtility.ToJson(allPlayersArray);
print(allPlayersArrayJson);
PlayerPrefs.SetString("allPlayersArrayJson", allPlayersArrayJson);
string newJson = PlayerPrefs.GetString("allPlayersArrayJson");
print(newJson);
PlayerInfo[] newArray = new PlayerInfo[1];
newArray = JsonUtility.FromJson<PlayerInfo[]>(newJson);
print(newArray[0].playerName);
The first two print statements return "{}" and the third one gives a null reference error. TIA
Like I said in my comment, there is no direct support; a helper class is needed. The only reason I am writing this answer is that you are still having problems even after reading the link I provided.
Create a new script called JsonHelper. Copy and paste the code below inside it.
using UnityEngine;
using System.Collections;
using System;

public class JsonHelper
{
    public static T[] FromJson<T>(string json)
    {
        Wrapper<T> wrapper = UnityEngine.JsonUtility.FromJson<Wrapper<T>>(json);
        return wrapper.Items;
    }

    public static string ToJson<T>(T[] array)
    {
        Wrapper<T> wrapper = new Wrapper<T>();
        wrapper.Items = array;
        return UnityEngine.JsonUtility.ToJson(wrapper);
    }

    [Serializable]
    private class Wrapper<T>
    {
        public T[] Items;
    }
}
The code in your question should now work. All you have to do is replace every occurrence of JsonUtility with JsonHelper. I did that for you below:
void Start()
{
    PlayerInfo[] allPlayersArray = new PlayerInfo[1];
    allPlayersArray[0] = new PlayerInfo();
    allPlayersArray[0].playerName = "name 0";

    string allPlayersArrayJson = JsonHelper.ToJson(allPlayersArray);
    print(allPlayersArrayJson);
    PlayerPrefs.SetString("allPlayersArrayJson", allPlayersArrayJson);

    string newJson = PlayerPrefs.GetString("allPlayersArrayJson");
    print(newJson);
    PlayerInfo[] newArray = JsonHelper.FromJson<PlayerInfo>(newJson);
    print(newArray[0].playerName);
}
Based on the JsonUtility documentation, naked arrays are not supported. Put the array inside a class.
Internally, this method uses the Unity serializer; therefore the object you pass in must be supported by the serializer: it must be a MonoBehaviour, ScriptableObject, or plain class/struct with the Serializable attribute applied.
In general, you'll use this to serialize MonoBehaviour objects, or a custom class/struct with the Serializable attribute.
YES! It is not supported currently, but there is always a workaround. Use this class:
public static class JsonHelper
{
    public static T[] getJsonArray<T>(string json)
    {
        try
        {
            string newJson = "{ \"array\": " + json + "}";
            Wrapper<T> wrapper = JsonUtility.FromJson<Wrapper<T>>(newJson);
            return wrapper.array;
        }
        catch (Exception ex)
        {
            throw new Exception(ex.Message);
        }
    }

    [Serializable]
    private class Wrapper<T>
    {
        public T[] array = null;
    }
}
I would use Json.NET
PlayerInfo[] allPlayersArray = new PlayerInfo[] { p1, p2, p3 };
string json = JsonConvert.SerializeObject(allPlayersArray);
More On SerializeObject
(Code example not yet tested)
We have a Java EE application that uses Jython to execute some Python scripts. Over time the used heap space grows until there is none left. In a heap dump I can see that there are a lot of Py* classes.
So I wrote a small test program:
TestApp
public class TestApp {

    private final ScriptEngineManager scriptEngineManager = new ScriptEngineManager();
    private HashMap<String, ScriptEngine> scriptEngines = new HashMap<String, ScriptEngine>();
    private final String scriptContainerPath = "";

    public static void main(String[] args) throws InterruptedException {
        int counter = 1;
        while (true) {
            System.out.println("iteration: " + counter);
            TestApp testApp = new TestApp();
            testApp.execute();
            counter++;
            Thread.sleep(100);
        }
    }

    void execute() {
        File scriptContainer = new File(scriptContainerPath);
        File[] scripts = scriptContainer.listFiles();
        if (scripts != null && scripts.length > 0) {
            Arrays.sort(scripts, new Comparator<File>() {
                @Override
                public int compare(File file1, File file2) {
                    return file1.getName().compareTo(file2.getName());
                }
            });
            for (File script : scripts) {
                String engineName = ScriptExecutor.getEngineNameByExtension(script.getName());
                if (!scriptEngines.containsKey(engineName)) {
                    scriptEngines.put(engineName, scriptEngineManager.getEngineByName(engineName));
                }
                ScriptEngine scriptEngine = scriptEngines.get(engineName);
                try {
                    ScriptExecutor scriptExecutor = new ScriptExecutor(scriptEngine, script, null);
                    Boolean disqualify = scriptExecutor.getBooleanScriptValue("disqualify");
                    String reason = scriptExecutor.getStringScriptValue("reason");
                    System.out.println("disqualify: " + disqualify);
                    System.out.println("reason: " + reason);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // cleanup
            for (Map.Entry<String, ScriptEngine> entry : scriptEngines.entrySet()) {
                ScriptEngine engine = entry.getValue();
                engine.getContext().setErrorWriter(null);
                engine.getContext().setReader(null);
                engine.getContext().setWriter(null);
            }
        }
    }
}
ScriptExecutor
public class ScriptExecutor {

    private final static String pythonExtension = "py";
    private final static String pythonEngine = "python";

    private final ScriptEngine scriptEngine;

    public ScriptExecutor(ScriptEngine se, File file, Map<String, Object> keyValues) throws FileNotFoundException, ScriptException {
        scriptEngine = se;
        if (keyValues != null) {
            for (Map.Entry<String, Object> entry : keyValues.entrySet()) {
                scriptEngine.put(entry.getKey(), entry.getValue());
            }
        }
        // execute script
        Reader reader = null;
        try {
            reader = new FileReader(file);
            scriptEngine.eval(reader);
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (IOException e) {
                    // nothing to do
                }
            }
        }
    }

    public Boolean getBooleanScriptValue(String key) {
        // convert Object to Boolean
    }

    public String getStringScriptValue(String key) {
        // convert Object to String
    }

    public static String getEngineNameByExtension(String fileName) {
        String extension = fileName.substring(fileName.lastIndexOf(".") + 1);
        if (pythonExtension.equalsIgnoreCase(extension)) {
            System.out.println("Found engine " + pythonEngine + " for extension " + extension + ".");
            return pythonEngine;
        }
        throw new RuntimeException("No suitable engine found for extension " + extension);
    }
}
In the specified directory are 14 python scripts that all look like this:
disqualify = True
reason = "reason"
I start this program with the following VM arguments:
-Xrs -Xms16M -Xmx16M -XX:MaxPermSize=32M -XX:NewRatio=3 -Dsun.rmi.dgc.client.gcInterval=300000 -Dsun.rmi.dgc.server.gcInterval=300000 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -server
These are the arguments our app server is running with. Only Xms, Xmx and MaxPermSize are smaller in my test case.
When I run this application I can see that the CMS Old Gen pool increases to its max size. After that the Par Eden Space pool increases, and at some point the ParNew GC does not run anymore. The cleanup part improved the situation but didn't resolve the problem. Does anybody have an idea why my heap isn't completely cleaned?
I think I have found a solution for my problem: I removed the JSR-223 usage and now use the PythonInterpreter directly.
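A minimal sketch of what the direct approach might look like, assuming Jython and a script that defines disqualify and reason (the script path below is hypothetical); explicitly cleaning the interpreter up is what lets its Py* state be collected:

import org.python.core.PyObject;
import org.python.util.PythonInterpreter;

// Sketch: evaluate a script with PythonInterpreter instead of JSR-223
// and clean the interpreter up afterwards so its state can be GC'd.
PythonInterpreter interpreter = new PythonInterpreter();
try {
    interpreter.execfile("/path/to/script.py"); // hypothetical script location
    PyObject disqualify = interpreter.get("disqualify");
    PyObject reason = interpreter.get("reason");
    System.out.println("disqualify: " + disqualify);
    System.out.println("reason: " + reason);
} finally {
    interpreter.cleanup(); // release interpreter state
}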
Hi, I am building a dynamic web project in which the welcome page has a Struts2 file tag. Now I want to store the selected file in a MySQL database. Would someone help me?
Thanks in advance.
Here is the code I developed, but it takes the file parameter statically, i.e. I am specifying the path manually. It should instead take the path from the Struts2 file tag; see the Java class and you will get it.
public class FileUploadAction {

    public String execute() throws IOException {
        System.out.println("Hibernate save image into database");
        Session session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();

        // save image into database
        File file = new File("C:\\mavan-hibernate-image-mysql.gif");
        byte[] bFile = new byte[(int) file.length()];
        try {
            FileInputStream fileInputStream = new FileInputStream(file);
            // convert file into array of bytes
            fileInputStream.read(bFile);
            fileInputStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        FileUpload tfile = new FileUpload();
        tfile.setImage(bFile);
        session.save(tfile);

        // get image from database
        FileUpload tfile2 = (FileUpload) session.get(FileUpload.class, tfile.getAvatarId());
        byte[] bAvatar = tfile2.getImage();
        try {
            FileOutputStream fos = new FileOutputStream("C:\\test.gif");
            fos.write(bAvatar);
            fos.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        session.getTransaction().commit();
        return "success";
    }
}
You should be storing the image in the table as a BLOB type. Let's assume you have a Person class with an image of the person stored in the DB. If you want to map this, just add a property to your Person POJO that holds the image.
@Column(name = "image")
@Lob
private Blob image;
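For the storing direction, one hedged option (assuming Hibernate 3.6 or newer, an open Session, and the Person entity above) is to create the Blob from raw bytes with Hibernate's LobHelper; a minimal sketch:

import java.io.File;
import java.sql.Blob;
import org.apache.commons.io.FileUtils;
import org.hibernate.Session;

// Sketch: build a Blob from a byte[] via the session's LobHelper and persist it.
byte[] bytes = FileUtils.readFileToByteArray(new File("C:\\mavan-hibernate-image-mysql.gif"));
Blob imageBlob = session.getLobHelper().createBlob(bytes);
person.setImage(imageBlob);
session.save(person);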
When you display it, convert it to a byte[] and show.
private byte[] toByteArray(Blob fromImageBlob) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        return toByteArrayImpl(fromImageBlob, baos);
    } catch (Exception e) {
        // ignored
    }
    return null;
}

private byte[] toByteArrayImpl(Blob fromImageBlob, ByteArrayOutputStream baos)
        throws SQLException, IOException {
    byte[] buf = new byte[4000];
    int dataSize;
    InputStream is = fromImageBlob.getBinaryStream();
    try {
        while ((dataSize = is.read(buf)) != -1) {
            baos.write(buf, 0, dataSize);
        }
    } finally {
        if (is != null) {
            is.close();
        }
    }
    return baos.toByteArray();
}
You can see the examples below to learn more about it:
http://i-proving.com/space/Technologies/Hibernate/Blobs+and+Hibernate
http://snehaprashant.blogspot.com/2008/08/how-to-store-and-retrieve-blob-object.html
http://viralpatel.net/blogs/2011/01/tutorial-save-get-blob-object-spring-3-mvc-hibernate.html
Well, you need not do this manually; when you use Struts2 to upload the file, its built-in fileUpload interceptor will do the heavy lifting for you.
All you need to do is declare some properties in your action class so that the framework can inject the required data, and you can do the rest of the work.
Here is what you have to do. In your JSP page, use the <s:file> tag:
<s:form action="doUpload" method="post" enctype="multipart/form-data">
    <s:file name="upload" label="File"/>
    <s:submit/>
</s:form>
The fileUpload interceptor will use setter injection to insert the uploaded file and related data into your Action class. For a form field named upload you would provide the three setter methods shown in the following example:
And in your action class, this is all you have to do:
public class UploadAction extends ActionSupport {

    private File file;
    private String contentType;
    private String filename;

    public void setUpload(File file) {
        this.file = file;
    }

    public void setUploadContentType(String contentType) {
        this.contentType = contentType;
    }

    public void setUploadFileName(String filename) {
        this.filename = filename;
    }

    public String execute() {
        //...
        return SUCCESS;
    }
}
The uploaded file will be treated as a temporary file with a long random file name, and you have to copy it somewhere permanent inside your action class's execute() method. You can take help of FileUtils.
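A minimal sketch of such an execute(), assuming Apache Commons IO's FileUtils and a destination directory of your choosing (the path below is hypothetical):

import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

public String execute() throws IOException {
    // Copy the injected temp file to a permanent location before the
    // request ends (the interceptor deletes the temp file afterwards).
    File destination = new File("/data/uploads/" + filename); // hypothetical path
    FileUtils.copyFile(file, destination);

    // From here you could read the bytes and persist them as a BLOB,
    // e.g. FileUtils.readFileToByteArray(destination) with Hibernate.
    return SUCCESS;
}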
I suggest you read the official Struts2 file upload documentation for the complete configuration: Struts2 File-upload.