JavaFX - Playing a video in a loop

How should I loop a video in JavaFX?
I'm trying to play a video over and over again, so I looked for sample code in many places, but I couldn't make it work!
This is what doesn't work for me:
private Media media;
private MediaPlayer mediaPlayer;
private MediaView mediaView;
public MyMediaPlayer (){
media = new Media(getVideo());
mediaPlayer = new MediaPlayer(media);
mediaView = new MediaView(mediaPlayer);
startMediaPlayer();
}
private String getVideo() {
return getClass().getResource("videos/limbo.mp4").toString();
}
public final void startMediaPlayer() {
mediaPlayer.setMute(true);
mediaPlayer.setCycleCount(javafx.scene.media.MediaPlayer.INDEFINITE); //this is the line that should do the magic, but it doesn't...
mediaPlayer.play();
}

The following works for me (video loops forever). I can't replicate your issue.
import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.media.*;
import javafx.stage.Stage;
public class VideoPlayerExample extends Application {
public static void main(String[] args) throws Exception { launch(args); }
@Override public void start(final Stage stage) throws Exception {
final MediaPlayer oracleVid = new MediaPlayer(
new Media("http://download.oracle.com/otndocs/products/javafx/oow2010-2.flv")
);
stage.setScene(new Scene(new Group(new MediaView(oracleVid)), 540, 208));
stage.show();
oracleVid.setMute(true);
oracleVid.setRate(20);
oracleVid.setCycleCount(MediaPlayer.INDEFINITE);
oracleVid.play();
}
}
I'm on Java 7; it doesn't work there... the problem seems to be the MP4 format.
If you can't play MP4 files, either:
The MP4 is not encoded in a format JavaFX understands (the JavaFX 2.2 Media javadoc details the allowed formats).
OR
You don't have appropriate codecs installed on your machine to allow the MP4 file to be decoded. See the JavaFX 2.2 Media system requirements for information on what you need to install on your machine to allow MP4 files to be displayed.
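One way to narrow down which of those two cases applies is to register an error handler on the player before calling play(). This is just a minimal diagnostic sketch against the question's code:
mediaPlayer.setOnError(new Runnable() {
    @Override
    public void run() {
        // getError() returns a MediaException describing whether the source,
        // container format, or codec is the problem.
        System.err.println("Media error: " + mediaPlayer.getError());
    }
});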

Related

Taking a screenshot of a remote desktop using Sikuli

I have the following Java code which I am using to capture a screenshot:
import org.sikuli.script.Screen;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;
public class Screenshot{
public static void main(String[] args) throws IOException
{
Screen screen = new Screen();
ImageIO.write(screen.capture(screen).getImage(), "png", new File("D:\\myScreen.png"));
}
}
I compile the piece of code using:
javac -classpath .;sikulixapi-2.0.4.jar Screenshot.java
and run it with:
java -classpath .;sikulixapi-2.0.4.jar Screenshot
I tried to run it remotely, using
psexec \\xx.xx.xxx.xxx -w "D:\Sikuli" java -classpath .;sikulixapi-2.0.4.jar Screenshot
The result is not the picture of the remote screen, but only a black background.
Is there any way to make this work?
To check whether you could in principle get a shot of the remote screen this way, you can use what SikuliX uses internally: java.awt.Robot.
Try this:
import java.awt.*;
import java.awt.image.BufferedImage;
...
BufferedImage img = new Robot().createScreenCapture(new Rectangle(0, 0, 500, 500));
... and then your code to store the image somewhere.
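A complete, self-contained version of that check might look like this (the output file name is just an example):
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
public class RobotCheck {
    public static void main(String[] args) throws Exception {
        // Capture the top-left 500x500 pixels of the primary screen.
        BufferedImage img = new Robot().createScreenCapture(new Rectangle(0, 0, 500, 500));
        // If this comes out black, the remote session has no real, unlocked display.
        ImageIO.write(img, "png", new File("robotCheck.png"));
    }
}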
If the image is black, then you have a problem with the monitor setup on the remote system. It must be non-headless (real screen) and Robot must have access to an unlocked screen.
RaiMan from SikuliX

Codename One "out of memory" when using Object-C native interface (HEIC to JPEG conversion)

Since I'm implementing a custom gallery for Android and iOS, I have to directly access the gallery files stored in the FileSystemStorage through native interfaces.
The basic idea is to retrieve the file list through a native interface, and then build a cross-platform GUI in Codename One. This works on Android: I had to make the thumbnail generation (on the Codename One side, not on the native interface side) as fast as possible, and the overall result is quite acceptable.
On iOS, I have an additional issue, which is the HEIC image file format that needs to be converted to JPEG to become usable in Codename One. Basically, I get the file list through the code in this question (I'm waiting for an answer...), then I have to convert each HEIC file to a temporary JPEG file, but my HEICtoJPEG native interface makes the app crash after a few images with an "out of memory" Xcode message...
I suspect that the problematic code is the following, maybe the UIImage* image and/or the NSData* mediaData are never released:
#import "myapp_utilities_HEICtoJPEGNativeImpl.h"
@implementation myapp_utilities_HEICtoJPEGNativeImpl
-(NSData*)heicToJpeg:(NSData*)param{
UIImage* image = [UIImage imageWithData:param];
NSData* mediaData = UIImageJPEGRepresentation(image, 0.9);
return mediaData;
}
-(BOOL)isSupported{
return YES;
}
@end
This is the Java native interface:
import com.codename1.system.NativeInterface;
/**
* @deprecated
*/
public interface HEICtoJPEGNative extends NativeInterface {
public byte[] heicToJpeg(byte[] heicInput);
}
and this is the Java public API:
import com.codename1.io.FileSystemStorage;
import com.codename1.io.Util;
import com.codename1.system.NativeLookup;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class HEICtoJPEG {
private static HEICtoJPEGNative nativeInterface = NativeLookup.create(HEICtoJPEGNative.class);
/**
* Public API to convert an HEIC file to a new JPEG file (placed in /heic)
* @param heicFile in the FileSystemStorage
* @return a new file (with unique name)
*/
public static String convertToJPEG(String heicFile) throws IOException {
if (nativeInterface != null && nativeInterface.isSupported()) {
// It ensures that the directory exists.
FileSystemStorage fss = FileSystemStorage.getInstance();
String heicDir = fss.getAppHomePath() + "/heic";
if (!fss.isDirectory(heicDir)) {
fss.mkdir(heicDir);
}
ByteArrayOutputStream outHeic = new ByteArrayOutputStream();
InputStream inHeic = fss.openInputStream(heicFile);
Util.copy(inHeic, outHeic);
byte[] heicData = outHeic.toByteArray();
byte[] jpegData = nativeInterface.heicToJpeg(heicData);
String jpegFile = heicDir + "/" + DeviceUtilities.getUniqueId() + ".jpg";
OutputStream outJpeg = fss.openOutputStream(jpegFile);
ByteArrayInputStream inJpeg = new ByteArrayInputStream(jpegData);
Util.copy(inJpeg, outJpeg);
return jpegFile;
} else {
return null;
}
}
}
Since the Android counterpart works, I hope that the rest of my custom gallery code is fine and that this out-of-memory issue is in the code I posted here.
I hope you can point me to a working solution. Thank you
There was a memory leak in the way that the iOS port invoked native interface methods which received or returned primitive arrays (byte[], int[], etc..).
I have just committed a fix for this (native interface invocations are now wrapped in an autorelease pool) which will be available on the build server next Friday (October 9, 2020).
EDIT: (Friday October 2, 2020)
This fix has already been deployed to the build server, so you should be able to build again immediately and see if it fixes your issue.

Using date-time format for fileName in apache camel

I am trying to use a date-time format as the file name in Apache Camel using the fileName option. The program does not throw any error, but it does not create any file in the "output" folder. So I tried something like this:
from("stream:in?promptMessage=Enter Something:").
to("file:C:\\output?fileName=abc.txt");
Running the above code generates an "abc.txt" file in the "output" folder. But when I use the date syntax with the fileName option in the code below, no file is generated in the "output" folder.
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import java.time.LocalDateTime;
public class Filetransfer {
public static void main(String[] args) throws Exception {
CamelContext context = new DefaultCamelContext();
context.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
// TODO Auto-generated method stub
from("stream:in?promptMessage=Enter Something:").
to("file:C:\\output?fileName=${date:now:ddMMyyyy-hh:mm:ss}.txt");
}
});
while(true)
context.start();
//Thread.sleep(10000);
//context.stop();
}
}
The format for hours is HH (capitals). I am running Camel 3.2 and this works for me:
wireTap("file:data/out?fileName=${date:now:yyyy/MM/dd/HH-mm-ss}.json")
I think the colons ":" between hh, mm and ss were causing the trouble. I replaced them with "-" and now I can see the files generated with date and time. Thanks btw Sneharghya Pathak :)

Please confirm this is the right way to stream data to Hadoop using Flink

I need some help with Flink Streaming. I have produced a simple Hello-world type of code below. It streams Avro messages from RabbitMQ and persists them to HDFS. I hope someone can review the code, and maybe it can help others.
Most examples I've found for Flink streaming send results to stdout. I actually wanted to save the data to Hadoop. I read that, in theory, you can stream with Flink to wherever you like. I haven't actually found any example that saves data to HDFS. But, based on the examples I did find, and trial and error, I have come up with the code below.
The source of the data here is RabbitMQ. I use a client app to send "MyAvroObjects" to RabbitMQ. MyAvroObject.java - not included - is generated from Avro IDL... It can be any Avro message.
The code below consumes the RabbitMQ messages and saves them to HDFS as Avro files... Well, that's what I hope.
package com.johanw.flink.stackoverflow;
import java.io.IOException;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.mapred.AvroKey;
import org.apache.avro.mapred.AvroOutputFormat;
import org.apache.avro.mapred.AvroWrapper;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.hadoop.mapred.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.FileSinkFunctionByMillis;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class RMQToHadoop {
public class MyDeserializationSchema implements DeserializationSchema<MyAvroObject> {
private static final long serialVersionUID = 1L;
@Override
public TypeInformation<MyAvroObject> getProducedType() {
return TypeExtractor.getForClass(MyAvroObject.class);
}
@Override
public MyAvroObject deserialize(byte[] array) throws IOException {
SpecificDatumReader<MyAvroObject> reader = new SpecificDatumReader<MyAvroObject>(MyAvroObject.getClassSchema());
Decoder decoder = DecoderFactory.get().binaryDecoder(array, null);
MyAvroObject MyAvroObject = reader.read(null, decoder);
return MyAvroObject;
}
@Override
public boolean isEndOfStream(MyAvroObject arg0) {
return false;
}
}
private String hostName;
private String queueName;
public final static String path = "/hdfsroot";
private static Logger logger = LoggerFactory.getLogger(RMQToHadoop.class);
public RMQToHadoop(String hostName, String queueName) {
super();
this.hostName = hostName;
this.queueName = queueName;
}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
public void run() {
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
logger.info("Running " + RMQToHadoop.class.getName());
DataStream<MyAvroObject> socketStockStream = env.addSource(new RMQSource<>(hostName, queueName, new MyDeserializationSchema()));
Job job;
try {
job = Job.getInstance();
AvroJob.setInputKeySchema(job, MyAvroObject.getClassSchema());
} catch (IOException e1) {
e1.printStackTrace();
}
try {
JobConf jobConf = new JobConf(Job.getInstance().getConfiguration());
jobConf.set("avro.output.schema", MyAvroObject.getClassSchema().toString());
org.apache.avro.mapred.AvroOutputFormat<MyAvroObject> akof = new AvroOutputFormat<MyAvroObject>();
HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable> hof = new HadoopOutputFormat<AvroWrapper<MyAvroObject>, NullWritable>(akof, jobConf);
FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>> fileSinkFunctionByMillis = new FileSinkFunctionByMillis<Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>(hof, 10000l);
org.apache.hadoop.mapred.FileOutputFormat.setOutputPath(jobConf, new Path(path));
socketStockStream.map(new MapFunction<MyAvroObject, Tuple2<AvroWrapper<MyAvroObject>, NullWritable>>() {
private static final long serialVersionUID = 1L;
@Override
public Tuple2<AvroWrapper<MyAvroObject>, NullWritable> map(MyAvroObject envelope) throws Exception {
logger.info("map");
AvroKey<MyAvroObject> key = new AvroKey<MyAvroObject>(envelope);
Tuple2<AvroWrapper<MyAvroObject>, NullWritable> tupple = new Tuple2<AvroWrapper<MyAvroObject>, NullWritable>(key, NullWritable.get());
return tupple;
}
}).addSink(fileSinkFunctionByMillis);
try {
env.execute();
} catch (Exception e) {
logger.error("Error while running " + RMQToHadoop.class + ".", e);
}
} catch (IOException e) {
logger.error("Error while running " + RMQToHadoop.class + ".", e);
}
}
public static void main(String[] args) throws IOException {
RMQToHadoop toHadoop = new RMQToHadoop("localhost", "rabbitTestQueue");
toHadoop.run();
}
}
If you prefer another source, other than RabbitMQ, then it works fine using another source instead. E.g. using a Kafka consumer:
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
...
DataStreamSource<MyAvroObject> socketStockStream = env.addSource(new FlinkKafkaConsumer082<MyAvroObject>(topic, new MyDeserializationSchema(), sourceProperties));
Questions:
Please review. Is this good practice for saving data to HDFS?
What if the streaming process causes an issue, say during serialisation? It generates an exception, and the code just exits. Spark Streaming depends on YARN automatically restarting the app. Is this also good practice when using Flink?
I'm using the FileSinkFunctionByMillis. I was actually hoping to use something like a HdfsSinkFunction, but that doesn't exist. So the FileSinkFunctionByMillis was the closest to this, which made sense to me. Again, the documentation that I found lacks any explanation of what to do, so I'm only guessing.
When I run this locally, I find a directory structure like "C:\hdfsroot_temporary\0_temporary\attempt__0000_r_000001_0", which is... bizarre. Any ideas here?
By the way, when you want to save the data back to Kafka, I was able to do so using...
Properties destProperties = new Properties();
destProperties.setProperty("bootstrap.servers", bootstrapServers);
FlinkKafkaProducer<MyAvroObject> kafkaProducer = new FlinkKafkaProducer<MyAvroObject>("MyKafkaTopic", new MySerializationSchema(), destProperties);
Many thanks in advance!!!!
I think FileSinkFunctionByMillis can be used, but this would mean that your streaming program is not fault-tolerant. Meaning that if your sources, machines, or writes fail, then your program will crash without being able to recover.
I suggest you look at using the RollingSink (https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#hadoop-filesystem). This can be used to create Flume-like pipelines to ingest data into HDFS (or other file systems). The rolling sink is a recoverable sink, meaning that your program would be fault-tolerant, since the Kafka consumer is also fault-tolerant. You can also specify a custom Writer to write the data in any format you want, for example Avro.
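As a rough illustration of what the linked documentation describes, the RollingSink could be wired into the stream from the question roughly like this. The HDFS URI is a placeholder, and StringWriter is used only to keep the sketch short; for real Avro output you would plug in a custom Writer as mentioned above:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.StringWriter;
// socketStockStream is the DataStream<MyAvroObject> from the question.
DataStream<String> textStream = socketStockStream.map(new MapFunction<MyAvroObject, String>() {
    @Override
    public String map(MyAvroObject value) {
        return value.toString(); // placeholder conversion
    }
});
RollingSink<String> sink = new RollingSink<String>("hdfs://namenode:8020/hdfsroot");
sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH")); // one bucket directory per hour
sink.setWriter(new StringWriter<String>());
sink.setBatchSize(64L * 1024 * 1024); // roll to a new part file at roughly 64 MB
textStream.addSink(sink);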

How to make a java interface that can open an image file and save it to database and resize it?

Say I have a code like this:
import java.awt.*;
import java.awt.event.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import javax.swing.*;
public class InsertImg2Dbase extends JFrame implements ActionListener{
JButton open = new JButton("Open image to save...");
private Connection con;
public void getConnection(){
try{
if(con==null){
Class.forName("com.ibm.db2.jcc.DB2Driver").newInstance();
con=DriverManager.getConnection("jdbc:db2://localhost:50001/sample","username","password");
}
}catch (SQLException e){
JOptionPane.showMessageDialog(null, e.getMessage());
}catch (Exception e){
JOptionPane.showMessageDialog(null, e);
}
}
public void actionPerformed(ActionEvent e){
Object source = e.getSource();
if(source==open){
// this is where the event when opening the file to be saved will be coded
}
}
public InsertImg2Dbase(){
// register this frame itself as the listener; "new InsertImg2Dbase()" here would recurse forever
open.setBounds(20, 20, 175, 25);
open.addActionListener(this);
add(open);
}
public void properties(){
setLayout(null);
setTitle("Open Image To Save");
setResizable(false);
setSize(220,90);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setLocationRelativeTo(null);
setVisible(true);
}
public static void main(String[] args){
InsertImg2Dbase ins = new InsertImg2Dbase();
ins.properties();
}
}
What should I add to the code so that the JButton will open a file explorer to choose an image file (such as JPEG, PNG, GIF, BMP) that is then saved to my database (say my table name is "images") as a BLOB? And can I add a function which resizes the image, let's say to 300x650, before it is saved to the database?
I would welcome any kind of help. I'm still learning, and if you could guide me I would be more than grateful. Thank you
You could incorporate an external tool inside DB2 in order to extend the basic functionality and add support for specific things, such as, in your case, images. For this you need ImageMagick; then you create some external stored procedures and that's it.
A very good tutorial about this is here in DeveloperWorks: http://www.ibm.com/developerworks/data/library/techarticle/dm-0504stolze/
Once you have done that, you can call DB2 procedures that manipulate the stored image before returning it to the application.
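If you would rather handle everything on the Java side instead, the usual pattern is a JFileChooser to pick the file, an ImageIO/Graphics2D resize, and a PreparedStatement that writes the bytes into a BLOB column. The following is only a sketch: the "images" table with name/data columns is an assumption, and the block would go inside the if(source==open) branch of actionPerformed (with the matching java.awt.image, java.io, java.sql, javax.imageio and javax.swing.filechooser imports added):
JFileChooser chooser = new JFileChooser();
chooser.setFileFilter(new FileNameExtensionFilter("Images", "jpg", "jpeg", "png", "gif", "bmp"));
if (chooser.showOpenDialog(this) == JFileChooser.APPROVE_OPTION) {
    try {
        File file = chooser.getSelectedFile();
        BufferedImage original = ImageIO.read(file);
        // Resize to 300x650 by redrawing into a new image.
        BufferedImage resized = new BufferedImage(300, 650, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = resized.createGraphics();
        g.drawImage(original, 0, 0, 300, 650, null);
        g.dispose();
        // Encode the resized image and insert it as a BLOB.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(resized, "jpg", baos);
        getConnection(); // make sure con is initialised
        PreparedStatement ps = con.prepareStatement("INSERT INTO images (name, data) VALUES (?, ?)");
        ps.setString(1, file.getName());
        ps.setBytes(2, baos.toByteArray());
        ps.executeUpdate();
        ps.close();
    } catch (Exception ex) {
        JOptionPane.showMessageDialog(null, ex);
    }
}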
