android webrtc rotate VideoFrame - webrtc-android

I tried the code below to rotate a VideoFrame by 90 degrees, but it doesn't work:
private VideoFrame rotateFrame(VideoFrame inVideoFrame) {
    VideoFrame outVideoFrame = new VideoFrame(
            inVideoFrame.getBuffer(),
            90, inVideoFrame.getTimestampNs());
    return outVideoFrame;
}
What's the correct logic to rotate a VideoFrame?

I had the same problem, and your code works for me.
I publish a stream to Wowza Streaming Engine using WebRTC, and while debugging I saw that video frames are published to Wowza with an orientation of 270. When I receive the stream back from the service, the orientation is 0, so I change it back before passing each frame to the SurfaceViewRenderer. That works:
remoteVideoTrack.addSink {
    remoteView?.onFrame(VideoFrame(it.buffer, 270, -1))
}
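
Back on the original question: in the org.webrtc stack the rotation field of a VideoFrame is metadata that the renderer or encoder applies later, so the usual fix is to combine the extra 90 degrees with the frame's existing rotation instead of overwriting it. A minimal sketch, assuming the standard org.webrtc VideoFrame API:

private VideoFrame rotateFrame(VideoFrame inVideoFrame) {
    VideoFrame.Buffer buffer = inVideoFrame.getBuffer();
    // The new frame shares the buffer, so take an extra reference to it.
    buffer.retain();
    // Add 90 degrees to the frame's existing rotation instead of replacing it.
    int rotation = (inVideoFrame.getRotation() + 90) % 360;
    return new VideoFrame(buffer, rotation, inVideoFrame.getTimestampNs());
}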

Related

VideoFrame to byte array

I'm using the Ant Media SDK to develop a WebRTC-capable app. The thing is that I need to send some frames every 300 ms to a recognition service. I have found that there's a listener that passes the frames:
surfaceTextureHelper.startListening((VideoFrame frame) -> {
    // some code
});
The thing is, how can I get a byte[] from that frame? With the Camera1 API you had setPreviewCallback and received a byte[] with the frame directly. How can I do the same conversion?
Thanks in advance
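
One common approach, sketched below on the assumption that the SDK hands you the standard org.webrtc VideoFrame type: convert the buffer to I420 and copy the Y, U and V planes into a single byte[]. The copyPlane helper is hypothetical; it copies row by row because each plane may be padded to a stride wider than the image.

private byte[] frameToByteArray(VideoFrame frame) {
    VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();
    int width = i420.getWidth();
    int height = i420.getHeight();
    int chromaWidth = (width + 1) / 2;
    int chromaHeight = (height + 1) / 2;
    byte[] out = new byte[width * height + 2 * chromaWidth * chromaHeight];
    int pos = 0;
    pos = copyPlane(i420.getDataY(), i420.getStrideY(), width, height, out, pos);
    pos = copyPlane(i420.getDataU(), i420.getStrideU(), chromaWidth, chromaHeight, out, pos);
    copyPlane(i420.getDataV(), i420.getStrideV(), chromaWidth, chromaHeight, out, pos);
    i420.release(); // toI420() returns a retained buffer that we must release
    return out;
}

private int copyPlane(java.nio.ByteBuffer src, int stride, int width, int height,
                      byte[] dst, int pos) {
    for (int row = 0; row < height; row++) {
        src.position(row * stride);
        src.get(dst, pos, width);
        pos += width;
    }
    return pos;
}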

From iOS Objective-C code and Android Java code to a Codename One PeerComponent

On the page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-ios there are the following examples:
if (self.goCoder != nil) {
    // Associate the U/I view with the SDK camera preview
    self.goCoder.cameraView = self.view;
    // Start the camera preview
    [self.goCoder.cameraPreview startPreview];
}

// Start streaming
[self.goCoder startStreaming:self];

// Stop the broadcast that is currently running
[self.goCoder endStreaming:self];
The equivalent Java code for Android is reported on the page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-android#start-the-camera-preview; it is:
// Associate the WOWZCameraView defined in the U/I layout with the corresponding class member
goCoderCameraView = (WOWZCameraView) findViewById(R.id.camera_preview);

// Start the camera preview display
if (mPermissionsGranted && goCoderCameraView != null) {
    if (goCoderCameraView.isPreviewPaused())
        goCoderCameraView.onResume();
    else
        goCoderCameraView.startPreview();
}

// Start streaming
goCoderBroadcaster.startBroadcast(goCoderBroadcastConfig, this);

// Stop the broadcast that is currently running
goCoderBroadcaster.endBroadcast(this);
The code is self-explanatory: the first block starts the camera preview, the second starts the streaming, and the third stops it. I want the preview and the streaming inside a Codename One PeerComponent, but I don't remember / understand how to modify both of these native code examples so that they return a PeerComponent to the native interface.
(I tried to read the developer guide again, but I'm a bit confused on this point.)
Thank you
This is the key line in the iOS instructions:
self.goCoder.cameraView = self.view;
Here you define the view that you need to return to the peer so that we can place it. You need to change it from self.view to a view object you create. I think you can just allocate a UIView and assign/return that.
For the Android code, instead of inflating the WOWZCameraView from the XML layout they use there, you can create the WOWZCameraView directly and return that, as far as I can tell.
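
To make that concrete on the Android side, here is a minimal sketch of what the native interface implementation could look like. The class and method names are hypothetical, and it assumes WOWZCameraView has the usual View(Context) constructor; the key point is that a native interface method declared in Java to return PeerComponent maps to returning an android.view.View in the Android implementation, which Codename One then wraps for you.

// Hypothetical Android implementation class for a Codename One native interface
// whose Java-side method is declared as: public PeerComponent createCameraView();
public class GoCoderNativeImpl {
    private WOWZCameraView goCoderCameraView;

    // A PeerComponent return type maps to android.view.View on Android.
    public android.view.View createCameraView() {
        // Create the view in code instead of inflating it from an XML layout.
        goCoderCameraView = new WOWZCameraView(
                com.codename1.impl.android.AndroidNativeUtil.getActivity());
        goCoderCameraView.startPreview();
        return goCoderCameraView; // the returned View becomes the PeerComponent
    }

    public boolean isSupported() {
        return true;
    }
}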

Can I use the Codename One Camera Kit library to take a picture automatically from the code?

I'm new to using Codename One and I can't understand how to take a picture from the camera using captureImage() from the Camera Kit library.
I know it's possible with the Capture API (Capture.capturePhoto()), but that library uses a separate camera application to take the photo, and I want to do this directly.
I created a button:
FloatingActionButton capture_button =
        FloatingActionButton.createFAB(FontImage.MATERIAL_CAMERA);
capture_button.bindFabToContainer(hi, CENTER, BOTTOM);
capture_button.addActionListener(e -> {
    ck.captureImage();
    .............
After that I tried to get my picture from the onImage callback, but it does not work:
@Override
public void onImage(CameraEvent ev) {
    try {
        byte[] jpegData = ev.getJpeg();
        // Write the JPEG bytes to storage, then wrap them as an image
        InputStream stream = new ByteArrayInputStream(jpegData);
        OutputStream out = Storage.getInstance().createOutputStream("MyImage.jpg");
        Util.copy(stream, out);
        Util.cleanup(stream);
        Util.cleanup(out);
        StorageImage img = StorageImage.create("MyImage.jpg", jpegData, -1, -1);
        ............................
}
The byte array is empty. Help, please.
Camera Kit broke a bit after its release due to changes in Camera Kit, which is still not at 1.0 level. This is tracked in this issue. Camera Kit was supposed to reach 1.0 status months ago but still hasn't reached that point. We are waiting for it to be at 1.0 level so we can make fixes against a stable version.
We also need a bit of time/resources to do that work, which is something we are sorely lacking.
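
Until Camera Kit stabilizes, the Capture API route mentioned in the question is the reliable path, even though it goes through the platform camera app. A short sketch reusing the button setup from the question (hi is assumed to be the current Form):

FloatingActionButton capture_button =
        FloatingActionButton.createFAB(FontImage.MATERIAL_CAMERA);
capture_button.bindFabToContainer(hi, Component.CENTER, Component.BOTTOM);
capture_button.addActionListener(e -> {
    // Opens the platform camera app and blocks until it returns;
    // the result is a file path, or null if the user cancelled.
    String path = Capture.capturePhoto();
    if (path != null) {
        try {
            Image img = Image.createImage(path);
            // use img here, e.g. show it in a Label or upload it
        } catch (java.io.IOException err) {
            Log.e(err);
        }
    }
});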

Finding eye position on camera using camera2 api android

Below is my code snippet for getting faces on the camera using the camera2 API. I am able to get the eye position only on a few devices; the rest return null values. Is there a way to find the eye position on the camera using the camera2 API?
Integer mode = result.get(CaptureResult.STATISTICS_FACE_DETECT_MODE);
Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
if (faces != null && mode != null) {
    if (faces.length > 0) {
        Rect rect = faces[0].getBounds();
        Log.e("tag", "faces : leftEye" + faces[0].getLeftEyePosition());
        Log.e("tag", "faces : RightEye" + faces[0].getRightEyePosition());
    }
}
Face detection is a feature that needs to be supported by the underlying camera module and is not implemented by the Android framework itself, hence your code works on certain devices and fails on the rest. I believe the Android framework doesn't have any explicit API for face detection beyond this.
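
One thing worth checking before reading the landmarks (a sketch against the standard camera2 API; context, cameraId and requestBuilder are assumed to come from your existing camera setup): eye and mouth positions are only populated in FULL face-detect mode, and many devices only advertise SIMPLE mode, which fills in the bounding rectangle but leaves the landmark getters null.

try {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    int[] modes = chars.get(CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
    boolean fullSupported = false;
    if (modes != null) {
        for (int m : modes) {
            if (m == CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL) {
                fullSupported = true;
            }
        }
    }
    if (fullSupported) {
        // FULL mode populates Face.getLeftEyePosition()/getRightEyePosition();
        // SIMPLE mode only guarantees the face bounds.
        requestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL);
    }
} catch (CameraAccessException e) {
    Log.e("tag", "Unable to query camera characteristics", e);
}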

Windows 10 MediaClip rotation

How can I get video file rotation information using Windows 10 universal platform API?
For instance, I have a video of size 1920 x 1080 that is rotated by 90 degrees, so effectively it is 1080 x 1920.
The MediaElement control renders the video fine, but one of the steps in my processing pipeline is affected by this rotation, and I need to detect this situation.
Sample MediaInfo from the source file is here:
I couldn't find any way to discover it directly, but I found a workaround using this piece of code:
private async Task<bool> CheckVideoFileRotated(StorageFile file)
{
    var profile = await MediaEncodingProfile.CreateFromFileAsync(file);
    var mediaClip = await MediaClip.CreateFromFileAsync(file);
    var videoProp = mediaClip.GetVideoEncodingProperties();
    return profile.Video.Width == videoProp.Height &&
           profile.Video.Height == videoProp.Width;
}
With this I was able to detect whether rotation is in place. Luckily I don't need the more specific information of whether it is 90 or 270 degrees.
