VideoFrame frame to byte array

I'm using the Ant Media SDK to develop a WebRTC-capable app. The thing is that I need to send some frames every 300 ms to a recognition service. I have found that there's a listener that passes the frames:
surfaceTextureHelper.startListening((VideoFrame frame) -> {
    // some code
});
The question is, how can I get a byte[] from that frame? With the Camera1 API you had setPreviewCallback and received a byte[] with the frame directly. How can I do the same conversion?
Thanks in advance
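One possible approach is sketched below, assuming the standard org.webrtc VideoFrame/I420Buffer API; the class and helper names are illustrative, and whether your recognition service wants packed I420 or NV21 is something you would need to check:
import java.nio.ByteBuffer;
import org.webrtc.VideoFrame;

// Illustrative helper (name is mine, not from the SDK): packs a VideoFrame into an I420 byte[].
final class FrameBytes {

    static byte[] toI420Bytes(VideoFrame frame) {
        // toI420() may copy/convert (e.g. from an OES texture buffer); release it when done.
        VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();
        int width = i420.getWidth();
        int height = i420.getHeight();
        int chromaWidth = (width + 1) / 2;
        int chromaHeight = (height + 1) / 2;
        byte[] out = new byte[width * height + 2 * chromaWidth * chromaHeight];
        int pos = 0;
        pos = copyPlane(i420.getDataY(), i420.getStrideY(), width, height, out, pos);
        pos = copyPlane(i420.getDataU(), i420.getStrideU(), chromaWidth, chromaHeight, out, pos);
        copyPlane(i420.getDataV(), i420.getStrideV(), chromaWidth, chromaHeight, out, pos);
        i420.release();
        return out;
    }

    // Copy one possibly strided plane row by row into the packed destination array.
    private static int copyPlane(ByteBuffer src, int stride, int rowLength, int rows, byte[] dst, int pos) {
        ByteBuffer buf = src.duplicate();   // avoid disturbing the source buffer's position
        for (int row = 0; row < rows; row++) {
            buf.position(row * stride);
            buf.get(dst, pos, rowLength);
            pos += rowLength;
        }
        return pos;
    }
}
Inside the listener you could then throttle to every 300 ms (for example by comparing System.currentTimeMillis() against the time of the last send) before calling FrameBytes.toI420Bytes(frame). If the service expects the NV21 layout that Camera1's setPreviewCallback delivered, the U and V planes would still need to be interleaved after this step.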

Related

From iOS Objective-C code and Android Java code to a Codename One PeerComponent

The page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-ios shows the following examples:
if (self.goCoder != nil) {
    // Associate the U/I view with the SDK camera preview
    self.goCoder.cameraView = self.view;

    // Start the camera preview
    [self.goCoder.cameraPreview startPreview];
}

// Start streaming
[self.goCoder startStreaming:self];

// Stop the broadcast that is currently running
[self.goCoder endStreaming:self];
The equivalent Android Java code is shown at https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-android#start-the-camera-preview:
// Associate the WOWZCameraView defined in the U/I layout with the corresponding class member
goCoderCameraView = (WOWZCameraView) findViewById(R.id.camera_preview);

// Start the camera preview display
if (mPermissionsGranted && goCoderCameraView != null) {
    if (goCoderCameraView.isPreviewPaused())
        goCoderCameraView.onResume();
    else
        goCoderCameraView.startPreview();
}
// Start streaming
goCoderBroadcaster.startBroadcast(goCoderBroadcastConfig, this);
// Stop the broadcast that is currently running
goCoderBroadcaster.endBroadcast(this);
The code is self-explanatory: the first blocks start a camera preview, the second start streaming, and the third stop it. I want the preview and the streaming inside a Codename One PeerComponent, but I don't remember / understand how to modify both of these native code examples to return a PeerComponent from the native interface.
(I tried to read the developer guide again but I'm a bit confused on this point.)
Thank you
This is the key line in the iOS instructions:
self.goCoder.cameraView = self.view;
Here you define the view that you need to return to the peer so we can place it. You need to change it from self.view to a view object you create; I think you can just allocate a UIView and assign/return that.
For the Android code, instead of using the XML layout they use there, you can create the WOWZCameraView directly and return that, as far as I can tell.
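To make the Android side concrete, here is a rough sketch of what the native interface implementation could look like. It assumes Codename One maps a PeerComponent return type to android.view.View in the generated Android implementation, that AndroidNativeUtil.getActivity() is available, and that WOWZCameraView can be constructed from a Context; the class/package names and the constructor are assumptions to verify against the GoCoder SDK.
import android.view.View;
import com.codename1.impl.android.AndroidNativeUtil;
import com.wowza.gocoder.sdk.api.devices.WOWZCameraView;   // package path is an assumption

public class GoCoderNativeImpl {
    private WOWZCameraView cameraView;

    // Declared as returning PeerComponent in the Codename One native interface;
    // on Android the generated implementation returns android.view.View instead.
    public View createCameraPreview() {
        // View creation should happen on the Android UI thread in a real app.
        cameraView = new WOWZCameraView(AndroidNativeUtil.getActivity());   // assumed constructor
        cameraView.startPreview();
        return cameraView;   // Codename One wraps this View in the PeerComponent
    }

    public boolean isSupported() {
        return true;
    }
}
On the iOS side the idea is the same: allocate a UIView, assign it to goCoder.cameraView instead of self.view, and return that view from the corresponding native method.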

Can I use the Codename One Camera Kit library to take a picture automatically from code?

I'm new to Codename One and I can't understand how to take a picture from the camera using captureImage() from the Camera Kit library.
I know it's possible with the Capture API (Capture.capturePhoto()), but that API opens the device's camera application to take the photo, and I want to do it directly.
I created a button :
FloatingActionButton capture_button =
FloatingActionButton.createFAB(FontImage.MATERIAL_CAMERA);
capture_button.bindFabToContainer(hi, CENTER, BOTTOM);
capture_button.addActionListener(e -> {
ck.captureImage();
.............
and after that I tried to get the picture from the onImage callback, but it does not work.
@Override
public void onImage(CameraEvent ev) {
    try {
        byte[] jpegData = ev.getJpeg();
        // copy the JPEG bytes into Storage
        InputStream stream = new ByteArrayInputStream(jpegData);
        OutputStream out = Storage.getInstance().createOutputStream("MyImage.jpg");
        Util.copy(stream, out);
        Util.cleanup(stream);
        Util.cleanup(out);
        StorageImage img = StorageImage.create("MyImage.jpg", jpegData, -1, -1);
        ............................
    }
The byte array is empty. Help please.
Camera Kit broke a bit after its release due to changes in the underlying Camera Kit library, which still isn't at the 1.0 level. This is tracked in this issue. Camera Kit was supposed to reach 1.0 status months ago but still hasn't reached that point. We are waiting for it to be at the 1.0 level so we can make fixes against a stable version.
We also need a bit of time/resources to do that work, which is something we are sorely lacking.

Silverlight Plug-in Crashes in Video Conference

We have developed a video conferencing application using Silverlight.
It works properly for 15 to 19 minutes, then the video stops and the Silverlight plug-in crashes.
For video encoding we are using the JPEG encoder; a single image from the CaptureSource is encoded and sent on each tick of a timer.
I also tried to use Silversuite, but a popup message appears saying Silversuite has expired.
Is there a proper solution for the encoding, the timer, or the plug-in?
Thanks.
We extended the time before the crash from 15 minutes to about 1 to 1.5 hours by flushing the memory stream and decreasing the receive buffer size.

Synchronization of audio and video

I need to display streaming video using a MediaElement in a Windows Phone application.
I'm getting a stream from a web service that contains frames in H264 format and raw AAC bytes (strange, but ffmpeg can only parse it with the -f ac3 parameter).
If I try to play only one of the streams (audio OR video), it plays fine. But I have issues when I try both.
For example, if I report the video samples without a timestamp and report the audio with a timestamp, my video plays 3x-5x faster than it should.
MediaStreamSample msSamp = new MediaStreamSample(
    _videoDesc,
    vStream,
    0,
    vStream.Length,
    0,
    _emptySampleDict);
ReportGetSampleCompleted(msSamp);
From my web service I get a DTS and PTS for the video and audio frames in the following format:
120665029179960
but when I set it on the samples, my audio stream plays too slowly and with delays.
The timebase is 90 kHz.
So, could someone tell me how I can resolve this? Maybe I should calculate different timestamps for the samples? If so, please show me the way.
Thanks.
Okay, I solved it.
What I needed to do to sync A/V was calculate the right timestamp for each video and audio frame using the framerate.
For example, with 90 kHz for video, 48 kHz for audio and 25 frames per second, my frame increments are:
_videoFrameTime = (int)TimeSpan.FromSeconds((double)0.9 / 25).Ticks;
_audioFrameTime = (int)TimeSpan.FromSeconds((double)0.48 / 25).Ticks;
Now we add these increments to each sample:
private void GetAudioSample()
{
    ...
    /* Getting sample from buffer */
    MediaStreamSample msSamp = new MediaStreamSample(
        _audioDesc,
        audioStream,
        0,
        audioStream.Length,
        _currentAudioTimeStamp,
        _emptySampleDict);

    _currentAudioTimeStamp += _audioFrameTime;
    ReportGetSampleCompleted(msSamp);
}
Getting a video frame works the same way, incrementing by _videoFrameTime instead.
Hope this will be helpful to someone.
Roman.

Windows Phone 7.1 play recorded PCM/WAV audio

I'm working on a WP7.1 app that records audio and plays it back. I'm using a MediaElement to play back the audio. The MediaElement works fine for playing MP4 files (actually M4A files renamed) downloaded from the server. However, when I try to play a recorded file, with or without the WAV RIFF header (PCM in both cases), it does not work. It gives me error code 3001, for which I cannot find the definition anywhere.
Can anyone point me to some sample code for playing recorded audio in WP7.1 that does not use the SoundEffect class? I don't want to use the SoundEffect class because it's meant for short audio clips.
This is how I load the audio file:
using (IsolatedStorageFile storage = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (Stream stream = storage.OpenFile(audioSourceUri.ToString(), FileMode.Open))
    {
        m_mediaElement.SetSource(stream);
    }
}
The playback code looks good; the issue has to be in the storing code. BTW, 3001 means AG_E_INVALID_FILE_FORMAT.
I just realized that the "Average bytes per second" RIFF header value was wrong. I was using the wrong Bits per Sample value; it should have been 16, since the microphone records 16-bit PCM.
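For anyone hitting the same 3001/AG_E_INVALID_FILE_FORMAT error, the byte-rate field is derived from the other fmt-chunk values; here is a quick sketch of the arithmetic (the 16 kHz mono figures below are assumed for illustration, not taken from the post):
// RIFF/WAVE fmt-chunk arithmetic; sample rate and channel count are assumed values.
int sampleRate = 16000;
int channels = 1;
int bitsPerSample = 16;                           // 16-bit PCM, as noted above
int blockAlign = channels * bitsPerSample / 8;    // bytes per sample frame = 2
int avgBytesPerSec = sampleRate * blockAlign;     // "Average bytes per second" field = 32000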
