Media.getDuration() for Video always returns 0 - codenameone

I am looking for the duration of a video I just recorded.
Code:
String file = Capture.captureVideo();
Media m = MediaManager.createMedia(file, true);
m.getDuration();
This returns 0 on PC, iOS, and Android.
Thoughts?

Duration isn't available in most cases until after media playback has started. Since the media might be streamed, determining it can take time, but normally just playing the media will make getDuration() return a valid value.
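For example, a minimal sketch of that pattern in Codename One (the muting, the 50 ms polling interval, and the helper name are illustrative assumptions; in a real app the polling loop should not block the EDT):

import com.codename1.capture.Capture;
import com.codename1.media.Media;
import com.codename1.media.MediaManager;

static int probeVideoDuration() throws Exception {
    String file = Capture.captureVideo();
    if (file == null) {
        return -1;                   // user cancelled the capture
    }
    Media m = MediaManager.createMedia(file, true);
    m.setVolume(0);                  // mute while probing
    m.play();                        // duration only becomes valid once playback starts
    while (m.getDuration() <= 0) {
        Thread.sleep(50);            // poll until the platform knows the duration
    }
    int durationMs = m.getDuration();
    m.pause();
    m.cleanup();
    return durationMs;
}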

Related

Is it possible to record audio in iOS with Codename One?

My app features a button to record audio (and another to play it back when the recording is over). I send the recorded audio files to a server. On Android the file is recorded as .amr (mime type audio/amr) and can be played back.
On iOS, however, the file can be played back neither on the device (iPhone 4 or 4S) nor on a computer. ffmpeg -i reports:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x2fac120] moov atom not found
9gMjOnnmsj9JJZR3.m4a: Invalid data found when processing input
Please note that VLC cannot play it either.
I use the .m4a extension because the voice recorder app uses it (along with the AAC codec).
Here is the code I use (mostly based on https://github.com/codenameone/CodenameOne/blob/master/Ports/iOSPort/src/com/codename1/impl/ios/IOSImplementation.java#L2768-L2794):
audioMemoPath = ParametresGeneraux.getWRITABLE_DIR() + "AudioMemo-"
        + System.currentTimeMillis()
        + (Display.getInstance().getPlatformName().equals("and")
                ? ".amr"
                : ".m4a");
audioMemoMimeType = MediaManager.getAvailableRecordingMimeTypes()[0];
audioMemoRecorder = MediaManager.createMediaRecorder(audioMemoPath, audioMemoMimeType);
// If the audio permission has not been granted,
// the audioMemoRecorder will be null.
if (audioMemoRecorder != null) {
    audioMemoRecorder.play();
    boolean b = Dialog.show("Recording", "", "Save", "Cancel");
    audioMemoRecorder.pause();
    audioMemoRecorder.cleanup();
    ...
}
Moreover, if I display the available mime types on iOS, it yields "audio/amr", which I doubt, since every post I could read says AMR is not supported on iOS. Looking at the source, it appears amr is the default mime type because it is always returned:
/**
 * Gets the available recording MimeTypes
 */
public String[] getAvailableRecordingMimeTypes() {
    return new String[]{"audio/amr"};
}
So my question: is it possible to record audio on iOS, and if so, how can it be done?
Any help appreciated.
Have you looked at the Capture class? That seems to be more straightforward.
https://www.codenameone.com/javadoc/index.html
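For instance, a minimal sketch (captureAudio() opens the platform's native recording UI and returns the recorded file's path, or null if the user cancels):

import com.codename1.capture.Capture;

String path = Capture.captureAudio();
if (path != null) {
    // upload the file at `path`, or play it back with MediaManager
}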
OK, I got it working by replacing some methods of the MediaManager, namely getAvailableRecordingMimeTypes(), and also createMediaRecorder() to prevent it from using the original getAvailableRecordingMimeTypes().
Here is the code for getAvailableRecordingMimeTypes():
/**
 * Normally the method returns amr even for iOS, so we replace the
 * original method to return aac on iOS.
 * @return the recording mime types available on the current platform
 */
public static String[] getAvailableRecordingMimeTypes() {
    if (Display.getInstance().getPlatformName().equals("ios")) {
        return new String[]{"audio/aac"};
    } else {
        return new String[]{"audio/amr"};
    }
}
createMediaRecorder() is left as is (copied without changes).
Now it is possible to record audio on iOS and play it back on both iOS and Android!

Silverlight Plugin Crashes in Video Conference

We have developed a video conferencing application using Silverlight.
It works properly for 15 to 19 minutes; then the video stops and the Silverlight plugin crashes.
For video encoding we are using the JPEG encoder; a single image from the CaptureSource gets encoded and sent on each tick of a timer.
I also tried to use Silversuite, but a popup appears saying Silversuite has expired.
Is there a proper solution for the encoding, the timer, or the plugin?
Thanks.
We extended the crash-free period from 15 minutes to about 1 to 1.5 hours by flushing the memory stream and decreasing the receiving buffer size.
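The thread's code is Silverlight/C#, but the idea is language-neutral. As a hedged illustration in Java, reset the accumulation buffer after every send so it never grows without bound (sendFrame() and the surrounding names are hypothetical):

import java.io.ByteArrayOutputStream;
import java.io.IOException;

ByteArrayOutputStream frameBuffer = new ByteArrayOutputStream();

void onTimerTick(byte[] encodedJpeg) throws IOException {
    frameBuffer.write(encodedJpeg);
    sendFrame(frameBuffer.toByteArray()); // hypothetical network send
    frameBuffer.reset();                  // "flush": drop the data once it is sent
}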

Synchronizing audio and video

I need to display streaming video using MediaElement in a Windows Phone application.
I'm getting a stream from a web service that contains frames in H.264 format and raw AAC bytes (strange, but ffmpeg can parse it only with the -f ac3 parameter).
If I try to play only one of the streams (audio OR video), it plays fine. But I have issues when I try both.
For example, if I report the video samples without timestamps and report the audio with timestamps, my video plays 3x-5x faster than it should:
MediaStreamSample msSamp = new MediaStreamSample(
    _videoDesc,
    vStream,
    0,
    vStream.Length,
    0,
    _emptySampleDict);
ReportGetSampleCompleted(msSamp);
From my web service I get DTS and PTS values for the video and audio frames in the following format:
120665029179960
but when I set them on the samples, my audio stream plays too slowly and with delays.
The timebase is 90 kHz.
So, could someone tell me how I can resolve this? Maybe I should calculate different timestamps for the samples? If so, please show me the way.
Thanks.
Okay, I solved it.
What I needed to do to sync A/V:
Calculate the right timestamp for each video and audio frame using the framerate.
For example, for video I have 90 kHz and for audio 48 kHz, at 25 frames per second, so my frame increments are:
_videoFrameTime = (int)TimeSpan.FromSeconds((double)0.9 / 25).Ticks;
_audioFrameTime = (int)TimeSpan.FromSeconds((double)0.48 / 25).Ticks;
And now we add these values to each sample:
private void GetAudioSample()
{
    ...
    /* Get the sample from the buffer. */
    MediaStreamSample msSamp = new MediaStreamSample(
        _audioDesc,
        audioStream,
        0,
        audioStream.Length,
        _currentAudioTimeStamp,
        _emptySampleDict);
    _currentAudioTimeStamp += _audioFrameTime;
    ReportGetSampleCompleted(msSamp);
}
For getting a video frame the method is the same, with a _videoFrameTime increment instead.
Hope this will be helpful for someone.
Roman.
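For reference, here is the underlying arithmetic as a minimal Java sketch (a .NET TimeSpan tick is 100 ns; note the nominal 25 fps increment works out to 400,000 ticks, slightly different from the constants above, and the standard 1024-sample AAC frame is assumed for the audio case):

// One .NET TimeSpan tick is 100 ns, i.e. 10,000,000 ticks per second.
static final long TICKS_PER_SECOND = 10_000_000L;

// Nominal per-frame increment for fixed-rate video.
static long videoFrameTicks(int fps) {
    return TICKS_PER_SECOND / fps;       // 25 fps -> 400,000 ticks (40 ms)
}

// For audio, the increment follows from samples per frame and sample rate.
static long audioFrameTicks(int samplesPerFrame, int sampleRate) {
    return TICKS_PER_SECOND * samplesPerFrame / sampleRate; // 1024 @ 48 kHz -> 213,333
}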

Windows Phone 7.1 play recorded PCM/WAV audio

I'm working on a WP7.1 app that records audio and plays it back. I'm using a MediaElement to play the audio back. The MediaElement works fine for playing MP4 files (actually M4A files renamed) downloaded from the server. However, when I try to play a recorded file, with or without the WAV RIFF header (PCM in both cases), it does not work. It gives me error code 3001, whose definition I cannot find anywhere.
Can anyone point me to some sample code for playing recorded audio in WP7.1 that does not use the SoundEffect class? I don't want to use the SoundEffect class because it's meant for short audio clips.
This is how I load the audio file:
using (IsolatedStorageFile storage = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (Stream stream = storage.OpenFile(audioSourceUri.ToString(), FileMode.Open))
    {
        m_mediaElement.SetSource(stream);
    }
}
The playing code looks good, so the issue has to be in the storing code. BTW, 3001 means AG_E_INVALID_FILE_FORMAT.
I just realized that the "Average bytes per second" RIFF header value was wrong. I was using the wrong value for Bits per Sample; it should have been 16 bits, since the microphone records 16-bit PCM.
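For reference, the RIFF fmt-chunk fields tie together like this (a minimal Java sketch of the derived values; the 16 kHz mono format is just an example):

int sampleRate    = 16000; // Hz, example value
int channels      = 1;
int bitsPerSample = 16;    // the WP7 microphone records 16-bit PCM

// Derived RIFF fmt-chunk fields:
int blockAlign = channels * bitsPerSample / 8; // bytes per sample frame -> 2
int byteRate   = sampleRate * blockAlign;      // "average bytes per second" -> 32000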

MJPEG internet streaming - accurate fps

I want to write an MJPEG internet stream viewer. I don't think fetching JPEG images over sockets is a very hard problem, but I want to know how to make the streaming accurate.
while (1)
{
    get_image();
    show_image();
    sleep(SOME_TIME); // how to make it accurate?
}
Any suggestions would be great.
In order to make it accurate, there are two possibilities:
Using the framerate from the streaming server. In this case the client needs to hold the same framerate: measure it each time you get a frame, then show the frame and sleep for a variable amount of time, steered by feedback. If the calculated framerate is higher than the server's, sleep more; if lower, sleep less; the client-side framerate will then drift around the server's value. The framerate can be received from the server while initializing the streaming connection (when you get the picture size and other parameters), or it can be configured.
The most accurate approach, actually, is to use per-frame timestamps from the server (taken from the file by the demuxer, or generated by the image sensor driver in the case of a camera device). If the MJPEG is packed into an RTP stream, these timestamps are already in the RTP header, so the client's task is trivial: show each picture at the time calculated from the time offset, the current timestamp, and the timebase.
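For the second approach, the per-frame computation is roughly this (a hedged Java sketch; 90 kHz is the usual RTP clock for video, and the field names are assumptions):

static final long RTP_TIMEBASE = 90_000L;   // RTP video clock ticks per second

long firstRtpTs = -1;
long wallStartNanos;

// Wall-clock deadline (in nanoTime() units) at which a frame should be shown.
long displayDeadlineNanos(long rtpTimestamp) {
    if (firstRtpTs < 0) {
        firstRtpTs = rtpTimestamp;          // anchor the first frame to "now"
        wallStartNanos = System.nanoTime();
    }
    long elapsedTicks = rtpTimestamp - firstRtpTs;
    return wallStartNanos + elapsedTicks * 1_000_000_000L / RTP_TIMEBASE;
}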
Update
For the first solution:
time_to_sleep = time_to_sleep_base = 1/framerate;
time = current_time();
while (1)
{
    get_image();
    show_image();
    sleep(time_to_sleep);

    /* update time to sleep from the measured per-frame rate */
    cur_time = current_time();
    cur_framerate = 1/(cur_time - time);
    if (cur_framerate > framerate)
        time_to_sleep += alpha*time_to_sleep;  /* too fast: sleep more */
    else
        time_to_sleep -= alpha*time_to_sleep;  /* too slow: sleep less */
    time = cur_time;
}
where alpha is a constant parameter controlling the reactivity of the feedback (0.1..0.5) to play with.
However, it's better to organize a queue of input images to make the display smoother. The queue size can be parameterized and should hold roughly 1 second of video, i.e. be numerically equal to the framerate.
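A hedged Java sketch of such a queue (getImage() and showImage() stand for the question's fetch and display steps; one second of capacity is the suggested sizing):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

final int framerate = 25;
final BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(framerate); // ~1 s

// Producer thread: the socket reader.
new Thread(() -> {
    try {
        while (true) {
            frames.put(getImage());         // blocks while the queue is full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Consumer thread: paced display.
new Thread(() -> {
    try {
        while (true) {
            showImage(frames.take());       // blocks while the queue is empty
            Thread.sleep(1000L / framerate);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();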
