I'm making an application for iOS and Android using ActionScript 3 and Adobe AIR (3.7) to build the IPA and APK. In this application, I load a video from an FLV file and add it to the stage.
The problem is that everything is fine in the emulator and in the Flash preview, but on the iPad (tested on iPad 1, 2 and 3 with the same results) the video makes short jumps (like a sudden freeze followed by a small skip forward in the timeline) roughly every 2 seconds.
Of course, I made sure the video wasn't underneath other elements or on top of moving clips. I tried loading the video without the rest of the interface: same result. I changed the renderMode to "direct" or "gpu": no change. I exported the video at different quality settings and made sure it wasn't being resized (even with dimensions that are multiples of 8): still no luck.
I use something similar to this code to load my video (it's the test code I used to make sure the problem wasn't elsewhere in my code):
var ns_onMetaData:Function = function(item:Object):void { };
var ns_onPlayStatus:Function = function(event:NetStatusEvent):void { };

var myVideo:Video = new Video();
this.addChild(myVideo);

var nc:NetConnection = new NetConnection();
nc.connect(null);

var ns:NetStream = new NetStream(nc);
ns.client = { onMetaData: ns_onMetaData }; // onMetaData goes on the client object
ns.addEventListener(NetStatusEvent.NET_STATUS, ns_onPlayStatus); // NetStatus comes through a regular event listener

myVideo.attachNetStream(ns);
ns.play("myLink.flv");
Thanks in advance, and sorry for my bad English.
You should not use FLV on iOS devices; that is my personal guess as to why you are seeing the "jump". FLV is software decoded, so it is relatively slow, and you are most likely experiencing dropped frames while the video is being decoded.
On iOS (and all mobile devices, really), you want to use h.264 video with an mp4 extension (m4v will work on iOS, but not on Android, I believe). For playback, you want to use either StageVideo or StageWebView rather than an AS3-based video-player. StageVideo will render using the actual media framework of the device. StageWebView will only work on iOS and certain Android devices, and will render using the actual media player of the device.
The difference between this and Video or FLVPlayback (or the Flex and OSMF-based video players) is that the video will be hardware accelerated and hardware decoded. This means your app's render time (and thus the video's render time) is no longer dictated by how fast the video can be decoded, because a separate chip handles that work.
Additionally, hardware accelerated video will be much, much better on battery life. I ran a test last year on an iPad 3 and the difference between battery life consumed by software/CPU decoded FLV and hardware decoded h.264 was somewhere in the neighborhood of 30%.
Keep in mind that neither of these options renders in the display list: StageWebView renders above the display list and StageVideo renders below it.
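For reference, here is a rough, untested sketch of the StageVideo path written in the same frame-script style as your test code; "myVideo.mp4" and the 640x480 viewport are placeholders, and (as far as I recall) StageVideo on AIR for mobile also needs renderMode set to "direct" in the app descriptor:

var myVideo:Video = new Video(640, 480); // fallback for when stage video is unavailable
this.addChild(myVideo);
var theStage:Stage = this.stage;

var nc:NetConnection = new NetConnection();
nc.connect(null);
var ns:NetStream = new NetStream(nc);
ns.client = { onMetaData: function(item:Object):void { } };

var onAvailability:Function = function(event:StageVideoAvailabilityEvent):void {
    if (event.availability == StageVideoAvailability.AVAILABLE && theStage.stageVideos.length > 0) {
        // Hardware path: the StageVideo plane renders behind the display list.
        var sv:StageVideo = theStage.stageVideos[0];
        sv.viewPort = new Rectangle(0, 0, 640, 480);
        sv.attachNetStream(ns);
    } else {
        // Software fallback: classic Video object in the display list.
        myVideo.attachNetStream(ns);
    }
    ns.play("myVideo.mp4"); // H.264 in an .mp4 container instead of FLV
};
theStage.addEventListener(StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY, onAvailability);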
I suggest viewing my previous answers on video rendering in AIR for Mobile apps as well; I have gone into more detail about video in AIR for mobile in the past. I have built three video-on-demand apps using AIR for Mobile now, and it is definitely a delicate task.
NetStream http Video not playing on IOS device
Optimizing video playback in AIR for iOS
How to implement stagevideo in adobe air for android?
Hopefully this is of some help to you.
Related
Whenever I try to play a locally saved video on Pepper's tablet, the error message "Video could not be played" is displayed on the tablet. I am using Choregraphe and its standard 'Play Video' box.
Here is a screenshot of the project
EDITED
I think the problem may occur because:
1. The path to the video is not set correctly, but I highly doubt this is the case.
2. The video format is not supported; the formats I have tested are mp4 and mov, converted from random YouTube videos.
So my question is: why can the video not be played on Pepper's tablet this way?
Try with just the file name my_video.mp4 as the parameter, with no quotes and no "/".
The recommended format is an mp4 container with H.264 video and AAC audio.
Also, disconnect the box's output, otherwise as soon as your video starts it will immediately be asked to stop ;-)
I am using Codename One to record the microphone input and play it back to the connected earphones.
First of all, if I record audio from the mic to a file and play it back once the recording is over, it works as expected. That's why, based on this 2014 question, I implemented two periodic tasks (a Timer and a TimerTask) as well as two files: one for recording, one for playing. I set the period of the tasks to values between 100 ms and a few seconds, but the result was awful on the Android device: there were random gaps, and the audio was neither smooth nor understandable.
I assume the overhead of writing to a file every period is too high and is causing this behaviour. So using only the high-level Codename One methods does not seem to be the way to go.
Then, in the same question from 2014, the asker suggests creating an InputStream from the recording Media and using it as the input of the playing Media. However, the method MediaManager.createMediaRecorderStream() does not seem to be available any more. I tried to use the recording file as the InputStream for the playing Media through fs.openInputStream(recFilepath), but it produced neither sound nor errors on the device.
So my question is whether I can achieve my goal with plain Codename One, or whether I have to use a native interface. Moreover, Shai wrote (in the 2014 question mentioned above) that the second approach with MediaManager.createMediaRecorderStream() might work on some platforms: is Android among these, or was only iOS targeted?
Any help appreciated, and sorry for not posting code: I deleted each attempt as soon as it did not appear to work, so my code no longer does anything I initially intended.
Cheers,
As far as I recall, Android back in the day didn't support an input stream for media, and later only allowed capturing input directly as uncompressed WAV, which makes full-duplex usage impractical. This might have changed since then, as I recall they did some overhaul of their media libraries.
I'm not sure whether this is exposed in our higher-level code. Besides using native interfaces, you can also help us improve Codename One by forking and hacking it; e.g., this is the relevant code in the Android project:
https://github.com/codenameone/CodenameOne/blob/master/Ports/Android/src/com/codename1/impl/android/AndroidImplementation.java#L2804-L2858
This is a contribution guide to Codename One; it covers running in the simulator, but that's a good start: https://www.codenameone.com/blog/how-to-use-the-codename-one-sources.html
You can test your changes on an Android device with instructions here: https://www.codenameone.com/blog/debug-a-codename-one-app-on-an-android-device.html
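To illustrate the native-interface route, here is a rough, untested sketch. NativeInterface and NativeLookup are the standard Codename One native-interface mechanism, but the AudioLoopback interface and the AudioLoopbackImpl class below are hypothetical names, the Android side needs the android.permission.RECORD_AUDIO permission, and an equivalent implementation would still be needed under native/ios.

// Codename One side: a hypothetical native interface.
public interface AudioLoopback extends com.codename1.system.NativeInterface {
    boolean isSupported();
    void start();
    void stop();
}

// From the app code:
//   AudioLoopback loop = com.codename1.system.NativeLookup.create(AudioLoopback.class);
//   if (loop != null && loop.isSupported()) { loop.start(); }

// Android implementation (placed under native/android as AudioLoopbackImpl),
// piping AudioRecord straight into AudioTrack with no file in between:
public class AudioLoopbackImpl {
    private volatile boolean running;

    public void start() {
        running = true;
        new Thread(new Runnable() {
            public void run() {
                final int rate = 44100;
                final int bufSize = android.media.AudioRecord.getMinBufferSize(rate,
                        android.media.AudioFormat.CHANNEL_IN_MONO,
                        android.media.AudioFormat.ENCODING_PCM_16BIT);
                android.media.AudioRecord rec = new android.media.AudioRecord(
                        android.media.MediaRecorder.AudioSource.MIC, rate,
                        android.media.AudioFormat.CHANNEL_IN_MONO,
                        android.media.AudioFormat.ENCODING_PCM_16BIT, bufSize);
                android.media.AudioTrack out = new android.media.AudioTrack(
                        android.media.AudioManager.STREAM_MUSIC, rate,
                        android.media.AudioFormat.CHANNEL_OUT_MONO,
                        android.media.AudioFormat.ENCODING_PCM_16BIT, bufSize,
                        android.media.AudioTrack.MODE_STREAM);
                short[] buf = new short[bufSize / 2];
                rec.startRecording();
                out.play();
                while (running) {
                    int n = rec.read(buf, 0, buf.length); // blocking read from the mic
                    if (n > 0) {
                        out.write(buf, 0, n);             // push the samples straight to playback
                    }
                }
                rec.stop();
                rec.release();
                out.stop();
                out.release();
            }
        }).start();
    }

    public void stop() {
        running = false;
    }

    public boolean isSupported() {
        return true;
    }
}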
This presentation on Core Audio in iOS 6, http://www.slideshare.net/invalidname/core-audioios6portland, seems to suggest (slide 87) that it is possible to override the automatic output/input routing of audio devices using AVAudioSession.
So, specifically: is it possible to have an external mic plugged into an iOS 6 device and output sound through the internal speaker? I've seen this asked before on this site (iOS: Route audio-IN thru jack, audio-OUT thru inbuilt speaker), but no answer was forthcoming.
Many thanks!
According to Apple's documentation:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/occ/instm/AVAudioSession/overrideOutputAudioPort:error:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/doc/c_ref/AVAudioSessionPortOverride
You can override output to the speaker, but if you look more closely at the reference for the C-based Audio Session Services (which are actually being deprecated, but still contain helpful information):
https://developer.apple.com/library/ios/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/constant_group/Audio_Session_Property_Identifiers
"If a headset is plugged in at the time you set this property's value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker."
I would suggest looking at the documentation for iOS 7 to see if they've added any new functionality. I'd also suggest running tests with external devices like iRiffPort or USB based inputs (if you have an iPad with CCK).
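For completeness, here is a rough, untested sketch of that route override using the deprecated C Audio Session Services (the same property the quote above refers to); error checking is omitted:

#include <AudioToolbox/AudioToolbox.h>

/* Force output to the built-in speaker. Per the quoted documentation, if a
 * headset is plugged in this also forces input to the built-in microphone. */
static void routeOutputToSpeaker(void)
{
    /* Initialize an audio session (no run loop or interruption callback here). */
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    /* Play-and-record category, since we want mic input and speaker output. */
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    /* Override the default route (the receiver) to the built-in speaker. */
    UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                            sizeof(route), &route);

    AudioSessionSetActive(true);
}

With the newer Objective-C API, the equivalent is calling overrideOutputAudioPort:error: on [AVAudioSession sharedInstance] with AVAudioSessionPortOverrideSpeaker, as in the first link above.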
I'm running into problems on WP7 with MediaElement downloading a 128 kbps MP3 stream from a web service for a music player app I'm working on. The file downloads correctly when the phone is on a Wi-Fi connection, but downloading sometimes stops when off Wi-Fi. The problem is that I'm not getting any errors or exceptions when the download fails, and the MediaElement state is still "playing": MediaElement runs right past the downloaded portion of the stream and acts like it is playing, but there is nothing left to play since the download stopped. I can somewhat replicate this issue based on my location and by using 3G instead of Wi-Fi, so I believe it is due to a weak connection. I don't believe any code needs to be shown in this instance, but I can post something if needed.
Do I have any control over this? Are there other events I could use to detect when the download has failed? Is there a more reliable way to download an MP3 stream and play it? Is there another player/component I should try?
Thanks in advance
You could always use MediaStreamSource to try to handle the download and implement streaming, to some extent. It is a more "painful" way of doing this since you will have to work with an extra media layer, but it pays off by improving playback stability.
Here is a starter example by Tim Heuer. Take a look specifically at how he takes advantage of a custom implementation of MediaStreamSource. Here is a more complex sample.
If streaming is not a requirement, you could download the file (and store it in the Isolated Storage) and then play from there.
I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (have to start/stop the screen capture program, save the video file, etc...). It seems like there should be a way to automate the process of making a video from within the C program. Then I can leave it running overnight and have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function (see example).
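For instance, here is a rough, untested sketch that dumps each rendered frame to a numbered PPM file (the frame counter, size and filename pattern are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>

/* Call once per frame, after drawing but before glutSwapBuffers(). */
void save_frame(int frame, int width, int height)
{
    unsigned char *pixels = (unsigned char *)malloc((size_t)width * height * 3);
    char name[64];
    FILE *f;
    int y;

    if (!pixels)
        return;

    /* Read the back buffer as tightly packed RGB bytes. */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    sprintf(name, "frame_%05d.ppm", frame);
    f = fopen(name, "wb");
    if (f) {
        fprintf(f, "P6\n%d %d\n255\n", width, height);
        /* OpenGL's origin is bottom-left, PPM's is top-left, so write rows flipped. */
        for (y = height - 1; y >= 0; y--)
            fwrite(pixels + (size_t)y * width * 3, 1, (size_t)width * 3, f);
        fclose(f);
    }
    free(pixels);
}

Something like ffmpeg -i frame_%05d.ppm output.mp4 (or the mencoder approach your friend uses) then turns the frame sequence into a video you can upload.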
But if the stuff you are trying to display is made of simple objects (i.e. spheres, rods, etc.), I would "export" each frame to a POV-Ray file, render those, and then make a video out of the resulting pictures. You will reach a much higher quality that way.
Use a 3rd party application like FRAPS to do the job for you.
"Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality."
They have video samples on the site. They seem good.
EDIT:
You could launch a screen-recording tool from your C application by calling it with something like system("C:\\screen_recorder_app.exe -params"). Check CamStudio; it has a command-line version.