iOS 6 audio multi-route: use external microphone AND internal speaker simultaneously

This presentation on Core Audio in iOS 6 (http://www.slideshare.net/invalidname/core-audioios6portland) seems to suggest (slide 87) that it is possible to override the automatic output/input routing of audio devices using AVAudioSession.
So, specifically, is it possible to have an external mic plugged into an iOS 6 device and output sound through the internal speaker? I've seen this asked before on this site (iOS: Route audio-IN thru jack, audio-OUT thru inbuilt speaker) but no answer was forthcoming.
Many thanks!

According to Apple's documentation:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/occ/instm/AVAudioSession/overrideOutputAudioPort:error:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/doc/c_ref/AVAudioSessionPortOverride
You can override output to the speaker. But if you look more closely at the reference for the C-based Audio Session Services (which is deprecated, but still has helpful information):
https://developer.apple.com/library/ios/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/constant_group/Audio_Session_Property_Identifiers
If a headset is plugged in at the time you set this property’s value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker.
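To illustrate, here is a minimal sketch against that deprecated C API (my own example, not from the docs; error handling is omitted):

    #include <AudioToolbox/AudioToolbox.h>

    // One-time session setup (NULL run loop and callbacks use the defaults).
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    // Force output to the built-in speaker. Per the passage quoted above,
    // if a headset is plugged in this also forces input to the built-in mic,
    // so it does NOT give you "external mic in, internal speaker out".
    UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                            sizeof(route), &route);
    AudioSessionSetActive(true);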
I would suggest looking at the iOS 7 documentation to see if they've added any new functionality. I'd also suggest running tests with external devices like the iRiffPort or USB-based inputs (if you have an iPad with the Camera Connection Kit).

Related

Chromium Embedded Framework cannot access system audio with getDisplayMedia

Is it possible to capture the system screen & audio with Chromium Embedded Framework using getUserMedia or getDisplayMedia? I've managed to get a video-only stream of the system so far, but I cannot get audio capture to work.
In standard Chrome you can get the system audio by using:
navigator.mediaDevices.getDisplayMedia({ video: true, audio: true })
This results in a popup where you can tick a checkbox to enable audio capture. The stream has an audio track labeled as "System Audio".
In CEF this popup dialog does not exist (but it can be skipped using a launch-config flag). When calling getDisplayMedia in the web application, you get an audio track as well, but the track is labeled as "Fake audio". It seems that this track is actually the sound of a microphone and not the system audio.
Any idea why this does not work? Is it actually implemented in the CEF core? I wasn't able to find any info on this. Thanks in advance!
This is supported with the Chrome runtime.
Run your app with the --enable-chrome-runtime command-line switch, or set the chrome_runtime CEF setting to true (1).
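In C++, the settings route looks roughly like this (a sketch; main_args and app stand in for whatever your bootstrap code already passes to CefInitialize):

    #include "include/cef_app.h"

    CefSettings settings;
    settings.chrome_runtime = true;  // same effect as --enable-chrome-runtime

    // main_args and app come from your existing startup code.
    CefInitialize(main_args, settings, app, nullptr);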

Azure Kinect: how to find the windows device id

I have been developing a Win application that uses 3 Azure Kinects. Since there is no C# wrapper available yet, I made a C++ app that does what I need and the C# app just grabs its output files.
I now need to figure out which camera is which. In the C# app I can get the Windows device ID in a form similar to:
\\.\USB#VID_045E&PID_097C#001007692912#{A5DCBF10-6530-11D2-901F-00C04FB951ED}
However, the C API for the Kinect only provides ways to get the serial number of the device.
I tried to dig into the API, since I'm sure it must be somewhere in the code, but due to my limited C skills I got lost pretty quickly.
Anybody with the same issue, or anybody who can help?
Thanks,
Guido
The SDK is designed to use the serial number specifically to determine which device or devices you are connected to. If you are just trying to use 2 Kinects with 2 instances of your C# app, then you will need to open devices until you find the serial number you are looking for. If you are trying to use multiple devices in a master/subordinate configuration, then you can query the jack state to determine whether you have connected to one or the other. A rough sketch of that enumeration loop is shown below.
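Something along these lines with the C API (a sketch; open_by_serial and target_serial are my own hypothetical names, and error handling is trimmed):

    #include <k4a/k4a.h>
    #include <string.h>

    // Open the attached Kinect whose serial number matches target_serial.
    k4a_device_t open_by_serial(const char *target_serial)
    {
        uint32_t count = k4a_device_get_installed_count();
        for (uint32_t i = 0; i < count; i++)
        {
            k4a_device_t device = NULL;
            if (k4a_device_open(i, &device) != K4A_RESULT_SUCCEEDED)
                continue;

            char serial[64];
            size_t size = sizeof(serial);
            if (k4a_device_get_serialnum(device, serial, &size) == K4A_BUFFER_RESULT_SUCCEEDED
                && strcmp(serial, target_serial) == 0)
            {
                return device;  // this is the Kinect we were looking for
            }
            k4a_device_close(device);
        }
        return NULL;  // no attached device matches
    }

For the master/subordinate case, k4a_device_get_sync_jack() is the call that reports the state of the sync-in and sync-out jacks.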
Also, please be aware that we just released our own C# wrapper for the SDK. Check out https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/608 and https://microsoft.github.io/Azure-Kinect-Sensor-SDK/master/namespace_microsoft_1_1_azure_1_1_kinect_1_1_sensor.html for more details.

Is it possible with Codename One to record microphone input and play it back simultaneously?

I am using Codename One to record the microphone input and play it back to the connected earphones.
First of all, if I record audio from the mic to a file and play it back when the recording is over, it works as expected. That's why, based on this 2014 question, I implemented 2 periodic tasks (a Timer and a TimerTask), along with 2 files: one for recording, one for playing. I set the periodic task period to values between 100 ms and a few seconds, but the result was awful on the Android device: there were random gaps, and the playback was not smooth at all, nor understandable.
I assume the overhead of writing to a file every period is too high and is consequently causing that behaviour. So using the standard high-level Codename One methods does not seem to be the way to go.
Then, in the same question from 2014, the asker suggests creating an InputStream from the recording Media and using it as input for the playing Media. However, the method MediaManager.createMediaRecorderStream() does not seem to be available anymore. I tried to use the file used for recording as the InputStream for the playing Media through fs.openInputStream(recFilepath), but it did not output any sound, nor any error, on the device.
So my question is whether I can achieve my goal with bare Codename One, or do I have to use a native interface? Moreover, Shai (in the 2014 question mentioned above) wrote that the second approach with MediaManager.createMediaRecorderStream() might work on some platforms: is Android among these, or was only iOS targeted?
Any help appreciated, and sorry for not posting code: I cleared it as soon as an attempt did not appear to work, so by now my code is not doing anything I initially targeted.
Cheers,
As far as I recall, Android back in the day didn't support an input stream for media, and later only allowed capturing input directly as uncompressed WAV, which makes full-duplex usage impractical. This might have changed since, as I recall they did some overhaul of their media libraries.
I'm not sure if this is exposed in our higher-level code. Besides using native interfaces, you can also help us improve Codename One by forking and hacking it; e.g., this is the relevant code in the Android project:
https://github.com/codenameone/CodenameOne/blob/master/Ports/Android/src/com/codename1/impl/android/AndroidImplementation.java#L2804-L2858
This contribution guide to Codename One only covers running in the simulator, but it's a good start: https://www.codenameone.com/blog/how-to-use-the-codename-one-sources.html
You can test your changes on an Android device with instructions here: https://www.codenameone.com/blog/debug-a-codename-one-app-on-an-android-device.html

Fingerprint scanner in Codename One

Question: 1
I want to use a fingerprint scanner in Codename One. Can anybody tell me whether it is available in Codename One or not? If yes, how do I use it, and if not, how can I code it in Codename One?
Question: 2
How do I get the maximum device info in Codename One, like Android version, mobile model, or other details?
Thanks,
No. Fingerprint scanning isn't available at this time.
You can use native interfaces to integrate native device functionality; check out this quick video and the advanced section in the developer guide.
Device information is available via Display.getProperty() as well as some other methods in that class. Notice that if you get things such as the UDID, you will get a permission prompt.

WP7 MediaElement download problems

I'm running into problems on WP7 with MediaElement downloading a 128 kbps MP3 stream from a web service, for a music player app that I'm working on. The file downloads correctly when the WP7 is on a wifi connection, but downloading sometimes stops when off of wifi. The problem is that I'm not getting any errors or exceptions when the download fails, and the MediaElement state is "playing". MediaElement runs right past the downloaded portion of the stream and acts like it is playing, but there is nothing to play since the download stopped. I can somewhat replicate this issue based upon my location and using 3G instead of wifi, so I believe it is due to a low connection. I don't believe any code needs to be shown in this instance, but I can try to post something.
I want to know: do I have any control over this? Are there any other events I could use to detect when the download has failed? Is there a more reliable way to download and play an MP3 stream? Is there another player/component I should try?
Thanks in advance
You could always use MediaStreamSource to handle the download and implement streaming, to some extent. It is a more "painful" way of doing this, since you will have to work with an extra media layer, but it pays off by improving playback stability.
Here is a starter example by Tim Heuer. Take a look specifically at how he takes advantage of a custom implementation of MediaStreamSource. Here is a more complex sample.
If streaming is not a requirement, you could download the file (and store it in Isolated Storage) and then play it from there.
