Voice modulation in live phone call in iOS 6+ - ios6

I want to do voice modulation in a live phone call, i.e. while the conversation is happening. I have seen iOS apps that do voice modulation after recording the voice. I also checked an application named "VoiceChanger", which adds different voices to ongoing calls. That call must happen over VoIP, since I can call any number from an iPod touch, and their app description says so as well. The iPod touch has no native calling functionality. Correct me if I am wrong.
Is it possible to do voice modulation in a live call on iOS? If yes, any reference or library would be much appreciated.
Thanks in advance.
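For context, this kind of live modulation is only feasible in a VoIP app where you control the audio path; the native cellular call audio is not exposed to apps. One classic effect such voice changers apply is ring modulation (multiplying the signal by a sine carrier, the "robot voice"). A minimal sketch of that DSP step, independent of any particular capture API:

```javascript
// Ring modulation: multiply each sample by a sine carrier. VoIP voice
// changers can apply this to the outgoing stream before it is sent over
// the network. This is the core DSP only; wiring it into a real capture
// pipeline (Web Audio, audio units, etc.) is platform-specific.
function ringModulate(samples, sampleRate, carrierHz) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // A carrier around 30-80 Hz gives the typical robot effect.
    out[i] = samples[i] * Math.sin(2 * Math.PI * carrierHz * i / sampleRate);
  }
  return out;
}

// Example: modulate one second of a 440 Hz tone with a 50 Hz carrier.
const rate = 8000;
const input = Float32Array.from({ length: rate }, (_, i) =>
  Math.sin(2 * Math.PI * 440 * i / rate));
const output = ringModulate(input, rate, 50);
```

The same math applies whatever framework captures the microphone; only the plumbing around it changes per platform.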

Related

apiRTC: receive calls when the app is in the background or minimized

How to receive calls when the app is in background or quit state?
They are two different states of the app, background and completely closed.
Or, for example:
"How to receive calls with apiRTC as does WhatsApp?"
I am implementing the apiRTC JS SDK.
Hi ApiRTCSupport and HardcoreGamer, I am one of the developers in charge of integrating apiRTC with the application that ian mentions. We have already contacted apiRTC support directly, and they told us we should ask the question here.
The overall picture is as follows:
We have the application fully functional and integrated with the JS API/SDK.
The application is developed in Ionic with Cordova:
Ionic v4
Cordova CLI 10
cordova-android 10.1.1
cordova-ios 6.2.0
The minimum necessary plugins were installed (including cordova-iosrtc).
The integrated p2p calls and p2p video calls are now fully functional and without problems.
Now, among the new requirements are:
The possibility of receiving incoming calls and video calls while the app is in the background (minimized). Currently, when the app is minimized, the websocket and the execution of the JS code are completely suspended, so another user who calls the one with the minimized app sees "user is offline". We need guidance on how to handle this first use case.
Second requirement: the possibility of receiving calls and video calls when the app is in the quit state (app killed), like Telegram, WhatsApp, Messenger, etc. The app should be able to receive a notification and run the JS code that connects to the apiRTC services/servers. We have already investigated VoIP push notifications (for iOS) and high-priority push notifications (for Android), but we would like to know whether this is the right approach or whether apiRTC recommends a different implementation.
Thank you and really sorry for the little information provided at the beginning.
If you need any more specific information I will be waiting for your answers.
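The flow the second requirement describes (push arrives → OS wakes the app → JS reconnects → incoming-call UI) can be sketched as a payload handler. The payload shape and action names below are assumptions for illustration only, not part of the apiRTC SDK; the server side would deliver the payload via a VoIP push (iOS) or high-priority FCM message (Android):

```javascript
// Hypothetical push payload: { type, callerId }. The actions object wraps
// platform-specific work (reconnecting the SDK, showing CallKit /
// ConnectionService UI) so the decision logic stays testable.
function handlePushPayload(payload, actions) {
  switch (payload.type) {
    case 'incoming-call':
      // Reconnect first so the caller no longer sees "user is offline",
      // then surface the native incoming-call screen.
      actions.reconnect();
      actions.showIncomingCall(payload.callerId);
      return 'ringing';
    case 'call-cancelled':
      actions.dismissIncomingCall();
      return 'idle';
    default:
      return 'ignored';
  }
}

// Example with stubbed actions:
const log = [];
const actions = {
  reconnect: () => log.push('reconnect'),
  showIncomingCall: (id) => log.push(`ring:${id}`),
  dismissIncomingCall: () => log.push('dismiss'),
};
const state = handlePushPayload({ type: 'incoming-call', callerId: 'alice' }, actions);
```

Keeping the decision logic separate from the Cordova plumbing makes it reusable across both the background and quit-state cases.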

Agora Voice Call Disconnects after 40 seconds

I developed an app in React Native and use Agora for voice calls. When I connect a call I can hear voice for 40 seconds, after which the audio drops. The Agora analytics graph shows choppy audio. Can anyone help me understand why the voice is lost after 40 seconds? Is there a setting I am missing? Bundle of thanks in advance.
I can't seem to reproduce the error using the demo app. The app terminating can be due to a few reasons: it can be cleaned up by the system if it has been in the background for a while, or it can even be killed due to a memory leak. To narrow it down, check your app logs.

How does wakeword work in alexa voice service javaclient sample?

I found the wording "wakewordAgentEnabled" in the Alexa Voice Service Java client sample, but when I run the program and the Android companion app, it shows a "Listen" button. That works properly, but how do I use the wake word "Hey Alexa" instead of the "Listen" button?
Actually, I would like to use the wake-word logic in an Android app, so no button click is needed.
Does the sample support wake words?
Is it necessary to use Kitt-AI Snowboy together with it?
From what I understand (I work in the Alexa org at Amazon), the reason the Echo can respond to wake words ('Alexa', 'Amazon' and 'Echo') is dedicated hardware in the device that opens up the connection. To get this on another device such as an Android phone, you would need to listen constantly, convert speech to text, and check the text for the wake word, which would be very resource-intensive and a large power drain. To avoid that drain, the sample uses a button to open the connection instead.
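The software approach described above boils down to scanning every speech-to-text result for the wake phrase, which is what makes it so expensive. A deliberately naive sketch of that check (real wake-word engines such as Snowboy match on audio features rather than transcripts, precisely to avoid running full speech-to-text continuously):

```javascript
// Naive wake-word check over a speech-to-text transcript. Each chunk of
// recognized text is tokenized and scanned for any of the wake words.
function containsWakeWord(transcript, wakeWords = ['alexa', 'amazon', 'echo']) {
  const words = transcript.toLowerCase().split(/\W+/);
  return wakeWords.some((w) => words.includes(w));
}

containsWakeWord('hey Alexa what time is it');  // true
containsWakeWord('turn on the lights');         // false
```

Even this trivial check presupposes a continuously running recognizer feeding it transcripts, which is the resource drain the answer refers to.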

How to read phone state over Bluetooth

I am trying to develop an application that connects to a mobile phone and reads its phone state (incoming call, outgoing call, etc.), like a smartwatch does. I paired a smartwatch with my phone, then opened Android Studio and looked at the logcat output. I understand the watch uses the Bluetooth OBEX, HFP and A2DP services over RFCOMM, but I couldn't find any example or instructions for using them.
How can I do this?
Thanks.
To read those states, use the BluetoothHeadsetClient class. This class contains the functions necessary to read the phone state (incoming call, outgoing call, etc.).

How to leave Voice mail using Twilio?

I am trying to send recorded messages to phone numbers using Twilio and Salesforce. The problem I am facing is that sometimes the call goes to voicemail and the message does not get recorded, because voicemail only starts recording after a certain time. How can Twilio monitor that time and play the message after the voicemail starts recording?
I know that the voicemail system plays a beep before it starts recording. Can I use that beep to instruct Twilio to start playing the recorded message?
Twilio developer evangelist here.
Twilio can do some experimental checking for answering machines, such that it will only start playing after it hears a beep. You can see how to do this in the documentation. Basically, you need to pass an ifMachine parameter of "Continue". You will then get an "AnsweredBy" parameter in the request to your TwiML, so that you can decide what to do. If you do continue, Twilio will wait for the beep.
Let me know if that helps!
Update
The ifMachine parameter has been deprecated and replaced with the new Twilio Answering Machine Detection.
Now you can pass a parameter called MachineDetection with the value Enable or DetectMessageEnd. Enable tries to give you an answer as soon as possible, passing the result to your TwiML webhook in the AnsweredBy parameter. DetectMessageEnd calls the webhook once the voicemail greeting has finished playing.
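The two modes can be sketched like this. The MachineDetection values and AnsweredBy results are from Twilio's documentation; the helper function and media URL below are illustrative placeholders, not real endpoints:

```javascript
// With the Twilio Node helper library, the outbound call would be created
// roughly like:
//   client.calls.create({ to, from, url, machineDetection: 'DetectMessageEnd' });
//
// Twilio then posts AnsweredBy to your TwiML webhook. Documented values
// include 'human', 'machine_start', 'machine_end_beep',
// 'machine_end_silence', 'machine_end_other', 'fax' and 'unknown'.
function twimlFor(answeredBy) {
  if (answeredBy === 'human') {
    return '<Response><Say>Hello! Here is your message.</Say></Response>';
  }
  // With DetectMessageEnd, the machine_end_* results fire after the
  // voicemail greeting, so it is safe to play the recording now.
  if (answeredBy && answeredBy.startsWith('machine_end')) {
    return '<Response><Play>https://example.com/recorded-message.mp3</Play></Response>';
  }
  // Fax, unknown, etc.: hang up rather than leave a garbled message.
  return '<Response><Hangup/></Response>';
}
```

DetectMessageEnd is the mode that matches your use case: it delays the webhook until the beep/greeting boundary, so the played message lands inside the recording.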
