I am trying to develop an application that connects to a mobile phone and reads its phone state (incoming call, outgoing call, etc.), the way a smartwatch does. I paired a smartwatch with my phone and opened Android Studio to watch Logcat. I can see that the watch uses the Bluetooth OBEX, HFP, A2DP, and RFCOMM services, but I couldn't find any examples or instructions for using them.
How can I do this?
Thanks.
To observe those states, use the BluetoothHeadsetClient class. It contains the functions needed to track the phone state (incoming call, outgoing call, etc.).
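For illustration, here is a rough sketch of listening for the HFP call-state broadcast. Be aware that BluetoothHeadsetClient is not part of the public SDK (it is a hidden/system API in AOSP), so the action and extra strings below are taken from the AOSP source (double-check them against your platform version) and in practice need a privileged/system app, a custom ROM, or reflection tricks to receive:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Parcelable;
import android.util.Log;

public class HfpCallStateReceiver extends BroadcastReceiver {

    // Value of BluetoothHeadsetClient.ACTION_CALL_CHANGED in AOSP (hidden API).
    static final String ACTION_CALL_CHANGED =
            "android.bluetooth.headsetclient.profile.action.AG_CALL_CHANGED";
    // Value of BluetoothHeadsetClient.EXTRA_CALL in AOSP (hidden API).
    static final String EXTRA_CALL =
            "android.bluetooth.headsetclient.extra.EXTRA_CALL";

    @Override
    public void onReceive(Context context, Intent intent) {
        if (ACTION_CALL_CHANGED.equals(intent.getAction())) {
            // The extra is a BluetoothHeadsetClientCall parcelable (also hidden);
            // log it or inspect it via reflection to see incoming/outgoing/active state.
            Parcelable call = intent.getParcelableExtra(EXTRA_CALL);
            Log.d("HfpCallStateReceiver", "Call state changed: " + call);
        }
    }

    public static void register(Context context) {
        context.registerReceiver(new HfpCallStateReceiver(),
                new IntentFilter(ACTION_CALL_CHANGED));
    }
}
```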
I am able to create a call from my React application to an MS Teams user in my organization using the Graph API. That part is working fine: the call to the Graph API is made and it dials the user in the organization. However, I don't think the user can interact without a device set up (laptop speakers and microphone) in order to listen and speak over the call.
APIs used:
To make a call:
https://graph.microsoft.com/beta/communications/calls
To get the call summary:
https://graph.microsoft.com/beta/communications/callRecords/{id}
I want to know:
Do I need to create an interface in my ReactJS app in order to give the user all the facilities of calling? Right now I have only provided the Call button.
How can I handle the callback during development and test things in my React app, since callbacks are only sent to HTTPS routes?
Note: I am using NestJS as the backend.
Can anyone please provide a demo of how to handle this properly? Working with the MS Graph APIs feels like a brain twister, and this is my first time trying it, so I would be highly obliged.
Thanks in advance.
To create a call you will need to register a calling bot, which enables your bot to create a new outgoing peer-to-peer or group call, or join an existing meeting. Please go through this documentation and the samples for more info.
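Once the calling bot is registered and you have an application token with the calling permissions, creating the call is a POST to the endpoint you are already using. Below is a minimal sketch of the request shape, shown in plain Java with java.net.http purely for illustration (the same JSON body applies from a NestJS backend); the token, user id, and callbackUri are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateTeamsCall {
    public static void main(String[] args) throws Exception {
        String accessToken = "<application-access-token>";   // placeholder
        String targetUserId = "<aad-user-object-id>";        // placeholder

        // Minimal service-hosted-media call body; callbackUri must be a public
        // HTTPS endpoint that Graph can reach with call-state notifications.
        String body = """
            {
              "@odata.type": "#microsoft.graph.call",
              "callbackUri": "https://example.com/api/calls/callback",
              "requestedModalities": ["audio"],
              "mediaConfig": { "@odata.type": "#microsoft.graph.serviceHostedMediaConfig" },
              "targets": [{
                "@odata.type": "#microsoft.graph.invitationParticipantInfo",
                "identity": {
                  "@odata.type": "#microsoft.graph.identitySet",
                  "user": { "@odata.type": "#microsoft.graph.identity", "id": "%s" }
                }
              }]
            }
            """.formatted(targetUserId);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/beta/communications/calls"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Regarding callbacks during development: the callbackUri has to be an HTTPS URL that Graph can reach, so a common workaround is to put a tunnelling tool such as ngrok in front of your local NestJS route while testing.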
How can I open a desktop application (like Nuke) through the browser?
For example, the RV software has a URL protocol we can use (rvlink://).
I think this is what you're looking for:
https://support.shotgunsoftware.com/hc/en-us/articles/219031308-Launching-applications-using-custom-browser-protocols
Note that this is asking your operating system to "launch the thing registered with the requested custom browser protocol". Similarly, you can have a mailto:/// hyperlink which opens the email application registered on the user's computer. rvlink:/// is registered by RV as one of these custom browser protocols during its installation.
If you want more control, you would need a process running on the user's machine that you interact with. For example, that's the approach Shotgun's competitor ftrack took, leveraging a local process they call ftrack connect (http://ftrack-connect.rtd.ftrack.com/en/0.1.17/developing/tutorial/custom_applications.html).
If you want to run something completely custom, you could take a look at running your own RPC: you would initiate the registered RPC command from the web application. Check out http://www.zerorpc.io/ or https://crossbar.io/ for more information.
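As a sketch of that "process running on the user's machine" idea (not zerorpc or Crossbar specifically, just the general shape), a small helper could listen on localhost and launch the application when the web page calls it; the port, path, and Nuke executable path below are placeholders:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LocalLaunchHelper {
    public static void main(String[] args) throws Exception {
        // Listen only on the loopback interface so nothing outside this machine can reach it.
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8765), 0);

        server.createContext("/launch-nuke", exchange -> {
            // Launch the desktop application; the executable path is a placeholder.
            new ProcessBuilder("/usr/local/Nuke13.2v4/Nuke13.2").start();

            byte[] reply = "launched".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);
            }
        });

        server.start();
        System.out.println("Launch helper listening on http://127.0.0.1:8765/launch-nuke");
    }
}
```

The web page then just requests http://127.0.0.1:8765/launch-nuke (for example with fetch) when the user clicks a launch button; in practice you would also want some form of authentication and CORS handling on that endpoint.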
Good luck!
I spotted in an earlier response of yours that you're using Shotgun and its launch-in-RV features. In which case, are you aware of Shotgun Toolkit?
https://support.shotgunsoftware.com/hc/en-us/articles/219039788-Toolkit-Home-Page
It provides app launching via the website, which it accomplishes through Shotgun Desktop (a desktop app). It used to work via a browser method that Chrome and other browsers moved away from, so it now requires the desktop app hook.
There's actually a huge amount that SGTK does, though I think you might be able to disable everything except the app launching if you wanted. We've got it implemented across 4 locations here and it's pretty decent.
I would like to automatically read the OTP sent via SMS in my application, so that the user does not need to enter it manually.
Is there an API available for this?
No.
Incoming SMS is a special case on Android and nowhere else: there you can use the native interfaces to grab such messages.
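For the Android case, the usual native interface is a broadcast receiver for incoming SMS. A minimal sketch (it needs the RECEIVE_SMS permission and the receiver registered for android.provider.Telephony.SMS_RECEIVED in the manifest; the regex for pulling the OTP out of the body is my own assumption about the message format):

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.provider.Telephony;
import android.telephony.SmsMessage;
import android.util.Log;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OtpSmsReceiver extends BroadcastReceiver {
    // Assumes the OTP is a 4-6 digit number somewhere in the message body.
    private static final Pattern OTP_PATTERN = Pattern.compile("\\b(\\d{4,6})\\b");

    @Override
    public void onReceive(Context context, Intent intent) {
        if (!Telephony.Sms.Intents.SMS_RECEIVED_ACTION.equals(intent.getAction())) {
            return;
        }
        for (SmsMessage sms : Telephony.Sms.Intents.getMessagesFromIntent(intent)) {
            Matcher matcher = OTP_PATTERN.matcher(sms.getMessageBody());
            if (matcher.find()) {
                Log.d("OtpSmsReceiver", "OTP received: " + matcher.group(1));
                // Hand the code to your verification screen here instead of logging it.
            }
        }
    }
}
```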
I found the wording "wakewordAgentEnabled" in the Alexa Voice Service javaclient sample. When I run the program and the Android companion app, it shows a "Listen" button, which works properly, but how can I use the wake word "Hey Alexa" instead of the "Listen" button?
Actually, I would like to use the wake-word logic in an Android app, so there is no need to click a button.
Does the sample support a wake word?
Does it need to be used together with Kitt-AI Snowboy?
From what I understand (I work in the Alexa org at Amazon), the reason the Echo can respond to wake words ('Alexa', 'Amazon', and 'Echo') is hardware in the device that opens up the connection. To get this on another device such as an Android phone, you would need to be constantly listening, converting speech to text, and checking the text for the wake word, which would be very resource intensive and a large power drain. To reduce that drain, it is just a button that opens the connection.
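Just to make that trade-off concrete, the brute-force "always listening" approach described above would look roughly like the sketch below on Android: keep a SpeechRecognizer running, scan each result for the keyword, and immediately restart it. The wake word, class name, and restart logic are my own assumptions for illustration (and it needs the RECORD_AUDIO permission); this is exactly the pattern that drains the battery:

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

public class NaiveWakeWordListener implements RecognitionListener {
    private static final String WAKE_WORD = "alexa";   // assumed wake word

    private final SpeechRecognizer recognizer;
    private final Intent recognizerIntent;

    public NaiveWakeWordListener(Context context) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(this);
        recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    }

    public void start() {
        recognizer.startListening(recognizerIntent);
    }

    @Override
    public void onResults(Bundle results) {
        ArrayList<String> texts =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null) {
            for (String text : texts) {
                if (text.toLowerCase().contains(WAKE_WORD)) {
                    // Wake word heard: open the Alexa connection, just as the
                    // "Listen" button would.
                }
            }
        }
        start();   // restart immediately -- this is exactly the power-drain issue
    }

    @Override public void onError(int error) { start(); }

    // Remaining RecognitionListener callbacks are not needed for this sketch.
    @Override public void onReadyForSpeech(Bundle params) { }
    @Override public void onBeginningOfSpeech() { }
    @Override public void onRmsChanged(float rmsdB) { }
    @Override public void onBufferReceived(byte[] buffer) { }
    @Override public void onEndOfSpeech() { }
    @Override public void onPartialResults(Bundle partialResults) { }
    @Override public void onEvent(int eventType, Bundle params) { }
}
```

A dedicated on-device keyword spotter such as Kitt-AI Snowboy does this job far more cheaply than full speech-to-text, which is why wake-word engines like it are usually paired with AVS clients when a software wake word is needed.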
I have built a security app which locates an iPhone and sends the phone's GPS location via SMS to an associated number. This worked well up to iOS 5, but the issue is that sending SMS without the user's knowledge is restricted in iOS 6. So I need some help here: instead of sending a message, is there any other possible way or replacement for this function? Any answer related to this is appreciated.
Thank you.
You haven't specified whether your iOS app has a server. If it does, you can transmit the location to the server, which in turn can send it to the intended user via a specific API.
If that is not the case, APNS is your friend. It is a way to send messages only to the desired devices, the ones that explicitly register through your app.
Another, roughly equivalent, option is to store it in a public back end like parse.com. As soon as other devices start your app, they can pull your location from there. If their device is already live, parse.com can notify them as soon as you change your location value in its DB.
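As a concrete sketch of the first option (the app posts its location to your server and other devices pull it from there), here is a toy in-memory relay in plain Java; the port, paths, and storage are placeholders, and a real deployment would add authentication plus a push via APNS or a hosted back end like parse.com instead of polling:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocationRelayServer {
    // deviceId -> last reported "lat,lon" string
    private static final Map<String, String> LOCATIONS = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // POST /location/<deviceId> with a "lat,lon" body stores the location;
        // GET  /location/<deviceId> returns the last stored value.
        server.createContext("/location/", exchange -> {
            String deviceId =
                    exchange.getRequestURI().getPath().substring("/location/".length());
            String reply;
            if ("POST".equals(exchange.getRequestMethod())) {
                String body = new String(exchange.getRequestBody().readAllBytes(),
                        StandardCharsets.UTF_8);
                LOCATIONS.put(deviceId, body);
                reply = "stored";
            } else {
                reply = LOCATIONS.getOrDefault(deviceId, "unknown");
            }
            byte[] bytes = reply.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });

        server.start();
        System.out.println("Location relay listening on port 8080");
    }
}
```

The iOS app would POST its coordinates to /location/<deviceId>, and the associated user's copy of the app would GET the same URL (or be woken by a push) instead of relying on a silent SMS.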