I am automating a test for an Android TV box in which I need to type out the name of a certain VOD asset to watch. I am using adb shell input text <string> for that and it's working beautifully.
The issue comes when I try to input a number: the device treats it as a channel change and exits my search whenever I send a digit. Does anyone know how to get around that?
Edit: the product owner confirmed that the box will always interpret digits sent as text as a channel change and that there is no way around it, so I should close this question.
Currently there doesn't seem to be an easy way to copy/paste text on the Oculus Quest, nor to 'type' into a companion app and have the text sent to the headset (at least none that I have seen). This makes it extremely challenging to enter complex passwords from password managers, etc.
I have read some articles saying it might be possible to pair a Bluetooth keyboard with the headset, which would be slightly better, but that still doesn't allow me to copy/paste from my password manager.
Does anyone know of a way to achieve this?
After some Googling/SO'ing, it seems this might be possible using the Android Debug Bridge (adb); Oculus has their own help page for it as well.
Your device needs to be in developer mode for this to work:
Create/join an organisation in the Oculus Dashboard
Open the Oculus app on your mobile phone.
In the Settings menu, select the Oculus Quest headset that you’re using for development.
Select More Settings.
Toggle Developer Mode on.
If you're using Homebrew on macOS, you can install adb with:
brew install --cask android-platform-tools
(on older Homebrew versions the equivalent was brew cask install android-platform-tools)
Next, plug your headset into your computer with the USB-C cable. You then should be able to list connected devices:
adb devices
If it says 'unauthorized', check in the headset for a dialog box asking for permission to connect. Ticking 'always allow' will make this easier in future.
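For reference, the output of adb devices looks something like this before you accept the prompt (the serial number below is just a placeholder); after ticking 'always allow', the second column changes from unauthorized to device:
List of devices attached
1WMHH123456789    unauthorized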
At this point, we should be good to send text to the device. In the headset, focus a field that you want to 'type' into, then use adb shell input to 'type' your text:
adb shell input text "sometext"
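One caveat: input text doesn't handle literal spaces, so if your text contains spaces you'll likely need to encode each one as %s. A quick example with made-up text:
adb shell input text "hello%sworld%sthis%shas%sspaces"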
It seems it is also possible to send a 'paste' command using adb shell input keyevent:
adb shell input keyevent 279
On older Android devices, you could send a 'copy' command in a similar way, but this has since been deprecated:
service call clipboard 2 i32 1 i32 0 s16 "text"
It seems that on newer devices, you need to leverage an external service (e.g. Clipper) to 'copy to clipboard'. Using Clipper, you can send a command in adb shell such as:
am broadcast -a clipper.set -e text "text"
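Putting those pieces together, a rough sketch of the clipboard route might look like this, assuming Clipper is installed and running on the device (the text below is just a placeholder):
# put text on the device clipboard via Clipper
adb shell am broadcast -a clipper.set -e text "my-example-password"
# then, with the target field focused in the headset, send the paste keyevent
adb shell input keyevent 279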
There are many different inputs we can send using these methods. You can find a full list of KeyEvents in the Android developer documentation.
Using one (or more) of these methods, it should be possible to 'copy'/'paste'/'type' passwords stored in a password manager on your computer 'into' the Oculus Quest headset.
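As a minimal sketch of the 'type' route on macOS, assuming adb is on your PATH, only one device is connected, and the password contains no characters that need extra shell escaping, you could send whatever is currently on the computer's clipboard straight to the focused field:
# read the host clipboard, encode spaces as %s, and send it as keystrokes
adb shell input text "$(pbpaste | sed 's/ /%s/g')"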
One of my clients wants to use a check scanner. They purchased software and have a scanner, but they do not want to store any of the data on the workstation the scanner is attached to. I'm wondering if we can use RemoteApp to deploy the software. I've built a test of the application being deployed via RemoteApp and it seems to work; however, I don't have a check scanner to test with. Will I run into driver issues, or should this POC work?
I set up a test environment using RemoteApp and the software works fine, but I don't have a check scanner to test with.
It should work OK, but it will often depend on the scanner software. Usually these scanners simply type as if keys on the keyboard were being pressed. So you place your cursor in the field on the form, scan, and it "types in" what the scanner saw. That means you can use Word, or Access, or even Notepad for this to work. If you are using Remote Desktop, then this should also work. If the scanner does not type keys as it scans, then you can't use Remote Desktop, but in most cases it should work.
And in most cases, the field (text box) you scan into will likely need the scanned string parsed out into its separate parts and placed into separate text boxes.
So given how most scanners work, you should be OK. You install the scanner software on the client side, and all it really does is press keys as if you were typing. The trick then becomes ensuring that your cursor is in the right text box before you scan.
The .lrec extension is an ancient binary format, actually a video file created by Inter-Tel Web Conference software. It contains a screen-recording video and voice audio, and can also capture the keyboard chat log, the attendees, and the document manager window during a conference. It can be played with Inter-Tel Collaboration Player, a standalone application included with the Web Conference software package.
What I am trying to do now is find a way to play these files on mobile. Although Inter-Tel Collaboration Player can export the files to AVI format, I want to know how to make a command-line script for that, because the application has lots of problems on Windows 7, 8 and 10 and doesn't have a macOS version.
What would it take to create a new player for this kind of file?
"Linktivity stopped support on this app, http://linktivity.com even disappeared from the web..."
It seems they were bought out by Mitel Software, so now everything is under the Mitel brand name.
"I just want to find a way to manipulate this file extension, a new good player for mobile and computer"
To open/edit those .lrec files with modern software you'll have to look at their:
Collaboration products.
Unified Communication products.
I tried:
Contacting them just to double-check the facts, but they expect a real-time phone conversation with a salesperson, so it wasn't an option; I'd have been a fake potential customer. You, though, could present them a real-world issue (with background details) to see if they can solve it.
I also downloaded the MiCollab app for Android, but it requires login details before doing anything at all (so no progress in simply checking whether an .lrec file from a PC would open on Android).
Export videos for mobile playback:
I've tried the desktop software. Unfortunately it does not accept external commands, so there is no way to make a script that takes multiple .lrec files and gives back multiple AVIs.
The only option is to extract the frames from the .lrec bytes and use a tool like FFmpeg to combine the images into one MP4 video (the format appears to store screen grabs as frames). MP4 is then playable on mobile devices.
Any AVI files you already have should also be converted to MP4 with FFmpeg.
You can download FFmpeg for Windows here (just the big blue button, ignore other options).
Copy the ffmpeg.exe file to some folder like c:\ffmpeg and put your AVIs there.
Now open Command Prompt, run cd C:\ffmpeg to reach the folder, then type: ffmpeg -i filename.avi filename.mp4 (replace filename with your preferred input and output names).
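If you have a whole folder of AVIs to convert, a rough Command Prompt one-liner (typed interactively; in a .bat file the % signs would need doubling) could look like this:
for %f in (*.avi) do ffmpeg -i "%f" "%~nf.mp4"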
If you know how, add the ffmpeg.exe folder to the PATH settings in Control Panel so that FFmpeg can be run from any folder (no need to move your files into its folder).
PS:
I am still researching how to extract the frames; it's an awkward format without published specs (the byte order is big-endian, but entry values are stored little-endian, and I'm not sure whether to reverse every two or four bytes since it's mixed up like that; the pixel bytes themselves also seem to be compressed, but not with JPEG, more like ZIP or something similar). The only bytes confirmed so far are the video width and height. It seems doable, though, if the .lrec only contains screen recordings.
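If the frame extraction ever works out, stitching the dumped images into a video should be straightforward with FFmpeg. A sketch, assuming the frames are saved as sequentially numbered PNGs (frame_0001.png, frame_0002.png, ...) at roughly 10 frames per second:
ffmpeg -framerate 10 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4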
After some research, I found that Media Player Classic can play .lrec files. I don't know if this helps you.
For your company to have its own video player, you would need the encoding details or a decoder directly from Inter-Tel, since they own the licences; without that you can't create one.
Edit: this information is outdated; see the comments.
I manage a small project where I occasionally distribute a different version of a Google Glass application to a few (4) Google Glass devices manually through the adb shell. One version of the package is used for each Glass, and which one depends on the Gmail address the Glass uses (the devices all look the same).
Usually I would have to turn the Glass on to check, but I was looking for a way to get the Gmail address associated with the Glass through the adb shell. Is this possible?
Thank you.
There is no way I know of to retrieve that kind of info about the user.
But, you can simply put a file in the external storage containing the email account once, and then query this file when you want:
# write the user info
adb shell "echo \"xyz#gmail.com\" > /sdcard/user.info"
# get the user info
adb shell cat /sdcard/user.info
If you want this info without turning the glass on, you'll have to print a note directly on the glass ;)
Using the adb command, one way to differentiate each Glass device is through their serial numbers. You can then choose which device to install on by running the following command:
adb -s <serial number> install <apk name>.apk
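Combining the two answers, here's a small sketch that loops over all connected devices, reads the user.info marker from each one, and could then pick the matching APK (the per-user APK naming is hypothetical):
for S in $(adb devices | awk 'NR>1 && $2=="device" {print $1}'); do
  # read the marker file written earlier and strip the trailing carriage return
  USER=$(adb -s "$S" shell cat /sdcard/user.info | tr -d '\r')
  echo "$S -> $USER"
  # adb -s "$S" install -r "builds/$USER.apk"   # hypothetical per-user APK layout
done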
I have used various adb shell commands to automate loading a URL on a handset via adb. It loads the page, but I then need to select an image on the screen. I have tried adb shell input keyevent, but this sometimes misses the image, so I need to send a touch event and mimic touching the screen.
Can someone please help? How do I find out the coordinates of the picture?
Thanks
You can use monkeyrunner to take a screenshot:
import os
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# wait up to 10 seconds for the device identified by ANDROID_SERIAL
device = MonkeyRunner.waitForConnection(10, os.getenv('ANDROID_SERIAL'))

# grab the current screen and save it as a PNG
image = device.takeSnapshot()
image.writeToFile('/folder/test.png', 'png')
Don't forget to set ANDROID_SERIAL first (SET on Windows, export on Linux).
ANDROID_SERIAL is your device's serial number (it can be obtained with adb devices).
Then open the image in Paint (or any other image editor) and read off the coordinates of the picture.
Enable Show pointer location in the developer options. It will show the coordinates of the point where you touch. Using those coordinates, you can tap the required points with monkeyrunner or with:
adb shell input tap <x> <y>
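Once you have the coordinates, a quick sketch of both options using made-up coordinates (540, 960):
# via adb
adb shell input tap 540 960
# or inside a monkeyrunner script like the one above
device.touch(540, 960, MonkeyDevice.DOWN_AND_UP)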