My solution has two machines: one is customer facing and the other is used by store personnel. In some situations, when store personnel want to take control of the customer-facing system, we use UltraVNC to remote into it, push our application to a second virtual display, and put up a PLEASE WAIT screen for the customer!
With Windows 10 the concept of a virtual display is no longer possible, so our remote view via UltraVNC lands on the primary display, which means the customer can SEE what the store personnel are doing and can also interact with (and interfere with) it... and this is my struggle today!
We found that UltraVNC can "disable user input", which works for keyboard/mouse but does not block TOUCH input (we use a touchscreen), and there seems to be no way to put up a PLEASE WAIT; the best we can do is disable the screen entirely (it goes black, which has customers asking if the program crashed).
So I am opening this up to the general public to see if anyone has experience with UltraVNC, or has a completely different proposal for me to consider. I am open to all suggestions!
You rely on a Windows solution. On Linux there is the D-Bus messaging system; I don't know whether Windows has a similar messaging system.
You could have a PowerShell script that disables the display or the touchscreen input and sends a toast message to the user:
https://gist.github.com/atao/12b5857d388068bab134d5f9c36241b1
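If you would rather toggle the touchscreen from your own application instead of PowerShell, the same disable/enable operation is exposed through the Windows SetupAPI. Below is a minimal C sketch, assuming the process runs elevated and that you have looked up your touchscreen's device instance ID in Device Manager (the ID passed in main is a placeholder):

    /* Disable or re-enable a device by instance ID via SetupAPI.
       Sketch only: error handling trimmed, must run elevated. */
    #include <windows.h>
    #include <setupapi.h>
    #pragma comment(lib, "setupapi.lib")

    static BOOL set_device_state(const TCHAR *instanceId, DWORD state)
    {
        HDEVINFO devs = SetupDiCreateDeviceInfoList(NULL, NULL);
        SP_DEVINFO_DATA devInfo = { sizeof(SP_DEVINFO_DATA) };
        SP_PROPCHANGE_PARAMS pcp;
        BOOL ok = FALSE;

        if (SetupDiOpenDeviceInfo(devs, instanceId, NULL, 0, &devInfo)) {
            pcp.ClassInstallHeader.cbSize = sizeof(SP_CLASSINSTALL_HEADER);
            pcp.ClassInstallHeader.InstallFunction = DIF_PROPERTYCHANGE;
            pcp.StateChange = state;          /* DICS_DISABLE or DICS_ENABLE */
            pcp.Scope = DICS_FLAG_GLOBAL;
            pcp.HwProfile = 0;
            ok = SetupDiSetClassInstallParams(devs, &devInfo,
                                              &pcp.ClassInstallHeader, sizeof(pcp))
              && SetupDiCallClassInstaller(DIF_PROPERTYCHANGE, devs, &devInfo);
        }
        SetupDiDestroyDeviceInfoList(devs);
        return ok;
    }

    int main(void)
    {
        /* placeholder: substitute your touchscreen's instance ID */
        set_device_state(TEXT("HID\\VID_1234&PID_5678\\1&2&3"), DICS_DISABLE);
        return 0;
    }

Calling it again with DICS_ENABLE when the store personnel are done would restore normal touch operation.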
I have a door telephone on our office building. When somebody presses the button, it calls a telephone number for which we have a SIM card. Right now that SIM card is in a cell phone, and every time we have a meeting at our office we have to pick up the phone and press 3 to open the door. I'm looking for a way to programmatically pick up the phone and press the 3. Does any such software exist? I have googled but found nothing.
TL;DR: I need some software (and a SIM card reader) that can programmatically pick up the phone when it rings and respond with a 3 on the numpad.
The OS doesn't matter.
I'm not sure Stack Overflow is the right place to ask; let me know if you have suggestions for better places.
You could try putting the SIM card in a normal 3G USB dongle and using an application called "Gammu", which can answer a call and send DTMF codes, i.e. number presses. I have only used Gammu on Linux systems, but I believe it works on Windows as well.
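As a rough sketch of how this could look from Gammu's command line (the modem device path is an assumption for a typical USB dongle; check "gammu help call" for the exact commands in your version):

    # ~/.gammurc (assumed):
    #   [gammu]
    #   device = /dev/ttyUSB0
    #   connection = at

    gammu answercall      # pick up the incoming call
    sleep 2               # give the door station a moment
    gammu senddtmf 3      # "press" 3 on the keypad

You would still need a small script or daemon that watches the modem for an incoming call and fires these commands when the door phone rings.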
Another possible solution:
Set up voicemail for that SIM card, and when you are asked to record your voicemail greeting (the message played to whoever reaches the voicemail), just press the 3 button so that the tone becomes part of the greeting.
When you want the door to open automatically, use call forwarding to redirect all calls to the voicemail. Alternatively, turn off the phone (on most cellular networks this will also redirect all calls to the voicemail).
I am trying to find a way to secure our robot against unwanted Choregraphe connections. We are required to work on a University-wide network, and we need a way to stop people from connecting who may have obtained the robot's IP address at some stage without our knowledge.
As there is no access to the root user account on the Pepper, I cannot simply lock down access using iptables, so I thought I might try looking at a way to forcibly close connections from ALChoregraphe when it registers on the robot.
However, running the command:
qicli info ALChoregraphe
I can see that the only method available is requestDisconnection. There is no way to close the connection forcibly.
I have tried using ALServiceManager to stop the service, but it apparently only knows about services that are installed as packages.
So far the only solution I have is to change the color of the eye LEDs to indicate that a connection has been established, and reset them when a disconnect is received.
Aside from moving the robot to its own network, do you have any suggestions on how I could go about handling this?
Thanks!
At the moment, there is no other way to prevent connections to the robot. All you can do is make sure that unwanted clients cannot access the network your robot is on.
In Choregraphe 2.4 and later, you can kick the existing Choregraphe connection after 30 seconds. If that fails, you can unregister the services ALChoregraphe and ALChoregrapheRecorder using qicli call ServiceDirectory.unregisterService <serviceID>, where serviceID is the number shown next to each service when listed with qicli info.
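For example (the service IDs below are made up; use the numbers your own listing prints next to the two services):

    qicli info                                        # note the IDs of the two services
    qicli call ServiceDirectory.unregisterService 42  # ALChoregraphe (example ID)
    qicli call ServiceDirectory.unregisterService 43  # ALChoregrapheRecorder (example ID)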
Is it possible to create an Alexa skill which sends back custom directives, created by me, to my Alexa-enabled devices, so that I can parse them in responses and take action?
Thanks
In your example, "Alexa, start service1", service1 becomes the invocation (i.e. skill) name. This is fixed at one per skill, so you can't do exactly what you've requested. Some additional info that may be helpful:
As for other content after the invocation name, it isn't clear there is a mode that gets the Echo to recognize arbitrary phrases. I've heard some suggest that providing very large intent sample lists or long slot dictionaries convinces the system to provide more open recognition, but I've never seen that behavior myself.
If your list of items can be constrained, you can create multiple skills. For a personal (i.e. developer) setup, this works well. I have multiple Echos on the same account and two defined skills that route to the same ASK service layer on my Pi. The launch URL routes the request into two different paths, which are parsed by my NodeJS logic to set up different defaults in my code.
This allows my wife's version of the skill to work differently from mine. We just use different invocation names, without having to have separate accounts or implement OAuth.
Sorry this is old, but I just spotted it while searching for something else. In case you have not solved it: the answer is yes. I use my RPi with Alexa this way.

If you are publishing a skill, you need to use proper security measures, including account linking, OAuth2, etc., and there are limits on the types of commands you can accept without a user PIN. However, if you are willing to assume the risk on a skill for your own use in developer mode, you can put HTTP calls, with or without basic authentication, directly into your skill code as though it were any other public IP address. As long as your Pi is addressable from outside via HTTP, you can command it.

I use my Pi with a web server to control a media center. I was sending Alexa commands to it via SmartThings, but I have now also developed a custom skill to go straight from Alexa to the Pi and to link commands together. I can say "Alexa, start listening..." and then send multiple menu commands by voice that are recognized by the Pi and executed (e.g. menu, guide, page down, channel name, select, etc.) until I exit or it times out due to no input. I don't have to repeat "Alexa" or "turn on/off" as though each command were a device, as is the case when going through SmartThings.

I would only recommend this if your HTPC and Pi hold no secure data and are firewalled from the rest of your network.
I am working with an embedded platform. Typical software on these devices is Linux 2.6 + BusyBox, so resources are limited.
I need to run a user-space application every time a USB device is connected, and I need to pass the DeviceID and ProductID to this user-space app as parameters.
I don't really know which strategy I should follow to achieve this:
Writing a Linux kernel module.
Doing it from inside the kernel (USB drivers). I'm currently doing this, but I don't think it's the best way.
A user-space app that polls for connected USB devices?
Which one would be the best way?
Thanks for your answers!
If you want to remain in user space, you can use libudev.
You have an example here; you can extract the product ID and device ID from it.
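To make that concrete, here is a minimal C sketch of a libudev hotplug monitor that waits for USB add events and hands the IDs to a handler application (the handler path /usr/bin/usb-handler is a placeholder; link with -ludev):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/select.h>
    #include <libudev.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
        int fd;

        /* only report events for whole USB devices, not interfaces */
        udev_monitor_filter_add_match_subsystem_devtype(mon, "usb", "usb_device");
        udev_monitor_enable_receiving(mon);
        fd = udev_monitor_get_fd(mon);

        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(fd, &fds);
            if (select(fd + 1, &fds, NULL, NULL, NULL) <= 0)
                continue;

            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;

            const char *action = udev_device_get_action(dev);
            if (action && strcmp(action, "add") == 0) {
                const char *vid = udev_device_get_sysattr_value(dev, "idVendor");
                const char *pid = udev_device_get_sysattr_value(dev, "idProduct");
                if (vid && pid) {
                    char cmd[128];
                    /* placeholder handler; receives the IDs as argv[1]/argv[2] */
                    snprintf(cmd, sizeof(cmd), "/usr/bin/usb-handler %s %s", vid, pid);
                    system(cmd);
                }
            }
            udev_device_unref(dev);
        }
    }

If no udev daemon is running (e.g. a BusyBox/mdev system), pass "kernel" instead of "udev" to udev_monitor_new_from_netlink to receive raw kernel uevents.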
Even though there are other options, like the one @aisbaa mentioned, modifying the kernel is an interesting and challenging one. I suggest you modify the USB driver. The reason is that you need to send the arguments (product ID, device ID) to the user-space application, and these IDs are obtained in the driver, so calling the user-space app with them from there is my choice.
A nice explanation of calling a user-space app from the kernel is available here; a sketch follows.
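As a rough illustration, in-kernel code can spawn a helper with call_usermodehelper and pass the IDs on its argv (the helper path is a placeholder, and the exact signature of call_usermodehelper varies across 2.6 kernels, so check your tree):

    #include <linux/kmod.h>
    #include <linux/usb.h>

    /* call from e.g. the driver's probe function */
    static int notify_userspace(struct usb_device *udev)
    {
        char vid[8], pid[8];
        char *argv[] = { "/usr/bin/usb-handler", vid, pid, NULL };
        static char *envp[] = {
            "HOME=/", "PATH=/sbin:/bin:/usr/sbin:/usr/bin", NULL
        };

        snprintf(vid, sizeof(vid), "%04x", le16_to_cpu(udev->descriptor.idVendor));
        snprintf(pid, sizeof(pid), "%04x", le16_to_cpu(udev->descriptor.idProduct));

        /* UMH_WAIT_EXEC: return once the helper process has started */
        return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
    }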
To the best of my knowledge, there is a mechanism for USB hot plugging in the kernel: when a hotplug event happens, the kernel broadcasts a uevent over netlink, and user space (udev, or mdev on BusyBox) can be notified. Unfortunately, I am not very familiar with the details.
Maybe linux-3.3.5/samples/kobject/kset-example.c will give you some ideas.
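If your platform does ship udev, that uevent route needs no kernel changes at all; a rule like the following (file name and handler path are placeholders) runs your app on every USB add event with the IDs as arguments:

    # /etc/udev/rules.d/99-usb-handler.rules (placeholder name)
    ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
        RUN+="/usr/bin/usb-handler $attr{idVendor} $attr{idProduct}"

On mdev-based BusyBox systems the equivalent would go into /etc/mdev.conf, with a different syntax.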
I am trying to write something for Windows using the WINAPI so I can make the touchpad do whatever the Mac touchpad does.
I have checked with Spy++ which WM messages two-finger taps and similar gestures send to the OS, and found that, give or take, it only sends these:
WM_LBUTTONDOWN
WM_LBUTTONUP
WM_MOUSEHOVER
WM_MOUSEHWHEEL
WM_MOUSELEAVE
WM_MOUSEMOVE
WM_RBUTTONDOWN
WM_RBUTTONUP
When I tried to see what happens when tapping with 2 or 3 fingers, it didn't send any particular message unless I moved them a bit.
Firstly, I would like to start with this:
when 5 fingers go down, show the desktop (as Win+D does).
How do I write something (a driver?) that can detect 5 fingers touching the touchpad simultaneously?
Of course there are no OS messages for this, but perhaps I can detect it via some unique combination of the existing messages.
If I need to write a driver, can I make it generic for most touchpads? Can I write it as an add-on?
If you can post a good tutorial you are familiar with for writing a driver for Windows, please do, because I have no clue about it.
Do I need to take anything else into account?
1. Detecting 5-finger touch events.
2. Making a thread in Explorer on startup that handles those new mouse messages.
Thanks in advance.
Mouse Input Notifications
In short: you can't.
First, there are touchpads that can physically detect only one finger touch, and for those that can detect many, their drivers do the translation for you.
Windows does not have any inherent support for reading multiple touch inputs; it relies on the touchpad drivers to provide them.
You can achieve your goal for SOME devices by writing your own touchpad driver (probably starting from the Linux touchpad drivers and the Windows Driver Kit), but this is far from simple.
And you'll need to do this for each and every touchpad device you want to support (Synaptics, Alps Electric, and Cirque, to name a few).
Only after that can you move on to implementing the reaction to those touchpad actions in applications like Explorer.
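For completeness: where the driver does expose individual contacts to Windows (touchscreens, and some touchpads with suitable drivers), the Windows 7+ touch API lets an application count simultaneous fingers. A minimal C sketch of the relevant window-procedure fragment, assuming the window has called RegisterTouchWindow:

    #define _WIN32_WINNT 0x0601   /* Windows 7, for WM_TOUCH */
    #include <windows.h>

    /* after CreateWindow: RegisterTouchWindow(hwnd, 0); */

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_TOUCH) {
            UINT count = LOWORD(wParam);      /* number of simultaneous contacts */
            TOUCHINPUT inputs[10];
            if (count <= 10 &&
                GetTouchInputInfo((HTOUCHINPUT)lParam, count,
                                  inputs, sizeof(TOUCHINPUT)) &&
                count == 5) {
                /* five fingers down: synthesize Win+D (show desktop) */
                INPUT keys[4] = {0};
                keys[0].type = INPUT_KEYBOARD; keys[0].ki.wVk = VK_LWIN;
                keys[1].type = INPUT_KEYBOARD; keys[1].ki.wVk = 'D';
                keys[2] = keys[1]; keys[2].ki.dwFlags = KEYEVENTF_KEYUP;
                keys[3] = keys[0]; keys[3].ki.dwFlags = KEYEVENTF_KEYUP;
                SendInput(4, keys, sizeof(INPUT));
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

Note that WM_TOUCH is only delivered to the registered window, so a system-wide five-finger gesture would still need driver-level work, as described above.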