Identify the Alexa Dot Versus Show - Is a display present?

We are looking for a function or code snippet to quickly identify whether our Lambda code is running on an Alexa Dot/Echo or on a Show/Spot. More than the device ID, we want the device type. Based on the device capability (display or no display), the code will provide different types of content. We are hoping to avoid a "device registration" feature. We are building a health-related Skill, which will also serve seniors, so we want to include visual interactions if a display is present, in the simplest possible fashion. Suggestions?

You should be able to examine the request and determine whether a display is present. I believe the relevant path is context.System.device.supportedInterfaces; under there you can look for Display. That way you are not checking for a particular model so much as for specific functionality, namely a display.
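As a minimal sketch (assuming a standard Alexa Skills Kit request envelope passed to the Lambda handler; the helper name is mine):

// Returns true if the requesting device advertises a Display interface
// (Show, Spot); false for audio-only devices (Dot, plain Echo).
function supportsDisplay(requestEnvelope) {
    var system = requestEnvelope.context && requestEnvelope.context.System;
    var device = system && system.device;
    var interfaces = device && device.supportedInterfaces;
    return Boolean(interfaces && interfaces.Display);
}

// Usage inside a handler:
// if (supportsDisplay(event)) { /* build a visual response */ }
// else { /* build a voice-only response */ }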

Related

Integration of IR-Blasters with Alexa Smart Home

My goal is to use the Alexa Voice Service (AVS) to control a TV, AC, etc. through an IR blaster.
I have my own IoT application that can be used to onboard the IR blaster and control it manually. The application supports other devices as well. In a previous implementation I linked AVS with the application, and it can now control switches, sockets, and bulbs through voice.
When implementing support for the blaster, I found it could be done by using the device category "OTHER" (as there isn't a category for blasters) and the Alexa.ModeController interface. I would have to create separate modes for each action of the remote (for an AC it would be cool mode, fan mode, etc.) and include every possible parameter value (1, 2, 3, 4) under that action. The problem with this approach is that the implementation is complex and also device specific, i.e. I have to do separate implementations for the TV and the AC.
Is there a better way to achieve this?
Can you list exactly what actions you want?
ModeController is a primitive, so it allows defining basically anything, but yes, instance ids must match. The other primitives are the RangeController and the ToggleController. All of them use instance ids for cases where you have multiple capabilities on the same endpoint.
There are also a lot of device-specific controllers. For example, to control an AC you can use Alexa.ThermostatController.
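For illustration, a hedged sketch of what one ModeController capability might look like in a Discover.Response (the instance name, mode values, and friendly names here are placeholders, not taken from the question):

// One capability entry for an AC's operating mode; the same pattern would be
// repeated per action, which is the duplication the question complains about.
var acModeCapability = {
    type: "AlexaInterface",
    interface: "Alexa.ModeController",
    instance: "AC.Mode",   // must match the instance in incoming directives
    version: "3",
    capabilityResources: {
        friendlyNames: [
            { "@type": "text", value: { text: "mode", locale: "en-US" } }
        ]
    },
    configuration: {
        ordered: false,
        supportedModes: [
            { value: "AC.Mode.Cool", modeResources: { friendlyNames: [
                { "@type": "text", value: { text: "cool", locale: "en-US" } } ] } },
            { value: "AC.Mode.Fan", modeResources: { friendlyNames: [
                { "@type": "text", value: { text: "fan", locale: "en-US" } } ] } }
        ]
    }
};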

Is it possible to restrict Alexa or an AVS device to a custom skill?

Is it possible to restrict an AVS device (a device running Alexa) to a single skill? So if I built an AI skill and have it running on a device, is it possible to keep the experience inside the custom skill so I don't have to keep saying "Alexa, open ..."
One trick you can do with AVS is to prepend every single request with a sound clip equivalent to: "ask to ..." It's definitely a hack, but I was able to use it with some success.
See my write-up here: https://www.linkedin.com/pulse/adding-context-alexa-will-blaschko-ma
The relevant parts (in case the link goes away):
Regular voice commands don't carry any extra information about the user, but I wanted to find a way to tack on metadata to the voice commands, and so I did just that: glued it right onto the end of the command and updated my intents to know what the new structure would be.
...
In addition to facial recognition, voice recognition could help identify users, but let's not stop there. Any amount of context can be added to a request based on available local data.
"Find frozen yogurt nearby" could silently become "Alexa, open Yelp and find frozen yogurt near 1st and Pine, Seattle" using some built-in geolocation in the device (a phone, in this case).
I also use something similar in my open source Android Alexa library to send prerecorded commands: https://github.com/willblaschko/AlexaAndroid
I think you are looking for Amazon Lex, which allows you to write Alexa-like bots without the rest of the Alexa feature set.
http://docs.aws.amazon.com/lex/latest/dg/what-is.html

Need to screen scrape browser as opposed to webpage

I have a webpage that needs to be scraped for certain text. The problem is that it's not really web scraping I am trying to achieve: the website is opened by a separate process. I am specifically talking about a webpage, but really it is more of a universal screen-scraping issue. Conceptually, it's more like I am scraping the browser instead of the page itself.
Is there a program that can scan any open process and look for and match text? To put it another way, it would be like having the browser's built-in Ctrl+F find function as a separate program. I just need a simple utility to tell me whether a given text is present, in a boolean fashion. I realize this is a very broad question, but I haven't been able to find anything about it. Maybe I don't quite know how to articulate it in a Google search, because my research keeps coming up empty.
If you already know the structure of the page, like it's always Google search results, or always an Amazon product, you might look at Selenium or one of the many Chrome screen-scraping add-ons.
If you want to grab data off of any page without knowing the format in advance, I don't know a way.
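If driving your own browser instance is acceptable, a minimal Selenium sketch (Node.js with the selenium-webdriver package; note this cannot inspect a browser window another process already opened) might look like:

var webdriver = require('selenium-webdriver');

// Loads a page and reports, in a boolean fashion, whether the given text appears.
async function pageContainsText(url, needle) {
    var driver = await new webdriver.Builder().forBrowser('chrome').build();
    try {
        await driver.get(url);
        var body = await driver.findElement(webdriver.By.tagName('body')).getText();
        return body.indexOf(needle) !== -1;
    } finally {
        await driver.quit();
    }
}

// pageContainsText('https://example.com', 'Example Domain').then(console.log);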

Should we create custom pages for all objects?

I noticed that Salesforce doesn't allow you to override controller functions for all objects. Say you want to do something whenever an object gets saved: there is no way to attach the action unless you create a custom page and include either a standard controller or an extension. Likewise, if I want to add the same meta tag on all pages, I run into this limitation. Is there a better way to do this?
Generally - no. Roughly speaking, if Salesforce doesn't allow you to do something, it's usually a pretty good hint that you're doing it wrong. I realize it sounds like I'm a fanboy, but in reality - can you expand your question with a concrete example of why you would want to do something like that? For example, governor limits are evil, annoying, etc. - but they force you to write efficient code that doesn't strain the database too much.
if you want to do something whenever objects get saved
That's what triggers are for. Ask yourself whether the "action" you need should happen only from the web UI or also when performed from the API (mass data loads, a smartphone application, etc.).
if you want to add the same meta-tag on all pages
You could maybe pull off a similar result by adding a component to the sidebar. It won't cover all cases (like accessing Reports/Dashboards), but it's hard to say more without knowing what you're really after. Then again, custom VF page overrides won't help you when it comes to Reports either.
I wanted to add this as a comment, but was unable to.
Anyway, for the example that you mentioned in the comment, you can add that jQuery plugin in the home page sidebar component and activate the plugin only on those custom objects where you want to run it. You might already know that we can deduce which object a record belongs to by looking at the first 3 characters of the record Id; using this logic, check if the record belongs to the custom object you want your plugin to act on and then run the plugin, as in the sketch below.
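A hedged sketch of that prefix check (the 'a0X' prefix and runMyPlugin are placeholders; the real values depend on your org and your plugin):

// Run the plugin only when the current record's Id starts with the
// 3-character key prefix of the target custom object.
var TARGET_PREFIX = 'a0X'; // placeholder: look up your object's real prefix

var idMatch = window.location.pathname.match(/^\/([a-zA-Z0-9]{15,18})$/);
if (idMatch && idMatch[1].substring(0, 3) === TARGET_PREFIX) {
    runMyPlugin(); // hypothetical: invoke your jQuery plugin here
}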
But as eyescream has pointed out, adding a script in the sidebar has its own limitations: you cannot use the global variables, sidebar components are not loaded on the Reports and Dashboards tabs, etc.
-ಸಮಿರ್

How do screen scrapers work? [closed]

I hear about people writing these programs all the time and I know what they do, but how do they actually do it? I'm looking for general concepts.
Technically, screen scraping is any program that grabs the display data of another program and ingests it for its own use.
Quite often, screen scraping refers to a web client that parses the HTML pages of a targeted website to extract formatted data. This is done when a website does not offer an RSS feed or a REST API for accessing the data in a programmatic way.
One example of a library used for this purpose is Hpricot for Ruby, which is one of the better-architected HTML parsers used for screen scraping.
Lots of accurate answers here.
What nobody's said is don't do it!
Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle.
As an example, consider an RSS aggregator, then consider code that gets the same information by working through a normal human-oriented blog interface. Which one breaks when the blogger decides to change their layout?
Of course, sometimes you have no choice :(
In general, a screen scraper is a program that captures output from a server program by mimicking the actions of a person sitting in front of the workstation using a browser or terminal-access program. At certain key points the program interprets the output and then takes an action or extracts certain amounts of information from it.
Originally this was done with character/terminal output from mainframes for extracting data or updating systems that were archaic or not directly accessible to the end user. In modern terms it usually means parsing the output of an HTTP request to extract data or to take some other action. With the advent of web services this sort of thing should have died away, but not all apps provide a nice API to interact with.
A screen scraper downloads the HTML page and pulls out the data of interest either by searching for known tokens or by parsing it as XML or some such.
In the early days of PCs, screen scrapers would emulate a terminal (e.g. an IBM 3270) and pretend to be a user in order to interactively extract and update information on the mainframe. In more recent times, the concept is applied to any application that provides an interface via web pages.
With the emergence of SOA, screen scraping is a convenient way to service-enable applications that aren't. In those cases, scraping the web pages is the more common approach taken.
Here's a tiny bit of screen scraping implemented in JavaScript, using jQuery (not a common choice, mind you, since scraping is usually a client-server activity):
// Show my SO reputation score
var repval = $('span.reputation-score:first');
alert('StackOverflow User "' + repval.prev().attr('href').split('/').pop() +
      '" has (' + repval.html() + ') Reputation Points.');
If you run Firebug, copy the above code and paste it into the Console and see it in action right here on this Question page.
If SO changes the DOM structure / element class names / URI path conventions, all bets are off and it may not work any longer - that's the usual risk in screen scraping endeavors where there is no contract/understanding between parties (the scraper and the scrapee [yes I just invented a word]).
Typically, you have an HTML page that contains some data you want. What you do is write a program that will fetch that web page and attempt to extract the data. This can be done with XML parsers, but for simple applications I prefer to use regular expressions to match a specific spot in the HTML and extract the necessary data. Sometimes it can be tricky to create a good regular expression, though, because the surrounding HTML appears multiple times in the document. You always want to match a unique item as close as you can to the data you need.
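A minimal sketch of that approach in Node.js (the URL and pattern are illustrative placeholders; a real pattern would be anchored on markup unique to the target page):

var https = require('https');

https.get('https://example.com', function (res) {
    var html = '';
    res.on('data', function (chunk) { html += chunk; });
    res.on('end', function () {
        // Match as close to the wanted data as possible, per the advice above.
        var match = html.match(/<h1>([^<]+)<\/h1>/);
        console.log(match ? match[1] : 'not found');
    });
});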
Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle.
Not quite true. I don't think I'm exaggerating when I say that most developers do not have enough experience to write decent APIs. I've worked with screen-scraping companies, and often the APIs are so problematic (ranging from cryptic errors to bad results) and give so little of the functionality the website provides that it can be better to screen scrape (web scrape, if you will). The extranet/website portals are used by more customers/brokers than API clients and are thus better supported. In big companies, changes to extranet portals etc. are infrequent, usually because the portal was originally outsourced and is now just maintained. I refer more to screen scraping where the output is tailored, e.g. a flight on a particular route and time, an insurance quote, a shipping quote, etc.
In terms of doing it, it can be as simple as using a web client to pull the page contents into a string and then using a series of regular expressions to extract the information you want.
// Requires System.Net and System.Text.RegularExpressions.
string pageContents = new WebClient().DownloadString("http://www.stackoverflow.com");
int numberOfPosts = Regex.Matches(pageContents, "question-summary").Count; // illustrative pattern
Obviously in a large scale environment you'd be writing more robust code than the above.
A screen scraper downloads the HTML page and pulls out the data of interest either by searching for known tokens or by parsing it as XML or some such.
That is a cleaner approach than regex... in theory. In practice, however, it's not quite as easy, given that most documents need to be normalized to XHTML before you can XPath through them; in the end we found that fine-tuned regular expressions were more practical.
