I am trying to get the contents of http://www.yahoo.com using WebClient#DownloadStringAsync(). However, since Silverlight doesn't allow cross-domain calls, I am getting a TargetInvocationException. I know we have to put clientaccesspolicy.xml and crossdomain.xml in the web server root, but that is only possible if I have control over the services. Yahoo is not under my control ;), so how do I handle this?
I did work around it by making a WCF service in my web application and calling WebClient from there. This works perfectly, but it is rather inefficient. Is there any better way than this?
Thanks in advance :)
Silverlight's cross-domain restrictions cause many developers to implement workarounds. If you need to display the HTML page you get back, you should look into the Silverlight 4 WebBrowser control, although this only seems to work when running in out-of-browser mode.
If you need to parse through the content you can try some of the following:
For a managed-code solution, the proxy service you have already implemented is your best option.
Write a Java applet that returns this information. Silverlight can interop with JavaScript, which can in turn interop with Java applets. This also works in reverse, but is a little difficult to set up; a rough sketch of the JavaScript glue appears after the code below. (If you need more info on this, let me know.)
Use a JavaScript XmlHttpRequest to get the data you want from the source. This can be difficult when supporting multiple browsers. This link shows an example of how to do this (you will need to scroll down): Javascript get Html
Code:
var xmlHttpRequestHandler = new Object();
var requestObject;

xmlHttpRequestHandler.createXmlHttpRequest = function() {
    var XmlHttpRequestObject;
    if (typeof XMLHttpRequest != "undefined") {
        XmlHttpRequestObject = new XMLHttpRequest();
    }
    else if (window.ActiveXObject) {
        var tryPossibleVersions = ["MSXML2.XMLHttp.5.0", "MSXML2.XMLHttp.4.0", "MSXML2.XMLHttp.3.0", "MSXML2.XMLHttp", "Microsoft.XMLHttp"];
        for (var i = 0; i < tryPossibleVersions.length; i++) {
            try {
                XmlHttpRequestObject = new ActiveXObject(tryPossibleVersions[i]);
                break;
            }
            catch (xmlHttpRequestObjectError) {
                // Ignore the exception and try the next version
            }
        }
    }
    return XmlHttpRequestObject;
};

function getHtml() {
    var url = document.getElementById('url').value;
    if (url.length > 0) {
        requestObject = xmlHttpRequestHandler.createXmlHttpRequest();
        requestObject.onreadystatechange = onReadyStateChangeResponse;
        requestObject.open("GET", url, true);
        requestObject.send(null);
    }
}

function onReadyStateChangeResponse() {
    var ready, status;
    try {
        ready = requestObject.readyState;
        status = requestObject.status;
    }
    catch (e) {
        // Ignore the exception
    }
    if (ready == 4 && status == 200) {
        alert(requestObject.responseText);
    }
}
I am trying to use the client libraries provided by Google to move traffic from one version of an app in App Engine to another. However, the documentation for doing this only covers the REST API, not the client libraries.
Here is some example code:
var servicesClient = Google.Cloud.AppEngine.V1.ServicesClient.Create();
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = "apps/myProject/services/myService";
var updateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask = updateMask;
// See below for what should go here...
var updateResponse = servicesClient.UpdateService(updateServiceRequest);
My question is what format do I use for the update mask?
According to the documentation I should put in:
split {"split": { "allocations": { "newVersion": 1 } } }
But when I try: updateMask.Paths.Add(@"split { ""split"": { ""allocations"": { ""myNewVersion"": 1 } } }");
... I get the exception:
"This operation is only supported on the following field(s): [labels, migration_config, network_settings, split, tag_to_target_map], but got field(s): [split { "split": { "allocations": { "myNewVersion": 1 } } }] from the update request.
Any ideas where I should put the details of the split in the field mask object? The property Paths just seems to be a collection of strings.
The examples for these libraries in Google's doco are pretty poor :-(
I raised a support ticket with Google, and although the solution they suggested didn't work exactly as given (it tried to assign a string to UpdateMask, which needs a FieldMask object), I was able to use it to find the correct solution.
The code should be as follows. The key is that the FieldMask only names the top-level field being updated ("split"); the new allocation values are carried by the Service object itself:
// appService is a previously retrieved Service object from the ListServices method
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = appService.Name;
updateServiceRequest.UpdateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask.Paths.Add("split");
appService.Split.Allocations.Clear();
appService.Split.Allocations["newServiceVerison"] = 1;
updateServiceRequest.Service = appService;
I have created a JavaScript application (aka UWA) in order to play with my Belkin Wemo and turn the light on or off with Cortana. The following function is called correctly, but Cortana ends up with an issue. If I remove the HTTP call, the program works fine. Can anyone tell me what's wrong with the following function? No further details are exposed, unfortunately (of course, in the real program <url> is replaced with the right URL):
function setWemo(status) {
    WinJS.xhr({ url: "<url>" }).then(function () {
        var userMessage = new voiceCommands.VoiceCommandUserMessage();
        userMessage.spokenMessage = "Light is now turned " + status;

        var statusContentTiles = [];
        var statusTile = new voiceCommands.VoiceCommandContentTile();
        statusTile.contentTileType = voiceCommands.VoiceCommandContentTileType.titleOnly;
        statusTile.title = "Light is set to: " + status;
        statusContentTiles.push(statusTile);

        var response = voiceCommands.VoiceCommandResponse.createResponse(userMessage, statusContentTiles);
        return voiceServiceConnection.reportSuccessAsync(response);
    }).done();
}
Make sure that your background task has access to the WinJS namespace. For background tasks, since there isn't a default.html, base.js won't be imported automatically unless you do it explicitly.
I had to update WinJS to version 4.2 from here (or the source repository on git) and add that to my project, replacing the released version that comes with VS 2015. WinJS 4.0 has a bug where it complains about gamepad controls if you try to import it this way (see this MSDN forum post).
Then I added a line like
importScripts("/Microsoft.WinJS.4.0/js/base.js");
to the top of the script's starting code to import WinJS. Without this, you're probably getting an error like "WinJS is undefined" in your debug console, but for some reason, whenever I hit that, I wasn't getting a debug break in Visual Studio. This was causing the Cortana session to hang doing nothing, never sending a final response.
I'd also add that you should handle errors and progress, so that you can periodically send progress reports to Cortana and ensure it does not time you out (which is probably why it gives you the error after around five seconds):
WinJS.xhr({ url: "http://urlhere/", responseType: "text" }).done(
    function completed(webResponse) {
        // ... handle response here
    },
    function error(errorResponse) {
        // ... error handling
    },
    function progress(requestProgress) {
        // ... some kind of check to see if it's been longer than a
        // second or two since the last progress report
        var userProgressMessage = new voiceCommands.VoiceCommandUserMessage();
        userProgressMessage.displayMessage = "Still working on it!";
        userProgressMessage.spokenMessage = "Still working on it";
        var response = voiceCommands.VoiceCommandResponse.createResponse(userProgressMessage);
        return voiceServiceConnection.reportProgressAsync(response);
    });
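As a rough illustration of the throttling check the comment above alludes to (the two-second threshold and the lastProgressReport/shouldReportProgress names are my own, not part of any API):

var lastProgressReport = 0;

function shouldReportProgress() {
    // Only send a new progress report if roughly two seconds have passed
    // since the last one, so we don't flood Cortana with responses.
    var now = new Date().getTime();
    if (now - lastProgressReport > 2000) {
        lastProgressReport = now;
        return true;
    }
    return false;
}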
I have a game finished and I'm saving the scores in a database perfectly, but only if I use webview+; if I change to canvas+, the save process does not work (and there are no errors).
I'm trying to save this data by sending it to a PHP file connected to the database (as I said, it works using webview or webview+).
How would you do this using canvas+?
var http = new XMLHttpRequest();
var url = path + "getScores.php";
var params = "game=1&order=ASC";

http.open("POST", url, true);
http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
// Note: standards-compliant XHR implementations refuse to let scripts set
// Content-length and Connection, so these two calls are normally ignored.
http.setRequestHeader("Content-length", params.length);
http.setRequestHeader("Connection", "close");

http.onreadystatechange = function() {
    if (http.readyState == 4 && http.status == 200) {
        alert(http.responseText);
    }
};

http.send(params);
Please try passing the parameters in the request URL using a GET (just in case), as in the sketch below. Also, try compiling a custom launcher in the cloud using the latest version (2.1.1). Actually, the jQuery errors should disappear by doing this.
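A minimal sketch of the GET variant suggested above, reusing the path variable and getScores.php endpoint from the question:

var http = new XMLHttpRequest();
// Move the parameters into the query string instead of the POST body.
var url = path + "getScores.php?game=1&order=ASC";

http.onreadystatechange = function() {
    if (http.readyState == 4 && http.status == 200) {
        alert(http.responseText);
    }
};

http.open("GET", url, true);
http.send(null);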
And as Iker said, you can fix this with a custom launcher. I didn't know about this, but it's working perfectly now, so if you have this problem, just try that. Good luck.
I am new to SEO and just want to get an idea of how it works for a single-page application with dynamic content.
In my case, I have a single-page application (powered by AngularJS, using a router to show different states) that provides location-based search functionality, similar to Zillow, Redfin, or Yelp. On my site, a user can type in a location name, and the site will return results based on that location.
I am trying to figure out a way to make this work well with Google. For example, if I type "Apartment San Francisco" into Google, the results include deep links into sites like the ones above, and when users click on those links, the sites display the correct result. I would like similar SEO for my site.
The question is: the page content depends entirely on the user's query. Users can search by city name, state name, zip code, etc., to show different results, and it's not possible to put them all into a sitemap. How can Google crawl the content for this kind of dynamic page?
I don't have experience with SEO and am not sure how to do it for my site. Please share some experience or pointers to help me get started. Thanks a lot!
===========
Follow up question:
I saw that Googlebot can now run JavaScript, and I want to understand this a bit more. When a specific URL of my SPA is opened, it makes some network queries (XHR requests) for a few seconds and then the page content is displayed. In this case, will Googlebot wait for the HTTP responses?
Some tutorials say we need to prepare static HTML specifically for search engines. If I only want to deal with Google, does that mean I no longer have to serve static HTML, because Google can run JavaScript?
Thanks again.
If a search engine comes across your JavaScript application, we are permitted to redirect the search engine to another URL that serves the fully rendered version of the page.
For this job, you can either use the SEO Server tool by Thomas Davis, available on GitHub, or use the code below, which does the same job (this code is also available here).
Implementation using Phantom.js
We can set up a Node.js server that, given a URL, fully renders the page content. We will then redirect bots to this server to retrieve the correct content.
We will need to install Node.js and PhantomJS onto a box, then start up the server below. There are two files: one is the web server, and the other is a PhantomJS script that renders the page.
// web.js
// Express is our web server that handles requests
var express = require('express');
var app = express();

var getContent = function(url, callback) {
    var content = '';
    // Here we spawn a phantomjs process; the first element of the
    // array is our phantomjs script and the second element is our url
    var phantom = require('child_process').spawn('phantomjs', ['phantom-server.js', url]);
    phantom.stdout.setEncoding('utf8');
    // Our phantom-server.js script simply logs the rendered page, and
    // we collect that output here through stdout
    phantom.stdout.on('data', function(data) {
        content += data.toString();
    });
    phantom.on('exit', function(code) {
        if (code !== 0) {
            console.log('We have an error');
        } else {
            // Once our phantom.js script exits, call our callback,
            // which sends the content to the client
            callback(content);
        }
    });
};

var respond = function(req, res) {
    // Because we use [P] in .htaccess we have access to this header
    var url = 'http://' + req.headers['x-forwarded-host'] + req.params[0];
    getContent(url, function(content) {
        res.send(content);
    });
};

app.get(/(.*)/, respond);
app.listen(3000);
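To try the server locally (assuming phantomjs and phantom-server.js are in place), start it with node web.js and request any path on port 3000. Note that it builds the target URL from the X-Forwarded-Host header, which mod_proxy sets when the [P] rule below is in effect, so a hand-made test request needs to supply that header itself.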
The script below is phantom-server.js and is in charge of fully rendering the content. We don't return the content until the page is fully rendered; we hook into the resource listeners to do this.
var page = require('webpage').create();
var system = require('system');

var lastReceived = new Date().getTime();
var requestCount = 0;
var responseCount = 0;
var requestIds = [];
var startTime = new Date().getTime();

page.onResourceReceived = function (response) {
    if (requestIds.indexOf(response.id) !== -1) {
        lastReceived = new Date().getTime();
        responseCount++;
        requestIds[requestIds.indexOf(response.id)] = null;
    }
};

page.onResourceRequested = function (request) {
    if (requestIds.indexOf(request.id) === -1) {
        requestIds.push(request.id);
        requestCount++;
    }
};

// Open the page
page.open(system.args[1], function () {});

var checkComplete = function () {
    // We don't allow rendering to take longer than 5 seconds, but
    // otherwise don't return until all requests have finished
    if ((new Date().getTime() - lastReceived > 300 && requestCount === responseCount) || new Date().getTime() - startTime > 5000) {
        clearInterval(checkCompleteInterval);
        console.log(page.content);
        phantom.exit();
    }
};

// Check periodically to see if the page has finished rendering
var checkCompleteInterval = setInterval(checkComplete, 1);
Once we have this server up and running, we just redirect bots to it in our client's web server configuration.
Redirecting bots
If you are using Apache, we can edit our .htaccess so that Google requests are proxied to our middleman phantom.js server.
RewriteEngine on
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule (.*) http://webserver:3000/%1? [P]
We could also include other RewriteCond directives, such as a user-agent condition, to redirect other search engines we wish to be indexed on; a hypothetical example follows.
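For instance, a sketch of such a user-agent condition (the bot names here are illustrative, not an exhaustive list; this matches on the request path rather than the _escaped_fragment_ query string):

RewriteEngine on
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|baiduspider) [NC]
RewriteRule (.*) http://webserver:3000/$1 [P]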
Note, though, that Google won't use _escaped_fragment_ unless we tell it to, either by including the meta tag <meta name="fragment" content="!"> or by using #! in our links' URLs.
You will most likely have to use both.
This has been tested with the Google Webmaster Tools fetch tool. Make sure you include #! in your URLs when using the fetch tool.
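As a concrete example of the mapping (example.com is a placeholder): a crawler that honors the scheme will see http://example.com/#!/search/san-francisco and instead fetch http://example.com/?_escaped_fragment_=/search/san-francisco, which the rewrite rules above proxy to the PhantomJS server.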
To my understanding, Protractor is meant to run on top of WebDriver in Node.js and send commands to a Selenium server (feel free to correct me if I'm wrong).
Anyway, I was wondering if it is possible to load Protractor into a web page as a JavaScript library (like you would load jQuery for example) so it would be accessible from the JavaScript code in the page.
Can it be done? If so, how? What files do I need? What dependencies?
My goal is to use its capabilities for selecting elements by their various Angular bindings, and its ability to wait for Angular events.
I don't think you can. You can write your own locators, or use directives to isolate an element/binding. Another good thing to do would be to dive into the Protractor source code and see how they do it; in particular, check out their clientsidescripts.js file. Here is an example of how they find bindings. You would call it like this: findBindings('exampleBinding'); a usage sketch follows the code.
// The clientSideScripts object must exist before we attach to it
var clientSideScripts = {};

clientSideScripts.findBindings = function() {
    var binding = arguments[0];
    var using = arguments[1] || document;
    var bindings = using.getElementsByClassName('ng-binding');
    var matches = [];
    for (var i = 0; i < bindings.length; ++i) {
        var dataBinding = angular.element(bindings[i]).data('$binding');
        if (dataBinding) {
            var bindingName = dataBinding.exp || dataBinding[0].exp || dataBinding;
            if (bindingName.indexOf(binding) != -1) {
                matches.push(bindings[i]);
            }
        }
    }
    return matches; // Return the whole array for webdriver.findElements.
};
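A small usage sketch, assuming the snippet above has been loaded into an Angular page (for example, pasted into the browser console); the binding name 'exampleBinding' is a placeholder:

// Find all elements whose binding expression mentions 'exampleBinding'.
var matches = clientSideScripts.findBindings('exampleBinding');
for (var i = 0; i < matches.length; i++) {
    console.log(matches[i].outerHTML);
}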