Protractor-net, non-Angular login page - angularjs

Using protractor-net, the login page is non-Angular whereas the home page is Angular. Hence I cannot launch the browser with the URL using NgWebDriver, probably since it is looking for Angular. I tried angular.ignoreSynchronization="false", but I get the same issue. If I use angDriver.WrappedDriver.FindElement to get past the login, the Angular objects on the home page are not recognized (asynchronous script error - timeout).
// Plain ChromeDriver plus an NgWebDriver scoped to the app's root element
driver = new ChromeDriver("C:\\FTWork\\DriverFiles\\chromedriver_win32\\");
driver.Manage().Timeouts().SetScriptTimeout(TimeSpan.FromSeconds(20));
angDriver = new NgWebDriver(driver, "[ng-app='Phoenix']");
string root = angDriver.RootElement;
// Navigate and log in through the wrapped (non-Angular) driver
angDriver.WrappedDriver.Navigate().GoToUrl(url);
angDriver.WrappedDriver.Manage().Window.Maximize();
driver = angDriver.WrappedDriver;
driver.FindElement(By.Id("UserID")).Clear();
driver.FindElement(By.Id("UserID")).SendKeys("");
driver.FindElement(By.Id("Password")).SendKeys("");
driver.FindElement(By.Id("searchsubmit")).Click();
System.Threading.Thread.Sleep(10000);
// Fails here with an asynchronous script timeout when locating the Angular binding
string dolAmt = angDriver.FindElement(NgBy.Binding("activeValue")).Text;

I am hoping this will do it.
_driver = new ChromeDriver("C:\\FTWork\\DriverFiles\\chromedriver_win32\\");
// The script timeout is almost essential, since most of the Protractor mechanics depend on client-side scripts.
_driver.Manage().Timeouts().SetScriptTimeout(TimeSpan.FromSeconds(10));
// Do whatever you need for the login with the plain ChromeDriver
string url = "url for angular page";
_ngWebDriver = new NgWebDriver(_driver, "[ng-app='Phoenix']");
// You have to navigate to the URL so that _ngWebDriver picks up the Angular page; do NOT just click through to the Angular page
_ngWebDriver.Navigate().GoToUrl(url);
_ngWebDriver.Manage().Window.Maximize();
// Start finding elements with the NgBy class
NgWebElement ngElement = _ngWebDriver.FindElement(NgBy.Model("model"));
ngElement.Clear();
EDIT
driver = new ChromeDriver("C:\\FTWork\\DriverFiles\\chromedriver_win32\\");
driver.Manage().Timeouts().SetScriptTimeout(TimeSpan.FromSeconds(20));
// Log in on the non-Angular login page with the plain driver (navigate there first)
driver.FindElement(By.Id("UserID")).Clear();
driver.FindElement(By.Id("UserID")).SendKeys("");
driver.FindElement(By.Id("Password")).SendKeys("");
driver.FindElement(By.Id("searchsubmit")).Click();
// Phoenix is the ng-app of the Angular page you land on after login
string url = "url for angular page containing [ng-app='Phoenix']";
NgWebDriver angDriver = new NgWebDriver(driver, "[ng-app='Phoenix']");
// Navigate with the NgWebDriver itself, not the wrapped driver
angDriver.Navigate().GoToUrl(url);
angDriver.Manage().Window.Maximize();
// Keep a reference to the underlying driver for any remaining non-Angular elements
driver = angDriver.WrappedDriver;
string dolAmt = angDriver.FindElement(NgBy.Binding("activeValue")).Text;
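Alternatively, since the question mentions ignoreSynchronization: protractor-net's NgWebDriver exposes an IgnoreSynchronization property (check that your version has it). A minimal sketch of that route, with the login page URL left as a placeholder, would be to create the NgWebDriver up front, switch synchronization off for the non-Angular login page, and switch it back on once you reach the Angular home page:
// Rough sketch, not tested against your app; assumes NgWebDriver.IgnoreSynchronization is available in your protractor-net version
var driver = new ChromeDriver("C:\\FTWork\\DriverFiles\\chromedriver_win32\\");
driver.Manage().Timeouts().SetScriptTimeout(TimeSpan.FromSeconds(20));
var angDriver = new NgWebDriver(driver, "[ng-app='Phoenix']");
// The login page is not Angular, so skip Angular synchronization while logging in
angDriver.IgnoreSynchronization = true;
angDriver.Navigate().GoToUrl(loginUrl); // loginUrl is a placeholder for your login page
angDriver.FindElement(By.Id("UserID")).SendKeys("");
angDriver.FindElement(By.Id("Password")).SendKeys("");
angDriver.FindElement(By.Id("searchsubmit")).Click();
// Back on the Angular home page: re-enable synchronization before using NgBy locators
angDriver.IgnoreSynchronization = false;
string dolAmt = angDriver.FindElement(NgBy.Binding("activeValue")).Text;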

Related

React Native fetch wait for dynamic content

I have a React Native app that uses the dom-parser module to extract relevant pieces of information from a website that I don't own. The information I need is loaded dynamically into the page after it finishes loading in the browser. Is there a way to get this in a React Native app using fetch()? I don't want users to see the website open up in the app.
What I've tried:
const html = (await (await fetch(this.search_url)).text()); // get the document
var dom = parser.parseFromString(html); // parse it
var json = dom.getElementsByTagName("script")[5].innerHTML; // this is the element that I need
console.log(json);
fetch(this.search_url).then((response) => response.text()).then((html) => {
    var dom = parser.parseFromString(html);
    var json = dom.getElementsByTagName("script")[5].innerHTML;
    console.log(json);
});
Both of these return a blank response as output. However, when I looked at the source of this.search_url in a browser, the data is loaded a few seconds after the page itself loads. Is there a way to get this data in the app? Maybe some trick to make fetch() wait for a few seconds before reading the response?

JSON data post to the URL opened in CefSharp browser with example

I am opening a webpage in the CefSharp browser and trying to send a set of JSON data to my website's .aspx page along with a query string. The query string is not an issue, but sending the JSON data to the same URL is what I am trying to fix. Earlier I was using the Windows native WebBrowser control's Navigate method, where I could pass the URL along with the query string as well as a byte array of post data. I cannot find an equivalent method in CefSharp to post the data, and the various discussions and posts on the subject don't have a clear example. Can you provide sample code to show how to achieve this? Here is the code I've been using:
ChromiumWebBrowser browser = new ChromiumWebBrowser();
browser.Address = "https://webhook.site";
browser.Width = System.Windows.SystemParameters.PrimaryScreenWidth;
browser.Height = System.Windows.SystemParameters.PrimaryScreenHeight;
browser.RequestHandler = this;
browser.IsBrowserInitializedChanged += (sender, args) =>
{
    if (browser.IsBrowserInitialized)
    {
        browser.LoadUrlWithPostData("https://webhook.site/#/cba9d04b-01ff-40ef-b223-0917d127ecbe/6ce82e34-28df-4900-88ef-c932a446c6b0/1", Encoding.UTF8.GetBytes("test=123&data=456"));
    }
};
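As a point of reference, a minimal sketch of sending a JSON payload with the same LoadUrlWithPostData call used above might look like the following. The URL and payload here are placeholders, and depending on your CefSharp version LoadUrlWithPostData may or may not accept an optional content-type argument; if it doesn't, the receiving .aspx page has to read the raw request body:
// Hypothetical sketch: post JSON bytes to the target page (URL and payload are placeholders)
string json = "{\"test\":123,\"data\":456}";
byte[] postData = Encoding.UTF8.GetBytes(json);
browser.LoadUrlWithPostData("https://example.com/page.aspx?id=1", postData);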

pass relative url in signalR hub connection

I am trying to implement SignalR in AngularJS. I want to pass a relative URL to the hub connection, but it keeps using the current URL (the one my Angular application is hosted on).
My API base URL: http://localhost:81/NrsService/api/TestSignal
My Angular application is running at
http://localhost:81
Here is my SignalR setup:
$.connection.hub.url = "/NrsService/api/TestSignal";
//Getting the connection object
connection = $.hubConnection();
Right now it is sending the negotiate request to http://localhost:81/signalr/negotiate? but I want it to go to http://localhost:81/NrsService/api/TestSignal/negotiate?
You have to edit the generated JavaScript code where the client proxy is defined. As of SignalR 2.4.0 there is a createHubProxies function defined where you should find this line of code:
signalR.hub = $.hubConnection("/signalr", { useDefaultPath: false });
Change it to the following to prevent the "/signalr" path from ending up in your requests:
signalR.hub = $.hubConnection("", { useDefaultPath: false });
After that, you can simply set the URL that should be called, the way you provided in your question, e.g.:
$.connection.hub.url = "/NrsService/api/TestSignal";
If you also want to set this URL dynamically, you can use the document.location properties. In my case, I did something like this:
var subPath = document.location.pathname.substr(0, document.location.pathname.lastIndexOf("/"));
$.connection.hub.url = subPath; // subPath equals "/NrsService/api"
Hope this helps.

ASP.NET MVC bundle won't update if loaded dynamically

I have an AngularJS application and, because it is a single-page application, I load some scripts dynamically depending on the user's navigation, so I don't overload the initial page load.
The problem is that some of these scripts are uglified and minified in an ASP.NET MVC bundle, and when I update a source script, the imported bundle never gets updated.
Why does that happen, and what can I do to force an update?
Why that happens
The ASP.NET bundling system comes with a caching mechanism. When you add the bundle to the page using Scripts.Render, the engine automatically appends a v query string to the bundle URL.
@Scripts.Render("~/bundles/commands")
produces something like:
<script src="/bundles/commands?v=eiR2xO-xX5H5Jbn3dKjSxW7hNCH9DfgZHqGApCP3ARM1"></script>
If this parameter is not provided, the cached result will be returned. If you add the script tag manually, without it, you can face the same caching issue.
Info about the v query string is provided here ("Bundle Caching"), but is not very helpful.
What can I do
You can still load the bundled scripts dynamically, but you will have to add the v parameter. Note that it doesn't work with a randomly generated hash (I tried). Thanks to Frison B Alexander, this is possible with the following approach:
private static string GetHashByBundlePath(string bundlePath)
{
    BundleContext bundleContext = new BundleContext(new HttpContextWrapper(System.Web.HttpContext.Current), BundleTable.Bundles, bundlePath);
    Bundle bundle = BundleTable.Bundles.GetBundleFor(bundlePath);
    BundleResponse bundleResponse = bundle.GenerateBundleResponse(bundleContext);
    Type bundleReflection = bundleResponse.GetType();
    MethodInfo method = bundleReflection.GetMethod("GetContentHashCode", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
    object contentHash = method.Invoke(bundleResponse, null);
    return contentHash.ToString();
}
So what you can do is return the bundle hash from the ASP.NET view and read it when you need to load the script.
In my application, I created a JS object specifically for this:
var appBundles = {
    commands: "/bundles/commands?v=eiR2xO-xX5H5Jbn3dKjSxW7hNCH9DfgZHqGApCP3ARM1"
};
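For illustration only: assuming GetHashByBundlePath above is exposed through a static helper class (here called BundleHelper, a made-up name for this sketch), the Razor view could emit that object with the current hash instead of a hard-coded one:
<script>
    // BundleHelper is a hypothetical static wrapper around the GetHashByBundlePath method shown above
    var appBundles = {
        commands: "/bundles/commands?v=@BundleHelper.GetHashByBundlePath("~/bundles/commands")"
    };
</script>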
Hope this helps!
I had this problem with bundles not updating when I was loading bundles from one MVC app in another MVC app using GTM (sounds messed up, but it actually makes sense in the context of multiple MVC apps sharing code between them).
What I came up with is what Marcos Lima wrote in his answer, but taken a step further.
I've added a Bundle controller with the following code:
public class BundleController : Controller
{
    private static string GetHashByBundlePath(string bundlePath)
    {
        BundleContext bundleContext = new BundleContext(new HttpContextWrapper(System.Web.HttpContext.Current), BundleTable.Bundles, bundlePath);
        Bundle bundle = BundleTable.Bundles.GetBundleFor(bundlePath);
        BundleResponse bundleResponse = bundle.GenerateBundleResponse(bundleContext);
        Type bundleReflection = bundleResponse.GetType();
        MethodInfo method = bundleReflection.GetMethod("GetContentHashCode", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        object contentHash = method.Invoke(bundleResponse, null);
        return contentHash.ToString();
    }
    public ActionResult Index(string bundleName)
    {
        string bundlePath = "~/bundles/" + bundleName;
        var hash = GetHashByBundlePath(bundlePath);
        return RedirectPermanent(bundlePath + "?v=" + hash);
    }
}
Then I've added this route:
routes.MapRoute(
    name: "Bundle",
    url: "Bundle/{bundleName}",
    defaults: new { controller = "Bundle", action = "Index" }
);
The end result is that I request the bundles through the controller. Because it does a 301 redirect, the Index action runs only once per user: it returns the current version of the bundle, and the bundle is served from the browser cache afterwards. When I actually update the bundle, I add a query parameter to the request URL (in GTM) and all users then get the updated bundle.
Of course, I assume the bundles are placed under the ~/bundles/ path, but that should be easy enough to change if yours live elsewhere. In fact, the route isn't even strictly necessary.
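To illustrate the usage (a hypothetical example, assuming the route above and a bundle registered as ~/bundles/commands), the page or GTM tag would then reference the bundle through the controller instead of the hashed URL:
<!-- The controller 301-redirects this request to /bundles/commands?v=<current hash> -->
<script src="/Bundle/commands"></script>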

Dynamic content Single Page Application SEO

I am new to SEO and just want to get an idea of how it works for a Single Page Application with dynamic content.
In my case, I have a single page application (powered by AngularJS, using the router to show different states) that provides some location-based search functionality, similar to Zillow, Redfin, or Yelp. On my site, a user can type in a location name, and the site will return some results based on the location.
I am trying to figure out a way to make it work well with Google. For example, if I type "Apartment San Francisco" into Google, the results are pages from sites like these, and when users click on those links, the sites display the correct result. I am thinking about having similar SEO for my site.
The question is, the page content depends purely on the user's query. Users can search by city name, state name, zip code, etc., to show different results, and it's not possible to put them all into a sitemap. How can Google crawl the content for these kinds of dynamic page results?
I don't have experience with SEO and I'm not sure how to do this for my site. Please share some experience or pointers to help me get started. Thanks a lot!
===========
Follow up question:
I saw that Googlebot can now run JavaScript, and I want to understand this a bit more. When a specific URL of my SPA is opened, it performs some network queries (XHR requests) for a few seconds and then the page content is displayed. In this case, will Googlebot wait for the HTTP response?
I saw some tutorials saying we need to prepare static HTML specifically for search engines. If I only want to deal with Google, does that mean I don't have to serve static HTML anymore, because Google can run JavaScript?
Thanks again.
If a search engine comes across your JavaScript application, you are allowed to redirect it to another URL that serves the fully rendered version of the page.
For this job you can either use the SEOSERVER tool by Thomas Davis, available on GitHub, or you can use the code below, which does the same job.
Implementation using Phantom.js
We can set up a Node.js server that, given a URL, fully renders the page content. Then we redirect bots to this server to retrieve the correct content.
We will need to install Node.js and PhantomJS on a box and then start up the server below. There are two files: one is the web server and the other is a PhantomJS script that renders the page.
// web.js
// Express is our web server that can handle requests
var express = require('express');
var app = express();
var getContent = function(url, callback) {
    var content = '';
    // Here we spawn a phantom.js process; the first element of the
    // array is our phantomjs script and the second element is our url
    var phantom = require('child_process').spawn('phantomjs', ['phantom-server.js', url]);
    phantom.stdout.setEncoding('utf8');
    // Our phantom.js script simply logs the rendered page, and
    // we access it here through stdout
    phantom.stdout.on('data', function(data) {
        content += data.toString();
    });
    phantom.on('exit', function(code) {
        if (code !== 0) {
            console.log('We have an error');
        } else {
            // Once our phantom.js script exits, call our callback,
            // which sends the rendered content back in the response
            callback(content);
        }
    });
};
var respond = function (req, res) {
    // Because we use [P] in .htaccess we have access to this header
    var url = 'http://' + req.headers['x-forwarded-host'] + req.params[0];
    getContent(url, function (content) {
        res.send(content);
    });
};
app.get(/(.*)/, respond);
app.listen(3000);
The script below is phantom-server.js and is in charge of fully rendering the content. We don't return the content until the page is fully rendered; we hook into the resource listeners to do this.
var page = require('webpage').create();
var system = require('system');
var lastReceived = new Date().getTime();
var requestCount = 0;
var responseCount = 0;
var requestIds = [];
var startTime = new Date().getTime();
page.onResourceReceived = function (response) {
    if (requestIds.indexOf(response.id) !== -1) {
        lastReceived = new Date().getTime();
        responseCount++;
        requestIds[requestIds.indexOf(response.id)] = null;
    }
};
page.onResourceRequested = function (request) {
    if (requestIds.indexOf(request.id) === -1) {
        requestIds.push(request.id);
        requestCount++;
    }
};
// Open the page
page.open(system.args[1], function () {});
var checkComplete = function () {
    // We don't allow it to take longer than 5 seconds, but
    // we don't return until all requests are finished
    if ((new Date().getTime() - lastReceived > 300 && requestCount === responseCount) || new Date().getTime() - startTime > 5000) {
        clearInterval(checkCompleteInterval);
        console.log(page.content);
        phantom.exit();
    }
};
// Check periodically to see if the page has finished rendering
var checkCompleteInterval = setInterval(checkComplete, 1);
Once we have this server up and running we just redirect bots to the server in our client's web server configuration.
Redirecting bots
If you are using Apache, we can edit our .htaccess so that Google requests are proxied to our middleman phantom.js server.
RewriteEngine on
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule (.*) http://webserver:3000/%1? [P]
We could also include other RewriteCond rules, such as a user-agent condition, to redirect other search engines we wish to be indexed by.
Note that Google won't use _escaped_fragment_ unless we tell it to, either by including the meta tag <meta name="fragment" content="!"> or by using #! URLs in our links.
You will most likely have to use both.
This has been tested with the Google Webmaster Tools fetch tool. Make sure you include #! in your URLs when using the fetch tool.
