How do I save a screenshot when an MSpec/Selenium test fails? - selenium-webdriver

I'm using MSpec to drive some automated UI tests with Selenium WebDriver, much like the examples I found online. I'm having problems getting it to take a screenshot when a test fails.
I saw a comment on another issue where it works because they have a ResultSupplementer in the sample web specs. However, ResultSupplementer does not seem to exist in the latest version of MSpec (0.9.1).
Is there a different way to do this in the latest version of MSpec? Ultimately, I'm going to generate HTML reports as TeamCity artifacts and include the screenshot on any failing specs.

I've updated the samples for the latest version of MSpec (in short, you need to implement ISupplementSpecificationResults yourself).
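For context, here is a rough, untested sketch of such a supplementer, assuming the SupplementResult(Result) signature and the namespaces used in the updated mspec-samples (verify both against the samples); the Browser.Current holder and the output path are placeholders for however your specs expose the WebDriver and wherever you want the files:

using System;
using System.IO;
using Machine.Specifications;          // Result, Status
using Machine.Specifications.Runner;   // assumed location of ISupplementSpecificationResults; may differ by version
using OpenQA.Selenium;

// Placeholder for however your specs share the driver; the samples use a similar static holder.
public static class Browser
{
    public static IWebDriver Current { get; set; }
}

// Saves a screenshot for every failing spec so it can later be attached to the HTML report.
public class ScreenshotSupplementer : ISupplementSpecificationResults
{
    public Result SupplementResult(Result result)
    {
        if (result.Status != Status.Failing || Browser.Current == null)
            return result;

        try
        {
            var screenshot = ((ITakesScreenshot)Browser.Current).GetScreenshot();
            var path = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N") + ".png");
            File.WriteAllBytes(path, screenshot.AsByteArray);
            // The mspec-samples additionally attach the file path to the result as a
            // "supplement" so the HTML report generator can link or embed the image;
            // see the samples for the exact call.
        }
        catch (Exception)
        {
            // Never let a screenshot failure mask the original spec failure.
        }

        return result;
    }
}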
I've also merged the solutions and converted the MVC project to Nancy. You'll find there's a bit more infrastructure-related code that has grown over the last couple of years and works around or deals with various things, such as:
status codes 4xx and 5xx logged by IIS Express
IIS and Chrome Driver ports bound by other processes
page objects access the web driver with a high-level API
I use Paket for dependency management because it's far more powerful than plain NuGet
All that said, you need to run msbuild.exe mspec-samples.sln and then All-Specs.cmd. I've also checked that a TeamCity build creates screenshots.


Google App Engine Error: DNS lookup failed for URL: http://metadata.google.internal

I've been working on a small Google App Engine (standard environment) project that uses Cloud Endpoints v2. My code is largely based on the quickstart provided by Google.
Everything was working fine, but I re-deployed today after having not looked at it for a few weeks, and I'm getting the following error when I attempt to call the endpoint:
error: An error occured while connecting to the server: DNS lookup failed for URL: metadata.google.internal
This wasn't happening before. It seems to be happening when the endpoints package is being imported by Python.
My endpoint doesn't do anything fancy - I haven't changed the source from the sample EchoApi. The error ends up in the GCP Logging console whether I access the API through the API Explorer or via curl.
I don't get any errors during deployment.
Edit #1
Some further information:
The error originates from within Google's code that is included with the google-endpoints package, which I've included in my lib folder per the documentation. Specifically, the error occurs on line 54 of google/api/control/wsgi.py.
Basically, it's making a request to metadata.google.internal using urllib2.
I'm guessing this address is only available from within the Google Cloud, and that for whatever reason, the instance that's hosting my app can't do a DNS lookup on it.
Edit #2
Dug a bit further.
It seems that the error originates in the google-endpoints-api-management package. Changes committed to that package on October 19th seem to have introduced additional platform reporting: metadata.google.internal is queried to check whether the code is running within Google Container Engine, and the check blows up because the metadata address doesn't resolve.
Here's the commit:
https://github.com/cloudendpoints/endpoints-management-python/commit/0a37d0e443091053ed03e455e06d3a0ae770999f
The google-endpoints package only requires google-endpoints-api-management >=1.0.0b1. On my end, things were working fine on version 1.0.0b2, but then I built a new lib folder, which brought down 1.0.0b5, and things went sideways. Required packages haven't changed between b2 and b5, so I'm thinking I may be able to just downgrade back to b2 for the time being. Haven't tried it yet.
Sent the Google Dev an email. Perhaps he'll chime in with further tips.
Edit: 2016-11-07
Tested downgrading the google-endpoints-api-management package to 1.0.0b2. It seems to be working, kludgy as the fix is. If you're using the lib folder, the following will scrub the newer, error-prone wsgi.py file and put back the older one:
pip install -t lib google-endpoints-api-management==1.0.0b2 --upgrade
Not pretty, but it may just get you back in business.
On a side note, the Google engineer promptly replied saying that he would take a look at this issue soon. With luck, endpoints v2 will eventually come out of beta, 'cause I'm really liking it so far.
This will be fixed in an upcoming patch to the google-endpoints-api-management package (which will be 1.0.0b6). It will probably be released sometime on Monday, 11/6.
If you'd like to continue testing right away and this error is blocking you, you can go back to 1.0.0b4 until 1.0.0b6 comes out. Everything should still work as normal with that version.
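(For example, mirroring the downgrade command shown earlier in the question, but pinned to 1.0.0b4:)
pip install -t lib google-endpoints-api-management==1.0.0b4 --upgrade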
Thanks for bringing this to our attention! We're doing our best to iron out all of these wrinkles now during beta in preparation for our first general release.
EDIT: 1.0.0b6 has been released and resolves this issue. Thanks for your patience during our beta phase!
(Posted solution on behalf of the OP).
Google has released version 1.0.0b6 of the google-endpoints-api-management package to address this issue. It solved the problem for me. For anyone who is encountering this problem, clean out your lib folder and re-install the google-endpoints package. This will bring down the new google-endpoints-api-management package with it.
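For example, assuming you vendor dependencies into a lib folder as the Endpoints docs describe: delete the lib folder, recreate it, and then run something along these lines (command is illustrative):
pip install -t lib google-endpoints --upgrade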
Thanks to Brad at Google for really quick action on this.

Bluemix Monitoring and Analytics: Resource Monitoring - JsonSender request error

I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or something like that) - so I will be very surprised if the problem is due to the refactoring changes. I did not check the logs before doing the changes.
Any ideas will help.
Bluemix has three production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, each with two applications bound to the same M&A service, and also tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report from someone who claims to be using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using (ng, eu-gb, au-syd)?
3. Are you using any environment variables in your application (either created in code or via USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are trapped in a leftover fault of a previous M&A instance:
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL, which apparently did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting Node.js version 4.2.4 in package.json worked for me; however, this is a workaround rather than a fix. The actual fix is being handled by the core team. Thanks.
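For reference, the version is pinned via the engines field of package.json, which the Bluemix Node.js buildpack reads; a minimal fragment (4.2.4 as in this answer, or a 4.4.x version per the answer above):
{
  "engines": {
    "node": "4.2.4"
  }
}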

GWT sample project RPC call failure

When I try to run the GWT sample project, it gives "RPC call failure. An error occurred while attempting to contact the server. Please check your network connection and try again". It was running fine before, but after I updated programs and libraries it gives this error. Which update causes this error, or is it something else?
App Engine version: 1.7.0
GWT version: 2.4.0
Eclipse version: 4.2 (Juno)
JDK version: 1.7.0_05
This may not be the problem you are facing but it seems to be the most frequent problem.
Let me predict: you last tried GWT four years ago and you hoarded the sample project hoping to pull it out one day.
And yesterday, you did pull it out. It kinda worked, and then you decided to upgrade to the latest 2.4.0 (actually the latest is 2.5.0-rc1).
Oops. The WEB-INF/lib of your project is still faithfully using a pre-2.2 gwt-servlet jar.
Nope, you can't do that. The GWT-RPC data transfer format is version-unstable; it is not guaranteed to be compatible from one version to the next.
Simple solution - recreate a new GWT project using the new Google plugin.
Then copy the src and web.xml of your project into the new project.
Or replace the gwt-servlet.jar with the latest. And if you use gwt-servlet-deps.jar, you would need to upgrade that too (but I doubt it, because if you did use gwt-servlet-deps.jar you wouldn't be asking this question).
But why would you keep the GWT sample from an old project?
The samples have remained much the same over the years. Why not use the samples from the new GWT 2.4.0 download? You don't have to keep the old samples; try constructing the projects for the samples afresh.
The GWT directory is found under Eclipse's plugins directory, under a long name like
plugins/com.google.gwt.eclipse.sdkbundle_2.4.0.v201205091048-rel-r37/gwt-2.4.0
in which you will find the samples directory.

Push deployment with test automation

We are developing some testing infrastructure and I have hit a coder's block (lack of sleep?). This seems like it would be a solved problem, but I haven't found what I'm looking for via Google.
I would like to automatically push builds from our CI server (TeamCity) to a number of machines (growing, but currently 30). These are several WinForms apps and a number of DLLs. Once deployed, I would like to kick off tests (NUnit, for both unit and integration tests) and report all results (back to CI? or somewhere else? Not sure).
The target machines run a number of platforms (Win7, Vista, XP, Server 2k8, Server 2k3, Ubuntu, Fedora, SUSE, x64, x86, maybe Macs down the line).
This gets me part way there (the actual push). But I can't find existing solutions for 'push starting' the tests and reporting back. So far I am thinking of combining the link (or similar) with custom code running on each client machine that watches the deploy directory, runs the tests and reports the results.
Does anyone know of existing solutions?
Links?
Done something similar and care to share?
Edit
If possible, we prefer .NET-based solutions, but it isn't strictly necessary. I would have tagged the question as such, but ran out of tags :)
You could use KwateeSDCM to both push and start on all the platforms you mention, including Mac. However, you'll have to do some coding to get reports out. I'm not familiar with TeamCity, but maybe you could push a script along with your application which could then transfer the test results via FTP to a server accessible by TeamCity.
Have a look at STAF (Software Testing Automation Framework):
The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring).
Which includes STAX:
STAX is an execution engine which can help you thoroughly automate the distribution, execution, and results analysis of your testcases.
And there's an article here:
http://agiletesting.blogspot.com/2004/12/stafstax-tutorial.html
Assuming you have the push part done already, and you don't mind using a TeamCity license, you can create a TeamCity Command Line Runner build configuration or NUnit test configuration that kicks off the tests on a properly configured agent. The build trigger for this test config would be successful completion of the application build.
So far I have ended up using a separate build step in TeamCity that executes a bat script, which in turn fires off tasks to the list of machines using PsExec. In my trial runs so far it is working OK, though I now need to parallelize the copying of build output...
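For anyone trying the same approach, here is a stripped-down sketch of such a bat script, assuming PsExec is on the PATH, machines.txt lists one target host name per line, and the NUnit console runner plus test assemblies have already been copied to C:\deploy on each target (all machine names, paths, and credentials are illustrative):
rem Run the NUnit console runner on every target machine via PsExec.
rem TEST_USER and TEST_PASS are assumed to be set as environment variables beforehand.
for /F %%M in (machines.txt) do (
  psexec \\%%M -u %TEST_USER% -p %TEST_PASS% "C:\deploy\nunit-console.exe" "C:\deploy\MyApp.Tests.dll" /xml:C:\deploy\results.xml
)
The results XML then still needs to be copied back (or written to a share) so TeamCity can pick it up.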
Thanks for the input to those who have provided it.

Custom dll in Silverlight app not getting updated on one client

I'll admit up front my experience with Silverlight is minimal.
I'm working on an SL app that also supports shipping custom screens in the form of a custom DLL. I made a change to said DLL, and it's gone through build and deployment to a QA box for internal testing. When one points IE to the app, the updated version of the custom screen comes up for three of us, but for a fourth person the older version of the DLL is somehow being referenced.
We've tried the usual suspects (closed sessions, cleared his browser's cache, even installed Firefox), but he's still referencing the older version of the DLL. We have verified that the DLL on the server we're all hitting is the newer version (but that's no surprise, since the rest of us are getting it).
There seems to be something specific to the client machine (or maybe that guy himself is just unlucky). Where would you look next?
The first step is to install Fiddler on the "fourth" person's machine and examine the request to the server for the new DLL.
