Selenium Allure report data to show on Grafana dashboard - jenkins-plugins

I am trying to display the Allure report data on a Grafana dashboard for my pytest (Python + Selenium) automation project. I am generating an Allure report from a run triggered by Jenkins. I need some pointers on how to show my Jenkins run report in Grafana. Is there any API/plugin to send Allure results to a time-series database (InfluxDB or Prometheus)?

Allure generates InfluxDB and Prometheus files in its export directory. I am about to write a pytest 'plugin' that feeds this file into an InfluxDB database. After that you set up Grafana to pull from that database and display the data.
Edit: I don't use Selenium or Jenkins, so for a console pytest run I simply added this code (a pytest hook) to the conftest.py file:
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Allure writes its InfluxDB export into the report's export directory
    influx_db_file = open('allure-html/export/influxDbData.txt', 'r')
    lines = influx_db_file.readlines()
    token = "MyToken"
    org = "MyOrg"
    bucket = "MyBucket"
    client = InfluxDBClient(url="http://localhost:8086", token=token)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    # each exported line is written to InfluxDB as-is
    for line in lines:
        write_api.write(bucket, org, line.strip())
This hook is executed when the pytest run finishes. Pay attention to your InfluxDB URL, as well as your token, org, and bucket values.

Related

React SPA dynamic environment configuration

I'm building a React SPA application from the ground up using create-react-app and I'm setting up the URI address for the API server of my SPA. According to the official documentation, the suggested way is to create environment .env files for this kind of need. I'm using continuous delivery as part of my development workflow. After deployment, the React SPA application goes into one Docker container and the API goes into another. Those containers are deployed to separate servers and I do not know exactly what the URI for the API will be, so there is no way to create a separate .env file for each deployment. Is there any "right way" to provide dynamic configuration for my SPA application so I can easily change environment parameters?
API URI examples in SPA
// api.config.js
export const uriToApi1 = process.env.REACT_APP_API1_URI;
export const uriToApi2 = process.env.REACT_APP_API2_URI;
// in App.js
import { uriToApi1, uriToApi2 } from '../components/config/api.config.js';
/* More code */
<DataForm apiDataUri={`${uriToApi1}/BasicService/GetData`} />
/* More code */
<DataForm apiDataUri={`${uriToApi2}/ComplexService/UpdateData`} />
Let's imagine that you build your frontend code into some dist folder that will be packed by Docker into the image. You need to create a config folder in your project that will also be added to the dist folder (and, obviously, packed into the Docker image). In this folder you will store config files with server-specific data, and you need to load these files when your React application starts.
The flow will be like this:
User opens your app.
Your App shows some loader and fetches config file (e.g. ./config/api-config.json)
Then your app reads this config and continues its work.
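A minimal sketch of that bootstrap step (assuming the config file holds the two API URIs from the question; the key names and the config prop are placeholders) could look like this in index.js:
// index.js - fetch the runtime config before rendering the app (sketch, names are placeholders)
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

fetch('./config/api-config.json')
  .then((response) => response.json())
  .then((config) => {
    // make the loaded config available to the app via props, context, or a config module
    ReactDOM.render(<App config={config} />, document.getElementById('root'));
  })
  .catch((error) => console.error('Could not load runtime config', error));
The important part is that rendering is deferred until the config has been fetched, so the values can come from whatever file the container exposes.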
You need to set up Docker volumes in your Docker config file and connect the config folder in the Docker container to some config folder on your server. Then you will be able to substitute the config files in the Docker container with the files on your server. This lets you override the config on each server.

UI Selenium test in Azure DevOps release

I am trying to create a release pipeline with an IIS deployment and, after that, a UI Selenium test project.
Everything is working fine except the file uploader. How do I handle the file uploader?
The VSTS agent is not allowing me to get the file from the project folder.
I tried to put the file in SharePoint, get the stream from there, and save it in the project directory, but it is still not allowing me to save the file.
var directory = Assembly.GetAssembly(typeof(HelperClass)).Location;
Console.WriteLine(directory);
var newDir = directory.Replace("\\bin\\Release\\MarloTest.dll", "\\bin\\Release");
Console.WriteLine(newDir);
if (Directory.Exists(newDir)) return newDir;

using (var file = new FileStream(newPath, FileMode.Create))
{
    var streamFile = stream;
    streamFile.CopyTo(file);
}
Thanks in Advance
What agents are you using to run your UI tests from your release pipeline? If it's a hosted agent, I doubt it will work. Try installing an agent of your own and triggering a release. For running Selenium UI tests, you can refer here.

Build React app with express backend for domain http://example.com

I have a web application in React in which I needed to implement a contact form. The application was created using create-react-app, with a server folder added. For the form I used SendGrid mail. The server runs on port 4567; how do I build the app so that it works on the domain? It is a one-page application.
Thanks, it is important.
When running in production, a React app is simple HTML, CSS, and JavaScript. These files are sent from your server to a client when requested, in the same way that requests/responses are handled for any web page. There are a few steps that need to be done before your React app is ready for production.
1. Create a Production Build
First you need to create a production build of your app. This process takes all of your separate .js or .jsx files and puts them together into a single minified file, and does the same for .css. Then your index.html is updated to include a link to the CSS and a script tag for the JS. This is done so that only three files need to be sent rather than the tens or hundreds that exist in development.
If you used create-react-app to start your application, you can use the command:
npm run build
to do this. Otherwise, you need to have webpack installed, and then run:
node_modules/.bin/webpack --config webpack.prod.js --mode production
(which you might want to add as a script to package.json).
See React: Optimizing Performance for more.
2. Serve your Application
Now your server should have a route for your application and when it receives a request on that route, the server should respond by sending index.html from your client/build/ directory (where client/ is the directory of the React app).
Here is an example with Node/Express as the server (in app.js):
const path = require('path');
const express = require('express');
const app = express();

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});
Note that this is just the way to send a static file using Node and can easily be done with any server.
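You will also typically want the static assets (the minified JS/CSS in client/build/) served from the same place; with Express a common way to do that is the static middleware, registered before the catch-all route above (a sketch, not specific to your project):
app.use(express.static(path.join(__dirname, 'client', 'build')));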
Additional
You mentioned you want to submit forms with your application. If your routes for receiving POST requests match the routes that the forms are on (e.g. the form is on /form and the server listens for POST on /form), you can just use the default HTML form submission. However, this is not a great way to do things when using React, because routing will then be controlled by your server rather than by React. Instead you should use some sort of AJAX method to submit the form.
Since your server is now serving your React app (rather than React serving itself as in development), you can just make relative requests and those requests will be made to your server. For example, the request (using the fetch API):
const models = await fetch('/api/models');
will be made to your_host/api/models by default.
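For the contact form itself, a sketch of such an AJAX submission could look like the following; the /api/contact route and the field names are placeholders, and the matching Express route would be the one that calls SendGrid:
// sketch: POST the form fields to the Express backend as JSON
// '/api/contact' and the field names are placeholders, not taken from your project
async function submitContactForm(name, email, message) {
  const response = await fetch('/api/contact', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, email, message }),
  });
  return response.ok;
}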
In package.json, add the following so that, during development, API requests from the create-react-app dev server are proxied to your Express server:
"proxy": "http://localhost:4567"

How can I import data from a REST call / JSON feed into a SQLite database on a Mac?

I have data on my server in a SQL Server database. I can access it with a REST call to a C# ASP.NET Web API that I have control over, and it will return JSON data. Possibly I can get it to return other formats of data, but I am not sure about that. I have full access to the server application and the JSON it creates.
On my development Mac I am using DB Browser for SQLite and Xamarin to develop a multi-platform app. I have a small SQLite database created.
How can I import/insert the JSON data from some of my tables on the server into tables in the SQLite database that I create on my Mac? Right now I would have to do this manually, but I would like to automate the import with a bash script or something similar that I can run with a command.
I have researched this but can't seem to find any examples of how to do it, so I opened a bounty in the hope that someone could give an answer that would be a big help to me and to others.
Off the top of my head, you have two options:
Make a simple Xamarin.Mac app that does this, similar to how Windows folks might make a console app. Give it one button and have it call pretty much the same code as your Xamarin app to download the data and dump it into a SQLite db.
A better option would be to write a "unit test" (or integration test, for the hardcore peeps) that calls the existing code in your Xamarin app and writes the data to a SQLite db on the file share. Then you can do this as often as you want with a run of a unit test. This could be done in your test runner in Xamarin Studio (Visual Studio or anything that has a test runner). I would use either NUnit or xUnit since they have great cross-platform support.
In my previous app, I had xUnit tests that checked that the API calls were working and other tests to ensure that my SQLite.Net-PCL code was working. You could combine those into a single "test" that downloads the data into a db3.
This assumes you have abstracted your code out. If not, you could just copy and paste it. Either way, if you are using cross-platform NuGet packages, the code will work in a desktop or mobile app.
I would use Node.js and write the script in JavaScript.
Install Node.js from https://nodejs.org/en/download/ or with
brew install node
Create a directory to work on your project:
mkdir myimporter
cd myimporter
Install the required libraries in the folder:
npm install --save request sqlite3 moment
npm install -g nodemon
Open the folder (or app.js) with your favorite text editor and
save the following code as app.js:
var request = require('request');
var sqlite3 = require("sqlite3").verbose();
var moment = require("moment");
var fs = require("fs");

var url = 'http://www.google.com';
var s, e;
var file = "test.db";
//var file = process.env.CLOUD_DIR + "/" + "test.db";
var exists = fs.existsSync(file);
var db = new sqlite3.Database(file);

// reuse the `exists` check on the db file, assuming the table only needs to be created for a new db
if (!exists) {
    db.run("CREATE TABLE Stuff (thing TEXT)");
}

function saveResultTosqlite3(message) {
    var stmt = db.prepare("INSERT INTO Stuff VALUES (?)");
    stmt.run(message);
    stmt.finalize();
}

s = moment();
request(url, function (error, response, body) {
    e = moment();
    var elapsed = e.diff(s, 'milliseconds');
    var responseCode = response.statusCode;
    if (!error && response.statusCode == 200) {
        console.log("successful request");
    }
    var message = 'request to ' + url + ' returned status code of ' + responseCode.toString() + ' in ' + elapsed.toString() + ' milliseconds';
    console.log(message);
    saveResultTosqlite3(message);
});
Run the following in the terminal to re-run the script each time it changes, for development/testing:
nodemon app.js
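To adapt this to your case, a rough sketch of pulling JSON from your Web API and inserting rows into SQLite might look like the following; the endpoint, table, and field names below are placeholders for whatever your API actually returns:
// sketch: pull a JSON array from the Web API and insert each item into SQLite
// the endpoint, table, and columns are placeholders
var request = require('request');
var sqlite3 = require('sqlite3').verbose();
var db = new sqlite3.Database('test.db');

db.run('CREATE TABLE IF NOT EXISTS Items (id INTEGER, name TEXT)');

request({ url: 'http://your-server/api/items', json: true }, function (error, response, body) {
    if (error || response.statusCode !== 200) {
        console.error('request failed', error || (response && response.statusCode));
        return;
    }
    // assumes the API returns a JSON array of objects
    var stmt = db.prepare('INSERT INTO Items VALUES (?, ?)');
    body.forEach(function (item) {
        stmt.run(item.id, item.name);
    });
    stmt.finalize();
});
You could then run it once with node app.js, or call it from the bash script you mentioned.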

Invalid Credentials accessing Big Query tables from App Engine application

Could someone help me access Big Query from an App Engine application?
I have completed the following steps -
Created an App Engine project.
Installed google-api-client, oauth2client dependencies (etc) into /lib.
Enabled the Big Query API for the App Engine project via the cloud console.
Created some 'Application Default Credentials' (a 'Service Account Key') [JSON] and saved it/them to the root of the App Engine application.
Created a 'Big Query Service Resource' as per the following -
def get_bigquery_service():
    from googleapiclient.discovery import build
    from oauth2client.client import GoogleCredentials
    credentials = GoogleCredentials.get_application_default()
    bigquery_service = build('bigquery', 'v2', credentials=credentials)
    return bigquery_service
Verified that the resource exists -
<googleapiclient.discovery.Resource object at 0x7fe758496090>
Tried to query the resource with the following (ProjectId is the short name of the App Engine application) -
bigquery = get_bigquery_service()
bigquery.tables().list(projectId=#{ProjectId},
                       datasetId=#{DatasetId}).execute()
Returns the following -
<HttpError 401 when requesting https://www.googleapis.com/bigquery/v2/projects/#{ProjectId}/datasets/#{DatasetId}/tables?alt=json returned "Invalid Credentials">
Any ideas as to which steps I might have wrong or be missing here? The whole auth process seems a nightmare, quite at odds with the App Engine/PaaS ease-of-use ethos :-(
Thank you.
OK so despite being a Google Cloud fan in general, this is definitely the worst thing I have been unfortunate enough to have to work on in a while. Poor/inconsistent/nonexistent documentation, complexity, bugs etc. Avoid if you can!
1) Ensure your App Engine 'Default Service Account' exists
https://console.cloud.google.com/apis/dashboard?project=XXX&duration=PTH1
You get the option to create the Default Service Account only if it doesn't already exist. If you've deleted it by accident you will need a new project; you can't recreate it.
How to recover Google App Engine's "default service account"
You should probably create the default set of JSON credentials, but you won't need to include them as part of your project.
You shouldn't need to create any other Service Accounts, for Big Query or otherwise.
2) Install google-api-python-client and apply fix
pip install -t lib google-api-python-client
Assuming this installs oauth2client 3.0.x, then on testing you'll get the following complaint:
File "~/oauth2client/client.py", line 1392, in _get_well_known_file
default_config_dir = os.path.join(os.path.expanduser('~'),
File "/usr/lib/python2.7/posixpath.py", line 268, in expanduser
import pwd
File "~/google_appengine-1.9.40/google/appengine/tools/devappserver2/python/sandbox.py", line 963, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named pwd
which you can fix by changing ~/oauth2client/client.py [line 1392] from:
os.path.expanduser('~')
to:
os.environ.get('HOME')
and adding the following to app.yaml:
env_variables:
HOME: '/tmp'
Ugly but works.
3) Download GCloud SDK and login from console
https://cloud.google.com/sdk/
gcloud auth login
The issue here is that App Engine's dev_appserver.py doesn't include any Big Query replication (natch), so when you're interacting with Big Query tables it's the production data you're playing with; you need to log in to get access.
Obvious in retrospect, but poorly documented.
4) Enable Big Query API in App Engine console; create a Big Query ProjectID
https://console.cloud.google.com/apis/dashboard?project=XXX&duration=PTH1
https://bigquery.cloud.google.com/welcome/XXX
5) Test
from oauth2client.client import GoogleCredentials
credentials=GoogleCredentials.get_application_default()
from googleapiclient.discovery import build
bigquery=build('bigquery', 'v2', credentials=credentials)
print bigquery.datasets().list(projectId=#{ProjectId}).execute()
[or similar]
Good luck!
