How to run TestCafe tests in TeamCity CI?

I want to run TestCafe E2E tests on a TeamCity CI/CD server. Can someone please help me understand how we can use the testcafe/testcafe Docker image in TeamCity to run the tests?

I recommend that you refer to the following topics where you can find information on how to make it work:
Here is an article that describes how to integrate TestCafe with TeamCity.
Please also take a look at the following article: Use TestCafe Docker Image.
Feel free to contact us if you need assistance with combining these tools.
UPDATED:
TeamCity ships with the Docker Wrapper extension for the Command Line build step. It provides an easy way to run a custom script inside a Docker container.
However, you need to take into account the following specifics:
The TestCafe Docker image comes with a special script that prepares the container environment by starting services like Xvfb and D-Bus. It is located at /opt/testcafe/docker/testcafe-docker.sh. The TeamCity wrapper overrides the entry point of the Docker image and prevents this script from executing. This means that /opt/testcafe/docker/testcafe-docker.sh should be used instead of testcafe to run your tests with Docker and TeamCity:
/opt/testcafe/docker/testcafe-docker.sh chromium test/e2e/**/* -r teamcity
It's better to use headless mode when testing in Docker containers, since this mode is designed specifically for such environments.
If you don't use headless mode in Chrome for some reason, you may encounter the following error: ERROR: Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure. Most likely, it is caused by the following Chrome bug: TMPDIR too long. To solve this problem, you need to manually set an environment variable before starting TestCafe:
export TMPDIR=/tmp
The configured build step may look like this:
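For example, the script of a Command Line build step run through the Docker Wrapper with the testcafe/testcafe image might be a one-liner along these lines (a sketch that applies the headless-mode recommendation above to the command shown earlier):
/opt/testcafe/docker/testcafe-docker.sh chromium:headless test/e2e/**/* -r teamcity
If you run a non-headless browser instead, add export TMPDIR=/tmp before this command.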
This configuration implies that you have TestCafe and the TeamCity TestCafe reporter installed as local packages. Ensure that the node_modules directory with TestCafe and its plugins is a subdirectory of the test project's root directory; the TeamCity Docker Wrapper will mount the working directory inside the container.

Related

How to build React.js apps in Visual Studio Code?

I have created two apps using Visual Studio Code and Node.js. I run them using the command npm start, and they show up in the browser. I want to build or deploy them so they can be used by anyone. It says to use the command npm run build. How do I do that, and what technique do you use to build them?
It depends on what configuration you used for building the React app. If you used create-react-app, npm run build is the correct command for building it.
If you used a different configuration (e.g. webpack), you should use the relevant command for that configuration.
Either way, deploying it is as easy as copying the build folder's contents to the server where you want to host it, after running the build command.
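As a minimal sketch of that flow (the server address and web root are illustrative, not something from the question):
npm run build
scp -r build/* user@example.com:/var/www/my-app/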
Visual Studio Code, or any other code editor for that matter, is not relevant here. You can develop, build, and deploy any React app using any code editor you want; it's just a matter of preference.
"Building" refers to the task of preparing (transforming, minifying, compressing, etc.) all the relevant project files so that they're ready for production (assuming that your build scripts are configured to do so).
"Deploying" an app is usually a separate task that uploads your current project build to a hosting provider like Firebase, Netlify, Azure, etc. Note that you have to register with a provider and set up a new project on their end before you deploy your project.
Which provider you use is totally up to you. You also have to configure your current project once you've chosen a provider; they'll provide instructions on how to deploy your project.
On a side note, keep in mind that you can configure your own npm scripts so that they run whatever you want. More about that here.

create-react-app + docker = QA and PROD Deploy

I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe my question is a silly one, but I'm really stuck on it.
My Dockerfile has the command yarn start. For dev it is enough: I don't need to build anything, since my bundle runs in memory. But for QA or PROD I would like to build using npm run build, which as I understand it creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use the official nginx image to build a customized HTTP server image that serves those static files. (Dockerfile)
Upload the customized image to Amazon EC2 Container Service (ECS).
Load the image in an ECS task, then use ELBv2 to start a load balancer that forwards all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
One-time things:
Figure out the mechanics of ECS. You need to create at least one host server for ECS; I used the Amazon ECS-Optimized AMI.
Create a Docker repository on ECS so you can upload your customized Docker image (see the commands sketched after this list).
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward traffic to your internal ECS dynamic port.
(Optional) Write a script to automate everyday deployment.
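The repository setup and image upload steps might look like the following sketch, using the standard AWS CLI and Docker commands (the repository name, region, and account ID are illustrative):
aws ecr create-repository --repository-name my-app --region us-east-1
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest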
I would get paid if someone wants me to do those things for them. Or you can figure it out by yourself following these clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static and dynamic applications which may involve other services (e.g. Redis, Elasticsearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them, and add them to your final container. Your final container should be able to serve the files; you can base it on nginx or another server. You could also use it as a data volume container for your existing server container.
Recent versions of Docker make this process easier by allowing you to combine the two Dockerfiles: the build container and the final container can both be defined in the same file.
Here's a simple example for your use case:
# Build stage: node:onbuild automatically copies the app source to /usr/src/app and runs npm install
FROM node:onbuild AS builder
RUN npm run build

# Final stage: serve the static build output with nginx
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
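To try it out (the image name is illustrative):
docker build -t my-react-app .
docker run -p 8080:80 my-react-app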
You'd probably want to include your own nginx configuration file.
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/

Loopback 3 & Angular 2 generation of /reset-password endpoint unreliable

We are building an application that so far has a simple user management implementation. This question relates to the built-in password-resetting functionality of Loopback v3. User management is implemented in a model derived from the built-in User, called MyCustomUser.
Each time code changes are pushed to a GitHub repo, we have Jenkins build a Docker container and, inside it, run npm install, then lb-sdk (with suitable parameters), then ng build --env=prod, and finally node . After this happens, the application runs normally, BUT:
When performing the same deployment commands locally (on my own Linux laptop), the API endpoints /MyCustomUsers/reset and /MyCustomUsers/reset-password are both created (i.e. they are visible and can be exercised via the StrongLoop Explorer).
When the deployment is run by Jenkins in the Docker container, only one of the two API endpoints is created: /MyCustomUsers/reset. God only knows where the other endpoint, /MyCustomUsers/reset-password, ends up.
Obviously, all deployments are run against the same codebase (i.e. the same commit ID of the GitHub repo). It is bewildering that the service behaves perfectly on localhost but not in the cloud-based Docker container.
It sounds like you are running two different versions of the LoopBack Angular 2 SDK builder. From what I've understood, the SDK builder for Angular 2 is still in heavy beta and not yet ready for production. That doesn't explain the difference by itself, but it really sounds like two different versions.
We are using the same build flow as you; are your package.json files identical when it comes to @mean-expert/loopback-sdk-builder?
The people working on the SDK generator are really good at responding in their issues section; I would recommend asking there otherwise.
It turns out that the remote Docker container was running Node 6.9.2 and npm 3.10.9, whereas I was running Node 6.10.3 and npm 3.10.10 locally. After making the Docker instance run the same versions as I had locally, and deploying the package.json along with its npm-shrinkwrap.json, the endpoint was correctly generated.
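A quick sanity check along these lines (the expected versions are the ones from this answer) can confirm that the CI container matches the local toolchain:
node --version   # should print v6.10.3 to match the local machine
npm --version    # should print 3.10.10
npm shrinkwrap   # run locally: pins the exact dependency tree so CI installs the same packages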

angular app with docker - production & development

I have a simple AngularJS application. The backend can be treated as a service (external API), so no server side is needed at all. I would like to run it in Docker; however, I'm not sure what the best practice is here.
What I'm expecting to achieve is the following:
the Docker container should be able to run everything I was doing locally with Node.js, using webpack/grunt/gulp, without the need to install anything on my local machine, while making sure every team member works with the same version of basically everything.
the Docker container should be easy to deploy to production and should run as lightly as possible (it's just static content!)
The real issue is that, as far as I understand, the dev Docker image should be based on Node.js with a mounted volume and everything, whereas the production image should be a super simple nginx server that serves static content. So I might end up with two separate Docker images that use the same code base, and I'm not sure if this is the right way to go.
Can anyone shed some light on this topic? Thanks
Your idea seems OK. I generally create a bash script (for me, it's flexible enough) to deploy different environments according to the requirements (dev and prod).
Assume you have created a bash script deployApp.sh:
sh deployApp.sh <dev|prod>
You can then create (or switch) the Dockerfile on the fly according to your environment and build your app with that Dockerfile. This way you can manage what your prod environment requires (e.g. deploy to nginx only the bundles created by webpack) as needed.
An example deployApp.sh:
#!/bin/sh
ENV="${1:-dev}"                         # check whether the first parameter is prod or dev
webpack                                 # creates bundle.js etc.; add other required parameters here
# after the webpack step, choose the Dockerfile for prod or dev:
#   ./prod/Dockerfile or ./dev/Dockerfile
docker build -f "./$ENV/Dockerfile" .   # the prod Dockerfile builds an nginx-based image and copies the needed files and folders
That is just one approach based on your idea; I use a similar approach myself. You only need to create this setup once, and you can apply it to other projects if it is suitable.
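Usage would then look like:
sh deployApp.sh dev    # builds the Node-based dev image
sh deployApp.sh prod   # builds the lightweight nginx image that serves the bundles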

Docker ARG command building Google Managed VM

Is it possible to pass arguments when building a managed VM, using the ARG Docker command?
In the Dockerfile, a default value is set:
ARG env="dev"
When building the Docker container, I can change this value:
docker build -t test/app --build-arg env=pr .
I have two environments, and I want to deploy the managed VM with different configuration files via the Dockerfile build process.
Thanks.
Sadly, this isn't currently supported. All of our Docker build functionality right now is somewhat magically integrated. There are a few options here, though none of them are quite what you're looking for.
You can build your docker container locally, push it to gcr.io, and then use the --image-url flag on gcloud app deploy.
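A sketch of that flow (the project and image names are illustrative):
docker build -t gcr.io/my-project/my-app --build-arg env=pr .
docker push gcr.io/my-project/my-app   # requires gcr.io credentials, e.g. via gcloud auth configure-docker
gcloud app deploy --image-url=gcr.io/my-project/my-app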
In the next few weeks, we're going to start using our Container Builder service by default for Docker builds with Managed VMs. While we don't have a plan right now to expose the setting, there is a config setting that allows you to define environment variables via the Container Builder API, so it's going to be easier to support something like this in the future.
Hope this helps!
