How to update the Theia version used in the image? - eclipse-che

When building the eclipse/che-theia docker image https://github.com/eclipse/che/tree/master/dockerfiles/theia, how do I specify a particular version of Theia to be used?

You need to change the value of the THEIA_VERSION argument in the Dockerfile.
Be aware that a CQ needs to be created for each Theia version upgrade.
Patches are applied per Theia version, so there is no need to remove them.
Integration tests are executed by default. Upgrading Theia may require updating integration tests.
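For illustration, a minimal sketch of the change (the version number below is only a placeholder, not a recommendation):
ARG THEIA_VERSION=0.3.18
Alternatively, you can override the argument at build time without editing the Dockerfile:
docker build --build-arg THEIA_VERSION=0.3.18 -t eclipse/che-theia .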

Related

How can I get a hold of Sencha GRUI?

I saw a webinar from Sencha, and I am planning to use GRUI from Sencha in my next React project. Is it available over NPM? Is it full-featured? Where can I find and download it?
Short answer: an evaluation copy is available via npm, but you have to buy the full version. It is not available as a direct download like other ExtJS products.
Please follow this link to the documentation:
GRUI documentation
Details on GRUI can be accessed easily by visiting the Sencha GRUI page.
It is available over NPM and can be consumed like any other package.
For evaluation, all the features are available and can be tried out in a development environment, but some advanced features will require license activation to be enabled in a production environment.
You can get the NPM package from our npm page.
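For example, installation follows the usual npm workflow (the package name below is my assumption; check the Sencha GRUI npm page for the exact name):
npm install --save @sencha/sencha-grid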

How to run testcafe tests in teamcity CI?

I want to run TestCafe E2E tests on a TeamCity CI/CD server. Can someone please help me understand how we can use the testcafe/testcafe docker image in TeamCity to run the tests?
I recommend that you refer to the following topics where you can find information on how to make it work:
Here is an article that describes how to integrate TestCafe with TeamCity.
Please also take a look at the following article: Use TestCafe Docker Image.
Feel free to contact us if you need assistance with combining these tools.
UPDATED:
TeamCity ships with the Docker Wrapper extension for the Command Line build step. It provides an easy way to run a custom script inside a docker container.
However, you need to take into account the following specifics:
The TestCafe Docker image comes with a special script that prepares the container environment by starting services like Xvfb and DBus. It is located at /opt/testcafe/docker/testcafe-docker.sh. The TeamCity wrapper overrides the entrypoint of the docker image and prevents execution of this script. This means that /opt/testcafe/docker/testcafe-docker.sh should be used instead of testcafe to run your tests with Docker and TeamCity:
/opt/testcafe/docker/testcafe-docker.sh chromium test/e2e/**/* -r teamcity
It's better to use headless mode when testing in Docker containers, since this mode is specially designed for such environments.
If you don't use headless mode in Chrome for some reason, you may encounter the following error: ERROR: Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure. It is likely caused by the following Chrome bug: TMPDIR too long. To solve this problem, you need to manually set an environment variable before starting TestCafe:
export TMPDIR=/tmp
A configured build step might look like the sketch shown below.
This configuration implies that you are using TestCafe with the TeamCity TestCafe reporter installed as local packages. Ensure that the node_modules directory containing TestCafe and its plugins is a subdirectory of the test project root directory. The TeamCity Docker Wrapper will mount the working directory inside the container.
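As a rough sketch (not the exact configuration), assuming the build step's Docker image field is set to testcafe/testcafe and that testcafe and testcafe-reporter-teamcity are installed as local packages, the step's script could be:
# work around the Chrome TMPDIR issue if you are not running in headless mode
export TMPDIR=/tmp
# call the image's preparation script instead of the plain testcafe binary
/opt/testcafe/docker/testcafe-docker.sh chromium test/e2e/**/* -r teamcity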

Should an AngularJS + nginx codebase be dockerized

I have an AngularJS front-end project that runs on nginx and communicates with a back-end Java server (separate from this codebase). I find myself running the following commands to install and run the project:
# make sure node, npm, and gulp are installed
npm install
gulp watch
Should the above be dockerized, or is it preferable to run the project via these commands? The code will be modified locally as we develop (so we'd probably need to configure a volume that maps to the project's directory).
What would be the advantages or disadvantages of dockerizing the above vs. just running the above commands to get the project started? The main goal here is to reduce the time it takes for a new developer to get started/comfortable with the project.
The main benefit I can think of right now for dockerizing this application is that it would let someone else deploy it a little more easily, with the only dependencies being Docker and access to a registry where the built images are stored. That is, they could simply issue a docker run command referencing the application/build tag, and they'd have a running containerized application.
The other possible benefit I can foresee is portability across target environments; again, the only dependency is Docker.
Then you have the added benefits that come with automated container builds and built-in versioning, to name a few.
Also note, you could set up a remote SCM to store code / Dockerfiles to automate build / deploys, if you would like to move away from local host development.
If your main goal is to reduce the time it takes for a new developer to get started and comfortable with the project, then the biggest issue you will face is the OS (Windows vs. Linux use). An alternative solution to Docker would be to use Vagrant.
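If you do decide to dockerize it, a minimal sketch of a development image could look like the following (the node base image tag, exposed port, and paths are assumptions, not taken from your project):
# Hypothetical dev image reproducing the npm install / gulp watch workflow
FROM node:8
WORKDIR /app
# install gulp globally and the project's dependencies
RUN npm install -g gulp
COPY package.json ./
RUN npm install
COPY . .
# port served by gulp watch is assumed to be 3000
EXPOSE 3000
CMD ["gulp", "watch"]
A new developer would then only need Docker installed; the local checkout is mounted as a volume so edits are picked up, and an anonymous volume keeps the image's node_modules from being hidden by the mount:
docker build -t my-frontend .
docker run -p 3000:3000 -v "$(pwd)":/app -v /app/node_modules my-frontend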

Karaf: feature:install restarts previous bundles

I'm facing an irritating behavior from my Karaf server: the title says it all, installed bundles get restarted when I use a feature:install command.
* Project context *
Most of the bundles I deal with are camel routes; the other ones are common tools shared by the routes.
As a result, I have a 2-level project: a common part that is installed first, and the camel routes, which all depend on the common part (from a Maven dependency point of view).
* Scenario *
start a fresh instance of karaf
install the common features
install a camel route feature: no troubles so far
install a second camel route feature: the bundles from the previously installed feature will restart.
* Breakthrough made *
All the bundles declare a common config file, with the option "update-strategy=reload". This means that Karaf notifies each bundle of any modification of this file, and the bundle restarts to take it into account.
As a matter of fact, when I installed a new bundle with a dependency on this file, the file would be read in order to initialize the bundle's properties, and Karaf considered that to be a file modification. Therefore, installing a new bundle made all the others restart.
As you expect, I dealt with that problem by removing the update-strategy option, and most of my features are now clean.
* Leftovers *
BUT some of them still hold the bug: installing any of those troublesome features will make all the other installed features restart. This is a ONE-WAY problem: installing a clean bundle will not make the troublesome ones restart.
I checked anyway, but no other config file could be responsible for that.
Any help or advice would be appreciated. I can also provide anonymized examples of any file that would help you understand, like an osgi-context or a feature's pom.xml.
One last thing: my features group around 50 bundles each, so I can barely make sense of the Karaf logs, and I can't pinpoint which bundle is restarted first.
Thanks for your time and attention!
I think there are some misconceptions in what you describe.
update-strategy=reload does not cause a bundle to reload. It causes a blueprint context to reload.
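For reference, that option usually sits on an Aries Blueprint config-admin property placeholder; a minimal sketch, with a made-up persistent-id:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">
  <!-- reloads this bundle's blueprint context whenever the my.common.config PID changes -->
  <cm:property-placeholder persistent-id="my.common.config"
                           update-strategy="reload"/>
</blueprint>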
You should also not share the same config file between bundles; it is known to mess up your deployments.
There are also other reasons why a bundle may restart. A Karaf feature install tries to provide the optimal set of bundles needed overall in Karaf to satisfy the set of currently installed features.
A typical case is that you first install a feature with a bundle containing an optional package import. At this moment nothing can provide the package. Then you install a second feature that provides an exporter of the package. Now the optional dependency of the first bundle can be satisfied, and the bundle will be restarted by Karaf.
You can look into such cases by using feature:install -v. This will show you which bundles are restarted and also why, so maybe this can help you debug why the restart happens.

How to add deployment message parameter with CloudBees Deployer plugin

Is there a way to give a custom message to the deployment with the CloudBees Deployer plugin? We are used to attaching the git version to the deployment so that it's easier to see which version we have running at RUN@cloud.
We've tried doing the deployment with the bees maven plugin, where it's possible to set the message parameter from a build parameter. It would just be nice to be able to use the Deployer plugin instead of having to mess with the maven plugin, which is easier to get wrong in the configuration phase.
This feature was not available prior to version 4.4 of the CloudBees Deployer plugin.
Version 4.4 adds the ability to configure the description under the "Advanced" button.
The default value for this field is ${JOB_NAME} #${BUILD_NUMBER} so as to retain the previous behaviour.
The field supports all the usual Jenkins Environment token macro expansion, so you probably want to set it to something like ${GIT_COMMIT} or maybe ${GIT_COMMIT} ${JOB_NAME} #${BUILD_NUMBER}. In any case the standard token macros should provide the flexibility you require.
Note: If you are using DEV@cloud and your Jenkins has a pre-4.4 version of the plugin, you will have to wait until after 2013-04-15T15:00Z and restart your Jenkins instance to pick up the newer version of the plugin.
Note: If you are using the CloudBees Deployer plugin via the CloudBees Free Enterprise Plugins route, you will have to force the update center metadata to update or wait until after 2013-04-16T12:30Z to see the update in the list of plugins.
