AngularDart build with Docker

I'm trying to deploy my AngularDart app with Docker but can't get it to work. Everything works on OS X but fails inside the container.
My pubspec.yaml:
name: myapp
dependencies:
  browser: any
  angular: 1.0.0
transformers:
- angular
My Dockerfile:
FROM stackbrew/ubuntu:13.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y apt-transport-https curl git
RUN sh -c 'curl https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -'
RUN sh -c 'curl https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_stable.list > /etc/apt/sources.list.d/dart_stable.list'
RUN apt-get update
RUN apt-get install -y dart/stable
ENV PATH $PATH:/usr/lib/dart/bin
ADD frontend/pubspec.yaml /container/pubspec.yaml
ADD frontend/web /container/web
WORKDIR /container
RUN pub build
Dart gets installed as expected (Dart VM version: 1.7.2), but pub build fails with:
Error on line 6, column 5 of pubspec.yaml: Error loading transformer: Illegal argument(s): sdkDirectory must be provided.
- angular
^^^^^^^
I found https://github.com/angular/angular.dart/issues/1270, which suggests adding the Dart SDK path to pubspec.yaml. That can't be the right solution: the app should be buildable on every machine, not only on those where the Dart SDK path happens to match the path hardcoded in pubspec.yaml.
Is there another way to fix this, or a workaround?

Update
Should be fixed in code_transformers 0.2.3+2 (see http://dartbug.com/21225)
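If you're stuck on an older version, upgrading just that one dependency once the fix is published should be enough. A minimal sketch, assuming your version constraints allow 0.2.3+2:
# inside the image, after fetching dependencies and before pub build
pub upgrade code_transformers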
Old
I don't know yet why this is necessary on some systems and not on others, but this should fix it:
transformers:
- angular:
    sdkDirectory: "/usr/lib/dart"
See also https://github.com/angular/angular.dart/issues/1270#issuecomment-64967674 for an alternative approach using symlinks.
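Whichever variant you use, it's worth confirming where the apt package actually put the SDK inside the image; /usr/lib/dart matches the Dockerfile above, but adjust sdkDirectory if your install differs. A quick check (comment out the failing RUN pub build line first, if necessary, so the image builds):
docker run --rm -it <image> ls /usr/lib/dart/bin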

Related

React App as a Django App in a Docker Container - connection refused when trying to access APIs on localhost:8000 urls

I hope you might have some guidance for me on this.
Right now I have a React app that is part of a Django app (for the sake of easily passing auth login tokens), which is now containerised in a single Dockerfile. Everything works as intended when it is run as a Docker container locally, but the deployed image has issues: the webpages are visible when the image is deployed on the server, but the API calls fail.
Specifically, when the Docker image is accessed, the home page renders as expected, but then a number of fetch requests which usually go to localhost:8000/<path>/<to>/<url> return the following error:
GET http://localhost:8000/<path>/<to>/<url> net::ERR_CONNECTION_REFUSED
On a colleague's suggestion, I have tried changing localhost:8000 to the public IP address of the server the Docker image is hosted on (e.g. 172.XX.XX.XXX:8000), but when I rebuild the React app these changes do not persist, and it defaults back to localhost. Here are my questions:
Is this something I change from within the React application itself? Do I need to manually assign an IP address? (This seems unlikely to me.)
Or is this something to do with either the Django port settings, or the Dockerfile itself?
Here is the Dockerfile
FROM ubuntu:18.04
# ...
RUN apt-get update && apt-get install -y \
software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y \
python3.7 \
python3-pip
RUN python3.7 -m pip install pip
RUN apt-get update && apt-get install -y \
python3-distutils \
python3-setuptools
RUN python3.7 -m pip install --upgrade pip
# keep Python output unbuffered so logs appear immediately
ENV PYTHONUNBUFFERED 1
# copy requirements file from local machine to container
COPY ./requirement.txt /requirement.txt
# install dependencies
RUN pip install -r /requirement.txt
# create app folder in container
RUN mkdir /app
# set default working directory
WORKDIR /app
# copy local app folder to container folder
COPY ./app /app
CMD ["python", "test.py"]
Multiple technologies, multiple failure points - thanks in advance!
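For what it's worth, a common pattern here (an assumption on my part; the post doesn't say how the React app is built) is to inject the API base URL at image build time instead of hardcoding localhost:8000. If the app is a Create React App build, which bakes REACT_APP_* environment variables into the bundle during npm run build, a sketch could look like this:
# in the Dockerfile; REACT_APP_API_BASE is a hypothetical variable the app would read
ARG API_BASE=http://localhost:8000
ENV REACT_APP_API_BASE=$API_BASE
RUN npm run build
The image would then be built per environment with:
docker build --build-arg API_BASE=http://172.XX.XX.XXX:8000 .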

Which ChromeDriver & Headless Chrome versions exist that are compatible with Ruby 2.7?

The issue
I have a web scraper running in AWS Lambda, but in a few weeks AWS Lambda will stop supporting Ruby 2.7. I built my scraper last year using this tutorial.
I need to find a version of ChromeDriver & headless Chrome that is compatible with Ruby 2.7, but I don't know exactly where to start.
I have looked at ChromeDriver's downloads portal, but I don't see any indication there that ChromeDriver will work with Ruby 2.7, or any other specific version of Ruby for that matter.
The code I have works by accessing the ChromeDriver binary and starting it from inside a specific folder.
I downloaded the specific binaries I am using by running these commands:
# serverless chrome
wget https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip
unzip stable-headless-chromium-amazonlinux-2017-03.zip -d bin/
rm stable-headless-chromium-amazonlinux-2017-03.zip
# chromedriver
wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
unzip chromedriver_linux64.zip -d bin/
rm chromedriver_linux64.zip
Solution
I found the solution to this problem. The Ruby 2.7 runtime that Lambda offers by default runs on top of Amazon Linux 2 (which lacks many important libraries & dependencies), and unfortunately there's nothing you can do to change that.
However, Amazon offers the ability to run your code in a custom Docker image that can be up to 10GB in size.
I fixed the problem by creating my own image using the following Dockerfile:
FROM public.ecr.aws/lambda/ruby:2.7
# Install dependencies needed to run MySQL & Chrome
RUN yum -y install libX11 dejavu-sans-fonts procps mysql-devel tree
RUN mkdir /var/task/lib
RUN cp /usr/lib64/mysql/libmysqlclient.so.18 /var/task/lib
RUN gem install bundler
RUN yum -y install wget
RUN yum -y groupinstall 'Development Tools'
# Ruby Gems
ADD Gemfile ${LAMBDA_TASK_ROOT}/
ADD Gemfile.lock ${LAMBDA_TASK_ROOT}/
RUN bundle config set path 'vendor/bundle' && \
bundle install
# Install chromedriver & chromium
RUN mkdir ${LAMBDA_TASK_ROOT}/bin
# Chromium
RUN wget https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip
RUN unzip stable-headless-chromium-amazonlinux-2017-03.zip -d ${LAMBDA_TASK_ROOT}/bin/
RUN rm stable-headless-chromium-amazonlinux-2017-03.zip
# Chromedriver
RUN wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
RUN unzip chromedriver_linux64.zip -d ${LAMBDA_TASK_ROOT}/bin/
RUN rm chromedriver_linux64.zip
# Copy function code
COPY app.rb ${LAMBDA_TASK_ROOT}
WORKDIR ${LAMBDA_TASK_ROOT}
RUN tree
RUN ls ${LAMBDA_TASK_ROOT}/bin
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handle" ]
Notes
If your code was previously deployed from a zip file, you will have to either destroy the previous function or create a second function with the updated code; it comes down to how you want to handle deployment.
It is also possible to automate the deployment process using the Serverless Framework.
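For the manual (non-Serverless) route, deployment is roughly the standard ECR push plus a Lambda function created from the image. A sketch with placeholder names (<account>, <region>, <role-arn>, my-scraper):
aws ecr create-repository --repository-name my-scraper
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com
docker build -t my-scraper .
docker tag my-scraper:latest <account>.dkr.ecr.<region>.amazonaws.com/my-scraper:latest
docker push <account>.dkr.ecr.<region>.amazonaws.com/my-scraper:latest
aws lambda create-function --function-name my-scraper --package-type Image --code ImageUri=<account>.dkr.ecr.<region>.amazonaws.com/my-scraper:latest --role <role-arn>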

WebDriverError: Cannot define class using reflection

I am using gulp-angular-protractor for end-to-end testing of my web app. I have recently started getting the following error, although it used to work perfectly before.
Any help fixing this issue would be appreciated.
Details:
OS - Ubuntu 16.04,
node - v6.12.2,
npm - v3.10.10,
Vagrant - v1.9.3,
karma - v0.13.22,
gulp-angular-protractor - v1.1.1,
protractor - v5.1.2,
webdriver-manager - v12.0.6
Removing java-common and installing OpenJDK 8 fixed the issue for me.
To remove java-common and its dependencies, run the command below:
sudo apt-get purge --auto-remove java-common
To install OpenJDK 8, run the commands below:
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk
Additionally, if you have more than one Java version installed on your system, run the command below to set the default Java:
sudo update-alternatives --config java
and type in a number to select a Java version.
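You can then verify which version is active:
java -version
It should report a 1.8.x (Java 8) build; the "Cannot define class using reflection" error typically comes from running the Selenium server on a newer Java where that reflection trick is no longer permitted.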

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and building and hosting in the same file may look like a weird setup to some, but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
... steps to move the dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends, so we have a backend in DEV, a backend in UAT, and so on.
That means there are different URLs which we need to use in /config/xx.json:
{
  ...
  "service_base": "https://backend.test.xxx/",
  ...
}
We don't want to change that URL every time, rebuild the image and start it. We also don't want to declare a fixed set of URLs (dev, uat, prod, ...) which can be used there. We want to run our gulp build process with an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an env variable placeholder in our JSON, build with it, and add the real URL later on, if that's possible.
EDIT: A better option is to use build args.
Instead of passing the URL at the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than docker run.
In your Dockerfile:
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this Stack Overflow question for details.
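A minimal sketch of how the build arg could feed the existing gulp step (this assumes the gulp build reads URL from the environment; the surrounding steps are the elided ones from the Dockerfile above):
ARG URL
ENV URL=$URL
RUN gulp build
Then build once per environment:
docker build --build-arg URL=https://mybackendurl.com -t myapp:latest .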
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My Dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying the whole app and running the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# cp the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way of course, but it's all for the developers' convenience: an easy local frontend.
The sed command performs a replace on the config file, which contains something like:
{
  "service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.

Managed VM add to PATH

In Google App Engine's Python runtime for Managed VMs, I want to install the Splinter (Selenium) ChromeDriver. Following the documentation for Linux, I have the following in my Dockerfile:
# Dockerfile extending the generic Python image with application files for a
# single application.
FROM gcr.io/google_appengine/python-compat
RUN apt-get update && apt-get install -y apt-utils zip unzip wget
ADD requirements.txt /app/
RUN pip install -r requirements.txt
RUN cd $HOME/
RUN wget https://chromedriver.googlecode.com/files/chromedriver_linux64_20.0.1133.0.zip
RUN unzip chromedriver_linux64_20.0.1133.0.zip
RUN mkdir -p $HOME/bin
RUN mv chromedriver /bin
ENV PATH "$PATH:$HOME/bin"
ADD . /app
I can't get the web application to start Splinter with the Chrome webdriver, as it does not find it in the PATH.
WebDriverException: Message: 'chromedriver' executable needs to be
available in the path. Please look at
http://docs.seleniumhq.org/download/#thirdPartyDrivers
and read up at
http://code.google.com/p/selenium/wiki/ChromeDriver
And if I run docker exec -it <container id> chromedriver, as expected, it doesn't work.
Also, the environment variables printed out in Python are:
➜ ~ docker exec -it f4d9541c4ba6 python
Python 2.7.3 (default, Mar 13 2014, 11:03:55)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> print os.environ
{'GAE_MODULE_NAME': 'parsers', 'API_HOST': '10.0.2.2', 'GAE_SERVER_PORT': '8082', 'MODULE_YAML_PATH': 'parsers.yaml', 'HOSTNAME': 'f4d9541c4ba6', 'SERVER_SOFTWARE': 'Development/2.0', 'GAE_MODULE_INSTANCE': '0', 'DEBIAN_FRONTEND': 'noninteractive', 'GAE_MINOR_VERSION': '580029170989395749', 'API_PORT': '59768', 'GAE_PARTITION': 'dev', 'GAE_LONG_APP_ID': 'utix-app', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'GAE_MODULE_VERSION': 'parsers-0-0-1', 'HOME': '/root'}
What would be the correct way to get chromedriver onto the PATH, or is there a workaround?
Thanks a lot
You need to check the ENTRYPOINT and CMD associated with that image (do a docker inspect on the container you launched).
If the image is set to open a new bash session, the profile or .bashrc associated with the account running that session might redefine $PATH, overriding the Dockerfile's ENV PATH "$PATH:$HOME/bin" directive.
If that is the case, making sure the profile or .bashrc defines the right PATH is easier (with a COPY of a custom .bashrc, for instance) than modifying the ENV.
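Two smaller things worth noting about the Dockerfile itself (my observations, not part of the original answer): each RUN executes in a fresh shell, so RUN cd $HOME/ has no effect on later steps, and the binary is moved to /bin even though $HOME/bin is what was created and added to PATH. A sketch that sidesteps PATH handling entirely is to put the binary in a directory that is already on the default PATH shown in the os.environ dump above:
# /usr/local/bin is already on the container's default PATH
RUN mv chromedriver /usr/local/bin/chromedriver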
