I deployed a distributed cluster successfully with the DolphinDB Docker package. But how can I check which version of DolphinDB has been installed? And where do I specify the version to download, so that I can use an earlier one?
Here's the tutorial: https://github.com/dolphindb/Tutorials_CN/blob/master/docker_deployment.md
I had a brief look: this project hard-codes DolphinDB v0.95.3. You have to modify its Dockerfile to use an older version.
The steps are as follows:
Download the deployment package from here, as the README you linked describes.
Unzip the package and you will find a subfolder named Dockerbuild. Enter that folder, open the Dockerfile in an editor, and change every V0.95.3 to the version you need:
FROM centos:latest
RUN mkdir -p /data/ddb
ADD http://www.dolphindb.com/downloads/DolphinDB_Linux64_V0.95.3.zip /data/ddb/
RUN yum install -y unzip
RUN yum install -y wget
RUN (cd /data/ddb/ && unzip /data/ddb/DolphinDB_Linux64_V0.95.3.zip)
RUN rm -rf /data/ddb/DolphinDB_Linux64_V0.95.3.zip
RUN chmod 755 /data/ddb/server/dolphindb
RUN mkdir -p /data/ddb/server/config
ADD http://www.dolphindb.com/downloads/ZLIB_V0.95.0.zip /data/ddb/server/
RUN (cd /data/ddb/server/ && unzip -n /data/ddb/server/ZLIB_V0.95.0.zip)
RUN rm -rf /data/ddb/server/plugins/README.md
RUN rm -rf /data/ddb/server/ZLIB_V0.95.0.zip
ADD http://www.dolphindb.com/downloads/AWSS3_V0.95.0.zip /data/ddb/server/
RUN (cd /data/ddb/server/ && unzip -n /data/ddb/server/AWSS3_V0.95.0.zip)
RUN rm -rf /data/ddb/server/plugins/README.md
RUN rm -rf /data/ddb/server/AWSS3_V0.95.0.zip
ADD default_cmd /root/
RUN chmod 755 /root/default_cmd
ENTRYPOINT ["/root/default_cmd"]
Finally, follow the guide to build:
cd ./DolphinDB-Docker-Compose/Dockerbuild
docker build -t ddb:latest ./
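Then recreate the containers so they pick up the rebuilt image (assuming the package's docker-compose.yml references the ddb:latest tag; run this from the folder containing docker-compose.yml):
docker-compose up -d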
Finally, execute the following code in the DolphinDB GUI to check the installed version:
version()
[omm@db1 ~]$ gs_install -X /opt/software/openGauss/cluster_config.xml --gsinit-parameter="--locale=zh_CN.utf8"
Parsing the configuration file.
Check preinstall on every node.
Successfully checked preinstall on every node.
Creating the backup directory.
Successfully created the backup directory.
begin deploy..
Installing the cluster.
begin prepare Install Cluster..
Checking the installation environment on all nodes.
[GAUSS-51400] : Failed to execute the command: source /home/omm/.bashrc;python3 '/opt/huawei/install/om/script/local/CheckInstall.py' -U omm:dbgrp -R /opt/huawei/install/app -l /var/log/omm/omm/om/gs_local.log -X /opt/software/openGauss/cluster_config.xml.Error:
Checking old installation.
[GAUSS-51806] : The cluster has been installed.
I tried removing the install directories and downloading the installation package again to reset everything, but the problem is still not resolved:
rm -rf /root/gauss_om
rm -rf /opt/huawei
rm -rf /opt/software/openGauss
mkdir -p /opt/software/openGauss
...
You can try deleting the omm user's home directory as well, which also removes the environment variables set up during preinstall (note the source /home/omm/.bashrc in the error message): rm -rf /home/omm
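Putting the cleanup steps together, a full reset before reinstalling might look like this (a sketch based only on the paths visible in the question and the error output; run as root, and re-run gs_preinstall before gs_install):
rm -rf /root/gauss_om
rm -rf /opt/huawei
rm -rf /var/log/omm      # log directory seen in the error message
rm -rf /home/omm         # the omm home directory, including the .bashrc environment variables
rm -rf /opt/software/openGauss
mkdir -p /opt/software/openGauss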
I have a React app with 3 versions: for development, testing, and production.
They differ only in the URL used for the login (a different WordPress site).
How do I make the React app agnostic/configurable at runtime and avoid the need to generate 3 builds?
Just use
window.location.host // need to add http/s
to get the URL.
Many other parameters can be read with URLSearchParams; see URLSearchParams.
For those who use a Docker container, it can be done with environment variables.
My situation:
I made my react app in Visual Studio with template 'ASP.NET Core with React.js and Redux'. It is placed in a docker container which is deployed in Kubernetes.
It took me almost half a day but I managed to do it :)
First I found this post; the comment from Patrick Lee Scott is especially interesting:
https://levelup.gitconnected.com/handling-multiple-environments-in-react-with-docker-543762989783
Comment from Patrick Lee Scott:
https://patrickleet.medium.com/another-option-build-with-dummy-values-like-replace-api-url-and-then-use-an-entrypoint-sh-db053a799167
The comment is a good start but doesn't show the complete solution.
First I tested the script (and tried to figure out what it was doing).
During testing I found that 'cat /proc/self/environ' was not working, so I replaced it with xargs -0 -L1 -a /proc/self/environ.
Second, I had trouble getting the script to run via ENTRYPOINT; I figured out that the script needed to begin with: #!/bin/bash
Third, I added the original ENTRYPOINT at the bottom of the script.
Here is the modified script of Patrick Lee Scott:
appEntryPoint.sh:
#!/bin/bash
echo "Inserting env variables"
for file in ./ClientApp/build/static/js/*.js
do
  echo "env sub for $file"
  # Collect the names of all environment variables of this process
  list="$(xargs -0 -L1 -a /proc/self/environ | awk -F= '{print $1}')"
  echo "$list" | while read -r line; do
    # For each variable FOO, replace the REPLACE_FOO placeholder with its value
    export REPLACE="REPLACE_$line"
    export VALUE=$(eval "echo \"\$$line\"")
    #echo "replacing ${REPLACE} with ${VALUE} in $file"
    sed -i "s~${REPLACE}~${VALUE}~g" "$file"
    unset REPLACE
    unset VALUE
  done
done
# Hand control back to the app (the original ENTRYPOINT)
dotnet My.DotNet.ReactApp.dll
To make the answer complete, I will list here my Dockerfile:
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app/ClientApp
EXPOSE 80
EXPOSE 443
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
WORKDIR /app/ClientApp
COPY /My.DotNet.ReactApp/ClientApp/package*.json ./
RUN npm install --silent
COPY /My.DotNet.ReactApp/ClientApp ./
RUN npm run build
WORKDIR /app/publish/ClientApp
RUN cp -r /app/ClientApp/build .
WORKDIR /app
COPY /My.DotNet.ReactApp ./
RUN dotnet restore "My.DotNet.ReactApp.csproj"
RUN dotnet build "My.DotNet.ReactApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "My.DotNet.ReactApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY ./appEntryPoint.sh ./
RUN chmod +x appEntryPoint.sh
ENTRYPOINT ["/app/appEntryPoint.sh"]
What you now have to do is put placeholders in your .env file:
.env.production
REACT_APP_API_ENDPOINT=REPLACE_REACT_APP_API_ENDPOINT
REACT_APP_API_SOME_OTHER_URL=REPLACE_REACT_APP_API_SOME_OTHER_URL
Now you can set the real values for the React variables as environment variables on the container. The script reads the container's environment variables and replaces every placeholder that begins with "REPLACE_".
So in this case we need to set these environment variables on the container used for production:
REACT_APP_API_ENDPOINT=https://prod.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://prod.url.com
And for the test environment:
REACT_APP_API_ENDPOINT=https://test.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://test.url.com
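For example, with plain Docker the production values could be passed when the container is started (the image name here is a placeholder):
docker run -d -p 80:80 \
  -e REACT_APP_API_ENDPOINT=https://prod.endpoint.com \
  -e REACT_APP_API_SOME_OTHER_URL=https://prod.url.com \
  my-react-app:latest
In Kubernetes the same values go into the env section of the container spec.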
Use a .env file. Check out this link for installation instructions. At the end you will have this kind of structure in your app folder.
We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and its setup may look odd to some (build and hosting in the same file), but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends. So we have a backend in DEV, a backend in UAT, ...
So there are different URLs which we need to use in /config/xx.json
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL every time, rebuild the image, and start it again. We also don't want to declare a fixed set of URLs (dev, uat, prod, ...) to pick from. We want our gulp build process to use an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an env variable in our JSON, build with it, and inject the URL later on, if that's possible.
EDIT: A better option is to use build args
Instead of passing the URL to the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than at docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
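For completeness, a minimal sketch of how the build arg could feed the build inside the Dockerfile (the config path follows the question's layout; the sed pattern is an assumption):
ARG URL
# Replace the hardcoded backend URL in the config before building
RUN sed -i "s#https://backend.test.xxx/#${URL}#g" ./config/xx.json
RUN gulp build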
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and running the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# copy the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it's all for the needs of the developers: an easy local frontend.
The sed command will perform a replace on the config file which contains something like:
{
"service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable that I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.
I recently uninstalled postgresql from my computer. I tried to install it again but I faced some problems. I tried to fully uninstall it again like this:
I found all the packages related to postgres:
$ dpkg -l | grep postgres
Then I removed all the packages and related folders:
$ sudo apt-get --purge remove postgresql postgresql-9.3 postgresql-client-9.3 postgresql-client-common postgresql-common postgresql-contrib-9.3
$ sudo rm -rf /var/lib/postgresql/
$ sudo rm -rf /var/log/postgresql/
$ sudo rm -rf /etc/postgresql/
I've tried to install it again, but after the installation I can't access the postgres user.
$ sudo apt-get install postgresql postgresql-contrib
$ sudo -i -u postgres
sudo: unable to change directory to /home/postgres: No such file or directory
If I access root I can access postgres but this is what happens:
$ sudo su -
$ su - postgres
No directory, logging in with HOME=/
postgres@rafael-pc:/$ psql
psql (9.3.9)
Type "help" for help.
postgres=# \q
could not save history to file "/home/postgres/.psql_history": No such file or directory
I have no idea what is happening. I've tried to uninstall it many times but I always have some kind of error when I install it back.
Just a guess here, but it sure looks to me like the problem is that there isn't a /home/postgres directory. I'm not sure what may have happened in your uninstall process to remove that, but it looks like that's the cause of the error in both of the steps you list.
Can you try this (or some approximation of these steps, which create that directory and make sure it's owned by the postgres user)?
# sudo mkdir /home/postgres
# sudo chown postgres /home/postgres
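After that, switching to the postgres user should no longer complain about a missing home directory (a quick check):
# sudo -i -u postgres
# psql -c 'SELECT version();'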
I have a Dockerfile to create a dev environment to develop a sailsJS app.
I just mount my source code into the container. I make my Git commits on my host machine, but I would like to execute all my npm commands in the container.
I have the following Dockerfile, and I am running Docker (1.4.1) on Ubuntu 14.10:
FROM ubuntu:14.04
### Utils ###
RUN apt-get update
RUN apt-get -y install build-essential git wget tar vim supervisor
### MongoDB ###
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
RUN apt-get update
RUN apt-get install -y mongodb-org
RUN mkdir -p /data/db
### NodeJS ###
WORKDIR /tmp
RUN wget -O node http://nodejs.org/dist/v0.10.33/node-v0.10.33-linux-x64.tar.gz
RUN tar xf node
RUN mv node-v0.10.33-linux-x64 /usr/local/node
RUN ln -s /usr/local/node/bin/* /usr/local/bin
### Supervisord ###
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
### Project ###
RUN npm install -g sails bower
WORKDIR /opt/sails
CMD ["/usr/bin/supervisord"]
EXPOSE 27017 1337
I run my container with the following command :
docker run -d -ti -p 1337:1337 -p 27017:27017 -v ~/dev/pinne:/opt/sails --name test-app loikg/sailsjs-mongo
The problem is that when I run npm commands inside the container that create files, like sails generate api, I don't have write permission on those files on the host machine.
How can I solve that?
Users and Groups do not sync from host->container.
Your services in the container are running as root (UID:0 GID:0). Any files created by root in the container will need root access on the host.
One solution is to create a UID/GID inside the container that matches the UID/GID on the host. Then all your processes inside the container need to use that UID/GID so the files have the correct ownership/permissions.
Remember, it's the user ID, not the user name, and the group ID, not the group name. The names need not match, only the numeric IDs.
It's kind of a pita. You will have to change your dockerfile to add the user, make sure your processes that create files are run with the correct uid, etc.
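A minimal sketch of that approach in the Dockerfile (the 1000:1000 IDs and the user name are assumptions; use whatever id -u and id -g report on your host):
# Create a user whose numeric IDs match the host user
RUN groupadd -g 1000 dev && useradd -u 1000 -g 1000 -m dev
# Later build steps and the container process then run as that user
USER dev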
One of the workarounds is to use overlapping volumes, e.g.
... -v ~/dev/pinne:/opt/sails:ro -v /opt/sails/node_modules ...
would allow writing to /opt/sails/node_modules. The downside is that the changes will be lost when the container terminates (unless you copy the volume data out via --volumes-from). Another caveat, as far as I recall, is that the path (~/dev/pinne/node_modules -> /opt/sails/node_modules) has to exist for this technique to work.
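Adapting the run command from the question, that would look roughly like this:
docker run -d -ti -p 1337:1337 -p 27017:27017 \
  -v ~/dev/pinne:/opt/sails:ro \
  -v /opt/sails/node_modules \
  --name test-app loikg/sailsjs-mongo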