How can I install vs-code-server manually and tell vs-code-remote about it?

When I try to use Remote-SSH to connect to my server and install vs-code-server, it hangs with these messages:
Install and start server if needed
bash: no job control in this shell
Installing...
Downloading with wget
It seems my server cannot use wget to download vs-code-server.
Can I install vs-code-server manually?

Download your currently used version via
wget https://update.code.visualstudio.com/commit:c3f126316369cd610563c75b1b1725e0679adfb3/server-linux-x64/stable
You can check the commit id in VS Code under Help -> About.
Copy it to your server via SSH.
Unpack it to ~/.vscode-server/bin/c3f126316369cd610563c75b1b1725e0679adfb3
And you're done.
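Put together, the whole procedure looks roughly like this (a sketch; user@host is a placeholder, and the tar flags assume the usual vscode-server-linux-x64 tarball layout):
# Download the server build matching your client (commit id from Help -> About)
COMMIT=c3f126316369cd610563c75b1b1725e0679adfb3
wget -O vscode-server.tar.gz "https://update.code.visualstudio.com/commit:$COMMIT/server-linux-x64/stable"
# Copy it over and unpack where Remote-SSH expects to find it
scp vscode-server.tar.gz user@host:/tmp/
ssh user@host "mkdir -p ~/.vscode-server/bin/$COMMIT && tar -xzf /tmp/vscode-server.tar.gz -C ~/.vscode-server/bin/$COMMIT --strip-components 1"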

I used this bash script on my Linux container and it works fine. You can try it too.
#!/usr/bin/env bash
read -p 'What commit of vscode server do you wish to install? ' commit
echo ""
if [ ! -d "$HOME/.vscode-server/bin/$commit" ]; then
    # Download and unpack into the directory Remote-SSH expects
    mkdir -p install-vscode-server
    cd install-vscode-server
    wget -q "https://update.code.visualstudio.com/commit:$commit/server-linux-x64/stable"
    tar -xf stable
    mkdir -p ~/.vscode-server/bin
    mv vscode-server-linux-x64 ~/.vscode-server/bin/"$commit"
    cd ..
    rm -rf install-vscode-server
    echo "vscode server commit:$commit installed"
else
    echo "Commit already installed"
fi
echo ""

This problem is caused by your shell's PATH not being configured correctly.
Follow this issue:
https://github.com/microsoft/vscode-remote-release/issues/220#issuecomment-490374437
Check which shell you are using: which $SHELL
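As a quick diagnostic (a sketch; user@host is a placeholder), check which PATH a non-interactive shell sees, since that is what Remote-SSH uses, and whether wget or curl is reachable from it:
ssh user@host 'echo $PATH; command -v wget curl'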

Related

Postgres Docker importing SQL dump on docker build

I'm trying to get rid of Docker-in-Docker, therefore I'm replacing our Postgres images with new ones. For one use case we need a pre-filled Postgres image. The old workflow is to build the image, pull it in a pipeline, use Docker-in-Docker to fill it with data, and then re-upload it to the image registry.
The new approach is to create the Postgres image with docker, and I've copied the .sql dumps to /docker-entrypoint-initdb.d/. But this fills the database only after startup; I'd like to have a pre-filled image in the container registry, because the filling takes up to 2 minutes.
This is my Dockerfile:
FROM postgres:11.12
LABEL maintainer="Hello Stackoverflow"
ARG POSTGRES_VERSION="11.12"
ARG TZ="Europe/Berlin"
ENV TZ ${TZ}
ENV LANG de_DE.UTF-8
ENV LANGUAGE de_DE.UTF-8
ENV LC_ALL de_DE.UTF-8
ENV POSTGRES_PASSWORD 'blabla'
ENV POSTGRES_HOST_AUTH_METHOD trust
RUN set -x && \
localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
COPY test-data/. /docker-entrypoint-initdb.d/
CMD ["postgres"]
In the test-data folder there is a shell script which executes the filling:
#!/bin/sh
cd /docker-entrypoint-initdb.d
echo "read one.sql"
psql -v ON_ERROR_STOP=1 -U postgres < sql/one.sql
echo "read two.sql"
...
...
...
So the idea is to pre-fill the Postgres docker image with the schema and upload to the registry.
In theory you can run the Postgres engine during docker build and execute whatever you need. Here is a not-completely-working example, i.e. postgres fails to start because there is no configuration file.
If you spend more time on this, I bet it should do the trick.
Between your lines COPY test-data/. /docker-entrypoint-initdb.d/ and CMD ["postgres"], insert this:
RUN adduser --disabled-password --gecos "" dbuser
RUN apt-get update
RUN apt-get install -y sudo
RUN echo "dbuser ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/dbuser && chmod 0440 /etc/sudoers.d/dbuser
USER dbuser:dbuser
RUN sudo chown -R dbuser:dbuser /docker-entrypoint-initdb.d
RUN sudo chown -R dbuser:dbuser /var/lib/postgresql/
RUN postgres
WORKDIR /docker-entrypoint-initdb.d
RUN psql -v ON_ERROR_STOP=1 -U postgres < sql/one.sql
At the moment this fails on RUN postgres: it fails to find its configuration, and the error message is in German. Since I'm neither a Postgres expert nor a German speaker, I wasn't able to solve it right away.
Also, this part installs sudo and gives the new dbuser sudo rights, because Postgres didn't want to start as root, so postgres runs as dbuser.
Hope this helps you move in the right direction :)
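For what it's worth, a pattern that is known to work on the official images (a later answer below uses it too) is to start the server through the stock entrypoint in the background during the build, wait until it accepts connections, and then run the import. A sketch, untested against this exact setup:
RUN nohup bash -c "docker-entrypoint.sh postgres &" && \
    until pg_isready -U postgres; do sleep 1; done && \
    psql -v ON_ERROR_STOP=1 -U postgres < /docker-entrypoint-initdb.d/sql/one.sql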

Zsh: Command Not found : mongo After trying to install mongodb 4.2 using brew

I have tried the following steps to install and set up MongoDB on my Mac from here https://docs.mongodb.com/manual/tutorial/install-mongodb-on-os-x/ but I got the following error when running the final mongo command in my terminal:
Error message - zsh: command not found: mongo
This error message occurred after trying to install MongoDB 4.2 using brew:
sudo chown -R $(whoami) $(brew --prefix)/*
then
brew tap mongodb/brew
then
brew install mongodb-community@4.2
and
brew services start mongodb-community@4.2
or
mongod --config /usr/local/etc/mongod.conf
then
ps aux | grep -v grep | grep mongod
and
mongo
running brew services start mongodb-community@4.2 returns:
Successfully started `mongodb-community@4.2` (label: homebrew.mxcl.mongodb-community@4.2)
running ps aux | grep -v grep | grep mongod returns:
9081 0.2 0.5 5528024 41856 ?? S 3:01pm 0:01.48 /usr/local/opt/mongodb-community@4.2/bin/mongod --config /usr/local/etc/mongod.conf
7613 0.0 0.1 4298832 5600 s000 T 2:47pm 0:00.08 vim /usr/local/etc/mongod.conf
running mongod --config /usr/local/etc/mongod.conf returns:
zsh: command not found: mongod
There are also no mongo files in my /usr/local/bin directory after using these commands
I created a data/db folder in my /usr/local/bin directory using the following commands:
sudo mkdir -p /usr/local/bin/data/db
sudo chown -R `id -un` /usr/local/bin/data/db
Running "brew update" returns:
brew update
Updated 1 tap (homebrew/cask).
==> Updated Casks
brave-browser
brew install mongodb-community-shell
Fixed the problem for me.
Solved it by manually installing the MongoDB community files and DB tools from the website instead, then copying them into /usr/local/bin, and then allowing the apps whenever macOS blocked a mongo-related command, via System Preferences > Security & Privacy > General.
After googling I found out that mongoimport and the other tools have to be installed separately: https://www.mongodb.com/try/download/database-tools
Followed by copying those bin files, after extracting them, into the same /usr/local/bin directory.
Not sure why it's not working through Homebrew, though.
This worked for me; I was having the same issue on mongodb-community@4.4:
brew reinstall mongodb-community@4.4
During the reinstallation, something like this will appear in the terminal; copy the highlighted path line with echo:
echo 'export PATH="/opt/homebrew/opt/mongodb-community@4.4/bin:$PATH"' >> ~/.zshrc
Now open another terminal and restart the MongoDB services:
brew services restart mongodb/brew/mongodb-community@4.4
Type mongo in the terminal and here we fly.
If you installed MongoDB via Homebrew, you need to add the mongo path to your bash_profile:
Edit the bash_profile: vi ~/.bash_profile
Add the line below at EOF: export PATH=$PATH:/usr/local/opt/mongodb-community@4.2/bin
After editing bash_profile, close all terminals and open them again. The mongo command will then work.
In addition to @ramesh-babu-t-b's answer, https://stackoverflow.com/a/68407530/1279516, the issue could also be that your MongoDB installation did add mongod to your PATH, but the installation happened within the current shell session, and so your shell doesn't have the updates to the PATH variable yet.
In this case, only his last step is still necessary: open a new console window and retry the mongod command.
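To verify where Homebrew put the binaries and make them visible to zsh permanently, something like this should work (a sketch; adjust the formula version to the one you installed):
# List the installed binaries, then persist the path for future shells
ls "$(brew --prefix)/opt/mongodb-community@4.2/bin"
echo 'export PATH="$(brew --prefix)/opt/mongodb-community@4.2/bin:$PATH"' >> ~/.zshrc
# Reload the shell so the new PATH takes effect
exec zsh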

Interactive Putty after remote commands & remotely sudo login

I'm trying to run commands in PuTTY remotely, and unfortunately I'm stuck on two problems:
The PuTTY CLI closes after running those commands.
I want to do a sudo login remotely without being prompted for a password.
Note: I already found solutions for both problems and am posting this question for future use.
The solution to the first problem is detailed here.
The second problem was solved by the first two lines of RemoteCommands.txt. The first line was suggested here, and after running it I ran sudo -i to do the sudo login, but it didn't prompt for a password, so it was solved accidentally.
VBScript.vbs
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Exec("C:\Putty\putty.exe -ssh <username>@<ip> -pw <password> -P <port> -m ""E:\putty\RemoteCommands.txt"" -t")
RemoteCommands.txt
sudo -S <<< "<password>" ls
sudo -i
/bin/bash
BatchFile.bat to run vbscript easily
@echo off
start cmd /k "cd /d E:\putty & cscript VBScript.vbs & exit"
Edit
To run commands after the sudo login, you can write something like this:
sudo -i -- bash -c 'cmd1; cmd2' or sudo -i -- bash -c 'cmd1 && cmd2'
I found this workaround from this link and with random tries, so I have no explanation for it :D ... If anyone knows the details about this, please edit this answer and provide links.
RemoteCommands.txt
sudo -S <<< "<password>" ls
sudo -i -- bash -c 'cd /home/shajji && npm start && /bin/bash'
/bin/bash
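As an aside, the standard way to avoid the password prompt altogether, instead of piping the password via sudo -S, is a sudoers drop-in on the server (a sketch; "shajji" is the username from the example above):
# Grant passwordless sudo to one user via a sudoers.d fragment
echo 'shajji ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/shajji
sudo chmod 0440 /etc/sudoers.d/shajji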

How to check the version of DolphinDB I am using?

I deployed a distributed cluster successfully with the DolphinDB Docker package. But how do I check which version of DolphinDB has been installed? I also wonder where to specify the version to download, so that I can use an earlier version.
Here's the tutorial: https://github.com/dolphindb/Tutorials_CN/blob/master/docker_deployment.md
I had a brief check; this project hard-codes DolphinDB version V0.95.3. You have to modify its Dockerfile to use an older one.
The steps are as follows:
Download the deploy package from here, just as the readme you linked says.
Unzip the package and you will find a subfolder named Dockerbuild. Enter this folder, open the Dockerfile in an editor, and change every V0.95.3 to the version you need:
FROM centos:latest
RUN mkdir -p /data/ddb
ADD http://www.dolphindb.com/downloads/DolphinDB_Linux64_V0.95.3.zip /data/ddb/
RUN yum install -y unzip
RUN yum install -y wget
RUN (cd /data/ddb/ && unzip /data/ddb/DolphinDB_Linux64_V0.95.3.zip)
RUN rm -rf /data/ddb/DolphinDB_Linux64_V0.95.3.zip
RUN chmod 755 /data/ddb/server/dolphindb
RUN mkdir -p /data/ddb/server/config
ADD http://www.dolphindb.com/downloads/ZLIB_V0.95.0.zip /data/ddb/server/
RUN (cd /data/ddb/server/ && unzip -n /data/ddb/server/ZLIB_V0.95.0.zip)
RUN rm -rf /data/ddb/server/plugins/README.md
RUN rm -rf /data/ddb/server/ZLIB_V0.95.0.zip
ADD http://www.dolphindb.com/downloads/AWSS3_V0.95.0.zip /data/ddb/server/
RUN (cd /data/ddb/server/ && unzip -n /data/ddb/server/AWSS3_V0.95.0.zip)
RUN rm -rf /data/ddb/server/plugins/README.md
RUN rm -rf /data/ddb/server/AWSS3_V0.95.0.zip
ADD default_cmd /root/
RUN chmod 755 /root/default_cmd
ENTRYPOINT ["/root/default_cmd"]
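Instead of editing by hand, a single sed pass can do the substitution (a sketch; V0.94.2 is just an example version, and this assumes that release is published under the same download URLs):
sed -i 's/V0.95.3/V0.94.2/g' Dockerfile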
Finally, follow the guide to build:
cd ./DolphinDB-Docker-Compose/Dockerbuild
docker build -t ddb:latest ./
To check the installed version, execute the following code in the DolphinDB GUI:
version()

Starting and populating a Postgres container in Docker

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.
I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening to queries.
I don't understand how this is possible with Docker. If I place the RUN command after CMD, it will of course never be reached because Docker has finished reading the Dockerfile. But if I place it before the CMD, it will run before psql even exists as a process.
How can I prepopulate a Postgres database in Docker?
After a lot of fighting, I have found a solution ;-)
A comment posted by "justfalter" at https://registry.hub.docker.com/_/postgres/ was very useful for me.
Anyway, I have done it this way:
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
db/structure.sql is an SQL dump, useful to initialize the first tablespace.
Then, the init_docker_postgres.sh:
#!/bin/bash
# this script runs when the docker container is first started
# (files in /docker-entrypoint-initdb.d/ are executed on first init, not at build time);
# it imports the base database structure and creates the database for the tests
DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
# create default database
gosu postgres postgres --single <<EOSQL
CREATE DATABASE "$DATABASE_NAME";
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL
# clean sql_dump - because I want to have a one-line command
# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"
# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"
# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"
# remove other spaces
sed 's/ */ /g' -i "$DB_DUMP_LOCATION"
# remove firsts line spaces
sed 's/^ *//' -i "$DB_DUMP_LOCATION"
# append new line at the end (suggested by @Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"
# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
So finally:
# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql
[myserver]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4212697372 custom_psql:latest "/docker-entrypoint. 9 minutes ago Up 9 minutes 0.0.0.0:5432->5432/tcp custom_psql_running
[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
Some psql features might not work.
Type "help" for help.
postgres=#
# postgres is now initialized with the dump
Hope it helps!
For those who want to initialize a PostgreSQL DB with millions of records during the first run.
Import using *.sql dump
You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. The problem is speed. My dump.sql script is about 17MB (small DB - 10 tables, with 100k rows in only one of them) and the initialization takes over a minute (!). That is unacceptable for local development, unit tests, etc.
Import using binary dump
The solution is to make a binary PostgreSQL dump and use shell scripts initialization support.
Then the same DB is initialized in about 500ms instead of 1 minute.
1. Create the dump.pgdata binary dump of a DB named "my-db"
Either directly from within a container or from your local DB:
pg_dump -U postgres --format custom my-db > "dump.pgdata"
Or from the host, against a running container (postgres-container):
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
2. Create a Docker image with a given dump and initialization script
$ tree
.
├── Dockerfile
└── docker-entrypoint-initdb.d
    ├── 01-restore.sh
    ├── 02-small-updates.sql
    └── dump.pgdata
$ cat Dockerfile
FROM postgres:11
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
$ cat docker-entrypoint-initdb.d/01-restore.sh
#!/bin/bash
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db
echo "Restoring DB using $file"
pg_restore -U postgres --dbname=$dbname --verbose --single-transaction < "$file" || exit 1
$ cat docker-entrypoint-initdb.d/02-small-updates.sql
-- some updates on your DB, for example for next application version
-- this file will be executed on DB during next release
UPDATE ... ;
3. Build an image and run it
$ docker build -t db-test-img .
$ docker run -it --rm --name db-test db-test-img
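To verify that the restore actually ran during initialization, you can list the tables (a sketch; the expected tables depend on your dump):
docker exec -it db-test psql -U postgres -d my-db -c '\dt'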
Alternatively, you can just mount a volume to /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put in *.sh, *.sql, or *.sql.gz files and it will take care of executing those on start-up.
e.g. (assuming you have your scripts in /tmp/my_scripts)
docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
There is yet another option available that utilises Flocker:
Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.
I followed the same solution as @damoiser. The only difference was that I wanted to import all the dump data.
Please follow the solution below (I have not done any kind of checks):
Dockerfile
FROM postgres:9.5
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
then the init_docker_postgres.sh script
#!/bin/bash
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
psql -U postgres < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
and then you can build your image as
docker build -t abhije***/postgres-data .
docker run -d abhije***/postgres-data
My solution is inspired by Alex Dguez's answer, which unfortunately didn't work for me, because:
I used a pg-9.6 base image, and RUN /docker-entrypoint.sh --help never ran through for me; it always complained with The command '/bin/sh -c /docker-entrypoint.sh -' returned a non-zero code: 1
I don't want to pollute the /docker-entrypoint-initdb.d dir
The following answer is originally from my reply in another post: https://stackoverflow.com/a/59303962/4440427. It should be noted that this solution restores from a binary dump instead of from plain SQL as the OP asked, but it can be modified slightly to adapt to the plain SQL case.
Dockerfile:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu#cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P#ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
    nohup bash -c "docker-entrypoint.sh postgres &" && \
    /tmp/wait-for-pg-isready.sh && \
    psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
    psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
    pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
    psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
    rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e

get_non_lo_ip() {
  local _ip _non_lo_ip _line _nl=$'\n'
  while IFS=$': \t' read -a _line; do
    [ -z "${_line%inet}" ] &&
      _ip=${_line[${#_line[1]}>4?1:2]} &&
      [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
  done < <(LANG=C /sbin/ifconfig)
  printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}

get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
  >&2 echo "Postgres is not ready - sleeping..."
  sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
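If the ifconfig parsing looks too fragile, a simpler variant is to poll pg_isready over the default local socket (an untested sketch; it assumes socket connections work inside the build container):
#!/bin/bash
# Wait until the server accepts connections on the default socket
until pg_isready -U postgres -d launchpad; do
  >&2 echo "Postgres is not ready - sleeping..."
  sleep 4
done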
I was able to load the data in by prepending /etc/init.d/postgresql start to the RUN command in the Dockerfile. My Dockerfile has the following line, which works for me:
RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
For E2E tests, in which we need a database with structure and data already saved in the Docker image, we have done the following:
Dockerfile:
FROM postgres:9.4.24-alpine
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV PGDATA /pgdata
COPY database.backup /tmp/
COPY database_restore.sh /docker-entrypoint-initdb.d/
RUN /docker-entrypoint.sh --help
RUN rm -rf /docker-entrypoint-initdb.d/database_restore.sh
RUN rm -rf /tmp/database.backup
database_restore.sh:
#!/bin/sh
set -e
pg_restore -C -d postgres /tmp/database.backup
To create the image:
docker build .
To start the container:
docker run --name docker-postgres -d -p 5432:5432 <Id-docker-image>
This does not restore the database every time the container is booted. The structure and data of the database are already contained in the created Docker image.
We based this on the following article, but eliminated the multistage build:
Creating Fast, Lightweight Testing Databases in Docker
Edit: Version 9.4-alpine does not work now, because it does not run the database_restore.sh script. Use version 9.4.24-alpine.
My goal was to have an image that contains the database, i.e. saving the time to rebuild it every time I do docker run or docker-compose up.
We would just have to manage to get the line exec "$@" out of docker-entrypoint.sh. So I added this to my Dockerfile:
# Copy my sql scripts into the image to /docker-entrypoint-initdb.d:
COPY ./init_db /docker-entrypoint-initdb.d
#init db
RUN grep -v 'exec "$@"' /usr/local/bin/docker-entrypoint.sh > /tmp/docker-entrypoint-without-serverstart.sh && \
    chmod a+x /tmp/docker-entrypoint-without-serverstart.sh && \
    /tmp/docker-entrypoint-without-serverstart.sh postgres && \
    rm -rf /docker-entrypoint-initdb.d/* /tmp/docker-entrypoint-without-serverstart.sh
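Build and run then look like this (a sketch; the image name prefilled-postgres is just an example):
docker build -t prefilled-postgres .
docker run -d -p 5432:5432 prefilled-postgres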
