Unable to see mssql docker image after successfully pulling from registry - sql-server

I pulled the mssql docker image using sudo docker pull mcr.microsoft.com/mssql/server:2019-latest
This returned that the image was successfully pulled.
But when I run docker images I can't see it listed.
I tried viewing the images from the Docker client GUI and it still doesn't show up there either. I then created a container from the image, which worked, but I'm also unable to see that container in the Docker client GUI or on the command line with docker ps.
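One thing worth checking (an assumption, since the question doesn't say which daemon the GUI talks to): pulling with sudo talks to the root Docker daemon, while a GUI client or a non-root shell may be pointed at a different daemon or context, so their image lists won't match.

```shell
# List images via the same root daemon that performed the pull
sudo docker images

# Compare Docker contexts -- the GUI and the plain CLI may use different ones
docker context ls
```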

Related

Mongodb running on Docker is wiping the collection after restart

I have to build a small application that reads data from MongoDB running on docker and uses it for further processes.
The problem is that after I close docker, the local instance of the database is also getting deleted. How can I stop it?
The MONGODB_URI is mongodb://localhost:27017. What options should I add to the docker command to avoid this? Should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So, what exactly should the docker run command be to avoid it? Is it one of these?
Commands: docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, which data directory should I use to permanently save the data?
Docker containers are ephemeral: anything written to the container's own filesystem is gone once the container is removed. To persist data, you should mount a named volume, a host folder, or a file into the container.
In MongoDB's case, mount a volume at /data/db:
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
Here mongo_data is a named volume, a storage area managed by Docker that is mounted as a directory inside the container and survives container removal. The same volume can even be mounted into several containers at the same time.
This is not a new question, see:
How to set docker mongo data volume
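A quick way to convince yourself the data survives (a sketch reusing the container name from the question and the mongo_data volume above):

```shell
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
# ...insert some documents via mongodb://localhost:27017...
docker rm -f mongo-on-docker

# A fresh container mounting the same volume still sees the documents
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
```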

Transferring a data from a mongodb on my local machine to a docker container

I'm trying to deploy a site and I am stuck trying to get my MongoDB data into my Docker container. My API works fine without the container, but when it runs inside the container it throws errors because the database is empty. I'm looking for a way to transfer previously stored data from my local MongoDB into the MongoDB in my container. Any solutions for this?
Below is my docker-compose.yml file:
version: "2"
services:
  web:
    build: .
    ports:
      - "3030:3030"
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27018:27017"
I was told using mongodump and mongorestore could be helpful but haven't had much luck with mongorestore.
Currently, I have a dump folder with the db that I'm trying to transfer on my local machine. What steps should I take next to get it into docker?
Found the issue, for anyone attempting to populate their mongo database in Docker.
Here are the steps I took:
First, use mongodump to copy the database into a dump folder:
mongodump --db <db_name>
Use docker cp to copy that dump folder into the Docker container:
docker cp ~/dump/ <container_name>:/usr/
Then use mongorestore inside the Docker container:
Open the container's shell in Docker Desktop, or run docker exec -it <container_name> bash
cd into the /usr directory
mongorestore --db=<db_name> --collection=<collection_name> ./dump/<db_name>/<collection_name>.bson
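With the docker-compose.yml above, the same steps can be sketched end to end (a sketch, not verbatim from the answer: the service name mongo comes from the compose file, while the database name mydb is a placeholder):

```shell
# 1. Dump the local database into ~/dump/mydb
mongodump --db mydb --out ~/dump

# 2. Copy the dump into the running mongo service container
docker compose cp ~/dump mongo:/usr/dump

# 3. Restore it inside the container (restores every collection in the folder)
docker compose exec mongo mongorestore --db=mydb /usr/dump/mydb
```

docker compose cp requires Compose v2; with older setups, fall back to docker cp with the container's name, as in the answer above.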

Persist database of docker container

I am using the postgres Docker image in my project. For initialization I am using the following Dockerfile line to create and initialize my database (tables, views, data, ...):
COPY sql_dump.sql /docker-entrypoint-initdb.d/
Is it possible to persist this data after the container is stopped and removed? For instance, when I run the postgres image again, I'd like it to come up with this data already in place, without loading the script on every container start -- just reusing the data created on the first run.
I did some research and found the VOLUME instruction, but I don't know how to use it for my purpose; I am new to Docker. Thanks for any help. I am using Docker for Windows v18.
You can use Docker named volumes; more information can be found here.
This will create a named volume called postgres-data:
docker volume create postgres-data
Say this is your command to create the container:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Change it to this:
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This mounts the postgres-data volume at /var/lib/postgresql/data, the directory where the official postgres image keeps its data. The init script can then populate your DB, and when you stop and start the container it will still contain the persisted data.
-HTH
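The persistence can be verified like this (a sketch reusing the answer's names; the sleep just gives the server time to start, and the CREATE TABLE is an arbitrary example change):

```shell
docker volume create postgres-data
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres
sleep 5   # let the server finish initializing
docker exec some-postgres psql -U postgres -c 'CREATE TABLE demo (id int);'

# Remove the container, then start a new one on the same volume
docker rm -f some-postgres
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres
sleep 5
docker exec some-postgres psql -U postgres -c '\dt'   # demo should still be listed
```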

How to connect to a Docker container with credentials using command line?

I've pulled a database image using Docker and set the needed database URL and credentials in my context.xml file. However, I get an access denied error when starting Tomcat. So I would like to connect to this Docker container from the command line.
Assuming your container is running, use:
docker exec -t -i CONTAINER_NAME /bin/sh
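Once inside, you can test the credentials with the database's own client. A sketch, assuming a MySQL-family image (the question doesn't name the database; the client, user, and database name are placeholders -- substitute whatever your image ships):

```shell
docker exec -it CONTAINER_NAME mysql -u myuser -p mydatabase
```

If the login fails here too, the credentials themselves are wrong rather than the Tomcat configuration.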

New version of postgresql docker container doesn't have the new added data

I am using a PostgreSQL Docker image.
When I inspect the running Docker container I see the following configuration:
"Volumes": {
    "/var/lib/postgresql": {}
}
I want to run the Docker container, make changes to the PostgreSQL databases, and create another version of the Docker image with those changes. For that I use the following command:
docker commit -m "Add data" -a "My User Name" 7b827 miguelbgouveia/postgresql:version2
Then I run this new image, expecting the added data to be present, like this:
docker run -d --name db -p 5432:5432 miguelbgouveia/postgresql:version2
The problem is that none of the newly added data is present in the new running container. Is this normal? How can I create a new image that contains all the database changes?
Volumes live outside of Docker's layered filesystem, so docker commit does not capture their contents. Any changes you make under /var/lib/postgresql are persisted in the volume, not in the image.
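The usual workaround is therefore to keep the data in a named volume and attach that same volume to any new container, rather than trying to bake it into an image with docker commit. A sketch using the question's own image and mount point (pgdata is a placeholder volume name):

```shell
docker run -d --name db -p 5432:5432 -v pgdata:/var/lib/postgresql miguelbgouveia/postgresql
# ...add data through port 5432...
docker rm -f db

# The new container sees the same data via the shared volume
docker run -d --name db2 -p 5432:5432 -v pgdata:/var/lib/postgresql miguelbgouveia/postgresql
```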
