How can a dockerfile expose SQL Server as localhost? - sql-server

The documentation on the microsoft/mssql-server-linux page provides the following command to get a Docker container running:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Test!234' -p 1433:1433 -d --name sqllinux microsoft/mssql-server-linux
This works fine and I'm able to open up SSMS and connect to localhost with the credentials:
username: sa
password: Test!234
What I wanted to do next was create a Dockerfile that builds an image that does the same thing:
FROM microsoft/mssql-server-linux
ENV ACCEPT_EULA Y
ENV SA_PASSWORD Test!234
EXPOSE 1433 1433
I then ran docker build . -t sqltestfile followed by docker run sqltestfile.
The container seems to start just fine, and through Kitematic I can see what looks to me like the same output as when running the other image, but I'm not able to connect to this container through SSMS using localhost.
What needs to be changed about the Dockerfile to have it work the way I would expect (can connect to the container instance using SSMS through localhost)?
Any help would be greatly appreciated!

You still need to explicitly publish the port with -p.
From the docs:
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if the protocol is not specified.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
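For example, once the image from the question is built, publishing the port at run time should make the container reachable from SSMS on localhost (the container name sqlfromfile is arbitrary):
docker run -d -p 1433:1433 --name sqlfromfile sqltestfile
The ACCEPT_EULA and SA_PASSWORD values are already baked into the image by the ENV instructions, so they do not need to be repeated on the command line.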

Related

Docker container can't connect to Redis

I have a Docker container running a C application using hiredis, which should write data to a Redis server exposed at the default address and port, running locally on the same Linux device at 127.0.0.1:6379.
The Redis server is running in a different Docker container. I start this container, exposing port 6379, as follows: sudo docker run --name redis_container -d -p 6379:6379 40c68ed3a4d2
redis-cli can connect to this via 127.0.0.1:6379 without issues.
However, no matter what I try, the container that should write to Redis always gets a connection refused error from the C code. This was my last attempt at running the container: sudo docker run --expose=6379 -i 7340dfee8ea5
What exactly am I missing here? Thanks
The C client is running inside a container, which means 127.0.0.1 points to the container itself, not to your host. You should configure the Redis client to use redis_container:6379, as that is the name you gave when you ran the Redis container. More about this here
Besides, both containers need to be on the same Docker network. Use the following command to create a simple network
docker network create my-net
and add --network my-net to both docker run commands (redis client and redis server)
You can read more about docker network here
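Putting it together, a minimal sketch using the image IDs from the question:
docker network create my-net
sudo docker run --name redis_container --network my-net -d -p 6379:6379 40c68ed3a4d2
sudo docker run --network my-net -i 7340dfee8ea5
with the C code then connecting to redis_container:6379 instead of 127.0.0.1:6379. The -p 6379:6379 mapping is only needed if you still want to reach Redis from the host (e.g. with redis-cli); container-to-container traffic goes over my-net.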

anzograph - Cannot connect to the admin console

I set up AnzoGraph DB Free Edition in Docker Desktop on my Mac and ran it per the commands below, but I can't connect to the admin console.
docker pull cambridgesemantics/anzograph
docker run cambridgesemantics/anzograph
When I use the inspect feature in Docker Desktop’s Dashboard, all of the ports for the running image are “not binded”. I would have expected to connect on port 5600 but that doesn’t work – not with localhost, not with 0.0.0.0, not with 127.0.0.1 …
Am I perhaps missing some pre-requisite? I allocated 8 GB of memory to Docker.
From the information you provided, what you are seeing is expected, because your command does not specify any port mappings.
What you entered was the following:
docker run cambridgesemantics/anzograph
What you should run instead, as documented on the AnzoGraph download page, specifies the ports explicitly:
docker run -d -p 80:8080 -p 443:8443 --name=anzograph cambridgesemantics/anzograph:latest
The AnzoGraph frontend binds to ports 8443 (HTTPS) and 8080 (HTTP), and AnzoGraph DB binds to ports 5600 (gRPC DB management) and 5700 (gRPC DB query) inside the Docker container.
Docker Desktop for Mac maps these container-internal ports to ports on localhost. If you do not tell Docker how to map those ports, it uses a random strategy to allocate them on localhost. By specifying the mapping
docker run -d -p 80:8080 -p 443:8443 -p 5600:5600 -p 5700:5700 --name=anzograph cambridgesemantics/anzograph:2.1.1-latest
you tell Docker which localhost ports to use (-p {localhost port}:{port inside the container}).
Many users new to Docker struggle with this when they use Kitematic or similar UIs: such tools make it simple to deploy a running container, but understanding and tracking down these randomly allocated ports is confusing.
So if you are new to Docker and do not want to use Kubernetes yet, use the command line to specify the localhost ports - it ends up being easier.
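If you have already started a container without explicit mappings, docker port lists whatever localhost ports Docker allocated (assuming the container is named anzograph):
docker port anzograph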

connecting to Mongodb inside a docker with mongodb compass GUI

I have a mongodb database running on the default port 27017 in a docker container.
Is there a way to connect to the database with the mongodb compass GUI running natively on my ubuntu OS?
docker run -p 27018:27017 and then connect from Compass on your host with port 27018. I don't see a reason to expose all ports.
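Spelled out, that could look like the following (the image tag and container name are illustrative):
docker run -d -p 27018:27017 --name mongo-27018 mongo:4
Then point Compass at localhost:27018.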
Replace localhost with your IP address in the connection string, e.g. my IP address is 10.1.2.123, so I have mongodb://10.1.2.123:27017?readPreference=primary&appname=MongoDB%20Compass&ssl=false.
Saw this 👆 here: https://nickjanetakis.com/blog/docker-tip-35-connect-to-a-database-running-on-your-docker-host
With docker-compose you just have to publish port 27017. When you hit "Connect" in the GUI it will auto-detect this connection.
version: "3"
services:
  mongo-database:
    container_name: mongo-database
    image: mongo:4
    ports:
      - 27017:27017
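Start it with
docker-compose up -d
and Compass should then detect mongodb://localhost:27017.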
Yes, this works.
Steps:
Pull/restart the Docker container mongodb.
Enter the bash shell:
docker exec -it mongodb bash
Now open MongoDB Compass Community and, with the same default connection, just click Connect; the MongoDB in the Docker container will be connected to Compass.
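Inside that bash shell you can first check that the server is actually up before switching to Compass (a quick sanity check; older official images ship the mongo shell, newer ones use mongosh):
mongo --eval 'db.runCommand({ ping: 1 })'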
Use docker inspect or Docker Desktop to find the exposed port:
docker inspect your_container_name
and find this section
"Ports": {
"27017/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "27012"
}
]
},
and then connect using this URL string:
mongodb://localhost:27012/?readPreference=primary&appname=MongoDB%20Compass&ssl=false
Do not pass in a replica set name, otherwise the connection will fail. This applies if you have deployed a replica set rather than converting your standalone instance into a replica set.
Leave a comment if you don't know how to deploy a replica set, and I can post a docker-compose file that sets one up.
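Back to the docker inspect output above: if you prefer a one-liner over reading the full JSON, a Go-template filter can pull out just the host port (a sketch, assuming the same 27017/tcp mapping):
docker inspect -f '{{(index (index .NetworkSettings.Ports "27017/tcp") 0).HostPort}}' your_container_name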
I could connect Compass on Windows to a Dockerized MongoDB using these tags at the end:
mongodb://user:password@localhost:27017/dbname?authSource=dbname&readPreference=primary&gssapiServiceName=mongodb&appname=MongoDB%20Compass&ssl=false
Just open Compass and, in the connect screen, add the credentials if you have used envs like
ME_CONFIG_MONGODB_ADMINUSERNAME=admin
and hit Connect. No additional settings required.
Or you can use mongo-express, which is a web-based UI tool for MongoDB.
Run the command sudo docker ps.
It will show the Docker containers you have, where you can find the port number of MongoDB.
Then run the command sudo mongodb-compass.
It will open MongoDB Compass.
If you are connecting locally, the usual hostname is localhost,
and then just put in the port number and click Connect.
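To see just the names and port mappings at a glance, the docker ps output can also be formatted (standard Go-template placeholders):
sudo docker ps --format '{{.Names}}\t{{.Ports}}'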
I was also having trouble connecting to my local MongoDB using Compass, but discovered it was an SSL problem. By default, Compass sets SSL to "System CA". However, if you try that with your dockerized Mongo, your Mongo logs will show you this error:
Error receiving request from client: SSLHandshakeFailed: SSL handshake received but server is started without SSL support. Ending connection from 172.17.0.1:45902 (connection id: 12)
end connection 172.17.0.1:45902 (0 connections now open)
Therefore, to connect, I had to click "Fill in connection fields individually" then set the SSL field to "None". For reference, I ran Mongo using this:
docker run -p 27017:27017 --name some-mongo mongo:4.0. No authentication necessary.
This solution worked for me.
Run the docker container using:
docker run -d --name mongo-db -v ~/mongo/data:/data/db -p 27017:27017 mongo
-v is for mapping the local volume to the docker writable space. This will keep the data even when the container is destroyed.
MongoDB connection string Compass GUI:
mongodb://localhost:27017
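To confirm the data really survives, you can destroy and recreate the container with the same volume mapping:
docker stop mongo-db && docker rm mongo-db
docker run -d --name mongo-db -v ~/mongo/data:/data/db -p 27017:27017 mongo
Anything written before the stop should still be there, since it lives in ~/mongo/data on the host.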
Run your mongo container with 'publish-all-ports' option (docker run -P). Then you should be able to inspect the port exposed to the host via docker ps -a and connect to it from Compass (just use your Hostname: localhost and Port: <exposed port>).
Use the --net=host option so the Docker container shares its network namespace with the host machine.
docker run -it --net=host -v mongo_volume:/data/db --name mongo_example4 -d mongo
So now we can connect to MongoDB with Compass using mongodb://localhost:27017.
Alternatively, you can get the Docker container's IP address with the docker inspect command and use that IP address instead of localhost:
mongodb://172.17.0.2:27017
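A format flag keeps that inspect call to one line (a sketch, using the container name from above; this applies to containers on the default bridge network, not to --net=host, where there is no separate container IP):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mongo_example4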

Assigning Public IP to SQL Server Docker Image

I am using the latest Docker version (17 CE) on a Mac OSX and I have spun up an instance of SQL Server using the following tutorial: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-setup-docker
The server was set up successfully and I managed to connect to it from outside the container via an SQL command line utility.
The next step is that I want to be able to connect to this instance from another PC within the same local network by assigning a public IP to the instance.
I have looked through a number of tutorials, and it seems that with Docker 1.10 this functionality became possible, so I am looking to do it the 'right' way rather than the hacky way (pre-Docker 1.10), namely following How to assign static public IP to docker container and Assign static IP to Docker container.
I was testing with the ubuntu image to stay true to the examples, but it still didn't work. Although the image ran, whenever I tried to ping the assigned IP from the computer Docker is installed on, I was receiving a request timeout. Also, in Kitematic the only host under IP AND PORTS is localhost. The image is being assigned to the custom network (docker network prune while the instance is running does not prune my custom network), but I can't seem to discover my instance from the outside.
Commands I am using are
$ docker network create --subnet=172.18.0.0/16 mynet123
$ docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
$ ping 172.18.0.22
and for my sql server
$ docker network create --driver=bridge --subnet=192.168.0.0/24 --gateway=192.168.0.1 mynet
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MyPassword123<>' -p 1433:1433 --ip=192.168.0.10 --net=mynet microsoft/mssql-server-linux
$ ping 192.168.0.10
What am I missing?
Any help would be appreciated.

docker linked container all ports closed

I am trying to connect from one Docker container to another.
Container A has a Derby DB installed and started, listening on port 3301.
Container B should connect to Container A.
The Dockerfiles look like:
Container A
FROM java:8
# Install Derby
COPY db-derby-10.12.1.1-bin.tar.gz db-derby-10.12.1.1-bin.tar.gz
RUN mkdir /opt/Apache
RUN cp db-derby-10.12.1.1-bin.tar.gz /opt/Apache
RUN tar xzvf /opt/Apache/db-derby-10.12.1.1-bin.tar.gz
EXPOSE 3301
CMD ["/db-derby-10.12.1.1-bin/bin/startNetworkServer", "-p 3301"]
Container B
FROM java:8
# Install nmap
RUN apt-get update
RUN apt-get install -y nmap
COPY db-derby-10.12.1.1-bin.tar.gz db-derby-10.12.1.1-bin.tar.gz
RUN mkdir /opt/Apache
RUN cp db-derby-10.12.1.1-bin.tar.gz /opt/Apache
RUN tar xzvf /opt/Apache/db-derby-10.12.1.1-bin.tar.gz
EXPOSE 9080
I start both containers and give them names:
Container A
docker run -it --name derby <image>
Container B
docker run -it --link derby:derby <image> /bin/bash
Then I attach to container B and run
ping derby or ping 172.17.0.2
which is successful. But when I try to connect to the Derby database via a CLI tool, giving a JDBC URL like
connect 'jdbc:derby://172.17.0.2:3301/testdb;create=true';
I get a connection refused error.
Using nmap to scan the ports of container A results in "All ports are closed", which is confusing because the Docker reference states:
So what does linking the containers actually do? You've learned that a link allows a source container to provide information about itself to a recipient container. In our example, the recipient, web, can access information about the source db. To do this, Docker creates a secure tunnel between the containers that doesn't need to expose any ports externally on the container; you'll note when we started the db container we did not use either the -P or -p flags. That's a big benefit of linking: we don't need to expose the source container, here the PostgreSQL database, to the network.
Does anybody have a hint or a solution for me?
Regards
Alright, I got it solved. For all others who might be interested: you need to start Derby with the command
startNetworkServer -h 0.0.0.0
That way you tell Derby to accept connections from outside; restrict it if you need to, but the parameter has to be present, otherwise connections are refused.
Regards
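Folded back into the Dockerfile from the question, the CMD would become (a sketch; note that in exec form each argument needs its own array element):
CMD ["/db-derby-10.12.1.1-bin/bin/startNetworkServer", "-h", "0.0.0.0", "-p", "3301"]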
The port is not open because you have not told Docker to open it.
EXPOSE in the Dockerfile informs Docker that the container listens on the specified port, but it does NOT open that port or expose it to the host.
--link doesn't open ports either; that is the benefit of link: you can securely connect two containers without having to expose any ports to the host.
So, to connect to the Derby DB with your command line tool you have two options.
1) Open the port (possible security implications?)
When you run the container, you specify the port to open.
docker run -it --name derby -p 3301:3301 <image>
The above will map port 3301 on the container to port 3301 on the host.
Then you can use the host IP and port 3301 to connect to that container.
2) Connect to the container directly
You can effectively SSH into the container and run commands on the container itself...
$ sudo docker exec -it derby bash
And then you have a bash session on the derby container directly, and can run commands on it.
UPDATE
To connect from one container to another over the link you can use the ENV vars that docker exposes on the container about the link. http://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#environment-variables
So, on container B you will have ENV vars containing the link name DERBY.
So, DERBY_PORT will be an IP and port to connect to the Derby container.
However, if the "derby" container is restarted, the IP in the ENV var will be out of date. So it is better to connect to it by its link name.
Docker also sets up host names about the links, so you can connect to the derby container with
http://derby:3301
from within container B
So you could try:
connect 'jdbc:derby://derby:3301/testdb;create=true';
