After deploying Metabase on Google Cloud, the GAE app URL shows an error page.
I followed all the instructions in this link https://www.cloudbooklet.com/install-metabase-on-google-cloud-with-docker-app-engine/ to deploy Metabase on GAE.
I have tried with both MySQL and Postgres databases, but the result is always an error page.
Here is my app.yaml:
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_JETTY_PORT: 8080
  MB_DB_TYPE: postgres
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 5432
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
beta_settings:
  cloud_sql_instances: <sql_instance>=tcp:5432
Here is my Dockerfile:
FROM gcr.io/google-appengine/openjdk
EXPOSE 8080
ENV PORT 8080
ENV MB_PORT 8080
ENV MB_JETTY_PORT 8080
ENV MB_DB_PORT 5432
ENV METABASE_SQL_INSTANCE <sql_instance>=tcp:5432
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
RUN chmod +x ./cloud_sql_proxy
CMD ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar ./metabase.jar
Following is the error I get in the console log:
INFO metabase.driver :: Registered abstract driver :sql ?
Also, the error message on the App Engine URL says the following:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I have tried every option I could find; please help me with a working solution.
First, start by following the instructions on the Connecting from App Engine page. Make sure that the SQL Admin API is enabled, and that the service account being used has the Cloud SQL Connect IAM role.
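If you want to double-check those prerequisites from the command line, something along these lines should do it (the project ID is a placeholder, and I'm assuming the App Engine default service account <project-id>@appspot.gserviceaccount.com; the role that carries the connect permission is Cloud SQL Client, roles/cloudsql.client):
gcloud services enable sqladmin.googleapis.com
gcloud projects add-iam-policy-binding <project-id> \
  --member=serviceAccount:<project-id>@appspot.gserviceaccount.com \
  --role=roles/cloudsql.client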
Second, you don't need to run the proxy in the Docker container. When you specify it in the app.yaml, it allows you to access it on 172.17.0.1:<PORT>. (Although if you are using a container, I would highly suggest you try Cloud Run instead.)
Finally, according to the Metabase setup instructions, you need to provide environment variables to the container to specify which database you want it to use. These env vars are all in the format MB_DB_*.
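Putting the first two points together, a minimal app.yaml sketch without the in-container proxy could look like this (runtime: custom is what tells App Engine flex to build your Dockerfile; <sql_instance> is the instance connection name placeholder; the MB_DB_* variables are set in the Dockerfile below, though they could equally go under env_variables here):
runtime: custom
env: flex
manual_scaling:
  instances: 1
beta_settings:
  # exposes the Cloud SQL instance to the container on 172.17.0.1:5432
  cloud_sql_instances: <sql_instance>=tcp:5432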
Here is what a Dockerfile without the proxy might look like:
FROM gcr.io/google-appengine/openjdk
# grab the Metabase jar (same download as in the question's Dockerfile)
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
ENV MB_JETTY_PORT 8080
ENV MB_DB_TYPE postgres
ENV MB_DB_HOST 172.17.0.1
ENV MB_DB_PORT 5432
ENV MB_DB_USER <your-username>
ENV MB_DB_PASS <your-password>
ENV MB_DB_DBNAME <your-database>
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENTRYPOINT java -jar /metabase.jar
For bonus points, you might consider using the distroless container (gcr.io/distroless/java:11) as a base instead (especially if you switch to Cloud Run).
I have an issue where, inside the Docker container of my React app, my env variables are not working (I get undefined).
My Dockerfile:
FROM <my nginx image>
COPY build/. /usr/share/nginx/html
COPY config/nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080 80
My .env file (in the root of the project):
REACT_APP_VAR=HELLO
And in my code, I access that env variable through process.env.REACT_APP_VAR.
However, when I execute the command docker exec client -e inside my production Linux server, I do get all the env variables, including REACT_APP_VAR, PATH, HOSTNAME, etc.
Important to say, this issue only occurs in Docker (on the prod server); on my Windows development station it works fine (without Docker).
Also, I can't add ENV inside my Dockerfile, and I'd rather not use the Docker YAML (compose) files.
Is there a way to automatically run a list of Linux commands in the Docker container after deployment finishes, like the lifecycle hooks (available for Kubernetes) in the YAML file?
I do not want to have to SSH into the instance and run my commands.
I need to install ssh-client, and sometimes vim and other packages.
For those who are looking for a solution to this problem.
With App Engine and runtime: python or another default runtime in the app.yaml, there isn't much room for customization.
To be able to create your own build, you have to use runtime: custom and add a Dockerfile in the same directory (root).
This is what it looks like:
app.yaml:
Only the first line changes.
runtime: custom
# the PROJECT-DIRECTORY is the one with settings.py and wsgi.py
entrypoint: gunicorn -b :$PORT mysite.wsgi # specific to a GUnicorn HTTP server deployment
env: flex # for Google Cloud Flexible App Engine

# any environment variables you want to pass to your application.
# accessible through os.environ['VARIABLE_NAME']
env_variables:
  # the secret key used for the Django app (from PROJECT-DIRECTORY/settings.py)
  SECRET_KEY: 'lkfjop8Z8rXWbrtdVCwZ2fMWTDTCuETbvhaw3lhwqiepwsfghfhlrgldf'
  DEBUG: 'False' # always False for deployment
  DB_HOST: '/cloudsql/app-example:us-central1:example-postgres'
  DB_PORT: '5432' # PostgreSQL port
  DB_NAME: 'example-db'
  DB_USER: 'mysusername' # either 'postgres' (default) or one you created on the PostgreSQL instance page
  DB_PASSWORD: 'sgvdsgbgjhrhytriuuyokkuuywrtwerwednHUQ'
  STATIC_URL: 'https://storage.googleapis.com/example/static/' # this is the url that you sync static files to

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2

handlers:
- url: /static
  static_dir: static

beta_settings:
  cloud_sql_instances: app-example:us-central1:example-postgres

runtime_config:
  python_version: 3 # enter your Python version BASE ONLY here. Enter 2 for 2.7.9 or 3 for 3.6.4
Dockerfile:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3
# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install custom linux packages (python-qt4 for open-cv)
RUN apt-get update -y && apt-get install vim python-qt4 ssh-client git -y
# Add the application source code and install all dependencies into the virtualenv
ADD . /app
RUN pip install -r /app/requirements.txt
# add my ssh key for github
RUN mv /app/.ssh / && \
chmod 600 /.ssh/*
# Run a WSGI server to serve the application.
EXPOSE 8080
# gunicorn must be declared as a dependency in requirements.txt.
WORKDIR /app
# run migrations first, then hand the process over to the gunicorn server
CMD python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    exec gunicorn --bind :$PORT --workers 1 --threads 8 Blacks.wsgi:application --timeout 0 --preload
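With the app.yaml and Dockerfile in the project root, the deploy itself is just (assuming the gcloud SDK is already configured for your project):
gcloud app deploy app.yaml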
App Engine is a serverless solution. In the product description you may find:
Fully managed
A fully managed environment lets you focus on code while
App Engine manages infrastructure concerns.
This means that if you choose App Engine, you don't have to care about the server. By design, it's for those who do not want any access to the server, but want to focus on code, leaving all server maintenance to GCP. I think the main feature is automatic scaling of the app.
We do not know what you intend to do; you can review the app.yaml reference to find all the available features. The configuration differs depending on the language environment you want to use.
If you want access to the environment, you should use Kubernetes solutions or even Compute Engine.
I hope it will help somehow!
Another simple workaround would be to create a separate URL handler that runs your shell script, for example /migrate. After deployment, you can use curl to trigger that URL.
Please note that in this case anybody who goes looking for secret URLs on your backend may find it and trigger it as many times as they want. So if you need to ensure only trusted people can trigger it, you should either:
- come up with a more secret URL than just /migrate, or
- check permissions inside this view (but in that case it will be harder to call it via curl, because you'll also need to pass some auth data).
Example basic view (using Python + Django REST Framework):
from io import StringIO

from django.core.management import call_command
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.response import Response
from rest_framework.views import APIView


class MigrateView(APIView):
    permission_classes = ()
    renderer_classes = [StaticHTMLRenderer]

    def post(self, request, *args, **kwargs):
        out = StringIO()
        # --no-color because ANSI symbols (used for colors)
        # render incorrectly in browser/curl
        call_command('migrate', '--no-color', stdout=out)
        return Response(out.getvalue())
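Assuming you wire this view up to a /migrate URL in your urls.py, after each deployment you could trigger the migration with a plain curl call (the project URL is a placeholder):
curl -X POST https://<your-project>.appspot.com/migrate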
I have generated a Dockerfile for an ASP.NET Core API with a single-page application thanks to Visual Studio. After some research on the web, I fixed various issues with the SPA in this Dockerfile.
Finally, my remaining trouble is the connection to our database server.
When I try to connect, I get a
Microsoft.Data.SqlClient.SqlException : A network-related or instance-specific error occurred while establishing a connection to SQL Server.
It seems to happen because my container cannot access the server; after hours of Google searching, I only found solutions where SQL Server is hosted in a Docker image.
How can my Docker image of the web app access the entire company network so it can reach different servers? I use computer names and not IPs to match company requirements.
Thanks for any help.
Versions:
.NET Core API: 3.1
I'm using Docker for Windows
Docker uses Linux containers
Here is my Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq
WORKDIR /src
COPY ["Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj", "Company.Dtm.WebApi.AppWebApi/"]
COPY ["CompanyFramework/Company.Framework.WebApi/Company.Framework.WebApi.csproj", "CompanyFramework/Company.Framework.WebApi/"]
COPY ["CompanyFramework/Company.Framework.Model/Company.Framework.Model.csproj", "CompanyFramework/Company.Framework.Model/"]
COPY ["CompanyFramework/Company.Framework.Tools/Company.Framework.Tools.csproj", "CompanyFramework/Company.Framework.Tools/"]
COPY ["AppLib/Company.Dtm.Lib.AppLib/Company.Dtm.Lib.AppLib.csproj", "AppLib/Company.Dtm.Lib.AppLib/"]
RUN dotnet restore "Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj"
COPY . .
WORKDIR "/src/Company.Dtm.WebApi.AppWebApi"
RUN dotnet build "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Company.Dtm.WebApi.AppWebApi.dll"]
Here is my docker-compose file
version: '3'

services:
  webapp:
    build: .
    network_mode: "bridge"
    ports:
      - "8880:80"
I also had this issue, just trying to connect to my localhost development SQL Server.
What ended up working was to add the normal SQL Server ports to my Dockerfile:
EXPOSE 1433
EXPOSE 5000
# ...or whatever other ports you may be using.
Then set up a firewall Inbound Rule to allow those ports.
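For the firewall part, a sketch of such an inbound rule on Windows (run from an elevated prompt; adjust or repeat for whichever ports you actually exposed):
netsh advfirewall firewall add rule name="SQL from Docker" dir=in action=allow protocol=TCP localport=1433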
You cannot use 'localhost', obviously, since 'localhost' is the host the container is running in, but I did find, with Windows at least, that I can simply use my dev machine name as the server, so it seems that DNS works across the NAT. I would think you should be able to access any network resource at that point, but I would say your firewall[s] might be a place to start. Your Docker container acts like an external network and is therefore generally un-trusted.
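As an illustration (the machine name, database, and credentials are placeholders), the connection string used from inside the container would then look something like:
Server=MY-DEV-MACHINE,1433;Database=MyDb;User Id=sa;Password=<your-password>;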
I also found that I did not have a 'bridge' network. Maybe you get that with the Linux container.
My docker network ls command revealed a "Default Switch" network, but no "bridge". Because this is Docker for Windows, there is no 'host' option.
That was all there was to it for me. I see a lot of other posts talking about a lot of other things, but honestly, just opening up the firewall is what did the trick. Good luck!
You need to add another service for your db in your compose file.
Something like this:
version: "3"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Make sure to replace the password in the SA_PASSWORD environment variable under db.
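Since both services share the compose network, the API container can then reach SQL Server by its service name rather than a hostname or IP; a connection string along these lines (the database name is a placeholder) should work:
Server=db;Database=MyDb;User Id=sa;Password=Your_password123;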
I'm trying to dockerise a react app. I'm using the following Dockerfile to achieve this.
# base image
FROM node:9.4
# set working directory
WORKDIR /usr/src/app
# install and cache app dependencies
COPY package*.json ./
ADD package.json /usr/src/app/package.json
RUN npm install
# Bundle app source
COPY . .
# Specify port
EXPOSE 8081
# start app
CMD ["npm", "start"]
Also, in my package.json the start script is defined as
"scripts": {
"start": "webpack-dev-server --mode development --open",
....
}
I build the image as:
docker build . -t myimage
And I finally run the image, as
docker run IMAGE_ID
This command then runs the image; however, when I go to localhost:8080 or localhost:8081 I don't see anything.
However, when I go into the Docker container for myimage and do curl -X GET http://localhost:8080, I'm able to access my React app.
I also deployed this on Google Kubernetes Engine and exposed a load-balancer service on it. However, the same thing happened: I cannot access the React app on the exposed endpoint, but when I logged into the container and made a curl request, I got back the index.html.
So, how do I run this Docker image so that I can access the application through a browser?
When you use EXPOSE in Dockerfile it simply states that the service is listening on the specified port (in your case 8081), but it does not actually create any port forwarding.
To actually forward traffic from the host machine to the service, you must use the -p flag to specify a port mapping.
For example:
docker run -d -p 80:8080 myimage would start a container and forward requests made to localhost:80 to the container's port 8080.
More about EXPOSE here https://docs.docker.com/engine/reference/builder/#expose
UPDATE
So usually when you are developing Node applications locally and run webpack-dev-server, it will listen on 127.0.0.1, which is fine since you intend to visit the site from the same machine it is hosted on. But since in Docker the container can be thought of as a separate instance, you need to be able to access it from the "outside" world, which means it is necessary to reconfigure the dev-server to listen on 0.0.0.0 (which basically means all IP addresses assigned to the "instance").
So by updating the dev-server config to listen on 0.0.0.0, you should be able to visit your application from your host machine.
Link to documentation: https://webpack.js.org/configuration/dev-server/#devserverhost
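Applied to this question's setup (port 8081 is the one the Dockerfile exposes), that could look roughly like this, together with publishing the port when running the container:
"start": "webpack-dev-server --mode development --host 0.0.0.0 --port 8081 --open"
docker run -p 8081:8081 myimage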
I have a mongodb database running on the default port 27017 in a docker container.
Is there a way to connect to the database with the mongodb compass GUI running natively on my ubuntu OS?
docker run -p 27018:27017 and then connect from Compass on your host with port 27018. I don't see a reason to expose all ports.
Replace localhost with your IP address in the connection string, eg, my IP address is 10.1.2.123 then I have mongodb://10.1.2.123:27017?readPreference=primary&appname=MongoDB%20Compass&ssl=false.
Saw this 👆 here: https://nickjanetakis.com/blog/docker-tip-35-connect-to-a-database-running-on-your-docker-host
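If you're not sure which address to use, on Ubuntu something like this prints the host's IP addresses:
hostname -I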
With docker-compose you just have to map port 27017. When you hit "Connect" in the GUI, it will auto-detect this connection.
version: "3"
services:
mongo-database:
container_name: mongo-database
image: mongo:4
ports:
- 27017:27017
Yes, we can.
Steps:
Pull/restart the mongodb docker container.
Enter the bash shell:
docker exec -it mongodb bash
Now open MongoDB Compass Community and, with the same default connection, just click connect; the docker container's mongodb will be connected to Compass Community.
Use docker inspect or Docker Desktop to find the exposed port.
docker inspect your_container_name
and find this section
"Ports": {
"27017/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "27012"
}
]
},
and then connect using this url string
mongodb://localhost:27012/?readPreference=primary&appname=MongoDB%20Compass&ssl=false
Do not pass in the replica set name if you are using one, otherwise the connection will fail. This applies if you have deployed a replica set instead of turning your standalone into a replica set.
Leave a comment if you don't know how to deploy a replica set and I can leave a docker-compose file to set up and deploy one.
I could connect Compass on Windows to Docker using these flags at the end:
mongodb://user:password@localhost:27017/dbname?authSource=dbname&readPreference=primary&gssapiServiceName=mongodb&appname=MongoDB%20Compass&ssl=false
Just open Compass and, inside connect, add the credentials if you have used envs like
ME_CONFIG_MONGODB_ADMINUSERNAME=admin
and hit connect. No additional settings required.
Or you can use mongo-express, which is a web-based UI tool for mongodb.
Run the command sudo docker ps;
it will show the docker containers you have, where you can find the port number of mongodb.
Then run the command sudo mongodb-compass;
it will open MongoDB Compass.
If you are connecting locally, the general hostname is: localhost
Then just put in the port number and click on connect.
I was also having trouble connecting to my local MongoDB using Compass, but discovered it was an SSL problem. By default, Compass sets SSL to "System CA". However, if you try that with your dockerized Mongo, your Mongo logs will show you this error:
Error receiving request from client: SSLHandshakeFailed: SSL handshake received but server is started without SSL support. Ending connection from 172.17.0.1:45902 (connection id: 12)
end connection 172.17.0.1:45902 (0 connections now open)
Therefore, to connect, I had to click "Fill in connection fields individually" then set the SSL field to "None". For reference, I ran Mongo using this:
docker run -p 27017:27017 --name some-mongo mongo:4.0
No authentication necessary.
This solution worked for me.
Run the docker container using:
docker run -d --name mongo-db -v ~/mongo/data:/data/db -p 27017:27017 mongo
-v is for mapping the local volume to the docker writable space. This will keep the data even when the container is destroyed.
MongoDB connection string Compass GUI:
mongodb://localhost:27017
Run your mongo container with the publish-all-ports option (docker run -P). Then you should be able to inspect the port exposed to the host via docker ps -a and connect to it from Compass (just use Hostname: localhost and Port: <exposed port>).
Use the --net=host option so the Docker container shares its network namespace with the host machine.
docker run -it --net=host -v mongo_volume:/data/db --name mongo_example4 -d mongo
So now we can connect the mongodb with compass using mongodb://localhost:27017
Alternatively, to connect, simply get the Docker container's IP address using the docker inspect command and use that IP address instead of localhost:
mongodb://172.17.0.2:27017