Set up Jaeger with OpenTelemetry - telemetry

I have built a sample app to understand traces and spans using OpenTelemetry, and I want to see them in the Jaeger UI. How do I set up Jaeger with my application, which uses OpenTelemetry for tracing?

To start a Jaeger container:
docker run --rm --name jaeger -d -p 16686:16686 -p 6831:6831/udp jaegertracing/all-in-one
Then you should be able to access the Jaeger UI at http://localhost:16686
Once you have Jaeger up and running, you need to configure a Jaeger exporter to forward spans to it. How to do that depends on the language you use.
Here is the straightforward documentation for doing so in Python.
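For example, with the Python SDK the exporter setup looks roughly like this (a minimal sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-jaeger packages are installed; the service and span names are placeholders):
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.jaeger.thrift import JaegerExporter

# Point the exporter at the Jaeger agent port published by the container above
jaeger_exporter = JaegerExporter(agent_host_name="localhost", agent_port=6831)

provider = TracerProvider(resource=Resource.create({SERVICE_NAME: "sample-app"}))
provider.add_span_processor(BatchSpanProcessor(jaeger_exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example-span"):
    pass  # do some work here
After running this once, the service and its spans should show up in the Jaeger UI at http://localhost:16686.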

Related

Add Flink Job Jar in Docker Setup and run Job via Flink Rest API

We're running Flink in Cluster Session mode and automatically add Jars in the Dockerfile:
ADD pipeline-fat.jar /opt/flink/usrlib/pipeline-fat.jar
So that we can run this Jar via the Flink Rest API without the need to upload the Jar in advance:
POST http://localhost:8081/jars/:jarid/run
But the "static" Jar is now shown, to get the :jarid:
GET http://localhost:8081/jars
So my question is:
Is it possible to run a userlib jar using the Flink Rest API?
Or can you only reference such jars via the CLI (flink run -d -c ${JOB_CLASS_NAME} /job.jar)
and the standalone-job --job-classname com.job.ClassName mode?
My alternative approach (workaround) would be to upload the jar in the Docker entrypoint.sh of the jobmanager container:
curl -X POST http://localhost:8084/jars/upload \
-H "Expect:" \
-F "jarfile=#./pipeline-fat.jar"
I believe it is unfortunately not currently possible to start a Flink cluster in session mode with a jar pre-baked into the Docker image and then start the job using the REST API commands (as you showed).
However, your workaround approach seems like a good idea to me. I would be curious to see if it works for you in practice.
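For reference, a rough sketch of what such an entrypoint could do once the REST API is reachable (the jar path, the use of jq, and the default REST port 8081 are assumptions, not part of the original setup):
# Upload the baked-in jar, extract the generated jar id from the response, then run it
JAR_ID=$(curl -s -X POST -H "Expect:" \
  -F "jarfile=@/opt/flink/usrlib/pipeline-fat.jar" \
  http://localhost:8081/jars/upload | jq -r '.filename' | xargs basename)
curl -s -X POST "http://localhost:8081/jars/${JAR_ID}/run?entry-class=${JOB_CLASS_NAME}"
The uploaded jar id is the basename of the filename field returned by /jars/upload.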
I managed to run a userlib jar using the command line interface.
I edited docker compose to run a custom docker-entrypoint.sh.
I added the following to the original docker-entrypoint.sh:
run_user_jars() {
  echo "Starting user jars"
  exec ./bin/flink run /opt/flink/usrlib/my-job-0.1.jar &
}
run_user_jars
...
And edited the original entrypoint for the jobmanager in the docker-compose.yml file:
entrypoint: ["bash", "/opt/flink/usrlib/custom-docker-entrypoint.sh"]

Mongodb running on Docker is wiping the collection after restart

I have to build a small application that reads data from MongoDB running on Docker and uses it for further processing.
The problem is that after I close Docker, the local instance of the database gets deleted as well. How can I stop that?
The MONGODB_URI is mongodb://localhost:27017. What attributes should I add to the docker command to avoid this? Should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So, what exactly should the docker run command be to avoid it? Is it one of these?
Commands:
docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, which data directory should I use to save the data permanently?
Docker containers do not keep their data once they are removed. To store data you should mount a named volume, a folder, or a file into the container.
In MongoDB's case, try:
docker run --rm -ti -v mongo_data:/data/db mongo bash
Here mongo_data is a named volume, a special Docker-managed entity that can be mounted as a folder inside the container, even in different containers at the same time.
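Applied to the mongo-on-docker command from the question, a minimal sketch (the volume name mongo_data is arbitrary; /data/db is MongoDB's default data directory):
docker volume create mongo_data
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
The data now lives in the mongo_data volume and survives removing or restarting the container.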
This is nothing new:
How to set docker mongo data volume

Docker - SQL service wont run when cloned

I'm trying to add a volume to a Docker container, but when I commit the container and run the new image with the volume, none of the SQL services run on this copy. Why would that be?
https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-powershell
I am adding the initial one as above and it works. All fine, services running. I can connect to it and run SQL, but I need to share a drive.
It seems I can't add one directly to an existing instance?
docker commit 5a8f89adeead newimagename
docker run -ti -v "C:/dir1":/dir1 newimagename /bin/bash
I do the above to clone it and add a volume. It works, but the SQL services just aren't running on this new one. I'll accept it either way; I just want SQL running and a share in there.
Can anyone help?
Managed it:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Pa55word1" `
-v C:/db:/dir1 `
-p 1433:1433 --name sql3 `
-d mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04
Had an issue with having no drive or no services, but this has done it. (Most likely the cloned copy had no SQL services because running the committed image with /bin/bash overrides the default command that starts sqlservr; launching a fresh container from the official image with the volume attached avoids that.)
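To confirm SQL Server is actually up inside the new container, something along these lines should work (the sqlcmd path is the one used in the Microsoft quickstart linked above; container name and password are taken from the command in this answer):
docker exec -it sql3 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Pa55word1" -Q "SELECT @@VERSION"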

Docker Keep Exiting (Deploying MS SQL on MAC osx)

I'm trying to deploy an MS SQL Server on my Mac. There are several alternatives for that.
Here, I'm using Docker. I checked the MS SQL website and executed this command:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
  -p 1433:1433 -d microsoft/mssql-server-linux
However, the container keeps stopping by itself.
Did I miss something here?
The Docker version I'm using:
Version 1.13.0 (15072)
I had a similar problem. I finally realized the issue was that I was using a dummy password for local dev that didn't adhere to SQL Server's password policy. I used a more complex password and that fixed it up.
I faced this issue recently on Windows. Changing the single quotes (') to double quotes (") fixed it.
If you are using macOS Ventura and/or a Mac with an M1/M2 (Apple Silicon) chip, you will need to enable Rosetta emulation to get this to work.
Go to Docker > Settings > Features in development, enable the option 'Use Rosetta for x86/amd64 emulation on Apple Silicon', and restart Docker.
Also, make sure the password obeys the password policy set by Microsoft and create a strong password.
Delete the container and re-run the docker run command. An example is below:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Ithink%Th5r5f0re$Iam' --name sql_server --platform linux/amd64 -p 1433:1433 mcr.microsoft.com/mssql/server:2022-latest
This should let you run the container without the Exited (1) error.
This link explains the details:
https://devblogs.microsoft.com/azure-sql/development-with-sql-in-containers-on-macos/
When running this on a Mac you need to bump up Docker for Mac's RAM. SQL Server needs a minimum of 4GB of RAM; Docker for Mac by default only allocates about 1-2GB for all containers.
To increase Docker for Mac's RAM:
Open Docker for Mac's preferences
Click "Resources"
Move the RAM slider up, in my case I moved it to 6GB (4GB for SQL Server and 2GB for everything else)
You also need to allocate 4GB to the container when starting it up:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' --memory=4096M -p 1433:1433 -d microsoft/mssql-server-linux
To confirm your memory limits were applied, run docker stats. The MEM USAGE / LIMIT column for the SQL Server container should show output similar to: 536.7MiB / 4GiB
The other thing to watch out for on Mac is that you cannot mount host volumes into the SQL Server container; this will cause issues.

Database migrations in docker swarm mode

I have an application that consists of simple Node app and Mongo db. I wonder, how could I run database migrations in docker swarm mode?
Without swarm mode, I run migrations by first stopping the old version of the application, then running a one-off migration command with the new version, and finally starting the new version of the app:
# Setup is roughly the following
$ docker network create appnet
$ docker run -d --name db --net appnet db:1
$ docker run -d --name app --net appnet -p 80:80 app:1
# Update process
$ docker stop app && docker rm app
$ docker run --rm --net appnet app:2 npm run migrate
$ docker run -d --name app --net appnet -p 80:80 app:2
Now I'm testing the setup in docker swarm mode so that I can easily scale the app. The problem is that in swarm mode one can't start standalone containers attached to the swarm network, and thus I can't reach the db to run migrations:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6jtmtihmrcjl appnet overlay swarm
# Trying to replicate the manual migration process in swarm mode
$ docker service scale app=0
$ docker run --rm --net appnet app:2 npm run migrate
docker: Error response from daemon: swarm-scoped network (appnet) is not compatible with `docker create` or `docker run`. This network can only be used by a docker service.
I don't want to run the migration command during app startup either, as there might be several instances launching at once and that could potentially corrupt the database. Automatic migrations are scary, so I want to avoid them at all costs.
Do you have any idea how to implement manual migration step in docker swarm mode?
Edit
I found a dirty hack that allows me to replicate the original workflow. The idea is to create a new service with a custom command and remove it when one of its tasks has finished. This is far from pleasant to use; better alternatives are more than welcome!
$ docker service scale app=0
$ docker service create --name app-migrator --network appnet app:2 npm run migrate
# Check when the first app-migrator task is finished and check its output
$ docker service ps app-migrator
$ docker logs <container id from app-migrator>
$ docker service rm app-migrator
# Ready to update the app
$ docker service update --image app:2 --replicas 2 app
I believe you can fix this problem by making your overlay network, appnet, attachable. This can be accomplished with the following command:
docker network create --driver overlay --attachable appnet
This should fix the swarm-scoped network error and allow you to run migrations.
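With appnet recreated as attachable (and the existing services reconnected to it), the one-off migration flow from the question should work again roughly as-is:
docker service scale app=0
docker run --rm --net appnet app:2 npm run migrate
docker service update --image app:2 --replicas 2 app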
This is indeed a tricky situation, though I think running the migration during startup might be the final piece of the puzzle.
The way I do it right now (though not very elegant, it works) is using a message queue (I'm using Redis). On app startup, the app sends a message to the queue, informing it that the migration task needs to be run. At the other end of the queue, I have a listener app that processes the queue and runs the migration task. The migration task only runs once, since there is only a single instance of the listener and it processes the queue sequentially. So essentially I'm just using the queue and the listener app to make sure that the migration task runs only once.
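As an illustration only (the actual app is Node; the queue name, Redis host, and the redis-py client used here are assumptions), the listener side can be a single blocking consumer, which is what keeps migrations from ever running concurrently:
import subprocess
import redis  # assumed: the redis-py client

r = redis.Redis(host="redis", port=6379)

while True:
    # BLPOP blocks until a "run the migration" message arrives; since this is the
    # only consumer and it works sequentially, migrations never overlap.
    _queue, message = r.blpop("migration-tasks")
    print(f"Received migration request: {message!r}")
    # npm run migrate is the same one-off command used in the question
    subprocess.run(["npm", "run", "migrate"], check=True)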
