I am trying to convert GeoJSON files into vector tiles using mapbox/tippecanoe. I have built the tippecanoe image as mentioned in the documentation, but when I run the command below, nothing happens.
docker run -it --rm \
-v /tiledata:/data \
tippecanoe:latest \
tippecanoe --output=/data/output.mbtiles /data/example.geojson
It shows me messages like:
For layer 0, using name "example"
/data/example.geojson: No such file or directory
0 features, 10 bytes of geometry, 4 bytes of separate metadata, 0 bytes of string pool.
Did not read any valid geometries
The example.geojson file exists in my data folder, but tippecanoe is still not able to find it.
I am running this on an Ubuntu 14 machine.
Can anybody help me out with this? Thanks in advance.
Ran into this question while trying to convert a GeoJSON file with an existing Docker image from Klokantech.
docker run --rm -it -v $(pwd):/osm klokantech/tippecanoe tippecanoe /osm/netherlands-rcn.geojson -o /osm/netherlands-rcn.mbtiles
Using this image, the conversion went really smoothly.
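As for the original command, the error suggests that /tiledata on the host does not actually contain example.geojson: the -v /tiledata:/data flag exposes the host directory /tiledata as /data inside the container, so the file has to live there on the host. As a quick check, here is a sketch that assumes the tippecanoe:latest tag from the question; it lists the mounted directory from inside the container:
docker run -it --rm -v /tiledata:/data tippecanoe:latest ls -l /data
If example.geojson does not show up in that listing, move it into /tiledata on the host, or mount the directory that actually contains it.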
I'm running a localhost database through Docker on a Mac. I have an assignment that requires me to hand in the .bak file along with the program I wrote. I'm using Azure Data Studio as my DBMS. I can't find the .bak file anywhere, and although I've tried to Google the matter, it doesn't seem to be a common issue for other Mac users.
How do I access these files from Finder? Or is there another way to do this?
You can access a Docker container's file system from the macOS host by following this tutorial.
1. To access the file system of a particular container, first get the container ID using the inspect command on the Docker host:
docker inspect --format '{{.Id}}' <container name>
2. Use the Alpine Docker image and mount your host file system into the container:
docker run --rm -it -v /:/vm-root alpine:edge sh
We need the ID of this container, so you can combine steps 1 and 2 as follows:
docker run --rm -it -e CONTAINER_ID=$(docker inspect --format '{{.Id}}' <container name>) -v /:/vm-root alpine:edge sh
Now we have the CONTAINER_ID set as an environment variable in the alpine container.
3. Once you are in the Alpine container, go to the following directory:
cd /vm-root/var/lib/docker
Inside this directory, you will be able to access all the familiar files that you are used to when administering Docker.
4. Now we need to find the mount-id of the selected container in order to access its file system directories. We will use the CONTAINER_ID environment variable obtained in step 2. I have AUFS as my file system driver for this example. To do that, use the following command:
MOUNT_ID=$(cat /vm-root/var/lib/docker/image/aufs/layerdb/mounts/$CONTAINER_ID/mount-id)
5. The above step gives you the mount-id. Now you can access the container's file system under the mnt directory using that mount-id:
ls -ltr /vm-root/var/lib/docker/aufs/mnt/$MOUNT_ID
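The paths above assume the AUFS storage driver. Newer Docker installs usually use overlay2 instead; check docker info to see which driver you have. Assuming overlay2, a sketch of the equivalent steps would be (the container's merged root then lives under the overlay2 directory):
MOUNT_ID=$(cat /vm-root/var/lib/docker/image/overlay2/layerdb/mounts/$CONTAINER_ID/mount-id)
ls -ltr /vm-root/var/lib/docker/overlay2/$MOUNT_ID/merged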
I am hosting a Redis database with Docker, and now I want to know if it is possible to load the database schema from a config file that I keep in my repository.
Thanks for your help
If the image you use has the ability to load an external file, then it is just a matter of providing the file to the container when you create it.
The official redis image allows you to supply your own config file as well as mount a file storage area.
What you need to do is:
1. Create a folder in your repository, e.g. redis
2. Create configuration and data folders under it, e.g. redis/config and redis/data
3. Create redis.conf in redis/config to enable persistence, as follows:
save 60 1
dbfilename initial_file.rdb
dir /data
4. Start the Redis container as follows:
$ docker run -v <repo folder>/redis/config:/usr/local/etc/redis -v <repo folder>/redis/data:/data -d --name some-redis redis redis-server /usr/local/etc/redis/redis.conf
5. Enter the container and initialize the database:
$ docker exec -ti some-redis /bin/bash
root@4ec2d5dc5082:/data# redis-cli
127.0.0.1:6379> set test 1
OK
127.0.0.1:6379> save
OK
You can also initialize the database from a file: copy the file into the container and load it with the CLI tools, as in the sketch below.
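For example, here is a minimal sketch assuming a hypothetical commands.txt in your repository with one Redis command per line (redis-cli executes commands read from stdin when it is not attached to a terminal):
$ cat commands.txt
set test 1
set other 2
$ docker exec -i some-redis redis-cli < commands.txt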
6. Stop the container and clean up:
$ docker stop some-redis
$ docker rm some-redis
7. Add the new files to your repository and commit the changes.
After this you will have the database files in your repository. Whenever you want to use them, you can start a container as in step 4.
How do you take a screenshot via ADB for Android Things? I have tried:
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png
adb shell rm /sdcard/screen.png
and
adb shell screencap -p | perl -pe 's/\x0D\x0A/\x0A/g' > screen.png
I couldn't make screencap work in the Android Things Developer Preview; the command results in a 0-byte file.
That said, I recommend one of two options: either use the framebuffer, or record a video (screenrecord seems to work) and convert it to an image later with a proper tool. I'll walk through the first option; the steps are:
Pull the framebuffer to the host machine. Note that you need to start adbd as root in order to pass a permission check:
adb root
adb pull /dev/graphics/fb0 screenshot
Convert the raw binary to an image with the tool you prefer. I'm using ffmpeg. The command below might not work for you due to a different screen resolution or pixel format; if so, make the proper changes.
ffmpeg -f rawvideo -pix_fmt rgb565 -s 800x480 -i screenshot screenshot.png
It seems that, because of the old, limited OpenGL version in Android Things (described by Tatsuhiko Arai here), there is no way to get a screenshot via ADB. However, you can record a video (e.g. from Android Studio, or via ADB commands) and then grab a frame from it, for example via ffmpeg:
ffmpeg -i device-2017-01-23-193539.mp4 -r 1 screen-%04d.png
where device-2017-01-23-193539.mp4 is the name of the file recorded via Android Studio.
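If you record via ADB instead of Android Studio, a minimal sketch of the flow (assuming screenrecord works on your preview build, as noted above) would be:
adb shell screenrecord --time-limit 5 /sdcard/screen.mp4
adb pull /sdcard/screen.mp4
adb shell rm /sdcard/screen.mp4
ffmpeg -i screen.mp4 -frames:v 1 screen.png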
I've tried exactly this code with a slight change (shown below, though it shouldn't matter), and it works well. The image is now in my platform-tools directory.
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png screen.png
adb shell rm /sdcard/screen.png
I have a slight problem: I've been trying to get persistent data with ArangoDB and Docker. I passed the argument to Docker that attaches a host directory to a path within the container. It all works up to this point, but I'm stuck on the enigma of where this directory actually is.
1) This is a sample command which resembles mine:
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v myhostfolder:/var/lib/arangodb3 arangodb
So, the problem is I can't find myhostfolder anywhere on my host machine, which runs Docker. The data within it is persistent and I can access it, but only through the Docker container. I think the data is somewhere on my host machine; I've tried passing a couple of these "relative" folders, and they all keep persistent data, so I doubt the data is in the actual Docker container.
2) If I do something like this (providing an absolute path)
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v /home/myhostfolder:/var/lib/arangodb3 arangodb
then I have no issues with locating the /home/myhostfolder.
So my question is: where on my OS X 10.12 machine is myhostfolder from example 1)?
Thanks for your help!
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
Refer https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume.
In your case, as myhostfolder is a name, Docker creates a named volume. Execute the command below, which lists the volumes; a volume named myhostfolder will be shown.
docker volume ls
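To see where Docker keeps that volume's data, a quick check using the volume name from the question is:
docker volume inspect myhostfolder
The Mountpoint field in the output points under /var/lib/docker/volumes/. Note that on OS X that path typically lives inside the Docker virtual machine, not directly on your Mac's file system, which is why you can't browse to it from the host.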
I need to import data files from a local folder, C:/users/saad/bdd, into a Docker container (cassandra). I couldn't find how to do this using Docker commands.
I'm working on Windows 7.
Use docker cp.
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
If you don't know the container's name, you can find it using:
docker ps --format "{{.Names}}"
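For the folder from the question, assuming the container is named cassandra and picking /tmp/bdd as an arbitrary target directory inside it, this might look like the following (docker cp copies directories recursively):
docker cp C:\users\saad\bdd cassandra:/tmp/bdd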
When using Docker Toolbox, there seems to be another issue related to absolute paths.
I am communicating with the containers using the "Docker Quickstart Terminal", which is essentially a MINGW64 environment.
If I try to copy a file with an absolute path to a container, I receive this error message:
$ docker cp /d/Temp/my-super-file.txt container-name:/tmp/
copying between containers is not supported
If I use a relative path, it simply works:
$ cd /d/
$ docker cp Temp/my-super-file.txt container-name:/tmp/
P.S.: I am posting this as an answer because I don't have enough reputation to comment.
Simple way:
From Docker container to local machine:
$ docker cp containerId:/sourceFilePath/someFile.txt C:/localMachineDestinationFolder
From local machine to Docker container:
$ docker cp C:/localMachineSourceFolder/someFile.txt containerId:/containerDestinationFolder
It is not as straightforward when using Docker Toolbox. Because Docker Toolbox only has access to the C:\Users\ folder and there is an Oracle VirtualBox VM in between, when you do get to copy the folder, it is not copied directly to the container but instead to a mounted volume handled by the Oracle VM, like so:
/mnt/sda1/var/lib/docker/volumes/19b65e5d9f607607441818d3923e5133c9a96cc91206be1239059400fa317611/_data
How I got around this was just editing my Dockerfile:
FROM cassandra:latest
ADD cassandra.yml /etc/cassandra/
ADD import.csv /var/lib/cassandra/
EXPOSE 9042
And building it:
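Here is a minimal sketch of the build and run, with the image name my-cassandra and the port mapping as assumptions, and assuming cassandra.yml and import.csv sit next to the Dockerfile in the build context:
docker build -t my-cassandra .
docker run -d --name some-cassandra -p 9042:9042 my-cassandra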
If you are using Docker Toolbox on Windows, use the following syntax:
docker cp /C/Users/Saad/bdd-restaurants cassandra:/var/lib/docker/containers
This command will help you copy files from the host machine to a Docker container:
docker cp c:\abc.doc <containerid>:C:\inetpub\wwwroot\abc.doc
If you are trying to copy a file from Windows to an EC2 instance, use the following in cmd (with PuTTY installed):
pscp -i "D:\path_to_ppk_key" c:\file_name ubuntu@**.***.**.*:/home/ubuntu/file
Then you can copy it into Docker on EC2 using:
docker cp /home/ubuntu/file_name Docker_name:/home/
For those who are using WSL (Windows Subsystem for Linux), Docker, and Dev Containers from VS Code (Visual Studio Code), I was able to make this work by using the WSL command line:
docker cp "/mnt/<drive letter>/source/My First Copy Command" <container id>:/workspace/destination/path
I also wrote it up in more detail.
You can also use a volume to mount the files into the container when you run it, for example with the official cassandra image:
docker run -v /users/saad/bdd:/myfiles/tmp cassandra
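A fuller sketch, adding the detached flag and the container name some-cassandra as assumptions so you can verify the mount afterwards:
docker run -d --name some-cassandra -v /users/saad/bdd:/myfiles/tmp cassandra
docker exec some-cassandra ls /myfiles/tmp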