I want to make a clone/duplicate of a database I have in ArangoDB. This https://stackoverflow.com/a/27827457 is one way I saw to do it, but it doesn't work for me because I can't run arangodump or any of the other Arango commands (like arangosh, arangorestore, etc.).
Also, why can't I run arangodump? This answer https://stackoverflow.com/a/63074313 says to "Open terminal and use cd to go to the directory in which arangoimport.exe is stored", but I can't find arangoimport.exe anywhere.
I looked on the ArangoDB website already, but I couldn't find any info.
If you don't have access to arangodump and arangorestore on the server, the easiest way to invoke them is via Docker, pointing them at your server with the --server.endpoint option. You'll need to map a volume/directory into the container to preserve the dumped data so you can restore it in another container, something like this:
#dump data to /tmp/dump at your host
docker run -it --rm -v /tmp/dump:/dump arangodb/arangodb:3.7.6 arangodump --server.endpoint http+tcp://192.168.1.2:8529
#restore data from /tmp/dump at your host
docker run -it --rm -v /tmp/dump:/dump arangodb/arangodb:3.7.6 arangorestore --server.endpoint http+tcp://192.168.1.2:8529
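Since the goal here is cloning one database into another, a sketch with explicit options might look like this (the database names mydb and mydb_copy, the username, and the endpoint are placeholders; --output-directory/--input-directory point at the mounted volume and --create-database lets arangorestore create the copy):
#dump only the source database "mydb" into the mounted directory
docker run -it --rm -v /tmp/dump:/dump arangodb/arangodb:3.7.6 arangodump --server.endpoint http+tcp://192.168.1.2:8529 --server.username root --server.database mydb --output-directory /dump
#restore it under the new name "mydb_copy" on the same server
docker run -it --rm -v /tmp/dump:/dump arangodb/arangodb:3.7.6 arangorestore --server.endpoint http+tcp://192.168.1.2:8529 --server.username root --server.database mydb_copy --create-database true --input-directory /dump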
Documentation of all available options, including examples, is here for arangodump and here for arangorestore.
Another option is to write your own implementation of dump and restore using the ArangoDB REST APIs, but that's a hefty and error-prone task compared to installing Docker and running the provided dump and restore tools.
I am using the cloned dspace 6-x branch and installed it via docker. Can someone help me with backing up my local database (communities, collections, items) to a remote database?
According to the documentation we need to use the command:
dspace packager -s -t AIP -e eperson -p parent-handle file-path
But it returns an error: dspace is not a command
Could anyone help me transfer my local database to my remote repo?
Thanks!
Moving publications to a new repository will be a more substantial undertaking!
But your immediate problem seems to be just that you are either not in the right container or not in the right directory when executing the dspace command, so it is "not found". Make sure to execute dspace in the dspace container and specify the right/complete path. The dspace command is located in
/path/to/your/dspace-deployment-directory/bin.
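For example, assuming the container is named dspace and DSpace is deployed under /dspace (both are guesses; adjust to your setup), something like this should work:
docker exec -it dspace /dspace/bin/dspace packager -s -t AIP -e eperson -p parent-handle file-path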
I installed oracle db version 12c in my docker environment.
I used the following command:
docker run -d --name oracle -p 8080:8080 -p 1521:1521 quay.io/maksymbilenko/oracle-12c
I connected to the DB and everything went well but I wanted to enable unified audit.
To do that, you first have to shut down the database, and all the instructions I found say to use sqlplus as follows:
sqlplus / as sysoper
SQL> shutdown immediate
SQL> exit
I connected successfully to the container using the following command:
docker exec -it oracle "bash"
and then I ran the sqlplus command and I received "command not found"
[root@f30cc670f85f /]# sqlplus / as sysoper
bash: sqlplus: command not found
Am I doing it wrong?
What should I do in order to have sqlplus on my oracle DB?
I looked for it and didn't find anything that helped me.
I have a Mac, if it's relevant.
I think that Docker image is just the database and enough of the OS to run the database. I don't think it includes client software such as SQL*Plus.
You need to have SQL*Plus installed on your Mac. If you haven't already, download the Oracle Instant Client for macOS, including the SQL*Plus package. Or why not treat yourself and install the new-fangled SQLcl tool? It is easier to install, has all the SQL*Plus capabilities, and adds a whole bunch more features. Find it here.
Whatever client you choose, once it's installed on your Mac you run it like any other app: when prompted for connection you give the string Maksym provides:
system/oracle@//localhost:1521/xe
If you need to connect as sys that would look like this:
sys/oracle@//localhost:1521/xe as sysdba
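For example, once the Instant Client (with its SQL*Plus package) or SQLcl is on your PATH, connecting from the Mac could look like this (password and service name are the image defaults from above; adjust if you changed them):
# SQL*Plus from the Instant Client
sqlplus system/oracle@//localhost:1521/xe
# or the same connection with SQLcl, whose executable is named "sql"
sql system/oracle@//localhost:1521/xe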
Sourcing the .bashrc should work to connect to sqlplus as sysdba.
docker-compose exec db bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
With this, you enter the container:
docker exec -it oracle /bin/bash
after that, you can use:
sqlplus sys as sysdba
When using the Docker image store/oracle/database-enterprise:12.2.0.1-slim, the sqlplus and sqlldr tools are only available after the container has started.
You can't do the following in a Dockerfile:
RUN sqlplus sys/password AS SYSDBA @create_database.sql
The container images can be configured to run scripts after setup and on startup. Currently sh and sql extensions are supported.
In your Dockerfile, copy the SQL script into the setup directory:
COPY create_database.sql /opt/oracle/scripts/setup/01_create_database.sql
The database will be created on first startup of the container.
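Putting it together, a minimal Dockerfile sketch for this approach (the script name is just an example) could be:
FROM store/oracle/database-enterprise:12.2.0.1-slim
# runs once, after the database has been created on first startup
COPY create_database.sql /opt/oracle/scripts/setup/01_create_database.sql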
I don't have any experience with docker, but it looks for all the world like you are getting to a bash environment, so there we are on solid ground. The returned error ("bash: sqlplus: command not found") simply means that the executable (sqlplus) was not found in any directory listed in your PATH environment variable, as it exists within your shell environment. You actually need to set three variables: ORACLE_SID needs to be set to the value of your database name. ORACLE_HOME needs to be set to the value of the directory where your oracle binaries are installed. And PATH needs to have $ORACLE_HOME/bin added to it:
export PATH=$ORACLE_HOME/bin:$PATH
Obviously, since you are using the value of ORACLE_HOME in setting PATH, ORACLE_HOME needs to be set first.
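As a sketch, the three settings together would look like this (the ORACLE_HOME path below is a placeholder; find the real location in your image, e.g. with find / -name sqlplus 2>/dev/null):
export ORACLE_SID=xe                                   # your database name/SID
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/xe   # placeholder; use your actual install path
export PATH=$ORACLE_HOME/bin:$PATH                     # ORACLE_HOME must be set before PATH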
For Windows OS:
Type docker ps in command line to show running containers and check container id.
Type docker exec -it container_id //bin/bash
Login via sqlplus command
Or the simplest way
docker exec -it container_id bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
More info is here: https://hub.docker.com/u/cgmmathaw/content/sub-90f0c051-b514-4b7b-a0fe-fc9d6f2172fa
I've created a docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering if I am even able to ssh into the container, so I don't have to always use docker exec ..., and if so, how would I go about doing that?
To ssh into a container you need to fulfill the following:
An SSH server (OpenSSH) must be installed within the container and the ssh service must be running.
Port 22 must be published from the container when you run it; more info here > Publish ports on Docker.
The docker ps command should then display the mapped port 22.
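For example, if the image already has sshd installed and running (see the Dockerfile-based answer further down), publishing the SSH port at run time could look like this (image and container names are placeholders):
# publish host port 2222 to the container's port 22, alongside the SQL Server port
docker run -d -p 1433:1433 -p 2222:22 --name mssql-with-ssh my-mssql-ssh-image
# then, from the host
ssh root@localhost -p 2222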
Hope the above information helps you understand the situation.
If your container contains a database server, the normal way to interact with will be through an SQL client that connects to it; Google suggests SQL Server Management Studio and that connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
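For instance, if the container's SQL port was published with -p 1433:1433 and the mssql-tools sqlcmd client is installed on the host (an assumption), a sketch of connecting the normal way would be:
sqlcmd -S localhost,1433 -U SA -P 'YourPassword' -Q 'SELECT name FROM sys.databases'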
Basically, alter your Dockerfile to something like the following, which will install openssh-server, replace the prohibitive default config, and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside. All you have to do is start the service. You can't do RUN service ssh start because it won't work (Docker specifics; refer to the documentation). You have to use an ENTRYPOINT like the following:
#!/bin/bash
set -e
sh -c 'service ssh start'
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile, and remember to make it executable with chmod 755 entrypoint.sh. There's one thing to mention here: you still wouldn't be able to ssh into the container, because the default SSH server configuration doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created (inspect it with the link from the Dockerfile; there's nothing malicious there, only a change from prohibit-password to yes).
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into that environment.
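A usage sketch, assuming the image is built as mssql-ssh (a name I'm making up) and noting that, because the new ENTRYPOINT replaces the base image's startup command, the SQL Server binary is passed explicitly:
docker build -t mssql-ssh .
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -p 2222:22 --name mssql-ssh-test mssql-ssh /opt/mssql/bin/sqlservr
# password "toor", as set in the Dockerfile above
ssh root@localhost -p 2222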
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
I have a database running on SQL Server (13.01) on Windows. I'd like to deploy it to a Docker container on Linux using SSDT.
I can connect perfectly well to the server running in Docker and create/drop databases manually and play with the data.
The problem is I cannot publish it. I'm executing the following command in PowerShell:
PS: SqlPackage.exe /Action:Publish /SourceFile:"d.dacpac" /TargetConnectionString:"server=containeraddress;database=thedatabase;user id=sa;password=thepassword;"
and getting the following error.
Unable to connect to master or target server 'the database'. You must have a user with the same password in master or target server 'the database'. (Microsoft.Data.Tools.Schema.Sql)
I have the same user and same password on target and source servers.
Is there anybody who has had the same problem and knows how to solve it?
I'll post this here as most of the answers refer to having an existing compiled dacpac file, which may not always be possible. I haven't seen similar ideas posted elsewhere to the solution I'm suggesting here.
Given your usage of Docker, if you wish to compile your Visual Studio project inside the container, certain combinations of container base OS and image may make it impossible to create a dacpac file with msbuild.
You can work around this by restoring the database using a series of Unix-based commands, taking note that a Visual Studio database project is usually just a series of SQL files. Below I show an example of this, where I concatenate the SQL files into a single file and call sqlcmd to run the script:
FROM mcr.microsoft.com/mssql/server
WORKDIR /init
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=MyPassword
EXPOSE 1433
RUN apt-get update && apt-get install -y dos2unix
COPY /solution_folder/database/Tables/*.sql /init/
WORKDIR /database
RUN echo "CREATE DATABASE [database_name];\nGO\nUSE [database_name];\n" >> /database/create.sql
RUN for f in /init/*.sql; do dos2unix $f; cat $f >> /database/create.sql; echo "\nGO\n" >> /database/create.sql; done
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'MyPassword' -i /database/create.sql && pkill sqlservr
The reason for dos2unix is that the SQL files created within Visual Studio contain hidden CR/LF (and other) characters which the Linux version of sqlcmd won't interpret successfully, causing errors (which is kind of bizarre; this is exactly the kind of thing you'd want a cross-platform database to be able to cope with).
Also, within the final RUN command you have to start the SQL Server service temporarily, otherwise you'll also get errors. It's a bit of a workaround and a bit fiddly, and I'm not entirely sure the Microsoft SQL Server Linux container is well designed enough for the simple task of restoring a database like this; the nuance is the difference between building and running a container, and needing some sort of happy middle ground of both concepts for this to work.
What's given here isn't a complete restore solution; it only deals with tables from the project file, although it should be trivial to expand it to scalar functions and stored procedures.
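For completeness, a usage sketch for the Dockerfile above (the image tag my-mssql-db is just an example):
docker build -t my-mssql-db .
docker run -d -p 1433:1433 --name my-mssql-db my-mssql-db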
Which version of SqlPackage.exe are you using? Only the most recent release candidate versions of SqlPackage.exe support SQL Server vNext CTP. The SqlPackage release candidate can be downloaded here: https://www.microsoft.com/en-us/download/details.aspx?id=54273
I installed SQL Server on my Mac in a Docker container, following the instructions from this article.
I run the container with Kitematic and managed to connect to the server using Navicat Essentials for SQL Server.
The server has four databases and I can create new ones, but, ideally, I would like to import an existing database as .bacpac.
The instructions from this answer have been of use to me in the past. Can I run something similar within the container? Or, more generally, is there a way to import a database in the container?
Hi all! We finally have a preview ready for sqlpackage that is built on dotnet core and is cross-platform! Below are the links to download from. They are evergreen links, i.e. each day a new build is uploaded. This way any checked in bug fix is available the next day. Included in the .zip file is the preview EULA.
Linux: https://go.microsoft.com/fwlink/?linkid=873926
macOS: https://go.microsoft.com/fwlink/?linkid=873927
Windows: https://go.microsoft.com/fwlink/?linkid=873928
Release notes:
The /p:CommandTimeout parameter is hardcoded to 120
Build and deployment contributors are not supported
a. Need to move to .NET Core 2.1 where System.ComponentModel.Composition.dll is supported
b. Need to handle case-sensitive paths
SQL CLR UDT types are not supported.
a. This includes SQL Server Types SqlGeography, SqlGeometry, & SqlHierarchyId
Older .dacpac and .bacpac files that use Json serialization are not supported
Referenced .dacpacs (e.g. master.dacpac) may not resolve due to issues with case-sensitive file systems
For lack of a better method, please provide any feedback you have here on this GitHub issue.
Thanks for giving it a try and letting us know how it goes!
https://github.com/Microsoft/mssql-docker/issues/135#issuecomment-389245587
EDIT: I've made you a Docker image for this
https://hub.docker.com/r/samuelmarks/mssql-server-fts-sqlpackage-linux/
Example of setting up a container, creating a database, copying a .bacpac file over, and importing it into aforementioned database:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 --name sqlfts0 samuelmarks/mssql-server-fts-sqlpackage-linux
docker exec -it sqlfts0 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'CREATE DATABASE MyDb0'
docker cp ~/Downloads/foo.bacpac sqlfts0:/opt/downloads/foo.bacpac
docker exec -it sqlfts0 dotnet /opt/sqlpackage/sqlpackage.dll /tsn:localhost /tu:SA /tp:'<YourStrong!Passw0rd>' /A:Import /tdn:MyDb0 /sf:foo.bacpac
It looks like Microsoft has implemented support for this in sqlpackage, with documentation!
You will have to add sqlpackage to your container.
You can download it here. (optionally, direct link to linux package here, hopefully doesn't change)
The following are instructions for running this from a Windows machine; obviously it's the bare minimum to get it working. Please change the passwords, and probably put this in a docker-compose.yml for re-use.
I unzip the above package into a folder c:\sqlpackage (my Windows docker run doesn't allow relative paths), and then mount that into the container along with the bacpac, like so:
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Asdf1234" -v c:\sqlpackage:/opt/sqlpackage -v c:\yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
here is what a *nix user could run alternatively:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Asdf1234' -v ./sqlpackage:/opt/sqlpackage -v ./yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
and finally, attach to your container and run:
/opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:targetdbname /tu:sa /tp:Asdf1234 /sf:/tmp/yourdb.bacpac
After this, you should be able to connect with SSMS to localhost, with the username and password you provided above, and see 'targetdbname'! These are mostly notes I wrote for myself, but I'm sure others could use them too.
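If you'd rather verify from the command line instead of SSMS, a quick check using the sqlcmd client already inside the container (same credentials as above) could be:
docker exec -it mssql-server-example /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Asdf1234 -Q "SELECT name FROM sys.databases"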
You can use the free Azure Data Studio from Microsoft. Once you have it installed, add the "Admin Pack for SQL Server" extension from Microsoft. Then you can import bacpac files with ease.
This is not a supported feature of the Linux implementation, it seems.
See this link.