How to update database with script

I need to create a PostgreSQL DB on Windows. So I've downloaded the Windows tools from the official site and created a server, and then the problem appeared: when I try to create the DB in pgAdmin III, I get syntax errors while copying data. So I need to run the whole thing in the console. But pgAdmin only allows console mode for a DB that has already been created, and when I run it I get a shell for my empty database:
dbname->
How can I now run my script?

Have you tried using psql.exe? Assuming psql.exe is on your path, try:
c:\> psql.exe -h localhost -U username -f c:\mysqlscript.sql database_name

On Windows, try this from within the psql prompt:
\i c:/somedir/script2.sql
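Putting both answers together, a complete session might look like this (host, user, database name, and script path are placeholders):
c:\> psql.exe -h localhost -U username -d database_name
database_name=> \i c:/somedir/script2.sql
database_name=> \q
Note that \i accepts forward slashes even on Windows.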

Related

cx_Oracle in Azure Databricks

I am unable to establish a connection to my Oracle database from Azure Databricks, although it works in ADF, where I am able to query the table. But ADF takes time to filter the records, so I am still trying to connect from Databricks.
I followed the steps from this Microsoft link, both manually and using an init script, but the error persists.
When I looked into my cluster event log, it says the init script executed successfully.
Error message when I tried to establish the connection:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "/databricks/driver/oracle_ctl//lib/libclntsh.so: cannot open shared object file: No such file or directory".
When I executed the following command
dbutils.fs.ls("/databricks/driver/")
there was no such directory.
This triggered me to post some questions here:
Does this mean the init-script did not perform its job?
Is /databricks/driver/oracle_ctl a hidden directory for dbutils.fs.ls?
The error message points to /databricks/driver/oracle_ctl//lib/libclntsh.so. When I manually inspected the downloaded Oracle client, there was no folder called lib, although libclntsh.so exists in the main directory. Is Databricks checking the wrong directory for libclntsh.so?
Does this connection still work for others?
Syntax for connection: cx_Oracle.connect(user=user_name, password=password, dsn=IP+':'+Port+'/'+DB_name)
The above syntax works fine when connecting from an on-premises machine.
Try installing the latest major release of cx_Oracle, which has been renamed to python-oracledb; see the release announcement.
This version doesn't need Oracle Instant Client. The API is the same as cx_Oracle's, although obviously the module name is different.
If I understand the instructions, your init script would do something like:
/databricks/python/bin/pip install oracledb
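Saved with the dbutils.fs.put pattern used for the other init scripts in this thread, that might look like the following sketch (the folder name is a placeholder):
dbutils.fs.put("dbfs:/databricks/<init-script-folder>/install_oracledb.sh","""
#!/bin/bash
/databricks/python/bin/pip install oracledb
""", True)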
Application code would be like:
import oracledb
connection = oracledb.connect(user='scott', password=mypw, dsn='yourdbhostname/yourdbservicename')
with connection.cursor() as cursor:
    for row in cursor.execute('select city from locations'):
        print(row)
Resources:
Home page: oracle.github.io/python-oracledb/
Quick start: Quick Start python-oracledb Installation
Documentation: python-oracledb.readthedocs.io/en/latest/index.html
PyPI: pypi.org/project/oracledb/
Source: github.com/oracle/python-oracledb
Upgrading: Upgrading from cx_Oracle 8.3 to python-oracledb
I changed the path from "/databricks/driver/oracle_ctl/" to "/databricks/driver/oracle_ctl/instantclient" in the init script, and that error no longer appears.
Please use the following init script instead:
dbutils.fs.put("dbfs:/databricks/<init-script-folder-name>/oracle_ctl.sh","""
#!/bin/bash
sudo apt-get -y install libaio1
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
mv /databricks/driver/oracle_ctl/instantclient* /databricks/driver/oracle_ctl/instantclient
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
""", True)
Notes:
The above init script was suggested by a Databricks employee and can be found here.
As mentioned by Christopher Jones in one of the comments, cx_Oracle has recently been renamed to python-oracledb, which has thin and thick modes.
You will get the above error if you don't have Oracle Instant Client on your cluster.
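If you do stay on the thick (Instant Client) path with python-oracledb, the driver can be pointed at the unzipped client explicitly; a minimal sketch, assuming the instantclient directory created by the init script above:
import oracledb
# assumption: the init script above unzipped Instant Client to this directory
oracledb.init_oracle_client(lib_dir="/databricks/driver/oracle_ctl/instantclient")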
To resolve the above error in Azure Databricks, try the following:
%sh
mkdir -p /opt/oracle
cd /opt/oracle
wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip instantclient-basiclite-linuxx64.zip
mv instantclient_* instantclient
export ORACLE_HOME=/opt/oracle/instantclient
export TNS_ADMIN=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export PATH=$ORACLE_HOME:$PATH
To create the init script, use the following code, as per the official doc:
dbutils.fs.put("dbfs:/databricks/<init-script-folder>/oracle_ctl.sh","""
#!/bin/bash
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
""", True)
To read data from an Oracle database in PySpark, follow this article by Emrah Mete; a rough sketch appears after the link below.
For more information, refer to this official document:
https://docs.databricks.com/data/data-sources/oracle.html#oracle
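As a rough sketch of that approach (hostname, service name, table, and credentials are placeholders, and the Oracle JDBC driver must be available on the cluster):
# minimal sketch: read an Oracle table over JDBC in PySpark
df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/dbservice")
    .option("dbtable", "SCHEMA_NAME.TABLE_NAME")
    .option("user", "username")
    .option("password", "password")
    .option("driver", "oracle.jdbc.driver.OracleDriver")
    .load())
df.show(5)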

How to use sqlplus on an Oracle database inside a Docker container?

I installed Oracle DB version 12c in my Docker environment.
I used the following command:
docker run -d --name oracle -p 8080:8080 -p 1521:1521 quay.io/maksymbilenko/oracle-12c
I connected to the DB and everything went well, but I wanted to enable unified auditing.
In order to do that, you must first shut down the database, and all the instructions I found say to use sqlplus as follows:
sqlplus / as sysoper
SQL> shutdown immediate
SQL> exit
I connected to the container successfully using the following command:
docker exec -it oracle "bash"
and then I ran the sqlplus command and I received "command not found"
[root@f30cc670f85f /]# sqlplus / as sysoper
bash: sqlplus: command not found
Am I doing it wrong?
What should I do in order to have sqlplus on my Oracle DB?
I looked for it and didn't find anything that helped me.
I have a Mac, if it's relevant.
I think that Docker image is just the database and enough of the OS to run the database. I don't think it includes client software such as SQL*Plus.
You need to have SQL*Plus installed on your Mac. If you haven't already, download the Oracle Instant Client for macOS, including the SQL*Plus extension. Or why not treat yourself and install the new-fangled SQLcl tool? It is easier to install and has all the SQL*Plus capabilities and a whole bunch more features. Find it here.
Whatever client you choose, once it's installed on your Mac you run it like any other app; when prompted for a connection, give the string Maksym provides:
system/oracle@//localhost:1521/xe
If you need to connect as sys that would look like this:
sys/oracle@//localhost:1521/xe as sysdba
Sourcing the .bashrc should work to connect to sqlplus as sysdba.
docker-compose exec db bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
With this, you enter the container:
docker exec -it oracle /bin/bash
After that, you can use:
sqlplus sys as sysdba
When using the Docker image store/oracle/database-enterprise:12.2.0.1-slim, the sqlplus and sqlldr tools are only available after the container has started.
You can't do the following in a Dockerfile:
RUN sqlplus sys/password AS SYSDBA @create_database.sql
The container images can be configured to run scripts after setup and on startup. Currently sh and sql extensions are supported.
In your Dockerfile, copy the SQL script into the startup directory:
COPY create_database.sql /opt/oracle/scripts/setup/01_create_database.sql
The database will be created on first startup of the container.
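Putting it together, a minimal Dockerfile sketch (assuming that image, with create_database.sql next to the Dockerfile):
FROM store/oracle/database-enterprise:12.2.0.1-slim
COPY create_database.sql /opt/oracle/scripts/setup/01_create_database.sql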
I don't have any experience with Docker, but it looks for all the world like you are getting to a bash environment, so there we are on solid ground. The returned error ("bash: sqlplus: command not found") simply means that the executable (sqlplus) was not found in any directory listed in your PATH environment variable as it exists within your shell environment. You actually need to set three variables: ORACLE_SID needs to be set to the value of your database name; ORACLE_HOME needs to be set to the directory where your Oracle binaries are installed; and PATH needs to have $ORACLE_HOME/bin added to it:
export PATH=$ORACLE_HOME/bin:$PATH
Obviously, since you are using the value of ORACLE_HOME when setting PATH, ORACLE_HOME needs to be set first.
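For example (the SID and home directory below are illustrative; use the values from your image):
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH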
For Windows OS:
Type docker ps on the command line to show running containers and check the container id.
Type docker exec -it container_id //bin/bash
Log in via the sqlplus command
Or the simplest way
docker exec -it container_id bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
More info is here: https://hub.docker.com/u/cgmmathaw/content/sub-90f0c051-b514-4b7b-a0fe-fc9d6f2172fa

Isql command error while executing query on Sybase database

I would like to create a Windows batch file to execute a query on a Sybase database running on Linux.
Batch file:
plink.exe -ssh sybase@<IP> -pw <PW> -m C:\scripts\script1.bat -t > C:\scripts\testing.log
script1.bat:
echo --- query 1 -----
cd /sybase/OCS-15_0/bin/
isql -Uimaldb_bkp -Pstart_bkp -SLinux1 -Dimaldb
sp_helpsegment
go
exit
It works fine until the isql command, and gives this output in testing.log:
--- query 1 -----
bash: isql: command not found
bash: sp_helpsegment: command not found
bash: go: command not found
Please advise.
'isql' will work once the required environment variables for Sybase ASE are set on the database server (the Linux server you mentioned). Please check the link below for more details:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc35823.1570/html/uconfig/X30690.htm
On a simpler note, there should be a file named 'SYBASE.sh' inside '/sybase' (judging from your sample code, Sybase has been installed in this directory). You need to source this file by editing the '.bashrc' file in the home directory of the user you use to connect to the Linux server.
For the SQL to work, you will need a flag to indicate the start and end of the SQL block in the script. Please try the following:
isql -Uimaldb_bkp -Pstart_bkp -SLinux1 -Dimaldb <<EOF
sp_helpsegment
go
EOF
You can use any other word instead of 'EOF'.
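Applied to script1.bat from the question, the whole script would look something like this (paths and credentials as given in the question):
echo --- query 1 -----
cd /sybase/OCS-15_0/bin/
./isql -Uimaldb_bkp -Pstart_bkp -SLinux1 -Dimaldb <<EOF
sp_helpsegment
go
EOF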
I, too, faced the same issue.
You need to set your Sybase environment variables, like below:
. /sybase/ABD/SYBASE.sh
isql -U**** -P****** -S<SID> -X <<EOF
It will work fine. Please try this.
If you don't have permission to change the server PATH, you can find the path of isql and execute it this way:
/sybase/OCS-15_0/bin/isql -Uimaldb_bkp -Pstart_bkp -SLinux1 -Dimaldb
That is, prefix isql with the path of your installation (it depends on your version and install path).

Using psexec.exe in Jenkins, handle is invalid

I am using Jenkins on a Windows 7 system. I would like to use it to execute a batch script on a remote Windows system. The batch script will be used to flash a development board and run some tests. I came across psexec.exe. That works well through a command prompt window -- I can connect and run the script without any issues -- but when I try to have Jenkins do it, I get the following output:
PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
The handle is invalid.
Connecting to ABCDEFG...
Couldn't access ABCDEFG:
Connecting to ABCDEFG...
Build step 'Execute Windows batch command' marked build as failure
The command I am using in both cases is:
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" "C:\test.bat"
The user associated with username has administrator privileges on the remote system (ABCDEFG is not the real name of the system).
Can anyone help me figure out why it is not working through Jenkins? Or, is there an easier/better way to execute a batch script on a remote Windows system through Jenkins?
Thanks to all your help, especially Technext, I have a solution.
I needed to run "services.msc", find "Jenkins", right-click on it, and go to "Properties". Once the Properties window appeared, I had to click the "Stop" button to stop Jenkins, open the "Log On" tab, enter the username and password I used when running through the command prompt, and start Jenkins again. That got rid of the "handle is invalid" message in Jenkins.
Update:
A better solution was to go to the system that psexec.exe was connecting to and open Control Panel > User Accounts > Give other users access to this computer. Click "Add..." and type in the username and domain Jenkins uses to run its commands (to find this, open Jenkins in a browser window, go to Manage Jenkins > System Information, and look for USERNAME and USERDOMAIN under Environment Variables). Make sure you give it Administrator rights, then click OK. Now psexec.exe shouldn't have the "handle is invalid" issue.
Sorry, I don't have enough reputation for comments, but is the single \ a typo? Since
The handle is invalid.
probably means that the computer address is invalid. Try
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" "C:\test.bat"
Notice the two backslashes to access a locally mapped computer.
Otherwise, if that does not work, I recommend the @file syntax:
psexec.exe @servername.txt -u "DOMAIN\username" -p "password" "C:\test.bat"
where servername.txt is a text file containing only the server names, one per line. The file parameter handles the \\ formatting for you.
Example servername.txt:
ABCDEFG
COMPUTER2
EDIT: after some quick googling, I also found that it can be related to Windows security.
Check that a simple restart of the remote machine doesn't solve the problem. Also, adding the parameters -h and -accepteula may help. Modified command:
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" -h -accepteula "C:\test.bat"
I execute the code below from a Jenkins pipeline Groovy script to connect a dynamically created VM as a resource on the Jenkins master with 4 executors. You can change the number of executors based on your requirement.
bat label: 'ConnectResource', script: """
@echo OFF
C:\\apps\\tools\\psexec \\\\${machine_ip} -u ${machine_ip}\\${machine_username} -p ${machine_password} -accepteula -d -h -i 1 cmd.exe /c "cd C:\\apps\\jenkins\\ & java -jar C:\\apps\\jenkins\\swarm.jar -master http://pnlv6s540:8080 -username ${jenkins_user_name} -password ${jenkins_user_password} -name ${machine_ip}_${BUILD_NUMBER} -labels ${machine_ip}_${BUILD_NUMBER} -deleteExistingClients -disableClientsUniqueId -executors 4" & ping 127.0.0.1 -n 60 > nul
"""

Import SQL dump into PostgreSQL database

We are switching hosts and the old one provided an SQL dump of the PostgreSQL database of our site.
Now I'm trying to set this up on a local WAMP server to test it.
The only problem is that I have no idea how to import this database into the PostgreSQL 9 that I have set up.
I tried pgAdmin III but I can't seem to find an 'import' function. So I just opened the SQL editor, pasted the contents of the dump there, and executed it; it creates the tables, but it keeps giving me errors when it tries to put the data in them.
ERROR: syntax error at or near "t"
LINE 474: t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view...
The lines:
COPY tb_abilities (active, creation, modtime, id, lang, title, description) FROM stdin;
t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view nl ...
I've also tried to do this with the command prompt, but I can't find the command that I need.
If I do
psql mydatabase < C:/database/db-backup.sql;
I get the error
ERROR: syntax error at or near "psql"
LINE 1: psql mydatabase < C:/database/db-backu...
^
What's the best way to import the database?
psql databasename < data_base_dump
That's the command you are looking for.
Beware: databasename must be created before importing.
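If it does not exist yet, you can create it first, for example (host, user, and name are placeholders):
createdb -h localhost -U username databasename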
Have a look at the PostgreSQL Docs Chapter 23. Backup and Restore.
Here is the command you are looking for.
psql -h hostname -d databasename -U username -f file.sql
I believe that you want to run in psql:
\i C:/database/db-backup.sql
That worked for me:
sudo -u postgres psql db_name < 'file_path'
I'm not sure if this works for the OP's situation, but I found that running the following command in the interactive console was the most flexible solution for me:
\i 'path/to/file.sql'
Just make sure you're already connected to the correct database. This command executes all of the SQL commands in the specified file.
This works pretty well on the command line; all arguments are required, and -W prompts for the password:
psql -h localhost -U user -W -d database_name -f path/to/file.sql
Just for funsies, if your dump is compressed you can do something like
gunzip -c filename.gz | psql dbname
As Jacob mentioned, the PostgreSQL docs describe all this quite well.
Make sure the database you want to import into has been created; then you can import the dump with
sudo -u postgres -i psql testdatabase < db-structure.sql
If you want to overwrite the whole database, first drop the database
# be sure you drop the right database !!!
#sudo -u postgres -i psql -c "drop database testdatabase;"
and then recreate it with
sudo -u postgres -i psql -c "create database testdatabase;"
Follow these steps:
Go to the psql shell
\c db_name
\i path_of_dump [e.g. C:/db_name.pgsql]
I tried many different solutions for restoring my Postgres backup and ran into permission-denied problems on macOS; no solution seemed to work.
Here's how I got it to work:
Postgres comes with pgAdmin 4. If you use macOS you can press CMD+SPACE and type pgadmin4 to run it. This will open up a browser tab in Chrome.
If you run into errors getting pgAdmin 4 to work, try killall pgAdmin4 in your terminal, then try again.
Steps for backup/restore with pgadmin4
1. Create the backup
Do this by right-clicking the database -> "Backup"
2. Give the file a name.
Like test12345. Click Backup. This creates a binary dump; it's not in .sql format.
3. See where it downloaded
There should be a popup at the bottom right of your screen. Click the "More details" page to see where your backup was downloaded to.
4. Find the location of the downloaded file
In this case, it's /users/vincenttang
5. Restore the backup from pgadmin
Assuming you did steps 1 to 4 correctly, you'll have a binary restore file. There might come a time when your coworker wants to use your restore file on their local machine. Have said person go to pgadmin and restore:
Do this by right-clicking the database -> "Restore"
6. Select file finder
Make sure to select the file location manually; DO NOT drag and drop a file onto the uploader fields in pgadmin, because you will run into permission errors. Instead, find the file you just created:
7. Find said file
You might have to change the filter at the bottom right to "All files". Find the file from step 4, then hit the bottom-right "Select" button to confirm.
8. Restore said file
You'll see this page again, with the location of the file selected. Go ahead and restore it
9. Success
If all is good, the bottom right should show a popup indicating a successful restore. You can navigate over to your tables to see if the data has been restored properly in each table.
10. If it wasn't successful:
Should step 9 fail, try deleting your old public schema on your database. Go to "Query Tool"
Execute this code block:
DROP SCHEMA public CASCADE; CREATE SCHEMA public;
Now try steps 5 to 9 again, it should work out
Summary
This is how I had to back up and restore my backup on Postgres when I had permission issues and could not log in as a superuser or set read/write credentials using chmod on folders. This workflow works for the default "Custom" binary dump from pgadmin. I assume .sql works the same way, but I have not yet tested that.
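For reference, a command-line equivalent of this custom-format backup/restore would be something along these lines (user, database, and file name are placeholders):
pg_dump -Fc -U postgres -d mydb -f test12345.backup
pg_restore -U postgres -d mydb --clean test12345.backup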
I use:
cat /home/path/to/dump/file | psql -h localhost -U <user_name> -d <db_name>
Hope this will help someone.
If you are using a file with a .dump extension, use:
pg_restore -h hostname -d dbname -U username filename.dump
I noticed that many examples are overcomplicated for localhost, where in many cases the postgres user exists without a password:
psql -d db_name -f dump.sql
You can do it in pgAdmin III: drop the schema(s) that your dump contains, then right-click on the database, choose Restore, and browse for the dump file.
I used this
psql -d dbName -U username -f /home/sample.sql
PostgreSQL 12
From a plain SQL file (pg_restore only reads archive formats, so use psql):
psql -d database -f file.sql
From a custom-format file:
pg_restore -d database file.dump
I had more than 100 MB of data, so I could not restore the database using pgAdmin 4.
I simply used the postgres client and ran the command below:
postgres@khan:/$ pg_restore -d database_name /home/khan/Downloads/dump.sql
It worked fine and took a few seconds. See the link below for more information:
https://www.postgresql.org/docs/8.1/app-pgrestore.html
