Oracle XE Database Configuration failed

I am trying to create an Oracle XE database on my VPS.
VPS OS: CentOS.
When I try to run
/etc/init.d/oracle-xe configure
it throws the error "Database configuration failed" and says to check the logs, but the logs just show
ORA-01034: ORACLE not available
Below is the history...
[root@vmcx-43 Disk1]# rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
Preparing... ########################################### [100%]
/var/tmp/rpm-tmp.51363: line 186: bc: command not found
1:oracle-xe /var/tmp/rpm-tmp.51363: line 186: bc: command not found########################################### [100%]
Executing post-install steps...
/var/tmp/rpm-tmp.97984: line 76: bc: command not found
/var/tmp/rpm-tmp.97984: line 77: bc: command not found
/var/tmp/rpm-tmp.97984: line 78: [: -gt: unary operator expected
/var/tmp/rpm-tmp.97984: line 82: bc: command not found
You must run '/etc/init.d/oracle-xe configure' as the root user to configure the database.
[root@vmcx-43 Disk1]# /etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
-------------------------------------------------
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:
Specify a port that will be used for the database listener [1521]:
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Password can't be null. Enter password:
Password can't be null. Enter password:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]: n
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@vmcx-43 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@vmcx-43 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@vmcx-43 log]# tail postScripts.log
commit
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
[root@vmcx-43 log]# tail CloneRmanRestore.log
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

Add your server's name and IP to the /etc/hosts file.
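For example, with the hostname from the session above and a placeholder address (substitute your VPS's real IP):
echo '203.0.113.10  vmcx-43' | sudo tee -a /etc/hosts   # the IP here is a placeholder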

I had the same issue.
I uninstalled oracle-xe (see How to reconfigure Oracle 10g xe on Linux), then ran:
yum install bc
rpm -i oracle-xe.rpm
/etc/init.d/oracle-xe configure
Everything went fine.

yum install bc
Then try again.
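The bc: command not found lines in the install history above are the tell: the XE install scripts apparently shell out to bc for their memory checks, so the configuration math fails silently without it. A quick pre-check before reinstalling:
command -v bc >/dev/null || sudo yum install -y bc   # install bc only if it is missing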

OK, the solution may sound weird, but today I got exactly the same error while installing Oracle XE on CentOS. I struggled a lot to find the answer, but in the end the problem was the way I was installing the rpm.
Initially I used the command
$ rpm -ivh oracle-xe.rpm
and somehow it gave the same error you are getting.
After that I tried
$ rpm -i oracle-xe.rpm
and it worked for me. I am not sure why the "h" flag, which only prints hash marks for progress, would cause an issue, but it worked for me.

For Debian ... how to install Oracle XE from rpm. If the configuration fails with
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
then open the generated init.ora:
nano /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora
comment out the memory line: # memory_target=100663296
After that, /etc/init.d/oracle-xe configure will work.
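If you prefer a one-liner to editing in nano, something like this should work (assuming the stock layout of the generated file):
sudo sed -i 's/^memory_target=/# memory_target=/' /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora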

I faced a similar issue on Linux Mint 17.3. Fortunately, I found the solution quickly. The issue is simply that your shared memory filesystem is not where Oracle expects it, i.e. /dev/shm; instead you have it at /run/shm, with /dev/shm being a link to it.
So, to resolve this issue, before configuring the database, perform the steps below in order:
$ sudo rm -rf /dev/shm
$ sudo mkdir /dev/shm
$ sudo mount -t tmpfs shmfs -o size=2048m /dev/shm
I have tested it; it works perfectly.
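To confirm the new mount took effect before rerunning the configure step:
df -h /dev/shm   # size should reflect the 2048m requested above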

After googling 'oracle sucks' in frustration over the lack of logging from the installation, I managed to resolve the issue that caused the configuration to fail on a Docker container running the Hortonworks HDP 2.6 Sandbox:
Oracle XE requires 1 GB of shared memory and fails otherwise (I didn't try 512 MB), according to https://blogs.oracle.com/oraclewebcentersuite/implement-oracle-database-xe-as-docker-containers.
vi /etc/fstab
Change/add the line to:
tmpfs /dev/shm tmpfs defaults,size=1024m 0 0
Then reload the configuration with:
mount -a
Keep in mind that if you later restart the Docker container, you might have to run 'mount -a' once more, as the container starts with its default of ~65 MB.
Normally the failed configuration will have succeeded in creating a listener, and you will have to kill it before rerunning the configuration.
ps -aux | grep tnslsnr
kill {process identified in the step above}
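If pkill is available, the two steps above collapse into one (an equivalent shortcut, not from the original answer):
pkill -f tnslsnr   # kill the leftover listener before reconfiguring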

Lost a full day to this one, as none of the other answers on this page worked for me (Ubuntu).
The proper instructions were here.
The main trick missing from other tutorials was to execute
sed -i 's,/var/lock/subsys,/var/lock,' /etc/init.d/oracle-xe
before
/etc/init.d/oracle-xe configure
(Ubuntu has no /var/lock/subsys directory, so without this change the init script cannot write its lock file.)

Check the permissions on /u01/.
In my case these were set to root:root; I changed this to oracle:dba and it worked for me.
But before that I tried the following:
setting up the IP/hostname in /etc/hosts
installing bc and reinstalling Oracle
Neither step worked for me on its own, but after I uninstalled and reinstalled oracle-xe, changed the permissions, and then ran the configure command, it worked.
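A minimal sketch of the ownership fix described above, assuming the standard XE install user and group:
sudo chown -R oracle:dba /u01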

Related

cx_Oracle in Azure Databricks

I am unable to establish a connection to my Oracle database from Azure Databricks, although it works in ADF, where I am able to query the table. But ADF takes time to filter the records, so I am still trying to connect from Databricks.
I followed the steps from this Microsoft link, both manually and using an init-script, but the error persists.
When I looked into my cluster event log, it says the init-script execution was successful.
Error message when I tried to establish the connection:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "/databricks/driver/oracle_ctl//lib/libclntsh.so: cannot open shared object file: No such file or directory".
When I executed the following command
dbutils.fs.ls("/databricks/driver/")
there was no such directory.
This triggered me to post some questions here:
Does this mean the init-script did not perform its job?
Is /databricks/driver/oracle_ctl a hidden directory for dbutils.fs.ls?
The error message points to /databricks/driver/oracle_ctl//lib/libclntsh.so, but when I manually inspected the downloaded Oracle client, there was no folder called lib, although libclntsh.so exists in the main directory. Is Databricks checking the wrong directory for libclntsh.so?
Does this connection still work for others?
Syntax for the connection: cx_Oracle.connect(user= user_name, password= password,dsn= IP+':'+Port+'/'+DB_name)
The above syntax works fine when connecting from an on-premises machine.
Try installing the latest major release of cx_Oracle, which got renamed to python-oracledb; see the release announcement.
This version doesn't need Oracle Instant Client. The API is the same as cx_Oracle, although obviously the name is different.
If I understand the instructions, your init script would do something like:
/databricks/python/bin/pip install oracledb
Application code would be like:
import oracledb

connection = oracledb.connect(user='scott', password=mypw, dsn='yourdbhostname/yourdbservicename')
with connection.cursor() as cursor:
    for row in cursor.execute('select city from locations'):
        print(row)
Resources:
Home page: oracle.github.io/python-oracledb/
Quick start: Quick Start python-oracledb Installation
Documentation: python-oracle.readthedocs.io/en/latest/index.html
PyPI: pypi.org/project/oracledb/
Source: github.com/oracle/python-oracledb
Upgrading: Upgrading from cx_Oracle 8.3 to python-oracledb
Changed the path from "/databricks/driver/oracle_ctl/" to "/databricks/driver/oracle_ctl/instantclient" in the init-script, and that error no longer appears.
Please use the following init script instead
dbutils.fs.put("dbfs:/databricks/<init-script-folder-name>/oracle_ctl.sh","""
#!/bin/bash
sudo apt-get install libaio1
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
mv /databricks/driver/oracle_ctl/instantclient* /databricks/driver/oracle_ctl/instantclient
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
""", True)
Notes:
The above init-script was advised by a Databricks employee and can be found here.
As mentioned by Christopher Jones in one of the comments, cx_Oracle has recently been upgraded to oracledb, with thin and thick modes.
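To sanity-check the result after the cluster restarts, you could verify that the client library resolves (paths assumed from the script above):
ls -l /databricks/driver/oracle_ctl/instantclient/libclntsh.so*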
You will get the above error if you don't have the Oracle Instant Client on your cluster.
To resolve the above error in Azure Databricks, use the following code:
%sh
mkdir -p /opt/oracle
cd /opt/oracle
wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basic-linuxx64.zip
unzip instantclient-basic-linuxx64.zip
export ORACLE_HOME=$(ls -d /opt/oracle/instantclient_*)
export TNS_ADMIN=$ORACLE_HOME
export PATH=$ORACLE_HOME:$PATH
To create the init script, use the following code, as per the official doc:
dbutils.fs.put("dbfs:/databricks/<init-script-folder>/oracle_ctl.sh","""
#!/bin/bash
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
""", True)
To read data from an Oracle database in PySpark, follow this article by Emrah Mete.
For more information, refer to the official documentation:
https://docs.databricks.com/data/data-sources/oracle.html#oracle

How do I resolve "Failed to parse remote port from server"

I'm setting up a new remote host, and every time I initiate it I get the following error output. Any feedback or direction on how to resolve this issue?
Pseudo-terminal will not be allocated because stdin is not a terminal.
Linux Destiny 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
mesg: ttyname failed: Inappropriate ioctl for device
bash: cannot set terminal process group (3202): Inappropriate ioctl for device
bash: no job control in this shell
mesg: ttyname failed: Inappropriate ioctl for device
Installing...
Downloading with wget
WARNING: tar exited with non-0 exit code
Found running server...
*
* Reminder: You may only use this software with Visual Studio family products,
* as described in the license (https://go.microsoft.com/fwlink/?linkid=2077057)
*
cat: /root/.vscode-remote/.473af338e1bd9ad4d9853933da1cd9d5d9e07dc9.log: No such
file or directory
Server did not start successfully. Full server log:
cat: /root/.vscode-remote/.X.log51ec4692-
4da4-4ec0-b613-5a3563034cf1====
: No such file or directory
"install" terminal command done
Received install output: : No such file or directory
Failed to parse remote port from server output: : No such file or directory
If the server fails to shut down properly, it sometimes leaves dangling lockfiles. This can cause startup to fail and produce the "Failed to parse remote port from server output" error message. In this case the solution is simply to delete the lockfiles:
.vscode-server/bin/[[:xdigit:]]*/vscode-remote-lock.*
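For example, from the remote user's home directory (the glob assumes the default install location under ~/.vscode-server):
rm ~/.vscode-server/bin/[[:xdigit:]]*/vscode-remote-lock.*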
Fixed the issue. It appears I had two other server agents running incorrectly. I killed both server agents using kill (PID) and removed the ".vscode_remote" directory from the user's home directory. Then I reinitialized Remote-SSH from VS Code. Successfully connected!
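A sketch of those steps over plain ssh on the remote host; pkill is my shorthand for the kill-by-PID step, and the directory name is taken from the answer above:
pkill -f vscode-server     # kill the stray server agents
rm -rf ~/.vscode_remote    # then reconnect from VS Code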
On the remote machine you do not have tar installed. It's in the log output:
Installing... Downloading with wget
WARNING: tar exited with non-0 exit code
so as root, run:
apt-get install tar
or with sudo, if you have a user with sudoers configured:
sudo apt-get install tar
I also got the same issue, and my workaround was to give proper rights to the home or user folder so VS Code can create the remote folder and do the required installation in it.
Example:
sudo chmod -R 777 /home/
In this case, I gave all rights to my home folder, and it worked like a charm for all the users.
I ssh'd onto the remote server (Linux) and then deleted both directories as follows:
$ rm -r .vscode-server.backup2022-04-03T16:20:18-05:00
$ rm -r .vscode-server
In case someone else encounters the same issue: I had an instance where the remote target had no space left on the device. After extending the root volume of the target machine, the connection worked fine.
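A quick way to spot this on the remote host:
df -h ~   # look for 100% usage on the volume holding your home directory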
I had the same issue because VS Code was looking for my .vscode-server directory in the wrong location (it was in a custom location due to restrictions on where files can be saved). This can be fixed as described in "How to change the vscode-server directory". Specifically, add:
"remote.SSH.lockfilesInTmp": true,
"remote.SSH.serverInstallPath": {
    "hostname": "/path/to/.vscode-server/.."
}
to your settings.json.
In my case, it wasn't working because the server asked for a new password when starting a session. What I did was open my OS's default terminal (not the VS Code terminal) and log in with the ssh command. I logged in successfully and changed the password. Connecting with the new password then worked, because the server no longer asked for a password change.
Command:
ssh username@IP
Enter the password and you'll be asked to change it. Change the password, then try connecting again with the new password using the Remote-SSH VS Code extension.
If you authorize by SSH key, also check the value of the User parameter in your VS Code SSH config. That user must have a matching key in ~/.ssh/authorized_keys on the remote host.
@Sachin's answer pointed me in the right direction: VS Code needs permissions in order to create some files. But instead of giving 777 permissions to your home folder (which can be dangerous), you can just chown the user that wants to log in (the user for me was ubuntu):
sudo chown -R ubuntu /home
Step 1: Add the port to your SSH config file:
Host hostname
    Port 22
    User username
Step 2: Go to File -> Preferences -> Open settings.json, search for lockfilesInTmp, and check the box next to it.

MSSQL: The configuration file '/var/opt/mssql/mssql.conf' failed to load (Ubuntu)

I have an issue installing MSSQL on my Linux (Ubuntu 16.04) server.
I have used the manual from Microsoft, but I always fail at the same stage.
Docker is not an alternative due to kernel issues.
After:
sudo apt-get install -y mssql-server
I'm supposed to run
sudo /opt/mssql/bin/mssql-conf setup
After answering all the questions, this returns:
sqlservr[8383]: sqlservr: The configuration file '/var/opt/mssql/mssql.conf' failed to load (error: The INI file could not be opened. Errno [2] Filename [mssql.conf]).
I can access the config file, and it seems to be usable by the script as well.
My Linux skills are not good enough to resolve this issue.
To answer some questions that were raised:
I tried sudo.
cat on the file returns the content as expected.
Try running the MSSQL server as root first (stop the service, then run it as root with /opt/mssql/bin/sqlservr) and see if it works that way. If it does, stop the server and fix the ownership of the mssql directory with sudo chown -R mssql:mssql /var/opt/mssql. More info in this answer.
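A sketch of that sequence (service name and paths as given above):
sudo systemctl stop mssql-server
sudo /opt/mssql/bin/sqlservr                  # run in the foreground as root; Ctrl-C once it starts cleanly
sudo chown -R mssql:mssql /var/opt/mssql      # fix ownership if the root run worked
sudo systemctl start mssql-server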

postgresql error connecting after moving data directory

EDIT-2
I found out that the database doesn't even start after making the file location change.
This is with the default file location:
$ pg_isready
/var/run/postgresql:5432 - accepting connections
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 online postgres /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
pg_lsclusters output is green.
After the file location has changed on postgresql.conf:
$ pg_isready
/var/run/postgresql:5432 - no response
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 down root /mnt/Data/postgresdb/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
Here the output is red.
Following this post here, I tried to start the cluster manually:
$ pg_ctlcluster 9.5 main start
Warning: the cluster will not be running as a systemd service. Consider using systemctl:
sudo systemctl start postgresql@9.5-main
Error: You must run this program as the cluster owner (root) or root
I tried the same command with sudo:
Error: Config owner (postgres:124) and data owner (root:0) do not match, and config owner is not root
Which again makes me think the problem might lie with the permissions of the directory. The directory is owned by root, and I am unable to change its ownership.
EDIT-1
I've been working on this and I'd like to distill this post further to give more specifics. This is my current situation:
I installed postgres: sudo apt-get install postgresql and postgresql-contrib
I used sudo -u postgres psql to get into the postgres shell (I'm not sure if this is what I need to do)
show data_directory returns: /var/lib/postgresql/9.5/main
The data directory is located on an Ubuntu ext4-formatted hard drive. I also have a 1 TB NTFS-formatted hard disk mounted at /mnt/Data (which is mounted automatically on boot). What I tried:
Stop the postgres service: sudo systemctl stop postgresql
Create a new directory /mnt/Data/postgresdb and copy the contents of the previous main into it, giving a full path of /mnt/Data/postgresdb/postgresql/9.5/main, using: sudo rsync -av /var/lib/postgresql/ /mnt/Data/postgresdb/postgresql/
Edit /etc/postgresql/9.5/main/postgresql.conf to change data_directory from the path mentioned above to /mnt/Data/postgresdb/postgresql/9.5/main
Start the postgres service: sudo systemctl start postgresql
Run sudo -u postgres psql, but I get the error that was mentioned in the original post.
These are the permissions on the respective main directories:
ls -l /var/lib/postgresql/9.5/
total 4.0K drwx------ 19 postgres postgres 4.0K Jan 16 12:40 main
ls -l /mnt/Data/postgresdb/postgresql/9.5/
total 4.0K drwxrwxrwx 1 root root 4.0K Jan 16 12:13 main
From the looks of it, the default directory is owned by "postgres" and the new directory is owned by root. However, when I try to change the ownership to postgres with chown -R postgres main, it doesn't output any error, but the ownership doesn't change. I'm curious whether this is because the drive is NTFS-formatted and mounted.
Here is my /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda5 during installation
UUID=3f5a9875-89a3-4ce5-b778-9d7aaf148ed6 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda6 during installation
UUID=85c3f4d4-e450-435b-8dd6-cf1b2cbd8fc2 none swap sw 0 0
/dev/disk/by-label/Data /mnt/Data auto nosuid,nodev,nofail,x-gvfs-show 0 0
Any ideas on how I can go about fixing this?
ORIGINAL POST
Recently, I installed PostgreSQL to store some data for my research. The dataset came with instructions on how to set up the data in a PostgreSQL database (if interested, more info on that here and here). I installed PostgreSQL, set up a "role", and used the script that was provided for loading the database. It worked, but I had underestimated the size of the dataset, and the script quit saying there was no more space.
I have two drives on my computer: a 250 GB SSD with Windows and Ubuntu installed (125 GB each), and a 1 TB NTFS-formatted HDD where I store my data. So I thought moving the database to a folder on the other drive would help. I purged all the data and the database to start afresh and followed the instructions here to move the database directory. However, after moving the directory, when I try to connect using psql I get the following error:
$ psql -U username -d postgres
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
How can I fix this? I am running 64-bit Ubuntu 16.04 with PostgreSQL 9.5. As mentioned earlier, I moved the DB directory to an NTFS-formatted filesystem (not sure if that causes any problems).
Thanks.
As mentioned in the comments, NTFS was the problem. I ended up resizing my bigger hard drive, formatting 100 GB of it as ext4, and was able to launch postgres with the new data directory without any problems.
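With the data directory on ext4, the ownership change that NTFS silently ignored should now stick (the path here is hypothetical; use your new data directory):
sudo chown -R postgres:postgres /mnt/NewData/postgresql/9.5/main
sudo systemctl start postgresql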

Unable to mount data directory in MSSQL RC1 server setup on Linux

I'm trying to change the default data directory for MSSQL Server 2017 RC1 after installation and setup on Linux (Ubuntu 16.10).
I used the following command to set the default data directory, then restarted the MSSQL server:
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /mnt/var/opt/mssql/data/
systemctl restart mssql-server.service
After this I tried to create a simple database, "test":
sqlcmd -S localhost -U sa -P "someStrongPassword" -Q "CREATE DATABASE test"
The error returned is as follows:
MODIFY FILE encountered operating system error 31(A device attached to
the system is not functioning.) while attempting to expand the
physical file '/mnt/var/opt/mssql/data/test.mdf'.
CREATE DATABASE
failed. Some file names listed could not be created. Check related
errors.
The error log indicates an OS error:
/mnt/var/opt/mssql/data/test.mdf: Operating system error 31(A device
attached to the system is not functioning.) encountered.
I cannot mount the data directory by any means. The permissions on the /mnt directory are set to 777 too. Changing the default data directory to any other folder works perfectly fine. Is this a known or recent bug with MSSQL server?
Yes, there is an issue with using remote storage through NFS and SMB that came up in CTP 2.1 and has not yet been fixed in RC1. See the release notes: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-release-notes#a-idrc1-rc1-july-2017-a
The only workarounds are to use local storage or CTP 2.0. We are working on a fix; the release ETA is TBD at the moment.
