Unable to mount data directory in MSSQL RC1 server setup in linux - sql-server

I'm trying to change the default data directory for SQL Server 2017 RC1 after installation and setup on Linux (Ubuntu 16.10).
I used the following command to set the default data directory, then restarted the SQL Server service.
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /mnt/var/opt/mssql/data/
systemctl restart mssql-server.service
After this I tried to create a simple database "test"
sqlcmd -S localhost -U sa -P "someStrongPassword" -Q "CREATE DATABASE test"
The error returned is as follows:
MODIFY FILE encountered operating system error 31(A device attached to the system is not functioning.) while attempting to expand the physical file '/mnt/var/opt/mssql/data/test.mdf'.
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
The error log indicates an OS error:
/mnt/var/opt/mssql/data/test.mdf: Operating system error 31(A device attached to the system is not functioning.) encountered.
I cannot get the mounted data directory to work by any means. The permissions on the "/mnt" directory are set to 777 too. Changing the default data directory to any other folder works perfectly fine. Is this a known or recent bug with SQL Server?

Yes, there is an issue with using remote storage through NFS and SMB that came up in CTP 2.1 and was not yet fixed in RC1. See the release notes: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-release-notes#a-idrc1-rc1-july-2017-a
The only workarounds are to use local storage or to stay on CTP 2.0. We are working on a fix; there is no release ETA at the moment.
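If you need to unblock yourself in the meantime, a minimal sketch of the local-storage workaround looks like this (the path shown is the default local data directory; adjust as needed):
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /var/opt/mssql/data/
sudo systemctl restart mssql-server.service
sqlcmd -S localhost -U sa -P "someStrongPassword" -Q "CREATE DATABASE test"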

Related

How to change file ownership of DB backup in SQL Server container

I'm following this guide to restore a database backup
https://learn.microsoft.com/en-us/sql/linux/tutorial-restore-backup-in-sql-server-container?view=sql-server-ver15
I used the docker cp command to copy the DB backup files to the container:
docker exec -it SQLContainer mkdir /var/opt/mssql/backup
docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
However, when trying to restore the DB by running the following query in SSMS, an error message is shown:
RESTORE DATABASE MyDB FROM DISK='/var/opt/mssql/backup/MyDB.bak'
Operating system error 5(Access is denied.).
I tried copying with docker cp -a, which sets the file ownership to match the destination, but I got this error:
docker cp -a MyDb.bak SQLContainer:/var/opt/mssql/backup/
Error response from daemon: getent unable to find entry "mssql" in passwd database
I'm using Microsoft's image and I don't know the password for the root user; the container runs as the mssql user, so chown doesn't work either. How can I change the file permissions so the DB restore works?
It turns out that when I copied the database backup files from the Windows host to the Ubuntu machine, the files were owned by the root user and all other users didn't have read permission.
Adding read permission to the file before copying it to the Docker container works, and the server was able to read the files.
sudo chmod a+r MyDb.bak
sudo docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
I had a similar issue just today, attaching MDF and LDF files rather than restoring a backup.
By running chmod go+w on the files before copying them (without the -a flag) I was able to get SQL Server to treat them as writeable. Prior to that I was getting error messages whenever any update was performed.
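For example, a sketch of that approach for attaching data files (the file names here are illustrative, not from the original post):
sudo chmod go+w MyDb.mdf MyDb_log.ldf
sudo docker cp MyDb.mdf SQLContainer:/var/opt/mssql/data/
sudo docker cp MyDb_log.ldf SQLContainer:/var/opt/mssql/data/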

How do I resolve "Failed to parse remote port from server"

I'm setting up a new remote host, and every time I initiate it I get the following error output. Any feedback or direction on how to resolve this issue?
Pseudo-terminal will not be allocated because stdin is not a terminal.
Linux Destiny 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
mesg: ttyname failed: Inappropriate ioctl for device
bash: cannot set terminal process group (3202): Inappropriate ioctl for device
bash: no job control in this shell
mesg: ttyname failed: Inappropriate ioctl for device
Installing...
Downloading with wget
WARNING: tar exited with non-0 exit code
Found running server...
*
* Reminder: You may only use this software with Visual Studio family products,
* as described in the license (https://go.microsoft.com/fwlink/?linkid=2077057)
*
cat: /root/.vscode-remote/.473af338e1bd9ad4d9853933da1cd9d5d9e07dc9.log: No such file or directory
Server did not start successfully. Full server log:
cat: /root/.vscode-remote/.X.log51ec4692-4da4-4ec0-b613-5a3563034cf1====: No such file or directory
"install" terminal command done
Received install output: : No such file or directory
Failed to parse remote port from server output: : No such file or directory
If the server fails to shut down properly, sometimes it leaves dangling lockfiles. This can cause startup to fail and produce the "Failed to parse remote port from server output" error message. In this case the solution is simply to delete the lockfiles:
.vscode-server/bin/[:xdigit:]*/vscode-remote-lock.*
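For example (a sketch; the bin directory name is a commit hash that will differ on your machine):
rm ~/.vscode-server/bin/*/vscode-remote-lock.*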
Fixed the issue. It appears I had two other server agents running incorrectly. I killed both server agents using kill (PID) and removed the ".vscode-remote" directory from the user's home directory. Then I reinitialized Remote-SSH from VS Code and connected successfully!
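Roughly, the cleanup on the remote host looked like this (PIDs will differ; the directory may be ~/.vscode-server on newer versions):
ps aux | grep vscode           # find the stray server agents
kill <PID>                     # repeat for each agent found above
rm -rf ~/.vscode-remote        # remove the server install from the home directory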
On the remote machine you do not have tar installed. It's in the log output:
Installing... Downloading with wget
WARNING: tar exited with non-0 exit code
so as root run:
apt-get install tar
or with sudo, if you have a user with sudoers configured:
sudo apt-get install tar
I also got the same issue and my workaround was to give proper rights to the home or user folder, so VS Code can create the remote folder and do the required installation in it.
Example:
sudo chmod -R 777 home/
In this case, I gave all rights to my home folder and it worked like a charm for all the users.
I ssh'd onto the remote server (Linux) and then deleted both directories as follows:
$ rm -r .vscode-server.backup2022-04-03T16:20:18-05:00
$ rm -r .vscode-server
In case someone else encounters the same issue: I had an instance where the remote target had no space left on the device. After extending the root volume of the target machine, the connection worked fine.
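A quick way to confirm this on the remote machine (standard commands, not part of the original answer):
df -h /                        # check free space on the root volume
du -sh ~/.vscode-server        # see how much the server install is using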
I had the same issue because VS Code was looking for my .vscode-server directory in the wrong location (it was in a custom location due to restrictions on where files can be saved). This can be fixed as described in "How to change vscode-server directory". Specifically, add:
"remote.SSH.lockfilesInTmp": true,
"remote.SSH.serverInstallPath":{
"hostname":"/path/to/.vscode-server/.."
}
to your settings.json.
In my case, it wasn't working because the server was asking for a new password when starting a session. What I did was open a regular terminal (not the VS Code terminal, but your OS default terminal like zsh, cmd, and so on) and use the ssh command to log in. I logged in successfully and changed the password. Then I tried connecting with the new password and it worked, because the server no longer asked for a password change.
Command:
ssh username@IP
Enter the password and you'll be asked to change it. Change the password and try connecting again with the new password using the VS Code Remote-SSH extension.
If you authenticate with an SSH key, also check the value of the User parameter in the VS Code SSH config. That user must have a matching key in ~/.ssh/authorized_keys on the remote host.
@Sachin's answer pointed me in the right direction: VS Code needs permissions in order to create some files, but instead of giving 777 permissions to your home folder (which can be dangerous) you can just chown it to the user that wants to log in (the user for me was ubuntu):
sudo chown -R ubuntu /home
Step 1: Add the port to your SSH config file:
Host hostname
    Port 22
    User username
Step 2: Go to File -> Preferences -> Settings, search for lockfilesInTmp, and check the box next to it.

Deploy the database to Docker Container microsoft/mssql-server-linux

I have a database running on SQL Server (13.01) on Windows. I'd like to deploy it to a Docker container on Linux using SSDT.
I can connect to the server running in Docker just fine, create/drop databases manually, and play with the data.
The problem is that I cannot publish it. I'm executing the following command in PowerShell:
PS> SqlPackage.exe /Action:Publish /SourceFile:"d.dacpac" /TargetConnectionString:"server=containeraddress;database=thedatabase;user id=sa;password=thepassword;"
and getting the following error:
Unable to connect to master or target server 'the database'. You must have a user with the same password in master or target server 'the database'. (Microsoft.Data.Tools.Schema.Sql)
I have the same user and the same password on the target and source servers.
Has anybody had the same problem and knows how to solve it?
I'll post this here as most of the answers assume you have an existing compiled dacpac file, which may not always be possible. I haven't seen the solution I'm suggesting here posted elsewhere.
Given your use of Docker, if you wish to compile your Visual Studio project inside the container, certain combinations of container base OS and image may make it impossible to create a dacpac file with msbuild.
You can work around this and restore the database using a series of Unix commands, noting that a Visual Studio database project is usually just a series of SQL files. Below I show an example of this, where I concatenate the SQL files into a single script and call sqlcmd to run it:
FROM mcr.microsoft.com/mssql/server
WORKDIR /init
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=MyPassword
EXPOSE 1433
RUN apt-get update && apt-get install -y dos2unix
COPY /solution_folder/database/Tables/*.sql /init/
WORKDIR /database
RUN echo "CREATE DATABASE [database_name];\nGO\nUSE [database_name];\n” >> /database/create.sql
RUN for f in /init/*.sql; do dos2unix $f; cat $f >> /database/create.sql; echo "\nGO\n" >> /database/create.sql; done
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'MyPassword' -i /database/create.sql && pkill sqlservr
The reason for "dos2unix" is that the SQL files created within visual studio have unique hidden cr/lf (and other characters) which the linux version of sqlcmd won't interpret successfully and will cause errors (which is kind of bizarre and this is exactly the kind of thing you'd want a cross platform database to be able to cope with)
Also, within the final run command you have to start-up the sql server service temporarily otherwise you'll also get errors; it's a little bit of work-around, and a bit fiddly and I'm not sure entirely that the microsoft sql server linux container is well designed enough for the simple task of restoring a database like this, the nuances are the differences between building and running a container and needing some sort of happy middle ground of both concepts for it to work.
Given here isn't a complete solution to restore, it only deals with Tables from the project file, although it should be trivial to expand to scalar functions and stored procedures.
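A rough usage sketch for the Dockerfile above (image and container names are placeholders):
docker build -t my-database-image .
docker run -d -p 1433:1433 --name my-database my-database-image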
Which version of SqlPackage.exe are you using? Only the most recent release candidate versions of SqlPackage.exe support SQL Server vNext CTP. The SqlPackage release candidate can be downloaded here: https://www.microsoft.com/en-us/download/details.aspx?id=54273

Oracle XE Database Configuration failed

I am trying to create an Oracle XE database on my VPS.
VPS OS: CentOS.
When I try to run
/etc/init.d/oracle-xe configure
it throws a "Database Configuration failed" error and tells me to check the logs, but the logs just show
ORA-01034: ORACLE not available
Below is the history...
[root@vmcx-43 Disk1]# rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
Preparing... ########################################### [100%]
/var/tmp/rpm-tmp.51363: line 186: bc: command not found
1:oracle-xe /var/tmp/rpm-tmp.51363: line 186: bc: command not found ########################################### [100%]
Executing post-install steps...
/var/tmp/rpm-tmp.97984: line 76: bc: command not found
/var/tmp/rpm-tmp.97984: line 77: bc: command not found
/var/tmp/rpm-tmp.97984: line 78: [: -gt: unary operator expected
/var/tmp/rpm-tmp.97984: line 82: bc: command not found
You must run '/etc/init.d/oracle-xe configure' as the root user to configure the database.
[root@vmcx-43 Disk1]# /etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
-------------------------------------------------
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:
Specify a port that will be used for the database listener [1521]:
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Password can't be null. Enter password:
Password can't be null. Enter password:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]: n
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@vmcx-43 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@vmcx-43 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@vmcx-43 log]# tail postScripts.log
commit
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
[root@vmcx-43 log]# tail CloneRmanRestore.log
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
Add your server's name and IP to the /etc/hosts file.
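For example (the IP address below is a placeholder; vmcx-43 is the hostname from the session above), run as root or with sudo:
echo "192.168.1.10  vmcx-43" | sudo tee -a /etc/hosts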
I had the same issues.
I uninstalled oracle-xe. See How to reconfigure Oracle 10g xe on Linux.
Then I followed:
yum install bc
rpm -i oracle-xe.rpm
/etc/init.d/oracle-xe configure
Everything went fine.
yum install bc
Then try again.
OK, the solution may sound weird, but today I got the exact same error while installing Oracle XE on CentOS. I struggled a lot to find the answer, but in the end the problem was the way I was installing the RPM.
Initially I used the command
$rpm -ivh oracle-xe.rpm
and somehow it was giving the same error you are getting.
After that I tried
$rpm -i oracle-xe.rpm
and it worked for me. I'm not sure why the "-h" (hash progress) flag would cause an issue, but it did for me.
For Debian: how to install Oracle XE from the rpm.
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
nano /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora
comment out the line: # memory_target=100663296
/etc/init.d/oracle-xe configure // will work
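Equivalently, the line can be commented out without an editor (a sketch; verify the path on your install):
sed -i 's/^memory_target=/# memory_target=/' /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora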
I too faced a similar issue on Linux Mint 17.3. Fortunately, I found the solution fairly quickly. The issue is simply that your shared memory filesystem is not where Oracle expects it to be, i.e. /dev/shm; instead it is at /run/shm, with /dev/shm linking to it.
So, to resolve this issue, before configuring the database, perform the steps below in order:
$ sudo rm -rf /dev/shm
$ sudo mkdir /dev/shm
$ sudo mount -t tmpfs shmfs -o size=2048m /dev/shm
I have tested it and it works perfectly.
After googling 'oracle sucks' in frustration over the lack of logging from the installation, I managed to resolve the issue that caused the configuration to fail on a Docker container running the Hortonworks HDP 2.6 Sandbox:
Oracle XE requires 1 GB of shared memory and fails otherwise (I didn't try 512 MB), according to https://blogs.oracle.com/oraclewebcentersuite/implement-oracle-database-xe-as-docker-containers.
vi /etc/fstab
change/add the line to:
tmpfs /dev/shm tmpfs defaults,size=1024m 0 0
Then reload the configuration by:
mount -a
Keep in mind that if you later restart the Docker container you might have to run 'mount -a' once more, as it starts with the container's default of ~65 MB.
Normally the failed configuration will still have succeeded in creating a listener, and you will have to kill it before rerunning the configuration:
ps -aux | grep tnslsnr
kill {process identified in the step above}
Lost a full day to this one as none of the other answers on this page worked for me (Ubuntu).
Proper instructions were here.
The main trick missing from other tutorials was to execute
sed -i 's,/var/lock/subsys,/var/lock,' /etc/init.d/oracle-xe
before
/etc/init.d/oracle-xe configure
Check the permissions on /u01/.
In my case these were set to root:root; I changed this to oracle:dba and it worked for me.
But before that I tried the following:
Setting up the IP/hostname in /etc/hosts
Installing bc and reinstalling Oracle
Neither of those steps worked for me, but after I uninstalled and reinstalled oracle-xe, changed the ownership, and then ran the configure command, it worked.
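The ownership change itself was just the following (oracle:dba being the user and group created by the XE package on my system):
chown -R oracle:dba /u01    # run as root, or prefix with sudo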

PostgreSQL 9: could not fsync file "base/16386": Invalid argument

I'm trying to test a small PostgreSQL setup, so I cobbled together a quick local install. However, when I try to create my personal db with createdb, it chokes on errors like this (notably, it starts with base/16384 the first time and increments each time I run it). Does anyone know what's going on here, or whether there's some trivial config I missed that would cause this? This is somewhat time-critical, so please respond if you know anything. Thanks!
UPDATES:
I'm running this on a CentOS 5 server; apologies that I don't have many further details (it's a shared account on that server). uname -a gives the following output:
Linux {OMITTED} 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
I installed PostgreSQL from source from:
http://wwwmaster.postgresql.org/download/mirrors-ftp/source/v9.0.1/postgresql-9.0.1.tar.bz2
built it in my home directory and installed it with prefix=$HOME/local/pgsql.
Here's a terminal readout of me attempting to create my user's db on a fresh data directory:
[htung@{OMITTED}:~]$ killall postgres
LOG: autovacuum launcher shutting down
LOG: received smart shutdown request
LOG: shutting down
LOG: database system is shut down
[htung@{OMITTED}:~]$ rm -r tmp
mk[1]+ Done ../local/pgsql/bin/postgres -D $HOME/tmp (wd: ~/tmp)
(wd now: ~)
[htung@{OMITTED}:~]$ mkdir tmp
[htung@{OMITTED}:~]$ local/pgsql/bin/initdb -D $HOME/tmp
The files belonging to this database system will be owned by user "htung".
This user must also own the server process.
The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".
fixing permissions on existing directory /afs/{OMITTED}/htung/tmp ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
creating template1 database in /afs/{OMITTED}/htung/tmp/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.
Success. You can now start the database server using:
local/pgsql/bin/postgres -D /afs/{OMITTED}/htung/tmp
or
local/pgsql/bin/pg_ctl -D /afs/{OMITTED}/htung/tmp -l logfile start
[htung@{OMITTED}:~]$ local/pgsql/bin/postgres -D $HOME/tmp
LOG: database system was shut down at 2010-11-15 13:47:25 PST
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
[1]+ Stopped local/pgsql/bin/postgres -D $HOME/tmp
[htung@{OMITTED}:~]$ bg
[1]+ local/pgsql/bin/postgres -D $HOME/tmp &
[htung@{OMITTED}:~]$ local/pgsql/bin/createdb
ERROR: could not fsync file "base/16384": Invalid argument
STATEMENT: CREATE DATABASE htung;
createdb: database creation failed: ERROR: could not fsync file "base/16384": Invalid argument
[htung@{OMITTED}:~]$
I would guess that you're possibly running into SELinux here. I'd recommend either turning SELinux off to see if that helps, or installing from the RPMs available from the PostgreSQL website.
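If you want to test the SELinux theory before reinstalling, a quick check looks like this (standard commands, not specific to PostgreSQL; requires root or sudo access):
getenforce            # prints Enforcing, Permissive, or Disabled
sudo setenforce 0     # temporarily switch to permissive mode, then retry createdb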
