MSSQL-AZURE-EDGE sqlcmd not found - sql-server

I have a Dockerfile that only pulls the latest azure-sql-edge image, accepts the EULA, and starts SQL Server together with sqlcmd. But during startup it throws an error that sqlcmd was not found. I tried bringing the container up without accepting the EULA, to check whether sqlcmd exists, and it was never downloaded. I can't figure out why.
It works on an M1 Mac.
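For reference, a minimal sketch of the kind of setup described, using the public mcr.microsoft.com/azure-sql-edge image (the password and port mapping are placeholders):
# run Azure SQL Edge, accepting the EULA via environment variables
docker run -d --name sqledge \
  -e "ACCEPT_EULA=1" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/azure-sql-edge:latest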

Related

cx_Oracle in Azure Databricks

I am unable to establish a connection to my Oracle database from Azure Databricks, although it works in ADF, where I am able to query the table. But ADF takes time to filter the records, so I am still trying to connect from Databricks.
I followed the steps from this Microsoft link, both manually and using an init script, but the error seems to persist.
When I looked into my cluster event log, it says the init-script execution was successful.
Error message when I tried to establish the connection:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "/databricks/driver/oracle_ctl//lib/libclntsh.so: cannot open shared object file: No such file or directory".
When I executed the following command
dbutils.fs.ls("/databricks/driver/")
there was no such directory
This triggered me to post some questions here:
Does this mean the init-script did not perform its job?
Is /databricks/driver/oracle_ctl a hidden directory for dbutils.fs.ls?
The error message points to /databricks/driver/oracle_ctl//lib/libclntsh.so, but when I manually inspected the downloaded Oracle client there was no folder called lib, although libclntsh.so exists in the main directory. Is Databricks checking the wrong directory for libclntsh.so?
Does this connection still work for others?
Syntax for connection: cx_Oracle.connect(user=user_name, password=password, dsn=IP+':'+Port+'/'+DB_name)
The above syntax works fine when connecting from an on-premises machine.
Try installing the latest major release of cx_Oracle, which was renamed to python-oracledb; see the release announcement.
By default this version runs in a "Thin" mode that doesn't need Oracle Instant Client. The API is the same as cx_Oracle, although obviously the module name is different.
If I understand the instructions, your init script would do something like:
/databricks/python/bin/pip install oracledb
Application code would be like:
import oracledb
connection = oracledb.connect(user='scott', password=mypw, dsn='yourdbhostname/yourdbservicename')
with connection.cursor() as cursor:
    for row in cursor.execute('select city from locations'):
        print(row)
Resources:
Home page: oracle.github.io/python-oracledb/
Quick start: Quick Start python-oracledb Installation
Documentation: python-oracledb.readthedocs.io/en/latest/index.html
PyPI: pypi.org/project/oracledb/
Source: github.com/oracle/python-oracledb
Upgrading: Upgrading from cx_Oracle 8.3 to python-oracledb
I changed the path from "/databricks/driver/oracle_ctl/" to "/databricks/driver/oracle_ctl/instantclient" in the init script, and that error no longer appears.
Please use the following init script instead
dbutils.fs.put("dbfs:/databricks/<init-script-folder-name>/oracle_ctl.sh","""
#!/bin/bash
sudo apt-get install -y libaio1
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
mv /databricks/driver/oracle_ctl/instantclient* /databricks/driver/oracle_ctl/instantclient
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/instantclient/"' >> /databricks/spark/conf/spark-env.sh
""", True)
Notes:
The above init-script was advised by a databricks employee and can be found here.
As mentioned by Christopher Jones in one of the comments, cx_Oracle has recently been superseded by python-oracledb, which comes in Thin and Thick modes.
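After attaching the updated init script and restarting the cluster, a quick sanity check that the client landed where the script put it (a sketch, using the paths from the script above):
%sh
# libclntsh.so should be listed here
ls /databricks/driver/oracle_ctl/instantclient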
You will get the above error if you don't have the Oracle Instant Client on your cluster.
To resolve the above error in Azure Databricks, run the following in a notebook cell (the driver node runs Linux, so use the Linux Instant Client):
%sh
mkdir -p /opt/oracle
cd /opt/oracle
wget --quiet https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip instantclient-basiclite-linuxx64.zip
# assumes a single instantclient_* directory was extracted
export ORACLE_HOME=$(echo /opt/oracle/instantclient_*)
export TNS_ADMIN=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export PATH=$ORACLE_HOME:$PATH
Note that exports in a %sh cell only last for that shell session; to persist them, use the init script below.
To create the init script, use the following code, as per the official doc:
dbutils.fs.put("dbfs:/databricks/<init-script-folder>/oracle_ctl.sh","""
#!/bin/bash
wget --quiet -O /tmp/instantclient-basiclite-linuxx64.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip
unzip /tmp/instantclient-basiclite-linuxx64.zip -d /databricks/driver/oracle_ctl/
sudo echo 'export LD_LIBRARY_PATH="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
sudo echo 'export ORACLE_HOME="/databricks/driver/oracle_ctl/"' >> /databricks/spark/conf/spark-env.sh
""", True)
To read data from an Oracle database in PySpark, follow this article by Emrah Mete.
For more information, refer to this official document:
https://docs.databricks.com/data/data-sources/oracle.html#oracle
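Once the client is in place, a quick smoke test of the connect call from the question (user_name, password, IP, Port, and DB_name are the placeholders from the original post):
import cx_Oracle
connection = cx_Oracle.connect(user=user_name, password=password, dsn=IP + ':' + Port + '/' + DB_name)
print(connection.version)  # prints the Oracle server version if the connection succeeded
connection.close()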

Getting snowsql bootstrap could not verify the signature error when initializing snowsql after successful install

I installed snowsql-1.2.17-windows_x86_64.msi from the installer. After installing, I am getting the
below error when running the following command. Please advise.
snowsql -a XXXXXXX -u YYYYYY
Installing version: 1.2.17 [####################################] 100%
snowsql bootstrap could not verify the signature for the downloaded file. This snowsql command is not legitimate code. Enable the debug log -o log_level=DEBUG.
Failed to download snowsql. Check network connectivity: 1.2.17
I think it's because SnowSQL is trying to automatically update to the latest version but the signature file is still from the installed version.
In your config file, under the [options] section, add noup = True (the default is False, which means update when available), and try again. I haven't tested this, but I think you can also add an option at launch like this:
snowsql -a XXXXXXX -u YYYYYY --noup
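For example, in the SnowSQL config file (~/.snowsql/config on Linux/macOS, %USERPROFILE%\.snowsql\config on Windows; shown as a sketch, untested):
[options]
# disable auto-upgrade so snowsql runs the installed version
noup = True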
Based on Nick's answer, this seems like a bug so let's hope we don't see these errors going forward.
Open cmd as administrator. Execute the command:
snowsql --noup
Close the window and now run snowsql as administrator. It should open, show a bunch of options and close automatically. Your snowsql should run fine now.
Snowflake reviewed the issue and found that the signature file was incorrect. It has been replaced with a correct signature file and you should not see this issue going forward.

Getting error on Windows 7 while executing "npm run deploy". I am trying to deploy my code to pages.github.com

bash: /dev/tty: No such device or address
error: failed to execute prompt script (exit code 1)
fatal: could not read Username for 'https://github.com': No error
https://github.com/atom/atom/issues/8984
Same problem but I worked around it by changing the git remote url to
include my git username and password.
List your remote URL using:
git remote -v
Update the remote URL to include your username and password just before github.com:
git remote set-url origin https://<username>:<password>@github.com/<username>/<repo>.git
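For example, with a hypothetical user jdoe, a personal access token, and a repository my-pages:
git remote set-url origin https://jdoe:my-token@github.com/jdoe/my-pages.git
git remote -v   # confirm the new URL took effect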
Not sure why this was necessary, I thought I'd be prompted for my
credentials if git didn't know them.
More details, hopefully relevant:
OS: Windows 10. Git version: 2.6.0 from http://git-scm.com/. Recently did a Windows re-install.
Good video: https://www.youtube.com/watch?v=SKXkC4SqtRk

Deploy the database to Docker Container microsoft/mssql-server-linux

I have a database running on SQL Server (13.01) on Windows. I'd like to deploy it to a Docker container on Linux using SSDT.
I can connect perfectly to the server running in Docker and create/drop databases manually and play with the data.
The problem is I cannot publish it. I'm executing the following command in PowerShell:
PS> SqlPackage.exe /Action:Publish /SourceFile:"d.dacpac" /TargetConnectionString:"server=containeraddress;database=thedatabase;user id=sa;password=thepassword;"
and getting the following error.
Unable to connect to master or target server 'the database'. You must have a user with the same password in master or target server 'the database'. (Microsoft.Data.Tools.Schema.Sql)
I have the same user and same password on target and source servers.
Has anybody had the same problem, and does anyone know how to solve it?
I'll post this here, as most of the answers refer to having an existing compiled dacpac file, which may not always be possible. I haven't seen an idea similar to the solution I'm suggesting here posted elsewhere.
Given your usage of Docker, if you wish to compile your Visual Studio project inside the container, note that with certain combinations of container base OS and image it may not be possible to create a dacpac file with msbuild.
You can work around this and restore the database using a series of Unix-based commands, noting that a Visual Studio database project is usually just a series of SQL files. Below I show an example of this, where I concatenate the SQL files into a single file and call sqlcmd to run the script:
FROM mcr.microsoft.com/mssql/server
WORKDIR /init
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=MyPassword
EXPOSE 1433
# dos2unix strips the Windows line endings that trip up the Linux sqlcmd
RUN apt-get update && apt-get install -y dos2unix
COPY /solution_folder/database/Tables/*.sql /init/
WORKDIR /database
RUN echo "CREATE DATABASE [database_name];\nGO\nUSE [database_name];\n" >> /database/create.sql
RUN for f in /init/*.sql; do dos2unix $f; cat $f >> /database/create.sql; echo "\nGO\n" >> /database/create.sql; done
# start sqlservr just long enough to run the generated script, then stop it
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'MyPassword' -i /database/create.sql && pkill sqlservr
The reason for "dos2unix" is that the SQL files created within Visual Studio have hidden CR/LF (and other) characters which the Linux version of sqlcmd won't interpret successfully and which cause errors (which is kind of bizarre; this is exactly the kind of thing you'd want a cross-platform database to cope with).
Also, within the final RUN command you have to start up the SQL Server service temporarily, otherwise you'll also get errors. It's a bit of a workaround and a bit fiddly, and I'm not sure the Microsoft SQL Server Linux container is well designed enough for the simple task of restoring a database like this; the nuances are the differences between building and running a container, and this needs some sort of happy middle ground of both concepts to work.
What's given here isn't a complete solution to a restore; it only deals with tables from the project file, although it should be trivial to extend it to scalar functions and stored procedures.
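To use the above, a build-and-run sketch (the image tag is arbitrary):
docker build -t my-mssql-db .
docker run -d -p 1433:1433 my-mssql-db
# then connect with any SQL client on localhost,1433 as SA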
Which version of SqlPackage.exe are you using? Only the most recent release candidate versions of SqlPackage.exe support SQL Server vNext CTP. The SqlPackage release candidate can be downloaded here: https://www.microsoft.com/en-us/download/details.aspx?id=54273

postgresql - start on mac - `pg_ctl` not working

I want to use the pre-installed postgresql on my local machine (mac os 10.7.5), when I run which psql I find it (in /usr/bin/psql), but then running
pg_ctl -D /usr/bin/psql -l /usr/bin/psql/server.log start
results in:
-bash: pg_ctl: command not found
How can I start/use my PostgreSQL database? Do I need to install it (with, say, Homebrew), or can I use the pre-installed one on my Mac?
I also tried the initdb command (initdb /usr/bin/psql -E utf8) and got the same message: -bash: initdb: command not found.
Also, is this psql the same as postgres? (I tried which postgres and got nothing)
Update: I'm using psql commands in the command line, but am getting the following message (for psql -l and psql -a, for instance):
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
OS X ships with the command line client (for interacting with Postgres databases), not the server.
You need to install the server.
Check the postgres site or grab the postgres.app
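If you go the Homebrew route, a minimal sketch (assumes an older Homebrew PostgreSQL whose default data directory is /usr/local/var/postgres; note that -D must point at a data directory, not /usr/bin/psql, which is the client binary):
brew install postgresql
initdb /usr/local/var/postgres -E utf8
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
psql -l   # should now list your databases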
