How do I connect to the PostgreSQL database with the following connection info?
I'm using Jupyter Notebook.
from sqlalchemy import create_engine
POSTGRES_DIALECT = 'postgresql'
POSTGRES_SERVER = 'server'
POSTGRES_DBNAME = 'db'
POSTGRES_SCHEMA = 'public'
POSTGRES_USERNAME = 'user'
POSTGRES_PASSWORD = 'password'
postgres_str = ('{dialect}://{username}:{password}@{server}:{schema}/{dbname}'.format(
    dialect=POSTGRES_DIALECT,
    server=POSTGRES_SERVER,
    dbname=POSTGRES_DBNAME,
    schema=POSTGRES_SCHEMA,
    username=POSTGRES_USERNAME,
    password=POSTGRES_PASSWORD
))
# Create the connection
cnx = create_engine(postgres_str)
ValueError: invalid literal for int() with base 10: 'public'
You are substituting {schema} where the port belongs. The URL format is dialect://username:password@host:port/dbname, and 'public' is not a valid port number. The schema is not part of the connection URL at all.
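A minimal sketch of a corrected string, reusing the variables defined above; the port value 5432 is an assumption (the PostgreSQL default), so adjust it to your server:
from sqlalchemy import create_engine

POSTGRES_PORT = '5432'  # assumed default port, not given in the question

postgres_str = '{dialect}://{username}:{password}@{server}:{port}/{dbname}'.format(
    dialect=POSTGRES_DIALECT,
    username=POSTGRES_USERNAME,
    password=POSTGRES_PASSWORD,
    server=POSTGRES_SERVER,
    port=POSTGRES_PORT,
    dbname=POSTGRES_DBNAME
)
cnx = create_engine(postgres_str)
# If you need a non-default schema, pass it where you query;
# e.g. pandas' read_sql and to_sql accept a schema= argument. It does not go in the URL.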
I'm trying to load a CSV into a PostgreSQL database with SQLAlchemy, but I get the following error:
OperationalError: (psycopg2.OperationalError) connection to server at "localhost"
(::1), port 5432 failed: FATAL: password authentication failed for user "jim"
How do I fix the password authentication? Here is the code:
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, String, Date, Float

engine = create_engine('postgresql+psycopg2://jim:password@localhost:5432/travel')
session = Session(engine)
Base = declarative_base()
class travel(Base):
    __tablename__ = 'travel_full'
    id = Column(Integer, primary_key=True)
    year = Column(Date)
    quarter = Column(Integer)
    mode = Column(String(55))
    country = Column(String(55))
    purpose = Column(String(55))
csv = 'travel.csv'
df = pd.read_csv(csv)
df.to_sql(con=engine, index_label='id', name=travel.__tablename__, if_exists='replace')
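This error comes from the server rejecting the credentials, so first verify the username/password and the pg_hba.conf authentication rules. One client-side cause worth ruling out is a password with special characters breaking the URL. A minimal sketch using sqlalchemy.engine.URL.create (SQLAlchemy 1.4+), which escapes the password for you; the credentials here are placeholders:
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

url = URL.create(
    drivername='postgresql+psycopg2',
    username='jim',
    password='password',  # placeholder; special characters are escaped automatically
    host='localhost',
    port=5432,
    database='travel',
)
engine = create_engine(url)
with engine.connect() as conn:
    pass  # raises immediately if authentication still fails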
I want to connect Azure MS SQL Database with Azure Databricks via Python Spark. I can run a pushdown query like Select * from...., but I need to run ALTER DATABASE to scale up/down.
I must change this part
spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
otherwise I get this error: Incorrect syntax near the keyword 'ALTER'.
Can anyone help? Much appreciated.
jdbcHostname = "xxx.database.windows.net"
jdbcDatabase = "abc"
jdbcPort = 1433
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": "..............",
    "password": "............",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
pushdown_query = "(ALTER DATABASE [DBNAME] MODIFY (SERVICE_OBJECTIVE = 'S0')) dual_down"
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
display(df)
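spark.read.jdbc wraps whatever you pass as table in a SELECT subquery, which is why DDL such as ALTER DATABASE can never run through it. A sketch of one common workaround: open a plain JDBC connection and execute the statement directly. Note that spark._sc._gateway is an internal, unsupported handle, so treat this as an assumption about your runtime; jdbcUrl and connectionProperties are reused from above.
# Open a raw JDBC connection via the JVM's DriverManager
driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
con = driver_manager.getConnection(
    jdbcUrl, connectionProperties["user"], connectionProperties["password"]
)
stmt = con.createStatement()
# DDL returns no result set, so call execute() instead of going through spark.read
stmt.execute("ALTER DATABASE [DBNAME] MODIFY (SERVICE_OBJECTIVE = 'S0')")
stmt.close()
con.close()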
I have a Perl application hosted on Heroku which needs to connect to a SQL Server instance. I am unable to establish a connection. It fails with the following error:
DBI connect('Driver={ODBC Driver 11 For SQL Server}; Server=****; UID=****; PWD=****','',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified (SQL-IM002) at work.pl line 12.
This is the code.
work.pl
use strict;
use warnings;
use DBI;
my $DRIVER = '{ODBC Driver 11 for SQL Server}';
my $SERVER = '****';
my $UID = '****';
my $PWD = '****';
my $x = DBI->connect("dbi:ODBC:Driver=$DRIVER; Server=$SERVER; UID=$UID; PWD=$PWD");
Relevant Environment Vars:
LIBRARY_PATH=/app/.platform/vendor/usr/lib64:
LD_LIBRARY_PATH=/app/.platform/vendor/usr/lib64:
PATH=/app/.platform/vendor/usr/bin:/app/vendor/perl/bin:/usr/bin:/bin
LANG=en_US.UTF-8
ODBCSYSINI=/app/.platform/vendor/etc
HOME=/app
PWD=/app
ODBCINI=/app/.platform/vendor/etc/odbc.ini
ODBCHOME=/app/.platform/vendor/etc/
PERL5OPT=-Mlocal::lib=/app/vendor/perl-deps
odbcinst.ini:
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbc.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/libmyodbc5.so
Setup = /usr/lib/libodbcmyS.so
Driver64 = /usr/lib64/libmyodbc5.so
Setup64 = /usr/lib64/libodbcmyS.so
FileUsage = 1
[ODBC Driver 11 for SQL Server]
Description = Microsoft ODBC Driver 11 for SQL Server
Driver = .platform/vendor/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0
Threading = 1
UsageCount = 1
odbc.ini is empty.
Am I missing something?
Tried everything and found the reason: it should be dbi:ODBC:DRIVER instead of dbi:ODBC:Driver in the connect string. The keyword is case-sensitive.
I have a SQL Server instance with databases whose data I want to alter using pandas. I know how to read the data into a DataFrame with pyodbc, but then I have no clue how to get that DataFrame back into my SQL Server.
I have tried to create an engine with SQLAlchemy and use the to_sql command, but I cannot get that to work because my engine is never able to connect correctly to my database.
import pyodbc
import pandas
server = "server"
db = "db"
conn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+db+';Trusted_Connection=yes')
cursor = conn.cursor()
df = cursor.fetchall()
data = pandas.DataFrame(df)
conn.commit()
You can use pandas.DataFrame.to_sql to insert your dataframe into SQL Server; any database supported by SQLAlchemy works with this method.
Here is an example of how you can achieve this:
from sqlalchemy import create_engine
from urllib.parse import quote_plus
import logging
import sys
import numpy as np
from datetime import datetime

# set up logging
logging.basicConfig(stream=sys.stdout,
                    format='%(asctime)s.%(msecs)3d %(levelname)s:%(name)s: %(message)s',
                    datefmt='%m-%d-%Y %H:%M:%S',
                    level=logging.DEBUG)
logger = logging.getLogger(__name__)  # logger named after the module
def write_to_db(df, database_name, table_name):
    """
    Creates a sqlalchemy engine and writes the dataframe to the database.
    """
    # replace infinity with NaN so the database can store the values
    df = df.replace([np.inf, -np.inf], np.nan)

    user_name = 'USERNAME'
    pwd = 'PASSWORD'
    db_addr = '10.00.000.10'
    chunk_size = 40

    # build an ODBC connection string and URL-quote it for SQLAlchemy
    conn = "DRIVER={SQL Server};SERVER=" + db_addr + ";DATABASE=" + database_name + ";UID=" + user_name + ";PWD=" + pwd
    quoted = quote_plus(conn)
    new_con = 'mssql+pyodbc:///?odbc_connect={}'.format(quoted)

    # create sqlalchemy engine
    engine = create_engine(new_con)

    # write to DB
    logger.info("Writing to database ...")
    st = datetime.now()  # start time
    # WARNING!! -- overwrites the table using if_exists='replace'
    df.to_sql(table_name, engine, if_exists='replace', index=False, chunksize=chunk_size)
    logger.info("Database updated ...")
    logger.info("Data written to '{}' database into '{}' table ...".format(database_name, table_name))
    logger.info("Time taken to write to DB: {}".format((datetime.now() - st).total_seconds()))
Calling this method should write your dataframe to the database; note that it will replace the table if one with the same name already exists.
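A quick usage sketch; the sample data, database name, and table name are made-up placeholders:
import pandas as pd

sample = pd.DataFrame({'id': [1, 2, 3], 'value': [0.1, 0.2, 0.3]})
write_to_db(sample, database_name='db', table_name='sample_table')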
I am trying to connect to a Redshift server and run some SQL commands. Here is the code that I have written:
Class.forName("org.postgresql.Driver")
val url: String = s"jdbc:postgres://${user}:${password}@${host}:${port}/${database}"
val connection: Connection = DriverManager.getConnection(url, user, password)
val statement = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
val setSearchPathQuery: String = s"set search_path to '${schema}';"
statement.execute(setSearchPathQuery)
But I am getting the following error:
java.sql.SQLException: No suitable driver found for jdbc:postgres://user:password@host:port/database
But when I use the Play framework's default database library with the same configuration, I am able to connect to the database successfully. Below is the configuration for the default database:
db.default.driver=org.postgresql.Driver
db.default.url="postgres://username:password@hostname:port/database"
db.default.host="hostname"
db.default.port="port"
db.default.dbname = "database"
db.default.user = "username"
db.default.password = "password"
The problem was with the url. The JDBC subprotocol is postgresql, not postgres, and the credentials belong in the getConnection(url, user, password) call rather than embedded in the URL. The correct format is:
jdbc:postgresql://hostname:port/database