I want to read a table from SQL Server using SQLAlchemy. The table already exists and has a primary key, but it is located in the schema 'my_schema', and I can't reach it.
In contrast, using the following code I can reach a table in another database, which does not have schemas:
from sqlalchemy import create_engine, MetaData, Column, String, Table
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
no_schema_engine = create_engine(
    "firebird+fdb://%(user)s:%(pwd)s@%(host)s:%(port)d/%(path)s?charset=%(charset)s" % insert_params,
    encoding=insert_params['charset'])
metadata = MetaData()
my_table = Table('my_table_name', metadata, Column('id', String, primary_key=True),
                 autoload=True, autoload_with=no_schema_engine)
session = Session(no_schema_engine)
sample_row = session.query(my_table).first()
print(sample_row)
>> (1, datetime.datetime(2020, 9, 8, 22, 58, 23, 947000))
When I change the engine to connect to SQL Server to copy the same table (which now lives in a schema), it throws the error sqlalchemy.exc.NoSuchTableError: my_table_name.
I use the same code as above, except that I change the engine and the line:
my_table = Table('my_table_name', metadata, Column('id', String, primary_key=True),
                 autoload=True, autoload_with=schema_engine, schema='my_schema')
It's also important to note that SQLAlchemy can actually see all the schemas in the SQL Server database, but it can't see the tables:
from sqlalchemy import inspect
inspector = inspect(source_engine)
print(inspector.get_table_names())
>> []
schemas = inspector.get_schema_names()
print(schemas)
>> ['my_schema_1', 'my_schema_2', ...]
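For what it's worth, get_table_names() only inspects the connection's default schema (usually dbo on SQL Server), so an empty list does not necessarily mean the tables are missing. A minimal sketch, reusing the inspector from above, to check whether the table is visible inside a specific schema:
# Pass the schema explicitly; without it only the default schema is searched
print(inspector.get_table_names(schema='my_schema'))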
Related
Using snowflake-sqlalchemy, is there a way to use the declarative base to join tables across database boundaries? e.g.:
# This table is in database1
meta = MetaData(schema="Schema1")
Base = declarative_base(metadata=meta)

class Table1(Base):
    __tablename__ = 'Table1'
    ...

# This table is in database2
meta = MetaData(schema="Schema2")
Base = declarative_base(metadata=meta)

class Table2(Base):
    __tablename__ = 'Table2'
    ...

# I want to do this...
session.query(Table1).join(Table2).filter(Table1.id > 1).all()
###
# The engine specifies database1 as the default db, as such the query builder
# assumes Table2 is in database1.
The account specified in the engine connection params has access to both databases. I would prefer not to use raw SQL for this... for reasons.
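One thing worth trying (a sketch, not verified against snowflake-sqlalchemy here) is to qualify the schema with the database name for the out-of-database model, so the compiled SQL references the second database explicitly. "database2.Schema2" below is a placeholder for the real names:
from sqlalchemy import Column, Integer, MetaData
from sqlalchemy.ext.declarative import declarative_base

# Fully qualified "<database>.<schema>" for the table living in the other database
meta2 = MetaData(schema="database2.Schema2")
Base2 = declarative_base(metadata=meta2)

class Table2(Base2):
    __tablename__ = 'Table2'
    id = Column(Integer, primary_key=True)
With both models mapped this way, the join in the question should compile with a fully qualified name for Table2 instead of falling back to the engine's default database.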
I'm copying a table from SQL Server to Firebird. I have a column of type BIT in SQL Server, but Firebird doesn't know this type. How can I change the type of the column so I can create the table in my Firebird database?
from sqlalchemy import create_engine, MetaData, Table, Column
from sqlalchemy.orm import Session
# Get table from SQL Server
source_engine = create_engine(connection_url_source)
dest_engine = create_engine(connection_url_dest)
metadata = MetaData()
table = Table('my_table', metadata, autoload=True, autoload_with=source_engine, schema='my_schema')
session = Session(source_engine)
query = session.query(table)
# Create table in firebird database
new_metadata = MetaData(bind=dest_engine)
columns = [Column(desc['name'], desc['type']) for desc in query.column_descriptions]
column_names = [desc['name'] for desc in query.column_descriptions]
table_new = Table("my_table", new_metadata, *columns)
table_new.create(dest_engine)
Here I receive the error:
sqlalchemy.exc.CompileError: (in table 'my_table', column 'my_column'):
Compiler <sqlalchemy_firebird.base.FBTypeCompiler object at 0x00000061ADAC8D60>
can't render element of type BIT
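A possible workaround (a sketch, assuming the reflected column type comes back as the mssql dialect's BIT class) is to swap BIT for a type Firebird can render before building the destination table:
from sqlalchemy import Column, SmallInteger
from sqlalchemy.dialects.mssql import BIT

columns = []
for desc in query.column_descriptions:
    col_type = desc['type']
    if isinstance(col_type, BIT):
        # Firebird has no BIT; store the flag as a small integer (or Boolean) instead
        col_type = SmallInteger()
    columns.append(Column(desc['name'], col_type))
table_new = Table("my_table", new_metadata, *columns)
table_new.create(dest_engine)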
I am accessing the other database using elastic queries. The data source was created like this:
CREATE EXTERNAL DATA SOURCE TheCompanyQueryDataSrc WITH (
TYPE = RDBMS,
--CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly',
CREDENTIAL = ElasticDBQueryCred,
LOCATION = 'thecompanysql.database.windows.net',
DATABASE_NAME = 'TheCompanyProd'
);
To reduce the database load, a read-only replica was created and should be used. As far as I understand it, I should add CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly' (commented out in the code above). However, I only get the error Incorrect syntax near 'CONNECTION_OPTIONS'.
Both databases (the one that defines the connection and the external tables, and the other that is to be read-only) are on the same server (thecompanysql.database.windows.net). Both are set to compatibility level SQL Server 2019 (150).
What else should I set to make it work?
The CREATE EXTERNAL DATA SOURCE syntax doesn't support the option CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly'; we can't use it in these statements.
If you want to achieve read-only access, the workaround is to use a user account that only has read-only (db_datareader) permission to log in to the external database.
For example:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH
IDENTITY = '<username>' , -- read-only user account
SECRET = '<password>' ;
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH
( TYPE = RDBMS ,
LOCATION = '<server_name>.database.windows.net' ,
DATABASE_NAME = 'Customers' ,
CREDENTIAL = SQL_Credential
) ;
Since the option is not supported, we can't use it with an elastic query. ApplicationIntent=ReadOnly can only be specified on the client connection itself, for example in the connection options when connecting with SSMS.
HTH.
I am working with SQLAlchemy and MS SQL Server, and I would like to create a unique constraint that allows multiple NULL values.
I know that MS SQL Server does not ignore NULL values and considers them a violation of the UNIQUE constraint.
I also know how to fix it with SQL code (see here).
But is there a way to do the same thing with SQLAlchemy directly?
Here is my code:
class Referential(db.Model):
    __tablename__ = "REFERENTIAL"
    id = db.Column("ID", Integer, primary_key=True, autoincrement=True)
    name = db.Column("NAME", String(100), index=True, unique=True, nullable=False)
    internal_code = db.Column("INTERNAL_CODE", String(50), unique=True, index=True)
Thanks in advance
MSSQL's implementation when it comes to allowing nulls in a unique column is a little odd.
import sqlalchemy as sa

metadata = sa.MetaData()
table = sa.Table(
    'table', metadata,
    sa.Column('column', sa.String(50), nullable=True),
    sa.Index('uq_column_allows_nulls', 'column', unique=True,
             mssql_where=sa.text('column IS NOT NULL')),
)
If you are planning on using Alembic like I was, this is the code:
import sqlalchemy as sa
from alembic import op

op.create_index(
    index_name='uq_column_name',
    table_name='table',
    columns=['column'],
    unique=True,
    mssql_where=sa.text('column IS NOT NULL'),
)
This uses SQLAlchemy's SQL expression text() and create_index's dialect-specific keyword argument mssql_where=.
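Applied to the Referential model from the question, roughly the same thing can be expressed with __table_args__ (an untested sketch; note the index references the database column name INTERNAL_CODE):
class Referential(db.Model):
    __tablename__ = "REFERENTIAL"
    id = db.Column("ID", Integer, primary_key=True, autoincrement=True)
    name = db.Column("NAME", String(100), index=True, unique=True, nullable=False)
    internal_code = db.Column("INTERNAL_CODE", String(50))
    __table_args__ = (
        # Filtered unique index: uniqueness is only enforced for non-NULL values
        db.Index('uq_referential_internal_code', 'INTERNAL_CODE', unique=True,
                 mssql_where=db.text('INTERNAL_CODE IS NOT NULL')),
    )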
I'm inserting data from a MySQL table into a Postgres table, and my code is:
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, sessionmaker
import psycopg2

class TestTable(object):
    pass

class StoreTV(object):
    pass

if __name__ == "__main__":
    engine = create_engine('mysql://root@localhost:3306/irt', echo=False)
    Session = sessionmaker(bind=engine)
    session = Session()
    metadata = MetaData(engine)
    test_table = Table('test_1', metadata, autoload=True)
    store_tv_table = Table('roku_store', metadata, autoload=True)
    mapper(TestTable, test_table)
    mapper(StoreTV, store_tv_table)
    res = session.query(TestTable).all()
    print res[1].test_1col
    tv_list = session.query(StoreTV).all()
    for tv in tv_list:
        tv_data = dict()
        tv_data = {
            'title': tv.name,
            'email': tv.business_email
        }
        print tv_data
        conn = psycopg2.connect(database="db", user="user", password="pass", host="localhost", port="5432")
        print "Opened database successfully"
        cur = conn.cursor()
        values = cur.execute("Select * FROM iris_store")
        print values
        cur.execute("INSERT INTO iris_store(title, business_email) VALUES ('title':tv_data[title], 'business_email':tv_data[business_email])")
        print "Record created successfully"
        conn.commit()
        conn.close()
I'm not able to read data from the Postgres table or insert into it, while I can successfully get data from the MySQL table.
The error is:
something
{'email': 'name#example.com', 'title': "Some Name"}
Opened database successfully
None
Traceback (most recent call last):
File "/home/Desktop/porting.py", line 49, in
cur.execute("INSERT INTO iris_store(title, business_email) VALUES ('title':tv_data[title], 'business_email':tv_data[business_email])")
psycopg2.ProgrammingError: syntax error at or near ":"
LINE 1: ... iris_store(title, business_email) VALUES ('title':tv_data[t...
^
Usman
you have to check the data type of the email column before inserting data, because to insert data from MySQL into Postgres both fields have to be of the same type.
Click here; page 28 describes the data types of MySQL and Postgres.
Your main problem is that you have a SQL syntax error in your INSERT query. It should look something like this:
cur.execute("INSERT INTO iris_store(title, business_email) VALUES (%(title)s, %(email)s)", tv_data)
For reference, see: Passing parameters to SQL queries
Also, you probably don't want to create a new connection to your Postgres DB for every single value in tv_list; move the connect and close calls outside of the for loop. Printing the whole table each time also doesn't seem very useful.
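Putting both points together, the write part of the loop could look roughly like this (a sketch reusing the names from the question):
# Open the connection once, insert with parameters inside the loop, commit at the end
conn = psycopg2.connect(database="db", user="user", password="pass",
                        host="localhost", port="5432")
cur = conn.cursor()
for tv in tv_list:
    tv_data = {'title': tv.name, 'email': tv.business_email}
    # Parameterized insert; psycopg2 fills in the %(...)s placeholders from the dict
    cur.execute("INSERT INTO iris_store (title, business_email) VALUES (%(title)s, %(email)s)",
                tv_data)
conn.commit()
conn.close()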