How to programmatically change table prefix in symfony2-doctrine - database

I set up my Symfony 3 application to use two different databases. They are largely similar: the table structure is the same, and so are the fields. The problem is that, for example, the article table in db1 is called db1_article and the article table in db2 is called db2_article. They hold different data but share the same structure.
Now, I am setting up an entity for articles like this:
/**
 * @ORM\Entity
 * @ORM\Table(name="db1_article")
 */
class Article {
    ...
}
I'd prefer not to create a different entity for the same table in db2. Can I dynamically define the table name somewhere in order to avoid duplication?
Thanks

In order to change the table, you've got to update Doctrine's class metadata for that entity.
use Doctrine\ORM\Mapping\Builder\ClassMetadataBuilder;

// getEntityManager() = $this->getDoctrine()->getManager()
$articleMetaData = $this->getEntityManager()->getMetadataFactory()->getMetadataFor(Article::class);

$metaDataBuilder = new ClassMetadataBuilder($articleMetaData);
$metaDataBuilder->setTable('db2_article');

$this->getEntityManager()->getMetadataFactory()
    ->setMetadataFor(Article::class, $metaDataBuilder->getClassMetadata());

$article2MetaData = $this->getEntityManager()->getClassMetadata(Article::class);
$article2MetaData->getTableName(); // is now "db2_article"

$this->getEntityManager()->find(Article::class, 1); // will query db2_article for ID 1
For the methods available on the class metadata, see the Doctrine PHP mapping documentation: Doctrine PHP Mapping

I would go for an approach using a different entity manager for each database, so you can reuse the same entities.
# config.yml
doctrine:
    dbal:
        default_connection: first_entity_manager
        connections:
            first_entity_manager:
                driver:   %database_driver%
                host:     %database_host%
                port:     %database_port%
                dbname:   %database_name%
                user:     %database_user%
                password: %database_password%
                charset:  UTF8
            second_entity_manager:
                driver:   %database_2nd_driver%
                host:     %database_2nd_host%
                port:     %database_2nd_port%
                dbname:   %database_2nd_name%
                user:     %database_2nd_user%
                password: %database_2nd_password%
                charset:  UTF8

    orm:
        default_entity_manager: first_entity_manager
        entity_managers:
            first_entity_manager:
                connection: first_entity_manager
                mappings:
                    AppBundle: ~
            second_entity_manager:
                connection: second_entity_manager
                mappings:
                    AppBundle: ~
Then just write your functions to use the correct entity manager:
$em_first = $this->getDoctrine()->getManager('first_entity_manager');
$em_second = $this->getDoctrine()->getManager('second_entity_manager');
$article_first_em = $em_first->getRepository('AppBundle:Article')->find(1);
$article_second_em = $em_second->getRepository('AppBundle:Article')->find(2);
For the table prefix I would use a table subscriber. These resources are quite old but still work:
How to setup table prefix in symfony2
http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/cookbook/sql-table-prefixes.html

Related

com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user: CREATE VIEW USING JDBC in Databricks

I am trying to create a view in Databricks by querying a table in my SQL Server database using JDBC.
The following PySpark code for creating a temporary view works without a problem:
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": jdbcUsername,
    "password": jdbcPassword,
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

table_name = '<my_table>'
pushdown_query = f"(select * from {table_name}) AS tmp"
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
df.createOrReplaceTempView('tmp')
However, as soon as I try to do the same in Spark SQL, I get the error in the title:
CREATE OR REPLACE VIEW tmp
USING JDBC
OPTIONS (
  url "<jdbc_url>",
  dbtable "(select * from <my_table>) AS tmp",
  user '<user>',
  password '<password>',
  driver "com.microsoft.sqlserver.jdbc.SQLServerDriver"
)
What is the problem here? The URL, credentials, etc. that I am using here are the same as in the PySpark code.
You may need to use a table, not a view. Your PySpark code isn't really a 1:1 match for what you're trying to do in SQL. If you check the documentation for CREATE VIEW, you will see that a view can only be created as a query over another table or view.
The corresponding CREATE TABLE statement should look like the following:
CREATE OR REPLACE TABLE jdbc_source
USING JDBC
OPTIONS (
  url "<jdbc_url>",
  dbtable "<my_table>",
  user '<user>',
  password '<password>',
  driver "com.microsoft.sqlserver.jdbc.SQLServerDriver"
)
Theoretically, CREATE VIEW may work as well, but you'll need to put all options into the JDBC URL:
CREATE OR REPLACE VIEW tmp
AS SELECT * FROM jdbc.`<full-jdbc-url>`
I agree with @Alex Ott. But in your case, I was able to achieve your requirement by replacing VIEW tmp with TEMPORARY VIEW tmp.
Use the syntax below:
%sql
CREATE OR REPLACE TEMPORARY VIEW <view_name>
USING JDBC
OPTIONS (
  url "jdbc:sqlserver://<server_name>.database.windows.net:1433;database=<database_name>;user=<user_name>;password=<Password>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
  dbtable "(select * from sample_table1) AS tmp",
  driver "com.microsoft.sqlserver.jdbc.SQLServerDriver"
)
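Once created, the temporary view can be queried from the same Spark session like any other view. A minimal PySpark usage sketch (assuming the view was created with the name tmp):
# Query the JDBC-backed temporary view created above (view name "tmp" is assumed).
df = spark.sql("SELECT * FROM tmp")
df.show(5)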

Joining tables across database boundaries using the Snowflake SQLAlchemy connector

Using snowflake-sqlalchemy, is there a way to use the declarative base to join tables across database boundaries? E.g.:
# This table is in database1
meta = MetaData(schema="Schema1")
Base = declarative_base(metadata=meta)

class Table1(Base):
    __tablename__ = 'Table1'
    ...

# This table is in database2
meta = MetaData(schema="Schema2")
Base = declarative_base(metadata=meta)

class Table2(Base):
    __tablename__ = 'Table2'
    ...

# I want to do this...
session.query(Table1).join(Table2).filter(Table1.id > 1).all()

# The engine specifies database1 as the default db, as such the query
# builder assumes Table2 is in database1.
The account specified in the engine connection params has access to both databases. I would prefer not to use raw SQL for this, for reasons.
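One approach worth sketching (not tested against every snowflake-sqlalchemy version, and all names below are assumptions): share a single Base/MetaData and qualify each table with its full database.schema path via __table_args__, so one engine and one session can address both databases.
# Sketch: one shared Base/MetaData, each table qualified with "database.schema".
# All database, schema, and column names here are assumptions.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()  # one shared MetaData, so the two models can be joined

class Table1(Base):
    __tablename__ = 'Table1'
    __table_args__ = {'schema': 'database1.Schema1'}  # database-qualified schema
    id = Column(Integer, primary_key=True)

class Table2(Base):
    __tablename__ = 'Table2'
    __table_args__ = {'schema': 'database2.Schema2'}  # database-qualified schema
    id = Column(Integer, primary_key=True)
    table1_id = Column(Integer)  # hypothetical join column

engine = create_engine('snowflake://<user>:<password>@<account>/database1/Schema1')
session = sessionmaker(bind=engine)()

# Explicit join condition, since no cross-database ForeignKey is declared:
session.query(Table1).join(Table2, Table1.id == Table2.table1_id).filter(Table1.id > 1).all()
The shared MetaData is what lets the two models appear in one query; whether the dialect quotes the dotted schema correctly may depend on your snowflake-sqlalchemy version.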

Duplicate tables created in django migrations when using SQL Server schemas

I want to place all Django-specific tables and my custom auth models into the default dbo schema, and have all my app-specific tables in a schema named after the app. Something to note is that all of my app tables will foreign-key back to my auth model (I have _created_by and _last_updated_by fields on a base model that all apps inherit from). Basically, I want the DB structure to be something like this:
DBO
  - my_custom_auth_table
  - django_migrations
  - django_session
  - django_content_type
  - etc...
APP1
  - table1
  - table2
APP2
  - table1
  - table2
In order to achieve this, I tried creating a Login/User pair on the DB server for each app and implemented a DB router.
my allow_migrate method:
def allow_migrate(self, db, app_label, model_name=None, **hints):
    if app_label == db:
        return True
    else:
        return False
my database settings (I will use my doglicense app as an example):
IP = xxx
default_db_settings = {
    'ENGINE': 'mssql',
    'NAME': 'DB',
    'USER': 'some_user',
    'PASSWORD': '***',
    'HOST': IP,
    'PORT': '1433',
    'OPTIONS': {'driver': 'ODBC Driver 18 for SQL Server', 'extra_params': 'trustServerCertificate=yes'},
}

doglicense = {
    'ENGINE': 'mssql',
    'NAME': 'DB',
    'USER': 'DogLicense',
    'PASSWORD': '***',
    'HOST': IP,
    'PORT': '1433',
    'OPTIONS': {'driver': 'ODBC Driver 18 for SQL Server', 'extra_params': 'trustServerCertificate=yes'},
}
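These dicts are then registered in the Django settings roughly as follows (a sketch; the router module path is my assumption):
# Sketch of how the dicts above are wired up (router path is illustrative).
DATABASES = {
    'default': default_db_settings,
    'doglicense': doglicense,  # alias is compared (case-sensitively) to app_label in allow_migrate
}
DATABASE_ROUTERS = ['myproject.routers.AppRouter']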
I have successfully migrated the custom auth app and all of Django's apps into dbo; however, this is where the fun begins.
If I run:
python manage.py migrate DogLicense --plan
we can see that it only tries to create the new tables:
Planned operations:
DogLicense.0001_initial
    Create model Breed
    Create model Color
    Create model Dog
    Create model ZipCode
    Create model Veterinarian
    Create model Street
    Create model Registration
    Create model Owner
    Create model DogType
    Add field owners to dog
    Add field type to dog
However, when I try to specify the database connection in order to put these tables into the doglicense schema:
python manage.py migrate DogLicense --plan --database=doglicense
I get:
Planned operations:
contenttypes.0001_initial
    Create model ContentType
    Alter unique_together for contenttype (1 constraint(s))
contenttypes.0002_remove_content_type_name
    Change Meta options on contenttype
    Alter field name on contenttype
    Raw Python operation
    Remove field name from contenttype
auth.0001_initial
    Create model Permission
    Create model Group
    Create model User
auth.0002_alter_permission_name_max_length
    Alter field name on permission
auth.0003_alter_user_email_max_length
    Alter field email on user
auth.0004_alter_user_username_opts
    Alter field username on user
auth.0005_alter_user_last_login_null
    Alter field last_login on user
auth.0006_require_contenttypes_0002
auth.0007_alter_validators_add_error_messages
    Alter field username on user
auth.0008_alter_user_username_max_length
    Alter field username on user
auth.0009_alter_user_last_name_max_length
    Alter field last_name on user
auth.0010_alter_group_name_max_length
    Alter field name on group
auth.0011_update_proxy_permissions
    Raw Python operation -> Update the content_type of prox…
auth.0012_alter_user_first_name_max_length
    Alter field first_name on user
MSSAuth.0001_initial
    Create model FailedLoginAttempt
    Create model MSSUser
    Create model AuthProfile
DogLicense.0001_initial
    Create model Breed
    Create model Color
    Create model Dog
    Create model ZipCode
    Create model Veterinarian
    Create model Street
    Create model Registration
    Create model Owner
    Create model DogType
    Add field owners to dog
    Add field type to dog
You can see it wants to create every table all over again. And obviously running this migration without the --plan flag does indeed result in tables like doglicense.django_migrations being created.
How can I prevent these duplicate tables from being created? Is this a problem with my SQL Server user permissions? Perhaps my router is poorly implemented?
Any help will be appreciated.
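For comparison, here is a sketch of a more explicit router (not from the question; the set of app labels and the class name are assumptions) that pins Django's built-in apps to the default alias and routes every other app only to the alias matching its label:
# Hypothetical router sketch: built-in apps (and the custom auth app) migrate
# only into "default" (the dbo schema); every other app migrates only into the
# database alias that matches its app label.
DJANGO_APPS = {'admin', 'auth', 'contenttypes', 'sessions', 'MSSAuth'}

class AppSchemaRouter:
    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in DJANGO_APPS:
            return db == 'default'
        return db == app_label
Note that Django records applied migrations in a django_migrations table in every database it migrates, so that particular table showing up per alias is expected behaviour rather than a router bug.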

How to sync the table of postgresql schema - Sequelize ORM

I have two schemas in my Postgres database:
public      // default schema
first_user
Now I have the same tables in both schemas.
I changed the table structure, so I want to run the sync now.
I sync the tables using:
const db = new Sequelize(postgres_db, postgres_user, postgres_pwd, {
    host: postgres_host,
    port: 5432,
    dialect: 'postgres',
    logging: false,
});

db.sync().then(() => {
    console.log('Table Synced');
}, (err) => {
    console.log(err);
});
After running this, my table structure inside the public schema changed successfully, but my first_user schema's table structure remains the same.
How to solve this?
NOTE: I don't want to lose the data inside my tables.
I finally implemented this using Sequelize migrations:
http://docs.sequelizejs.com/manual/tutorial/migrations.html
If you can't use Sequelize migrations because of the lack of TypeScript support, you can fall back to migra, which is easy to use:
https://djrobstep.com/docs/migra
You can try a CREATE TABLE ... AS TABLE query:
create table first_user.tableName as table public.tableName;
It will create the table with the updated structure as well as the data.
Thanks.

Django SQL Server Error: "Cannot create new connection because in manual or distributed transaction mode."

I have a strange issue with querying SQL Server from Django.
When I query the DB twice in a single request, I get errors in some cases, namely when the first query returns a big amount of data; we then end up with an error while querying the DB a second time.
Details:
We're using the Microsoft SQL Server backend for Django (https://bitbucket.org/Manfre/django-mssql/src) running on Windows.
We want to allow the user to filter data from one table ("Activity") via a form, display it in a table on the website, and then show related data from another table ("Frames") on a map.
class Frames(models.Model):
    ...

class Activity(models.Model):
    frame_from = models.ForeignKey(Frames, ...)
    ...
The problem is: when we want to filter a larger amount of data from Activity (let's say 200 rows x 6 columns), we cannot make other queries on the Frames table in the same request (MARS is turned on in Django's settings.py):
result = Aktywnosci.objects.filter(qset1)
is always OK, but
path = Frames.objects.filter(qset2)
raises an OLE DB error when the previous query returned a larger amount of data:
'Microsoft OLE DB Provider for SQL Server' Error: Cannot
create new connection because in manual or distributed transaction mode.
PS. Database settings from settings.py:
# Database for this installation.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlserver_ado',
        'NAME': '***',
        'USER': '***',
        'PASSWORD': '***',
        'HOST': '***',
        'PORT': '',
        'OPTIONS': {
            'provider': 'SQLOLEDB',
            'use_mars': True,
        }
    }
}
PS2. I came across this issue on the Google Code page of django-mssql: http://code.google.com/p/django-mssql/issues/detail?id=79 - but it seems to be solved in a newer version of the package...
What can I do about it?
Thanks in advance
We got the solution at Bitbucket (https://bitbucket.org/Manfre/django-mssql/issue/13/ole-db-provider-for-sql-server-error) from Michael Manfre - thanks a lot for this.
The solution is the following:
"SQLOLEDB and MARS doesn't work very well and I intend on changing all of the documentation and defaults to assume a native client driver will be used. Try using the native client; 'SQLNCLI10' or 'SQLNCLI11'."
DATABASES = {
    'default': {
        'ENGINE': 'sqlserver_ado',
        'NAME': 'mydb',
        'HOST': r'localhost',
        'USER': '',
        'PASSWORD': '',
        'OPTIONS': {
            'provider': 'SQLNCLI10',
            'extra_params': 'DataTypeCompatibility=80;MARS Connection=True;',
        },
    }
}
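If switching providers is not an option, a general Django-side workaround (my addition, not part of the quoted answer, and it assumes the first result set fits in memory) is to force full evaluation of the first queryset so its cursor is drained before the second query runs:
# list() evaluates the queryset immediately and drains its server-side cursor,
# leaving the connection free before the second query is issued.
result = list(Aktywnosci.objects.filter(qset1))
path = Frames.objects.filter(qset2)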
Is "use_mars=True" set up in your "settings.py" file?
http://django-mssql.readthedocs.org/en/latest/settings.html
If this doesn't work, I have a question: does your selection in SQL Server involve tables with triggers on them (Transact-SQL scripts)? In that case SQL Server will use a static cursor instead of a firehose one (which is what you need), and therefore you will get your error. Try to get rid of the triggers, use some views in SQL Server, and select from them instead of the tables.
