Generally, I have a question about ORMs and the best way to manage an enterprise or small application's database schema and model (essentially, how to keep the application model and the database schema in sync at all times).
Is it better to generate the database schema from the application models, or to create the database schema first and then derive the application models from it? Which approach is preferable?
Note: I have seen this approach in the Django ORM, which has a tool that creates/syncs the application's database schema from the application models.
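For example, with a hypothetical Django model like the one below, the makemigrations and migrate commands generate and apply the corresponding schema, so the tables are always derived from the model classes:

```python
# models.py -- a hypothetical app model; the schema is generated from it by
#   python manage.py makemigrations
#   python manage.py migrate
from django.db import models

class UserProfile(models.Model):
    username = models.CharField(max_length=150, unique=True)
    email = models.EmailField()
    created_at = models.DateTimeField(auto_now_add=True)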
I usually start with a logical model (i.e. model the problem domain), and go from there.
In the dynamic scripting languages, the practice seems to be to create the classes and then let whatever database migration tool create the schema for you.
In Java/Hibernate, the shops I've worked at have been too paranoid for that, so we created the DDL independently of the classes. The classes and ORM mappings then refer to an existing schema.
Related
We want to connect to a remote, existing database from settings.py. Can we use those tables directly through models, without running migrations from the app?
We know about the legacy database connection, but the inspectdb command always asks us to migrate the connected database.
Is using the mysql connector preferable, or is it non-standard? Please suggest.
Thanks, your help is appreciated!
I think what you are looking for is the managed meta option in your models. When you define your models, by default you have managed=True. If you want to use an existing database without Django interfering with migrations, you should use managed=False.
See this part of the doc:
[...] If False, no database table creation or deletion operations will be performed for this model. This is useful if the model represents an existing table or a database view that has been created by some other means. This is the only difference when managed=False. All other aspects of model handling are exactly the same as normal. [...]
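As a minimal sketch (the table and column names are hypothetical), a model that maps onto an existing table without Django touching its schema could look like this; note that manage.py inspectdb generates models with managed = False by default:

```python
from django.db import models

class LegacyCustomer(models.Model):
    # Field names and types must match the existing table (hypothetical here).
    customer_id = models.AutoField(primary_key=True)
    full_name = models.CharField(max_length=200)
    email = models.CharField(max_length=254)

    class Meta:
        managed = False          # Django performs no table creation/deletion for this model
        db_table = 'customers'   # the pre-existing table's name
```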
I've been tasked to create a web API for a vendor's database that we host on-site ourselves. What we want to do is create EF data models/db sets for some of the existing vendor tables, but we also want to create a special schema for our custom tables that store the relationships between the vendor data and our other applications.
The thing I've been wondering about is how we can use automatic migrations and database initialization just for our special schema, and let the vendor application manage its own tables separately. Is this possible?
If you have another application that uses data from an existing database and needs to store some more, and you don't want to change the schema of that existing database, how do you do it?
Background of my question: we use an IBM product (Connections) to store user profiles. But we have lots of custom requirements (many custom fields and a lot of custom logic), so we currently create a few extra tables, views and functions in the backend database of Connections to store the custom data. However, since it is IBM's internal database and we are not supposed to touch it, all our custom tables, views and functions are gone whenever we upgrade Connections.
So we have decided to move our custom objects out. The problem is that we still need to join with the data from Connections (or, if not a database join, some other way to integrate the data before presenting it to the users).
If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues? And we are still going to be heavily dependent on IBM's schema and have to assume they don't change it. Is it a good approach?
What are the other options we could consider?
If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues?
Probably. Your application code would have to do joins between the IBM database tables and your database tables.
I'm assuming that Connections uses DB2. If you bring up your own DB2 database, I think you can do SQL joins between two separate DB2 databases.
Either way, this code should reside in a separate data access package made up of data access objects. The rest of your applications would use the data access package.
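As a rough sketch of that idea (every table, column, and driver name below is hypothetical, and any DB-API 2.0 connections would do), the data access object could read from both databases and combine the rows in application code:

```python
class ProfileDAO:
    """Combines IBM Connections profile rows with custom fields kept in our own
    database. Both constructor arguments are open DB-API 2.0 connections."""

    def __init__(self, connections_db, custom_db):
        self.connections_db = connections_db  # IBM Connections back-end (read-only for us)
        self.custom_db = custom_db            # our own database with the custom tables

    def profiles_with_custom_fields(self):
        cur = self.connections_db.cursor()
        cur.execute("SELECT prof_guid, display_name FROM employee_profiles")  # hypothetical
        profiles = {guid: {"name": name} for guid, name in cur.fetchall()}

        cur = self.custom_db.cursor()
        cur.execute("SELECT prof_guid, cost_center FROM custom_profile_ext")  # hypothetical
        for guid, cost_center in cur.fetchall():
            if guid in profiles:
                profiles[guid]["cost_center"] = cost_center

        return profiles
```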
And we are still going to be heavily dependent on IBM's schema and have to assume they don't change it.
IBM will change their schema, and you have to plan on making corresponding changes to your database and / or application.
What are the other options we could consider?
You could copy the IBM data from their database to yours. You would still have to update the copy process whenever the IBM table definitions change.
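A minimal sketch of such a copy process, again with purely hypothetical table and column names and plain DB-API 2.0 connections (qmark parameter style assumed), might be a scheduled job along these lines:

```python
def copy_profiles(source_db, target_db, batch_size=500):
    """Refresh a local copy of the Connections profile data so later joins stay
    inside our own database. All names are hypothetical."""
    src = source_db.cursor()
    tgt = target_db.cursor()

    tgt.execute("DELETE FROM profile_copy")  # full refresh keeps the sketch simple
    src.execute("SELECT prof_guid, display_name, mail FROM employee_profiles")
    while True:
        rows = src.fetchmany(batch_size)
        if not rows:
            break
        tgt.executemany(
            "INSERT INTO profile_copy (prof_guid, display_name, mail) VALUES (?, ?, ?)",
            rows,
        )
    target_db.commit()
```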
What is the best way to create SQL Server tables from business objects? For example, I'm writing an application that has a user object to store user information. What's the best practice for creating tables from these objects? Create the objects first and then define the database, or is there a tool to transform business objects into tables?
I'm just curious how others are doing this type of task.
Use an ORM (object-relational mapping) tool; a list of tools for several languages can be found here: http://en.wikipedia.org/wiki/List_of_object-relational_mapping_software
We are using NHibernate. It allows you to write the classes first, map them to the database through configuration (mapping files), and generate the database tables from that. It also allows restructuring the database (e.g. for optimization) without touching the business logic at all.
Of course, you still need to take care of existing databases (schema migration).
There may be other ORMs with similar functionality; NHibernate is one of the most powerful.
I can see in the AdventureWorks database that different schemas are used to group tables. Why is this done (security, ...?), and are there best practices I can find?
Thanks, Lieven Cardoen
As a manager of Business Intelligence, I can say we rely on schemas for logical grouping and for managing security. Here are some cases showing how we use schemas:
LOGICAL ORGANIZATION
We have a general database that is loaded by SSIS packages solely for staging data before we load our operational data store (ODS). In this database, with the exception of the schema, all objects are identical in structure (table names, column names, data types, nullability, etc.) to their original source. We use the schema to indicate the original source system of the table. In some rare instances, two different databases have tables with the same name, and the schema allows us to continue to use the original name in the staging database.
In every database on our BI servers each team member has a test_username schema. When we create test objects in a database, this makes it easy to keep track of who made the object. It also makes it a lot easier to purge the test objects later since everyone knows who made what. Frankly, just knowing that we made it is usually enough to know it can be deleted safely, especially when we can't remember when or why we made it!
In our data controller database, we rely on schemas to separate the different types of processes: reports, ETL, and generic resources.
In our star schema data warehouse, all objects are divided into dimension and fact schemas.
When we push data to other departmental servers, we make all BI objects on their servers use the schema bi. This makes it REALLY easy to know that BI loads and maintains the table even though it isn't on our server. If the target server isn't a SQL Server 2005/2008 box, then we prefix the table with bi_.
When it gets down to it, we use schemas for logical organization any time we WOULD have appended a prefix or suffix to an object to help organize it in the absence of schemas. Having said this, there are a few instances where we don't use schemas on our BI servers. In our WorkingDB, everything is dbo. Our WorkingDB is used like TempDB to create temporary tables, but these are temporary tables that we know we will create every time an ETL process runs. The special property of WorkingDB is that we never back up the database, and all ETL processes that use it must be able to recreate their objects from scratch if the tables are missing. In this instance, we felt using schemas didn't add ANY organizational value, since we don't actually use the objects outside of their temporary ETL process.
SECURITY
Since we are a BI group, we don't generally build and support our own applications. We almost exclusively use other people's applications and bring data from their back-end databases to our server. However, we do have one database called bi_applications that is the back-end for a variety of small CRUD applications. These applications are usually data entry forms that we provide to the business so that they can capture data we would otherwise have to maintain in BI. It is a way of getting data that should be in production applications into BI while we wait for our low priority application enhancements to gather dust in the future development lists. Each application has a separate schema and the application account used to update the underlying tables ONLY has access to objects of the associated schema. This makes it really easy to understand, secure, and maintain the separate applications.
In a few instances, I have let power users have direct database access to our tables or stored procedures. We rely on using schema combined with roles to secure the objects. We grant permissions to the schema and users are added to roles. This allows us to easily understand which objects are used by whom without having to dig through roles to figure it out.
In short, we use schema for security purposes when we probably would have considered separating the objects out into their own databases and when we expect an application or user outside of BI to access our databases.
Although these aren't best practices for application developers per se, I hope my BI use cases help you think of some of the ways to use schemas in your end of the business.