Delete database tables programmatically

My application created database tables using JPA. I want to delete these tables programmatically using the JPA and/or EclipseLink API (without issuing a DROP TABLE command myself).
What is the right way to delete database tables created by the JPA framework?

If you are able to use JPA 2.1, you can specify that the provider write or read scripts to drop tables, allowing you to control what gets executed. Unfortunately, it doesn't give you fine-grained control unless you dynamically write your own drop-table scripts.
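For the standard JPA 2.1 route, a minimal sketch (the unit name "my-pu" is a placeholder; the property key is the standard schema-generation one):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.Persistence;

// Ask the provider to drop the tables it manages for this persistence unit
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.schema-generation.database.action", "drop");
Persistence.generateSchema("my-pu", props);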
Without using provider-specific code, the only other way to drop a particular table is native SQL.
Within EclipseLink you would get the session from the EntityManager and create a new SchemaManager(session). With it you can call dropDefaultTables() to drop all tables in the persistence unit, or dropObject(DatabaseObjectDefinition) to drop a single DatabaseObjectDefinition, the object used to represent tables, stored procedures and so on. You can use getDefaultTableCreator(boolean generateFKConstraints) to get the default table creator for the PU, and its getTableDefinitions() returns the DatabaseObjectDefinition objects representing the tables in the PU, so you can pick one to drop.
This FAQ entry shows how to unwrap the EntityManager to get the session:
http://wiki.eclipse.org/EclipseLink/FAQ/JPA#How_to_access_table.2C_column_and_schema_information_at_runtime.3F
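A hedged sketch of that flow, using the methods named above (exact signatures vary across EclipseLink versions; "MY_TABLE" is a placeholder):

import org.eclipse.persistence.sessions.DatabaseSession;
import org.eclipse.persistence.tools.schemaframework.SchemaManager;
import org.eclipse.persistence.tools.schemaframework.TableCreator;
import org.eclipse.persistence.tools.schemaframework.TableDefinition;

DatabaseSession session = em.unwrap(DatabaseSession.class);
SchemaManager manager = new SchemaManager(session);
// Walk the PU's table definitions and drop only the one we want
TableCreator creator = manager.getDefaultTableCreator(true);
for (Object def : creator.getTableDefinitions()) {
    TableDefinition table = (TableDefinition) def;
    if ("MY_TABLE".equalsIgnoreCase(table.getName())) {
        manager.dropObject(table);
    }
}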

Here's what worked for me (EclipseLink JPA 2.1):
import org.eclipse.persistence.sessions.DatabaseSession;
import org.eclipse.persistence.tools.schemaframework.SchemaManager;

EntityManagerFactory emf = Persistence.createEntityManagerFactory("your persistence unit");
EntityManager em = emf.createEntityManager();
// Unwrap EclipseLink's native session and drop the table by name
SchemaManager sm = new SchemaManager(em.unwrap(DatabaseSession.class));
sm.dropTable("table name");

Related

How can I store SQL Server Database Metadata for Sync Framework in a different database on the same server?

I would like to be able to store the tracking tables in a different database than the original, for a couple of reasons.
I would like to be able to drop it on demand if I change versions of my application.
I would like to have multiple sync scopes separated by user permissioning.
I am sure there is a way through the SqlMetadataStore class, but I have not found it yet.
The SqlMetadataStore will not help you in any way with what you're trying to achieve; I am pretty sure it's not in any way exposed in the database sync providers you're using.
Note that the tracking tables are not the only objects Sync Framework provisioning creates: you will also have triggers, stored procedures and user-defined table types. You're not supposed to drop them separately or by yourself; you should be using the deprovisioning API, sketched below.
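A minimal sketch of that deprovisioning call (Sync Framework 2.1; the scope name and connection string are placeholders):

using System.Data.SqlClient;
using Microsoft.Synchronization.Data.SqlServer;

var serverConn = new SqlConnection("Data Source=.;Initial Catalog=D1;Integrated Security=true");
// Removes the scope's tracking tables, triggers, stored procedures and UDTs
var deprovisioning = new SqlSyncScopeDeprovisioning(serverConn);
deprovisioning.DeprovisionScope("MyScope");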
Now, if you really want to have the tracking tables in a separate db, the provisioning API has a Script method that can generate the SQL statements required to create the Sync Framework objects.
You can alter that script to create the tracking tables in another DB, but you have to alter the triggers as well so they insert into that other database.
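A hedged sketch of the Script route (the table, scope and connection names are placeholders):

using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

var serverConn = new SqlConnection("Data Source=.;Initial Catalog=D1;Integrated Security=true");
var scopeDesc = new DbSyncScopeDescription("MyScope");
scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Customers", serverConn));
var provisioning = new SqlSyncScopeProvisioning(serverConn, scopeDesc);
// Generate the DDL instead of applying it, then retarget the tracking
// tables (and the triggers that write to them) at the other database
string sql = provisioning.Script();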

Perform InsertOnSubmit on a View

We have an ASP.NET MVC application with Linq2Sql and a SQL Server back-end. The application runs on our customer's main site, but every site has its own database in SQL Server. (For various reasons, the sites shouldn't share most of their information with each other.) However, some general information is shared; it is stored in a shared database, and every site-specific database has views which represent those shared tables.
For example, say I have sites S1, S2, S3 with their databases D1, D2, D3, and a shared database DS with a shared table TS.
Now in the databases D1-D3 I'll have a view whose underlying query is simply:
SELECT * FROM DS.dbo.TS;
Written like this, SQL Server somehow automagically propagates all inserts, updates and deletes to DS.TS without the need for explicit INSTEAD OF triggers. That makes our lives much easier, since we only have to handle one connection to one database and don't need to bother with two different databases.
Since we write our Delete and Update commands ourselves in the application rather than through Linq2Sql, they work fine. However, the insert on the shared table uses InsertOnSubmit and fails with the following exception:
Can't perform Create, Update, or Delete operations on 'Table(TS)' because it has no primary key.
at System.Data.Linq.Table`1.InsertOnSubmit(TEntity entity)
Is there any way to make this work, or will I have to create the insert commands for those shared tables myself and execute them with DbCommand.ExecuteNonQuery()?
With views, LINQ to SQL doesn't know which column contains the key. You may be able to tweak the generated model and mark the appropriate column(s) as primary keys. Also be aware that you shouldn't use SELECT * in the view, as it may hurt performance over time.
You can set the IsPrimaryKey property of the key column in your model to true.
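For example, in a hand-edited mapping over the view (the entity and column names are placeholders, not the poster's schema):

using System.Data.Linq.Mapping;

[Table(Name = "TS")]
public class SharedItem
{
    // Marking the key column lets LINQ to SQL generate INSERT/UPDATE/DELETE
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}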

HBM2DDL -- Create a database view instead of a Table?

All,
I'm creating my database schema using the wonderful hbm2ddl tool, but I have one issue. I need to flatten some of the tables into views to aid searching the database, and HQL would be an overly complex solution. I've created Entity objects pointed at these views in order to fetch search results via Hibernate. This all works fine, until hbm2ddl is used. In an empty database schema, hbm2ddl will create the schema based on the JPA annotations; unfortunately, it will also create my views as tables. Is there some setting that tells hbm2ddl to run a view creation statement instead of creating a table? Failing that, is there a way to tell hbm2ddl to skip table creation for an entity (an exclude, or something)?
Thanks!
To my knowledge, and this is unfortunate, Hibernate doesn't support creating views instead of tables, nor validating a schema containing views. See issues like HHH-1872, HHH-2018 or HHH-1329.
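One hedged workaround, not something hbm2ddl does for you: register the view DDL as an auxiliary database object so it is emitted during schema export (Hibernate 3.x API; the class moved in later versions, and the view body here is a placeholder). Note this doesn't stop hbm2ddl from also creating a table for the entity mapped to the view.

import org.hibernate.cfg.Configuration;
import org.hibernate.mapping.SimpleAuxiliaryDatabaseObject;

Configuration cfg = new Configuration().configure();
// hbm2ddl runs the create string during schema export and the drop string during cleanup
cfg.addAuxiliaryDatabaseObject(new SimpleAuxiliaryDatabaseObject(
        "CREATE VIEW search_view AS SELECT ...",
        "DROP VIEW search_view"));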

Linq to Sql Data class in dbml

I am a bit curious about dbml... Should I create one dbml file for the whole database, or split it into different parts, e.g. a User dbml (only the tables related to users), and so on? When I split it I run into a problem: assume the User dbml has a User table and the Order dbml has a User table as well; this isn't allowed if the entity namespaces are the same. If I set a different entity namespace for each dbml it works, but then each dbml gives me a different User entity, and when a single result returns to the business logic layer it's hard to know which namespace's User entity to use.
If I build one dbml file instead of several, will the single dbml be slower than the separated version when fetching data from the database?
Linq to SQL is designed to operate with a single DataContext object; see Lifetime of a LINQ to SQL DataContext:
http://blogs.msdn.com/dinesh.kulkarni/archive/2008/04/27/lifetime-of-a-linq-to-sql-datacontext.aspx
The NerdDinner tutorial has a pretty good example of typical Linq to SQL usage, using a repository pattern. In all cases, the repository objects use a single Data Context object to perform the work:
http://nerddinnerbook.s3.amazonaws.com/Part3.htm
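The shape of that pattern, roughly (a sketch with NerdDinner-style placeholder names, not the tutorial's exact code):

using System.Linq;

public class DinnerRepository
{
    // One DataContext per repository instance / unit of work
    private readonly DinnerDataContext db = new DinnerDataContext();

    public IQueryable<Dinner> FindAllDinners()
    {
        return db.Dinners;
    }

    public void Add(Dinner dinner)
    {
        db.Dinners.InsertOnSubmit(dinner);
    }

    public void Save()
    {
        db.SubmitChanges();
    }
}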
What you are trying to do sounds like it might be more suitable for the Entity Framework. Be advised, however, that the Entity Framework has some issues in its current release, particularly with regard to lazy loading.

How to partially migrate a database to a new system over time?

We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same SQL Server instance, so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is that we can't seem to create relationships between the table-based entities and the view-based entities. As far as I know, no relationship can exist in the database between a view and a table, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors; it thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up-to-date the data in each db needs to be, and whether one data source can remain read-only, you can:
- use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent task
- use snapshot replication
- create a custom BCP in/out process to get the data to the other db
- use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:
- transactional replication with update subscriptions
- merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Linked server queries will work best if they're the right fit for what you're trying to achieve.
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. It looks like an Entity Framework issue and not a db issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the other project, prefix the name, and set up the relationships, and it works... like I said, this was in LinqToSql; I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication, or something that emulates it, is the only thing that will work.
For performance purposes you probably want either merge replication or transactional replication with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing that I realized needs to happen before we can do this is we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY() so the triggers don't mess up the IDs in the old system.
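A hedged sketch of one such insert trigger (the database, table and column names are placeholders):

-- Mirror inserts on the old table into the new database
CREATE TRIGGER dbo.trg_Customers_Insert
ON dbo.Customers
AFTER INSERT
AS
BEGIN
    -- Don't let the trigger's own work disturb callers
    SET NOCOUNT ON;
    INSERT INTO NewDb.dbo.Customers (CustomerId, Name)
    SELECT CustomerId, Name
    FROM inserted;
END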
