Is there a way to tell EF what to use for Autogrowth and Maxsize when it creates a database on SQL Server?
Actually, I don't think you can. You can have a look here just to be sure:
https://entityframework.codeplex.com/SourceControl/latest#src/EntityFramework.SqlServer/SqlProviderServices.cs
and at the functions it calls (ultimately it just runs a CREATE DATABASE, optionally specifying the mdf and ldf file names).
Probably the best approach is to create the database yourself with a direct query before the standard migrations run.
EDIT
Database creation is not fully automated; rather, you can decide when (and, by writing your own migration, how) to create it.
I assume you are using automatic migrations (you did not write any migration code).
The database is created if it does not exist, and tables are created by checking the __MigrationHistory table (this is quite important, because it means you can create the DB yourself without touching EF's metadata).
This happens on the first access to EF (at first access EF reads the mapping, checks that the DB is up to date and then executes the query you requested), or you can force it explicitly.
I think it is simpler to decide whether you need to run a CREATE DATABASE (you can just check whether it exists) than to run an ALTER DATABASE when the parameters are wrong.
Your code could:
1. Read the connection string from App.config/Web.config.
2. Connect to master rather than to the database named in the connection string.
3. Check whether the database exists (SELECT COUNT(*) FROM sysdatabases WHERE name = ...).
4. If it does not exist, create it (see the sketch below).
Then you can continue with your normal code (EF will find the DB, so the migrations will not try to create it).
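A minimal T-SQL sketch of steps 3 and 4 (run against master; the database name, file paths and growth settings are only placeholders, and it uses sys.databases, the modern equivalent of sysdatabases):

IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = N'MyAppDb')
BEGIN
    CREATE DATABASE MyAppDb
    ON PRIMARY
        (NAME = N'MyAppDb', FILENAME = N'D:\Data\MyAppDb.mdf',
         SIZE = 100MB, MAXSIZE = 10GB, FILEGROWTH = 256MB)
    LOG ON
        (NAME = N'MyAppDb_log', FILENAME = N'L:\Logs\MyAppDb_log.ldf',
         SIZE = 50MB, MAXSIZE = 2GB, FILEGROWTH = 128MB);
END

This is also where you can set the Autogrowth and Maxsize values the question asks about, so EF never has to create the database itself.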
Enable migrations, but not automatic migrations. See this to learn about migrations.
Create the first migration. That creates an Up() and a Down() method. You can see how to customize it in the 'Customizing migrations' section of the document linked above.
In the Up() method you can use Sql("your sql command") to run the ALTER DATABASE commands that set the file growth and max size configuration.
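For instance, the SQL you pass in could look something like this (the database name and logical file names are placeholders; the real logical names are listed in sys.database_files). In EF6 you may also need to call Sql("...", suppressTransaction: true), since ALTER DATABASE cannot run inside the migration's transaction.

-- Placeholders: replace MyDatabase and the logical file names with your own.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = N'MyDatabase', MAXSIZE = 10GB, FILEGROWTH = 256MB);
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = N'MyDatabase_log', MAXSIZE = 2GB, FILEGROWTH = 128MB);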
Related
I'm seriously confused about how Flyway generally works for maintaining the db as code. Suppose I have the following V0 script:
Create table student(
Name varchar(25)
)
That would be my initial db. Now, suppose I want to add a new column, why am I being forced to do a V1 script like this one?
Alter table student add column surname varchar(25)
What I’d like to do would be to simply update the v0 script like this:
Create table student(
Name varchar(25),
Surname varchar(25)
)
Then the tool, by comparing the actual db, should be able to understand that a new column should be created!
This is how tools for other kinds of code (Java, JavaScript, ...) work, and it is how I would like db-as-code tools to behave.
So my question is: is there a way to achieve this behavior without dropping/recreating the db?
I tagged this question with flyway and liquibase tools but feel free to suggest other tools that would fit my needs.
However you develop the database, there is no way to achieve this behavior without dropping/recreating the db, because the CREATE TABLE statement assumes that the table you specify isn't already there. You can't use a CREATE OR ALTER statement, because that isn't supported for tables even where the RDBMS that you use supports the syntax.
In the early stages of a database project, you can work much quicker with a build script that you use to create a database with tables, views and so on. You can then insert some data, try it out, run a few tests maybe, and then tear it down. Flyway Community supports this: you just have a single migration script starting from an empty database that you repeatedly 'clean' and 'migrate' until you reach your first version. Flyway takes care of the tear-down process and gives you a fresh start by wiping your configured schemas completely clean.
Flyway Teams supports a special type of migration, the 'repeatable' migration, which lets you use SQL files for migrations that you can alter and re-run. However, you would need to add logic that drops the table if it already exists before your CREATE TABLE statement executes. This avoids having to run 'flyway clean', but it is a lot of extra work, and it means that you lose the whole advantage of a version representing an exact state of the database.
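As a rough illustration of that idea, using the table from the question (the drop-and-recreate guard is the extra logic you would have to write yourself, and DROP TABLE IF EXISTS assumes an RDBMS/version that supports it):

-- R__student.sql (hypothetical repeatable migration; re-runs whenever its checksum changes)
DROP TABLE IF EXISTS student;   -- destroys any existing data in the table
CREATE TABLE student (
    name    varchar(25),
    surname varchar(25)
);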
At some point, you are going to use migrations, because you're likely to have copies of the database to keep up-to-date. Whatever tool you use to update a development or production database, you are going to need a migration for this because of the existing data in tables.
Flyway Enterprise supports the automatic generation of a migration, if you are using Oracle or SQL Server. SQL Compare is provided to compare two versions of a database and produce a migration script from one version to the next. This allows you to use a build script as you suggest, compare it with the current version of the database, and generate a migration script to get from the one to the other.
I am relatively new to MS SQL Server. I need to create a test database from an existing test database with the same schema, and fill the newly created empty database with data from production. For this I was using Generate Scripts in SSMS. But now I need to do it on a regular basis in a job. Please guide me on how I can create empty databases automatically at a given point in time.
You will have a very hard time automating the generate scripts wizard. I would suggest using something like Red-Gate's SQL Compare (or any alternative that supports command-line). You can create a new, empty database, then script a compare/deploy using the command line from SQL Server Agent.
Another, more icky alternative, is to deploy your schema and modules to the model database. You can keep this in sync using SQL Compare (or alternatives), or just be diligent about deployment of schema/module changes, then when you create a new database it will automatically inherit the current state of your schema/modules. The problem with this approach (other than depending on you keeping model in sync) is that all new databases will inherit this schema, since there currently is no way to have multiple models.
Have you considered restoring backups?
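If that is workable, a minimal T-SQL sketch that could be scheduled as a SQL Agent job step (database names, logical file names and paths are placeholders; check RESTORE FILELISTONLY for the real logical names):

-- Take a copy-only backup of production so the regular backup chain is untouched.
BACKUP DATABASE ProdDb
    TO DISK = N'\\backupshare\ProdDb.bak'
    WITH INIT, COPY_ONLY;

-- Restore it as a fresh test database, relocating the files.
RESTORE DATABASE TestDb
    FROM DISK = N'\\backupshare\ProdDb.bak'
    WITH MOVE N'ProdDb' TO N'D:\Data\TestDb.mdf',
         MOVE N'ProdDb_log' TO N'L:\Logs\TestDb_log.ldf',
         REPLACE;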
To add to Aaron's already good answer, I've been using SQLDelta for years - I think it's excellent.
(I have no connection to SqlDelta, other than being a very satisfied customer)
Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what a SP easily can do. Is there any sort of "best practice" for handling this sort of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. These would require that a baseline database exist - no customer data. Then you use either of those methods to capture the database as-is. Backup/restore is preferable because detach/attach requires the database to be taken offline. But users need to be sync'd before they can access the database.
If you already have the script to create the database, it is easy for them to run it from within their program. If you have any specific prerequisites for creating the database and setting permissions accordingly, you can wrap all the scripts up into one script file to execute.
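For the single-stored-procedure route the question asks about, here is a rough sketch (the procedure name and parameter are invented; the body would be extended with whatever your creation script does). Database names cannot be passed as ordinary parameters, so dynamic SQL is required, with QUOTENAME guarding against injection:

CREATE PROCEDURE dbo.CreateCustomerDatabase
    @DatabaseName sysname
AS
BEGIN
    SET NOCOUNT ON;

    IF DB_ID(@DatabaseName) IS NOT NULL
    BEGIN
        RAISERROR('Database %s already exists.', 16, 1, @DatabaseName);
        RETURN;
    END

    DECLARE @sql nvarchar(max) = N'CREATE DATABASE ' + QUOTENAME(@DatabaseName) + N';';
    EXEC (@sql);

    -- The rest of the creation script (tables, permissions, seed data) can be
    -- built and executed the same way, or run afterwards through SQLCMD.
END

One caveat: CREATE DATABASE cannot run inside a user transaction, so the caller should not wrap the procedure call in one.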
We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same sql server instance so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is we can't seem to create the relationships between the table-based-entities and the view-based-entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors. It thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up-to-date the data in each db needs to be & if one data source can remain read-only you can:
- Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent task
- Use snapshot replication
- Create a custom BCP in/out process to get the data to the other db
- Use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:
- transactional replication with updatable subscriptions
- merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
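If the linked-server route fits, a hedged sketch of the setup (server, database and table names are placeholders, and the provider depends on what is installed; skip the registration entirely if both databases are on the same instance):

EXEC sp_addlinkedserver
    @server = N'OLDSERVER',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'old-server-hostname';

-- Then query the old data from the new database with a four-part name.
SELECT *
FROM OLDSERVER.OldDatabase.dbo.Customers;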
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. It looks like it's an Entity Framework issue and not a db issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the other project, prefix the name, and set up the relationships, and it works... Like I said, this was in LinqToSql; I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DB's then Replication or something that emulates it is the only thing that will work.
For performance purposes you probably want either Merge replication or Transactional with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY() so the triggers don't mess up the IDs in the old system.
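A rough sketch of what one of those triggers might look like (database, table and column names are invented, and it assumes the key in the new table is not an identity column, otherwise IDENTITY_INSERT handling is needed):

USE OldDb;  -- hypothetical old database
GO
CREATE TRIGGER dbo.trg_Customer_Sync
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove the old versions of changed/deleted rows from the new database...
    DELETE n
    FROM NewDb.dbo.Customer AS n
    JOIN deleted AS d ON d.CustomerId = n.CustomerId;

    -- ...then copy in the current versions (covers both inserts and updates).
    INSERT INTO NewDb.dbo.Customer (CustomerId, Name)
    SELECT i.CustomerId, i.Name
    FROM inserted AS i;
END
GO

Triggers like this are exactly why the @@IDENTITY point matters: @@IDENTITY returns the last identity value generated in the session, including one generated inside a trigger, while SCOPE_IDENTITY() only returns the value from the current scope.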
We have a production SQL Server 2005 database server with the production version of our application's database on it. I would like to be able to copy down the data contents of the production database to a development server for testing.
Several sites (and Microsoft's forums) suggest using the Backup/Restore options to copy databases from one server to another, but this solution is unworkable for several reasons (I don't have backup authority on our production database, I don't want to overwrite permissions on the development server, I don't want to overwrite structure changes on the development server, etc...).
I've tried using the SQL Import/Export Wizard in SQL Server 2005, but it always reports primary key violations. How can I copy the contents of a database from the production server to development without using the "Backup/Restore" method?
Well, without the proper rights it really becomes more tedious and less than ideal.
One way that I would recommend though is to drop all of your constraints and indexes and then add them again once the data has been imported/exported.
Not an elegant solution but it'll process really fast.
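A lighter-weight variant of the same idea, if dropping objects is too invasive, is to disable and later re-check the foreign keys around the load. sp_MSforeachtable is an undocumented but widely used helper, so treat this as a sketch:

-- Disable all foreign key constraints before the load...
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ...run the import/export here...

-- ...then re-enable the constraints and re-validate the existing data.
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';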
EDIT:
Another option is to create an SSIS package where you specifically dump the tables in an order that won't violate the constraints.
I often use SQL Data Compare (http://www.red-gate.com/products/sql_data_compare/index.htm) for this task: the synchronization scripts it writes will remove the relationships during the transfer and reapply them, but that is OK in most development cases. It works especially well with smaller databases or subsets of databases.
If your database is large, I would recommend finding someone with the keys to the kingdom. Doing an out of sequence backup could mess with the ability to restore the database from the primary backup (if they are doing partials during the week for example) by marking records backed up when they are only in your backup, so don't try to bypass that security if you are unsure why it is there.
Assuming that you can connect to both DBs from the same machine (which you almost always can - I do it with my production servers via a VPN), then for each table:
DELETE FROM devserv.dbo.tablename;
-- Only needed when the table has an identity column.
SET IDENTITY_INSERT devserv.dbo.tablename ON;
-- An explicit column list is required while IDENTITY_INSERT is ON.
INSERT INTO devserv.dbo.tablename (col1, col2, ...)
    SELECT col1, col2, ... FROM prodserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename OFF;
It is obviously worth noting that you will need to do this in a certain order if your tables have foreign key constraints.
The Import/Export Wizard is notorious for this sort of thing, and actually has a bug that makes it even less useful in working out the dependencies (sorry, I don't have the details to hand).
SSIS does a much better job, but you'll have to add each table copy task by hand (in fact a data source, a copy task and a data destination object for each). It's a little tedious to set up (more than it should be), but a lot simpler than writing your own code.
One tip: avoid generating an SSIS project with the Import/Export Wizard, thinking it will be easier to just tweak it. It generates something that most people would find unrecognisable, even with some SSIS experience!
If you do not have backup permission on the production server, I guess this is because you are using a shared SQL Server from a webhoster. In this case, check if your webhoster provides the tool called myLittleBackup. It allows installing a db from one server to another in a few clicks...
I'd contact someone that does have access to backup the database. Permissions are usually there for a reason.
I might consider getting a backup, as there will be one whether you run it or not (at least in theory a Prod DB is being backed up :) ).
Then just restore to a brand new database on your dev box so you don't conflict with anything or anyone else.
If you restore to a new DB you could also pull the tables and data across manually if you wanted, and since you created the DB you can give yourself rights and it's all OK. There are a number of other methods, all tedious.
We just use the SQL Server Database Publishing Wizard at work.
You would use this little utility to generate a T-SQL script that describes your production database (including all its data). Then connect to your dev server and run the generated script.
If you have to avoid backup/restore, this is what I would recommend (these steps assume you don't want to maintain the old schema name, just the structure):
1. Download OpenDBDiff and run a 'Compare' between the source and the (empty) destination.
2. On the sync script tab, copy only the CREATE TABLE rows (without the dbo.sysdiagrams table etc.), paste them into a new SQL Server Management Studio query, delete all the schema names appearing before the table names, and run it. Now you have the full structure, including primary keys, identity columns etc.
3. Use the SQL Server Import and Export Wizard like you did before (make sure you choose Edit Mappings and set the destination schema to dbo etc.). Also make sure you tick 'Drop and re-create destination table'.
On your Dev machine, set up a linked server to your production machine. Then just:
INSERT INTO dev.db.dbo.table (fieldlist)
SELECT fieldlist FROM prod.db.dbo.table;