As the post title implies, I have a legacy database (not sure if that matters), I'm using Fluent NHibernate and I'm attempting to test my mappings using the Fluent NHibernate PersistenceSpecification class.
My question is really a process one: I want to test these when I build locally in Visual Studio, using the built-in Unit Testing framework for now. Obviously this implies (I think) that I'm going to need a database. What are some options for getting this into the build? If I use an in-memory database, does NHibernate or Fluent NHibernate have some mechanism for pulling the database schema from a target database, or can the in-memory database do this itself? Or will I need to extract the schema manually to feed to the in-memory database?
Ideally I would like to get this set up to the point where the other developers don't really have to think about it, other than when they break the build because the tests don't pass.
NHibernate isn't able to re-create your database's schema, and because it's a legacy system you likely won't be able to generate the schema from NH. The best approach is to run your integration tests inside transactions and roll them back when complete. We run integration tests against our dev and test databases, which are periodically refreshed from the live system.
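A minimal sketch of that rollback pattern with MSTest and PersistenceSpecification; the Customer mapping and the CreateSessionFactory() helper are hypothetical stand-ins for your own mappings and Fluent NHibernate bootstrap. Disposing the TransactionScope without calling Complete() rolls everything back:

    using System.Transactions;
    using FluentNHibernate.Testing;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NHibernate;

    [TestClass]
    public class CustomerMappingTests
    {
        [TestMethod]
        public void Customer_mapping_round_trips()
        {
            // No Complete() call, so the ambient transaction rolls back
            // on Dispose and the legacy database is left untouched.
            using (new TransactionScope())
            using (ISession session = CreateSessionFactory().OpenSession())
            {
                new PersistenceSpecification<Customer>(session)
                    .CheckProperty(c => c.Name, "Acme")
                    .VerifyTheMappings();
            }
            // CreateSessionFactory(): your Fluent NHibernate setup (not shown).
        }
    }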
I want to be able to create a database and tables the same way Amazon's DynamoDB client does it, but with SQL Server. Is that possible?
I'm using .NET Core and this is for integration tests. I figured I could throw the code into the fixture.
Anyone have any ideas?
You can use either EF Core Migrations:

"The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application's data model while preserving existing data in the database."

or the Create and Drop APIs:

"The EnsureCreated and EnsureDeleted methods provide a lightweight alternative to Migrations for managing the database schema. These methods are useful in scenarios when the data is transient and can be dropped when the schema changes. For example during prototyping, in tests, or for local caches."

to create your tables at runtime.
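For the integration-test case, a minimal sketch of the EnsureDeleted/EnsureCreated approach; AppDbContext and the LocalDB connection string are assumptions standing in for your own context and server:

    using System;
    using Microsoft.EntityFrameworkCore;

    public class DatabaseFixture : IDisposable
    {
        public AppDbContext Context { get; }

        public DatabaseFixture()
        {
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseSqlServer(@"Server=(localdb)\MSSQLLocalDB;Database=IntegrationTests;Trusted_Connection=True")
                .Options;
            Context = new AppDbContext(options);

            Context.Database.EnsureDeleted();  // drop leftovers from a previous run
            Context.Database.EnsureCreated();  // build tables from the current model
        }

        public void Dispose() => Context.Dispose();
    }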
And then use one of the Data Seeding techniques:
"Data seeding is the process of populating a database with an initial set of data. There are several ways this can be accomplished in EF Core:

Model seed data
Manual migration customization
Custom initialization logic"
to populate them with known data.
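For the first option (model seed data), a minimal sketch; the Product entity and its values are hypothetical:

    // Inside your DbContext; HasData inserts these rows when the
    // schema is created (or when a migration is applied).
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>().HasData(
            new Product { Id = 1, Name = "Widget" },
            new Product { Id = 2, Name = "Gadget" });
    }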
You could put the SQL Server files (at least the log files) on a RAM disk, and/or use delayed durability (ALTER DATABASE x SET DELAYED_DURABILITY = FORCED). You could also use memory-optimized tables, but I think you won't get full compatibility.
BTW: it is dangerous to rely entirely on such shortcuts in your development process, since developers then get feedback on bad habits and performance problems very late.
For that kind of volatile database (this also applies to containers) you need to add some code to your test pipeline or product to actually create and populate the DB. (If you use containers, you can think about packaging a pre-populated DB snapshot.)
I'm trying to work out a proper database development process for my applications. I've tried Visual Studio Database Projects with pre/post-deployment scripts (a very nice feature), the Entity Framework Database First approach (with a separate script for each database change placed under source control), and now I'm dealing with the Entity Framework Code First approach. I have to say that I'm really impressed with the possibilities it gives, but I'm trying to figure out how to manage changes to the models during development. Assume that I have the following environments in my company:
LOCALHOST - for each single developer,
TEST - single machine with SQL Server database for testing purposes,
PRODUCTION - single machine with SQL Server database used by clients
Now, while I'm working on an application and the code changes, it's fine for me to drop and recreate the database each time I test (so for the LOCALHOST and TEST environments). I've created proper database initializers that seed the database with test data, and I'm pretty happy with them.
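To illustrate, a minimal sketch of the kind of initializer I mean (MyContext and Customer are just placeholder names):

    public class TestDataInitializer : DropCreateDatabaseAlways<MyContext>
    {
        // Runs after the database has been dropped and recreated.
        protected override void Seed(MyContext context)
        {
            context.Customers.Add(new Customer { Name = "Test customer" });
            context.SaveChanges();
        }
    }

    // Registered once at application start:
    Database.SetInitializer(new TestDataInitializer());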
However, when the model changes with a new build, I want to handle the PRODUCTION database changes in such a way that I won't lose any data. So, in Visual Studio 2012 there is the "SQL Schema Compare" tool, and I'm wondering whether that alone is enough to manage all database changes for PRODUCTION? I can compare my local database schema with the PRODUCTION schema and simply apply all the changes?
Now, I want to ask: what's the point of Code First Migrations here? Why should I manage all database changes through it? The only reason I can find is that it lets you perform all sorts of "INSERT" and "UPDATE" commands. However, I think that if the database is correctly designed there shouldn't be any need to perform such commands. (That's a topic for another discussion, so I don't want to go into details.) Anyway, what are the real advantages of Code First Migrations over the Code First + Schema Compare pattern?
It simplifies deployment. If you didn't manage the migrations in code, then you would have to run the appropriate delta scripts manually on your production environment. With EF migrations, you can configure your application to migrate the database automatically to the latest version on start up.
Typically, before EF migrations, if you wanted to automate this you would either have to run the appropriate delta scripts during a custom installation routine, or write some infrastructure into your application which runs the delta scripts in code. This would need to know the current database version, so that it knows which of the scripts to run, which you would normally have in a DbVersion table or something similar. With EF migrations, this plumbing is already in place for you.
Using migrations means the alignment of model and database changes is automated and therefore easier to manage.
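As a minimal sketch of that "migrate on start-up" wiring for classic EF Code First (MyDbContext and the Configuration class are the artifacts Enable-Migrations generates; the names here are illustrative):

    using System.Data.Entity;

    // Typically placed in Application_Start (Global.asax):
    Database.SetInitializer(
        new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());

With this initializer in place, any pending migrations are applied the first time the context is used after the application starts.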
With an ASP.NET MVC3 application, how do I deploy the database into production, and how do I manage the schema changes?
When I'm developing my application, I see an aspnetmvc.mdf (and .ldf) file in App_Data. This has the aspnet_* tables, and also my tables (which I hand-created in SQL Server Express). This file is 10MB, and it doesn't seem to me that I should simply upload it to my production machine.
Should I instead keep schema (and seed data) changes in a .SQL file and (somehow) run them on the server? Should I use NHibernate's methods for auto-generating tables? (If so, what about the ASP.NET standard tables?)
What's the best way to manage this? Ideally, I'd like something like LiquiBase or Rails' DB migrations, where I can isolate each change and run it on its own. But I've never put a from-scratch ASP.NET MVC site into production, so I'm not sure what to do.
My thoughts on NHibernate's Schema Update are here.
There is no one right solution, but SchemaUpdate can get you about 90% of the way there. For the other 10%, I currently use hand-written SQL files (named by the date they were created), but there are other, more sophisticated options (such as Red Gate's SQL Compare, or the data tools built into some versions of Visual Studio).
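For the SchemaUpdate part, a minimal sketch, assuming an NHibernate Configuration object (configuration) built elsewhere; note that SchemaUpdate only makes additive changes, which is where the other 10% comes from:

    using NHibernate.Tool.hbm2ddl;

    var update = new SchemaUpdate(configuration);
    update.Execute(false, true);  // false: don't echo the script; true: apply it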
I am currently working on a sample website proof of concept and planning to provide the entire VS2010 (ASP.NET and C#) solution to the company. I also use SQL Server and need to provide the database (tables, including some records, and stored procedures). What is the easiest way to ensure that I can bundle the database along with my VS2010 solution? Please provide some steps if possible.
At my company, we actually take the same approach that you're taking, and just do everything with scripts by hand for deployment. However, we do this mostly because we have a large legacy database, and we do incremental updates for a system that has to always be online.
If you're starting a new website from scratch, you might look into Database Projects inside of Visual Studio. It also has some functionality for unit testing your database built in that might be beneficial.
http://www.visualstudiotutor.com/2010/08/manage-database-projects-with-visual-studio-2010/
As you develop an application, database changes inevitably pop up. The trick, I find, is keeping your database build in step with your code. In the past I have added a build step that executed SQL scripts against the target database, but that is dangerous insofar as you could inadvertently add bogus data or worse.
My question is what are the tips and tricks to keep the database in step with the code? What about when you roll back the code? Branching?
Version numbers embedded in the database are helpful. You have two choices: embedding values into a table that can be queried (which allows versioning multiple items), or having an explicitly named object (such as a table or some such) that you can test for.
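A minimal sketch of the first choice, assuming a hypothetical single-table SchemaVersion scheme and an ExpectedSchemaVersion constant maintained in the application:

    using System;
    using System.Data.SqlClient;

    const int ExpectedSchemaVersion = 42;  // bump with each schema change

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT MAX(Version) FROM SchemaVersion", conn))
    {
        conn.Open();
        int dbVersion = (int)cmd.ExecuteScalar();
        if (dbVersion != ExpectedSchemaVersion)
            throw new InvalidOperationException(
                $"Database schema is v{dbVersion}, code expects v{ExpectedSchemaVersion}.");
    }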
When you release to production, do you have a rollback plan in the event of an unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to roll the database back to a previous code version.
You should be able to create your database from scratch into a known state.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
I've definitely been interested in some sort of DB versioning system, but I haven't found anything yet. So, instead of a solution, you'll get my vote. :-P
You really do want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast makes you produce good software faster.
Just like external libraries, database configuration must also be in source control.
Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)
Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your app to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you are always able to get a database of any state into the correct state.
I like the way that Django does it. You build models, and when you run syncdb it applies the models that you have created. If you add a model, you just need to run syncdb again. It would be easy to have your build script do this every time you make a push.
The problem comes when you need to alter a table that has already been made. I do not think that syncdb handles that. That would require you to go in and manually alter the table, and also add the property to the model. You would probably want to version that ALTER statement. The models would always be under version control, though, so if you needed to you could get a DB schema up and running on a new box without running the SQL scripts. Another problem with this is keeping track of static data that you always want in the DB.
Rails migration scripts are pretty nice too.
A DB versioning system would be great, but I don't really know of such a thing.
"While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database."
Backups and compression can help you there. Sorry, there's no excuse not to be able to get a good set of data to develop against, even if it's just a subset.
Put your database development under version control. I recommend having a look at neXtep designer:
http://www.nextep-softwares.com/wiki
It is a free GPL product which offers a brand-new approach to database development and deployment by connecting version information to a SQL generation engine that can automatically compute any upgrade script you need to take one version of your database to another. Any existing database can be put under version control through a reverse synchronization.
It currently supports Oracle, MySQL and PostgreSQL; DB2 support is under development. It is a full-featured database development environment where you always work on version-controlled elements from a repository. You can publish your updates by simple synchronization during development, and you can generate exportable database deliveries which you can execute on any targeted database through a standalone installer that validates the versions, performs structural checks and applies the upgrade scripts.
The IDE also offers you SQL editors, dependency management, support for modular database model components, data model diagrams, SQL clients and much more.
All the documentation and concepts can be found in the wiki.