I would like to know if there is a way to reset the database after each integration test without @DirtiesContext:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
This works, but it is very slow because the Spring context is reloaded for each test.
My tests use MockMvc to make REST calls against an API, like this:
mockMvc.perform(put("/products/")
        .header("Content-Type", "application/json")
        .content(jsonPost))
        .andExpect(status().isOk())
        .andReturn();
So, without manual intervention (creating and maintaining a script to drop and recreate the tables), does the Spring framework offer an alternative?
You can clean the tables you need by doing the following:
Inject a JdbcTemplate instance:
@Autowired
private JdbcTemplate jdbcTemplate;
Then use the JdbcTestUtils class to delete the records from the tables you need to reset:
JdbcTestUtils.deleteFromTables(jdbcTemplate, "table1", "table2", "table3");
Call this in a method annotated with @After (JUnit 4) or @AfterEach (JUnit 5) in your test class:
@AfterEach
void tearDown() {
    JdbcTestUtils.deleteFromTables(jdbcTemplate, "table1", "table2", "table3");
}
I found this approach in this blog post:
Easy Integration Testing With Testcontainers
In the simple case, annotate each of your test classes with @Transactional and the transaction manager will roll back after each @Test method. You can get more information by reading this.
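As a minimal sketch of that setup, assuming Spring Boot with JUnit 5 (the repository and entity here are hypothetical):
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // each @Test runs in a transaction that is rolled back afterwards
class ProductRepositoryIT {

    @Autowired
    private ProductRepository repository; // hypothetical repository

    @Test
    void savesProduct() {
        repository.save(new Product("widget")); // this insert is rolled back after the test
    }
}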
I am a bit late to the party, but I had the same problem. All the unit tests (which could be considered integration tests) in an application I inherited took approximately 35 minutes to complete, using an embedded H2 as the test database. All test classes were annotated with @DirtiesContext, usually at method level.
So the database was destroyed and recreated for each method. This takes time.
By removing the dirties annotation and using a database truncation class in the @Before method, I now run the complete test suite in about 4 minutes.
If you have anything other than JPA-managed data in your Spring context that should be removed between tests, you have to do that explicitly.
I can share the DB truncation class if you like (a sketch follows below), but it simply uses the JPA metamodel to find the tables to truncate. Truncation seems to be very efficient in H2. Exceptions for entities based on views rather than tables can be configured.
To make truncation easier, turn off referential integrity before truncating and switch it back on when you're done.
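A rough sketch of such a truncation helper, assuming H2 and the javax.persistence metamodel (the class name is made up; TRUNCATE and the REFERENTIAL_INTEGRITY statements are H2-specific):
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Table;
import javax.persistence.metamodel.EntityType;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class DatabaseTruncator {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void truncateAll() {
        // H2-specific: disable FK checks so truncation order does not matter
        entityManager.createNativeQuery("SET REFERENTIAL_INTEGRITY FALSE").executeUpdate();
        for (EntityType<?> entity : entityManager.getMetamodel().getEntities()) {
            Table table = entity.getJavaType().getAnnotation(Table.class);
            // fall back to the entity name when there is no @Table annotation
            String tableName = (table != null && !table.name().isEmpty())
                    ? table.name()
                    : entity.getName();
            entityManager.createNativeQuery("TRUNCATE TABLE " + tableName).executeUpdate();
        }
        entityManager.createNativeQuery("SET REFERENTIAL_INTEGRITY TRUE").executeUpdate();
    }
}
Call truncateAll() from the @Before method; entities backed by views can be skipped with an exclusion list.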
You could use org.springframework.test.context.jdbc's @Sql({"clear-database.sql"}) and then just write a script to clear the DB.
So you'd end up with something like this:
@Test
@Sql({"classpath:sql/clear-database.sql", "classpath:sql/set-up-db.sql"})
void doThings() {
    this.mockMvc.perform(etc..);
}
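If you would rather run the cleanup after every test instead of before it, the same annotation also supports an execution phase at class level (the script path here is an assumption):
import org.springframework.test.context.jdbc.Sql;
import org.springframework.test.context.jdbc.Sql.ExecutionPhase;

// runs the cleanup script after every test method in this class
@Sql(scripts = "classpath:sql/clear-database.sql",
        executionPhase = ExecutionPhase.AFTER_TEST_METHOD)
class ProductApiTest {
    // tests as above
}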
I have an EF 6.2 project in my MVC solution.
It uses a SQL Server DB and has about 40 tables with many foreign keys.
The first query is very slow, around 20 seconds.
If I immediately hit the same page again, changing the user parameter, the query takes less than 1 second.
So this looks like a warm-up issue in EF6. That's fine, and apparently there are plenty of things I can do about it.
The model cache (part of EF 6.2) looks like it could be beneficial, but everywhere I read about it talks about model-first. Nothing about DB-first. Would it still work with DB-first?
There are also the Entity Framework 6 Power Tools, which allow me to generate views. I tried this and it doesn't seem to make any difference. Is this still a valid route?
Any other ideas?
EF DbContexts incur a one-off cost to resolve their entity mappings. For web applications you can mitigate this by kicking off a simple query against the DbContext at application start-up, so the warm-up happens then rather than during the first user-triggered query. Simply new-ing up a context doesn't trigger the initialization; running a query does. So for ASP.NET MVC, in Application_Start, after initializing everything:
using (var context = new MyContext())
{
var warmup = context.MyTable.Count(); // against a small table.
}
You can test this behaviour with unit tests by having a suite of timed tests that read data from the DbContext and putting a break-point in DbContext's OnModelCreating method: it will be executed just once, by the first query of the first test. You can add a OneTimeSetUp in a test fixture that runs the quick count above ahead of the tests, so this cost is incurred before you measure the performance of the test runs.
So, the answer was to update EF to 6.2 and then use its newest feature:
public class MyDbConfiguration : DbConfiguration
{
public MyDbConfiguration() : base()
{
var path = Path.GetDirectoryName(this.GetType().Assembly.Location);
SetModelStore(new DefaultDbModelStore(path));
}
}
for the full story check out this link: https://entityframework.net/why-first-query-slow
You're going to take a small performance hit at startup, but after that everything moves a lot faster.
For anyone using an Azure web app, you can use a deployment slot (https://stackify.com/azure-deployment-slots/); this allows you to publish into a non-production slot and warm the app up before swapping it in as the production slot.
Let's say I have 10 entities. Eight of them are completely new and built with the EF Code-First approach. Until now I was using the DropCreateDatabaseIfModelChanges initialization strategy, and that worked perfectly for me.
But now I have two entities built from a database holding third-party data, and I need that data all the time; I can't allow EF to drop these tables even if the model changes. I need something more intelligent there.
What is the correct approach in that case?
In short, I want something quite similar: DbInitializer behaviour, but on a per-table basis instead of per-database. I want the Code-First entities to work the same as before, regenerating and all that, but add something custom just for these two specific DB-based entities.
You could use EF Code First Migrations.
First, you need to run the Enable-Migrations command in the Package Manager Console. This command adds a Migrations folder to your project; the new folder contains the Configuration class that lets you configure how Migrations behaves for your context.
After that, if you followed the required steps, you can run Update-Database from the Package Manager Console to add the eight new tables to your DB.
Example:
1. Make the changes in your model (add the eight new entities).
2. From the Package Manager Console, run Add-Migration [Migration Name].
3. Make any necessary changes to the generated code (this is optional).
4. From the Package Manager Console, run Update-Database.
If you don't change or remove any property of your existing entities, you should not lose the data that you already have in the DB.
Update
To achieve what you want you can use Automatic Migrations. This way, when you run your application you will always get your database at the latest version, because EF will do an implicit migration every time it is needed; in the purest version you never need to do anything more than enable automatic migrations.
First, set the database initializer in the context class to the new initialization strategy MigrateDatabaseToLatestVersion, as shown below:
public class YourContext: DbContext
{
public YourContext(): base("DefaultConnectionString")
{
Database.SetInitializer(new MigrateDatabaseToLatestVersion<YourContext, YourProject.Migrations.Configuration>("DefaultConnectionString"));
}
}
Later, in the constructor of the Configuration class you have to enable automatic migrations:
public Configuration()
{
AutomaticMigrationsEnabled = true;
}
Now, if you are working with an existing database, you need to do this before adding your new eight entities:
1. Run the Add-Migration InitialCreate -IgnoreChanges command in the Package Manager Console. This creates an empty migration with the current model as a snapshot.
2. Run the Update-Database command in the Package Manager Console. This applies the InitialCreate migration to the database. Since the actual migration doesn't contain any changes, it simply adds a row to the __MigrationsHistory table indicating that this migration has already been applied.
After that, you can apply the changes you want to your model (adding, for example, the new eight entities), and when you execute your app again, EF will do the migrations for you. If you change something that provokes an inconsistency with your database schema that could end in data loss, an exception will be thrown. As long as that exception is not thrown, you don't have to worry about losing your data; it will remain intact in your DB.
As additional information: if you don't mind losing your data (which I think is not your scenario, but it is useful to know anyway), you can set the AutomaticMigrationDataLossAllowed property to true (its default value is false), and no exception will be thrown when a migration would cause data loss in your DB.
public Configuration()
{
    AutomaticMigrationsEnabled = true;
    AutomaticMigrationDataLossAllowed = true;
}
In my application I use a Spring context and JPA. I have a set of entities annotated with @Entity, and tables for them are created automatically during system startup. Recently I started using Spring ACL, so I have to have the following additional DB schema, and I don't want these tables to be mapped to entities (I simply don't need that, because Spring ACL manages them independently).
I want to automatically insert e.g. an admin user account into the User entity's table. How do I do that properly?
I want to initialize the Spring ACL custom tables during system startup, but an SQL script file does not seem to be a good solution: since I use different databases for production and functional testing, the differing SQL dialects do not allow me to execute the same script properly on both engines (e.g. when using MySQL and HSQL).
At first I tried to use a ServletListener that checks the DB during servlet initialization and adds the necessary data and schema, but this does not work for integration tests (because no servlet is involved at all).
What I want to achieve is a Spring bean (?) that is launched after JPA has initialized all entity tables, inserts all startup data using injected DAOs, and somehow creates the Spring ACL schema. Then I want the bean to be removed from the IoC container (because I simply don't need it anymore). Is that possible?
Or is there any better way of doing this?
Standard JPA allows you to run an SQL script when the persistence.xml is loaded:
http://docs.oracle.com/javaee/7/tutorial/persistence-intro005.htm
Add this property to your persistence.xml file:
<property name="javax.persistence.sql-load-script-source"
value="META-INF/sql/data.sql" />
And fill the data.sql file with your default values.
If you are using EclipseLink you could use a SessionEventListener to execute code after JPA login. You could perform your schema creation and setup in a postLogin event.
You could use the Schema Framework in EclipseLink (org.eclipse.persistence.tools.schemaframework) to create tables and DDL in a database platform independent way. (TableDefinition, SchemaManager classes)
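A rough sketch of that listener, assuming registration through the eclipselink.session-event-listener persistence-unit property (the class name and setup logic are placeholders):
import org.eclipse.persistence.sessions.SessionEvent;
import org.eclipse.persistence.sessions.SessionEventAdapter;

// registered in persistence.xml:
// <property name="eclipselink.session-event-listener" value="com.example.AclSchemaListener"/>
public class AclSchemaListener extends SessionEventAdapter {

    @Override
    public void postLogin(SessionEvent event) {
        // runs once, after EclipseLink has logged in to the database;
        // create the ACL schema and insert startup data here,
        // e.g. via the Schema Framework using event.getSession()
    }
}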
I use the @PostConstruct annotation to invoke initialization methods.
As the documentation describes, the @PostConstruct annotation is used on a method that needs to be executed after dependency injection is done, to perform any initialization. You can simply add a Spring bean with methods annotated with @PostConstruct; those methods are executed after the tables are created (or, put another way, after the other beans are ready).
Code sample:
@Component
public class EntityLoader {

    @Autowired
    UserRepository userRepo;

    @PostConstruct
    public void initApiUserData() {
        User u = new User();
        // set user properties here
        userRepo.save(u);
    }
}
If you use Hibernate, create an SQL script import.sql in the classpath root; Hibernate will execute it on startup. (This worked in former Hibernate versions; in the documentation of the current version, 4.1, I have not found any mention of this feature.)
But Hibernate 4.1 has another feature:
Property: hibernate.hbm2ddl.import_files
Comma-separated names of optional files containing SQL DML statements to be executed during SessionFactory creation. This is useful for testing or demoing: by adding INSERT statements, for example, you can populate your database with a minimal set of data when it is deployed.
File order matters: the statements of a given file are executed before the statements of the following files. These statements are only executed if the schema is created, i.e. if hibernate.hbm2ddl.auto is set to create or create-drop.
e.g. /humans.sql,/dogs.sql
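As a sketch, the property can also be set programmatically when building the SessionFactory (this assumes a hibernate.cfg.xml on the classpath; the file names are the ones from the example above):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryBuilderExample {

    public static SessionFactory build() {
        return new Configuration()
                .configure() // reads hibernate.cfg.xml from the classpath
                .setProperty("hibernate.hbm2ddl.auto", "create")
                // only runs because hbm2ddl.auto is create/create-drop
                .setProperty("hibernate.hbm2ddl.import_files", "/humans.sql,/dogs.sql")
                .buildSessionFactory();
    }
}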
You could try using Flyway to create the tables (using the SQL DDL as an SQL migration) and possibly put some data in the tables using either SQL-based or Java-based migrations (see the sketch below); you might want to use the latter if you need environment or other runtime info.
It's a lot easier than it sounds, and you will also end up with Flyway itself as a bonus, if not a must.
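For example, a minimal sketch of a Java-based migration (the class name, table, and columns are made up for illustration):
import java.sql.Statement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// picked up by Flyway from the configured migration locations
public class V2__Insert_admin_user extends BaseJavaMigration {

    @Override
    public void migrate(Context context) throws Exception {
        try (Statement statement = context.getConnection().createStatement()) {
            // hypothetical table and columns
            statement.execute("INSERT INTO users (login, role) VALUES ('admin', 'ADMIN')");
        }
    }
}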
I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface.
What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article?
For NHibernate 2.0, you should also look at Event Listeners. These are the evolution of the IInterceptor interface and we use them successfully for auditing.
[EDIT]
Post NH2.0 release, please look at the Event Listeners as suggested below. My answer is outdated.
The IInterceptor is the recommended way to modify any data in NHibernate in a non-invasive fashion. It's also useful for decryption/encryption of data without your application code needing to know.
Triggers on the database move the responsibility for logging (an application concern) into the DBMS layer, which effectively ties your logging solution to your database platform. By encapsulating the auditing mechanics in the persistence layer you retain platform independence and code portability.
I use Interceptors in production code to provide auditing in a few large systems.
I prefer the CodeProject approach you mentioned.
One problem with database triggers is that it leaves you no choice but to use integrated security coupled with Active Directory for access to your SQL Server. The reason is that your connection must inherit the identity of the user who triggered it; if your application uses a named "sa" account or other shared accounts, the "user" field will only ever reflect "sa".
This can be overridden by creating a named SQL Server account for each and every user of the application, but that will be impractical for non-intranet, public-facing web applications, for example.
I do like the Interceptor approach mentioned and use it on the project I'm currently working on.
However, one obvious disadvantage that deserves highlighting is that this approach will only audit data changes made via your application. Any direct data modifications, such as ad-hoc SQL scripts that you may need to execute from time to time (it always happens!), won't be audited unless you remember to perform the audit table insertions at the same time.
I understand this is an old question, but I would like to answer it in the light of the new event system in NH 2.0. Event listeners are better suited to auditing-like functions than interceptors. Ayende wrote a great example on his blog last month; here's the URL to his blog post:
ayende.com/Blog/archive/2009/04/29/nhibernate-ipreupdateeventlistener-amp-ipreinserteventlistener.aspx
As an entirely different approach, you could use the decorator pattern with your repositories.
Say I have
public interface IRepository<EntityType> where EntityType : IAuditably
{
    void Save(EntityType entity);
}
Then, we'd have our NHibernateRepository:
public class NHibernateRepository<EntityType> : IRepository<EntityType> where EntityType : IAuditably
{
    /*...*/
    public void Save ( EntityType entity )
    {
        session.SaveOrUpdate(entity);
    }
}
Then we could have an Auditing Repository:
public class AuditingRepository<EntityType> : IRepository<EntityType> where EntityType : IAuditably
{
    /*...*/
    public void Save ( EntityType entity )
    {
        entity.LastUser = security.CurrentUser;
        entity.LastUpdate = DateTime.UtcNow;
        innerRepository.Save(entity);
    }
}
Then, using an IoC framework (StructureMap, Castle Windsor, NInject) you could wire it all up without the rest of your code ever knowing you had auditing going on.
Of course, how you audit the elements of cascaded collections is another issue entirely...
As a novice in practicing test-driven development, I often end up in a quandary as to how to unit test persistence to a database.
I know that technically this would be an integration test (not a unit test), but I want to find out the best strategies for the following:
Testing queries.
Testing inserts. How do I know what went wrong if the insert fails? I can test it by inserting and then querying, but how can I know that the query wasn't wrong?
Testing updates and deletes -- same as testing inserts
What are the best practices for doing these?
Regarding testing SQL: I am aware that this could be done, but if I use an O/R mapper like NHibernate, it attaches naming warts to the aliases used in the output queries, and as that is somewhat unpredictable I'm not sure I could test for it.
Should I just abandon everything and simply trust NHibernate? I'm not sure that's prudent.
Look into DbUnit. It is a Java library, but there must be a C# equivalent. It lets you prepare the database with a set of data so that you know exactly what is in the database, and then you can interface with DbUnit to verify what is in the database. It can run against many database systems, so you can use your actual database setup or something else, like HSQL in Java (a Java database implementation with an in-memory option).
If you want to test that your code uses the database properly (which you most likely should be doing), then this is the way to go to isolate each test and ensure the database has the expected data prepared.
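A rough sketch of that preparation step with DbUnit (the driver, URL, and dataset file are placeholders):
import java.io.File;
import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

public class ProductDaoDbTest {

    public void setUpDatabase() throws Exception {
        // connect DbUnit to the test database (placeholder credentials)
        IDatabaseTester tester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");
        // load a known dataset so every test starts from the same state
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/dataset.xml"));
        tester.setDataSet(dataSet);
        tester.onSetup(); // clean-inserts the dataset before the test
    }
}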
As Mike Stone said, DbUnit is great for getting the database into a known state before running your tests. When your tests are finished, DbUnit can put the database back into the state it was in before you ran the tests.
DbUnit (Java)
DbUnit.NET
You do the unit testing by mocking out the database connection. This way, you can build scenarios where specific queries in the flow of a method call succeed or fail. I usually build my mock expectations so that the actual query text is ignored, because I really want to test the fault tolerance of the method and how it handles itself -- the specifics of the SQL are irrelevant to that end.
Obviously this means your test won't actually verify that the method works, because the SQL may be wrong. This is where integration tests kick in. For that, I expect someone else will have a more thorough answer, as I'm just beginning to get to grips with those myself.
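As an illustration of this style of mocking at the connection level (a Java/Mockito sketch; the DAO under test is hypothetical):
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.sql.Connection;
import java.sql.SQLException;

public class FaultToleranceTest {

    public void failingQueryIsHandledGracefully() throws Exception {
        Connection connection = mock(Connection.class);
        // whatever SQL the method prepares, the statement will fail,
        // so the test exercises error handling rather than the query text
        when(connection.prepareStatement(anyString()))
                .thenThrow(new SQLException("simulated failure"));

        // hypothetical DAO under test: should catch the exception
        // and degrade gracefully rather than propagate raw SQL errors
        new ProductDao(connection).findAll();
    }
}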
I have written a post here concerning unit testing the data layer which covers this exact problem. Apologies for the (shameful) plug, but the article is too long to post here.
I hope that helps you - it has worked very well for me over the last 6 months on 3 active projects.
Regards,
Rob G
The problem I experienced when unit testing persistence, especially without an ORM and thus mocking your database (connection), is that you don't really know whether your queries succeed. It could be that your queries are designed for a particular database version and only succeed with that version. You'll never find that out if you mock your database. So, in my opinion, unit testing persistence is of limited use. You should always add tests that run against the targeted database.
For NHibernate, I'd definitely advocate just mocking out the NHibernate API for unit tests -- trust the library to do the right thing. If you want to ensure that the data actually goes to the DB, do an integration test.
For JDBC-based projects, my Acolyte framework can be used: http://acolyte.eu.org. It allows you to mock up the data access you want to test, benefiting from the JDBC abstraction, without having to manage a specific test DB.
I would also mock the database and check that the queries are what you expected. There is a risk that a test checks the wrong SQL, but this would be detected in the integration tests.
I usually create a repository, use it to save my entity, and then retrieve a fresh one. Then I assert that the retrieved entity is equal to the saved one.
Technically, unit tests of persistence are not unit tests; they are integration tests.
With C# using mbUnit, you simply use the SqlRestoreInfo and RollBack attributes:
[TestFixture]
[SqlRestoreInfo(<connectionstring>, <name>, <backupLocation>)]
public class Tests
{
[SetUp]
public void Setup()
{
}
[Test]
[RollBack]
public void TEST()
{
//test insert.
}
}
The same can be done in NUnit, except the attribute names differ slightly.
As for checking whether your query was successful: you normally need to follow it with a second query to see whether the database was changed as you expected.