Dapper ORM and Paging and Sorting Extension

I am using Dapper for a generic DAL that can be used for both Oracle and SQL Server. What would be the best way to provide paging and sorting methods so that they work for both SQL Server and Oracle without manually creating/changing the SQL statements?
Something like:
var users = Dapper
    .Query<User>(sqlStatement
        .Skip(10)
        .Take(10)); // where sqlStatement is a plain SQL string

As Alex pointed out, paging is done differently on the two databases of your choice, so your best bet for having the most optimized queries is to write separate queries for each.
It would probably be best to create two data provider assemblies each one serving each database:
Data.Provider.Sql
Data.Provider.Oracle
And then configure your application for one of the two. I've deliberately also introduced a Data.Provider namespace, which can be part of a common Data assembly and defines all the data provider interfaces that the two providers above implement.
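
A minimal sketch of what that shared abstraction could look like, with one implementation per database (the interface name and SQL shapes here are illustrative assumptions, not a definitive implementation):

public interface IPagingSqlBuilder
{
    // Wraps a base SELECT statement in database-specific paging SQL.
    string BuildPagedQuery(string sql, string orderBy, int skip, int take);
}

// SQL Server (2005+): ROW_NUMBER() based paging.
public class SqlServerPagingSqlBuilder : IPagingSqlBuilder
{
    public string BuildPagedQuery(string sql, string orderBy, int skip, int take) =>
        $@"SELECT * FROM (
               SELECT ROW_NUMBER() OVER (ORDER BY {orderBy}) AS RowNum, q.*
               FROM ({sql}) q
           ) paged
           WHERE paged.RowNum BETWEEN {skip + 1} AND {skip + take}";
}

// Oracle: classic ROWNUM based paging.
public class OraclePagingSqlBuilder : IPagingSqlBuilder
{
    public string BuildPagedQuery(string sql, string orderBy, int skip, int take) =>
        $@"SELECT * FROM (
               SELECT q.*, ROWNUM rnum
               FROM (SELECT * FROM ({sql}) ORDER BY {orderBy}) q
               WHERE ROWNUM <= {skip + take}
           ) WHERE rnum > {skip}";
}

Dapper then simply executes whatever the configured builder produces, e.g. connection.Query<User>(builder.BuildPagedQuery(baseSql, "Name", 10, 10)).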

Related

Custom Data Service Provider (WCF Data Service) for a dynamic table in MS SQL Server

After reading the 'Custom Data Service Providers' documentation of the ODataProviderToolkit from Microsoft, I ended up a little confused about where to start.
My intention is to have an OData server for a table in the database whose structure is unknown at compile time. We cannot change this.
For now, we update our model from the database every time this table is reconfigured by the user, then recompile and update the OData service. This is not suitable.
The documentation 'Part 4: Basic Read-Only Untyped Data Provider' explains the implementation using LINQ to Objects. The implementation is very complex, creating IQueryable data providers.
My question: shouldn't the requested implementation be simple and straightforward using standard classes which provide a T-SQL data provider and IQueryable interfaces?
Using a T-SQL data source, a SqlConnectionString, C#, EF 4 with LINQ to SQL...
I just don't know how to put this all together.
Or is it really necessary that I implement all these expression-tree-related methods like ExpressionVisitor, GetSquenceValueMethodInfo, etc.?
Thanks in advance.
I tried this about half a year ago (Web API OData 2). What I did was create the EDM model on the fly for every request and set it in a custom OData route constraint. In the controller I returned EdmEntityObjects of the type corresponding to the current request. I expected that returning an IQueryable would do all the magic of filtering, but it failed, saying that the operation is not supported.
We (timecockpit) have since implemented a QueryNodeVisitor for the OData AST, which is substantially less complicated than a full IQueryable implementation. We don't support all functions, only those that are available in our custom query language (TCQL).
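
To give a feel for that approach, here is a rough sketch of walking a parsed $filter AST and emitting a SQL fragment. It assumes the OData v4 UriParser node types (exact types and namespaces vary by library version), and a real implementation must parameterize values instead of inlining them:

using System;
using Microsoft.OData.UriParser;

public static class FilterToSql
{
    // Recursively translates a $filter expression tree into a WHERE fragment.
    public static string Translate(SingleValueNode node) => node switch
    {
        BinaryOperatorNode b =>
            $"({Translate(b.Left)} {ToSqlOperator(b.OperatorKind)} {Translate(b.Right)})",
        SingleValuePropertyAccessNode p => p.Property.Name,
        ConstantNode c => c.Value is string s
            ? $"'{s.Replace("'", "''")}'"       // illustration only; use parameters
            : Convert.ToString(c.Value),
        ConvertNode cv => Translate(cv.Source), // unwrap implicit conversions
        _ => throw new NotSupportedException($"Unsupported node: {node.Kind}")
    };

    private static string ToSqlOperator(BinaryOperatorKind kind) => kind switch
    {
        BinaryOperatorKind.Equal => "=",
        BinaryOperatorKind.NotEqual => "<>",
        BinaryOperatorKind.GreaterThan => ">",
        BinaryOperatorKind.LessThan => "<",
        BinaryOperatorKind.And => "AND",
        BinaryOperatorKind.Or => "OR",
        _ => throw new NotSupportedException(kind.ToString())
    };
}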
I haven't checked recently whether the newer OData Web API supports filtering on typeless EdmEntityObjects. Even if it does, one would still want to pass the filter criteria down to the SQL database in order not to load all entities into memory before filtering.
I doubt that directly forwarding the query to SQL using LINQ to SQL works, and I would expect that at least some rewriting of the query (e.g. for adding permission checks) is necessary in all but the most trivial applications.

What is the best approach to make an application work with multiple ADO.NET providers?

We develop a set of industrial applications that use SQL Server.
As demand expands, customers want to use our applications with their own RDBMS such as Oracle, MySQL and others.
Currently, some applications use the OLE DB provider and others use the native SQL Server provider, for various reasons including programming experience and religion.
We seek a unified approach. The project manager prefers OLE DB because "it works with everything." Personally, I hate it because of how query parameters are handled...
I have two solutions in mind:
The first would be to keep the existing SQLOLEDB-based code and adapt each SQL statement in case of incompatibility by branching. It would go very quickly, the SQL would not differ that much, and it would please the project manager. However, it might turn the code into spaghetti.
The second would be to use the native ADO.NET provider for each RDBMS and to develop a data access library for each. The libraries might share a common part to avoid code duplication. Of course it would take some time, but it would lead to a clean architecture and optimal performance.
These are mainly client-server applications. Some parts of our database are dynamically generated, and there are a lot of dynamic queries. That's why using an ORM is not possible.
How would you try to achieve the same objective?
I would do one of two things:
Use the common features between them all; in code this would mean using IDbConnection etc.
In your app, create a set of interfaces onto the DAL and then have a DAL implementation for each supported database.
If you can get away with it, go the lowest-common-denominator route. However, it sounds like you may need to take advantage of database-specific features. If so, then your best route to doing this cleanly in code is to talk to a set of interfaces instead of a specific DAL.
Just make sure your interfaces are well enough defined to reduce changes later on and also offer enough openings to support all the necessary databases.
It is then just a case of having a factory to provide the concrete implementation for a given interface.
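
A minimal sketch of that interface-plus-factory shape (the repository interface and provider names are invented for illustration):

using System;

public class User { public int Id; public string Name; }

public interface IUserRepository
{
    User GetById(int id);
}

// Each implementation is free to use provider-specific SQL and types.
public class SqlServerUserRepository : IUserRepository
{
    public User GetById(int id) { /* SqlConnection + T-SQL here */ return new User { Id = id }; }
}

public class OracleUserRepository : IUserRepository
{
    public User GetById(int id) { /* OracleConnection + Oracle SQL here */ return new User { Id = id }; }
}

public static class DalFactory
{
    // Returns the concrete DAL implementation for the configured provider name.
    public static IUserRepository CreateUserRepository(string provider)
    {
        switch (provider)
        {
            case "SqlServer": return new SqlServerUserRepository();
            case "Oracle": return new OracleUserRepository();
            default: throw new NotSupportedException("Unknown provider: " + provider);
        }
    }
}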
In your library, ensure you are only using the base classes in the System.Data.Common namespace.
So, instead of SqlCommand, use DbCommand etc.
You can inject the actual implementation into the DAL.
public class MyDal
{
    private readonly DbCommand cmd; // System.Data.Common base class

    public MyDal(DbCommand command)
    {
        cmd = command;
    }
}
Alternatively, write your own abstraction (interface/abstract class), write implementation classes for it that wrap the different provider implementation and use that.
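
For constructing those provider-agnostic objects in the first place, DbProviderFactories can resolve the concrete implementations from a configurable provider name; a minimal sketch (the connection string and SQL are placeholders):

using System.Data.Common;

// The invariant name ("System.Data.SqlClient", "Oracle.DataAccess.Client", ...)
// can come from configuration, so this code never names a concrete provider type.
DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");

using (DbConnection conn = factory.CreateConnection())
{
    conn.ConnectionString = "...";  // from configuration
    conn.Open();

    using (DbCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT COUNT(*) FROM Users";  // illustrative query
        object count = cmd.ExecuteScalar();
    }
}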

Using DTO Pattern to synchronize two schemas?

I need to synchronize two databases.
Those databases store the same semantic objects, but the objects are physically different across the two databases.
I plan to use a DTO Pattern to uniformize object representation :
DB ----> DTO ----> MAPPING (Getters / Setters) ----> DTO ----> DB
I think it's a better idea than physically synchronizing with SQL queries on each side; I use Hibernate to add abstraction and synchronize the objects.
Do you think it's a good idea?
My two cents: you need to consider using the right tool for the job. While it is tempting to write custom code to solve this problem, there are numerous tools out there that already do this for you: they map source to target, do custom transformations from attribute to attribute, and will more than likely deliver a faster time to market.
Look to ETL tools. I'm unfamiliar with the tools available in the open source community, but if you lean in that direction, I'm sure you'll find some. Other tools you might look at are Informatica, Data Integrator, SQL Server Integration Services and, if you're dealing with spatial data, Alteryx.
Tim
Doing that with an ORM might be slower by an order of magnitude than a well-crafted SQL script. It depends on the size of the DB.
EDIT
I would add that the decision should depend on the amount of difference between the two schemas, not on your expertise with SQL. SQL is so common that developers should be able to write a simple script in a clean way.
SQL also has the advantage that everybody knows how to run a script, but not everybody will know how to run your custom tool (a problem I have encountered in practice when the migration is actually operated by somebody else).
For schemas which only slightly differ (e.g. names, or simple transformations of column values), I would go for an SQL script. This is probably more compact and straightforward to use and communicate.
For schemas with major differences, with data organized in different tables or complex logic to map some values from one schema to the other, a dedicated tool may make sense. Chances are that the initial effort to write the tool is higher, but it can be an asset once created.
You should also consider non-functional aspects, such as exception handling, logging of errors, splitting the work into smaller transactions (because there is too much data), etc.
An SQL script can indeed become "messy" under such conditions. If you have such constraints, SQL will require advanced skills and tend to be hard to use and maintain.
The custom tool can evolve into a mini-ETL with the ability to chunk the work into small transactions, manage and log errors nicely, etc. This is more work, and can turn into a dedicated project.
The decision is yours.
I have done that before, and I thought it was a pretty solid and straightforward way to map between two DBs. The only downside is that any time either database changed, I had to update the mapping logic, but that's usually pretty simple to do.
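
A minimal sketch of that mapping layer (all entity shapes and names are invented for illustration): each database keeps its own physical shape, and the DTO is the neutral form both sides map to and from.

// Shared, schema-neutral representation.
public class CustomerDto
{
    public string ExternalId;
    public string FullName;
}

// Physical row shape in database A (split name columns).
public class DbACustomerRow { public string Guid; public string First; public string Last; }

// Physical row shape in database B (single display-name column).
public class DbBCustomerRow { public string Uid; public string DisplayName; }

public static class CustomerMapping
{
    public static CustomerDto FromA(DbACustomerRow r) =>
        new CustomerDto { ExternalId = r.Guid, FullName = r.First + " " + r.Last };

    public static DbBCustomerRow ToB(CustomerDto d) =>
        new DbBCustomerRow { Uid = d.ExternalId, DisplayName = d.FullName };
}

Synchronizing A to B is then just FromA followed by ToB for each changed row, and a schema change only touches the mapper on the side that changed.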

Winforms Application Architecture using LinqToSql

I am starting a new Winforms application, and I've always used traditional ADO.NET methods in my DAL (...yes, hardcoding!)
I have used Linq on many occasions previously, but more for ad hoc queries, so I have some experience with it, and I loved how easy it was to write querying code. What I would like to do is replace my existing DAL with pure LINQ queries. I know that there may be areas of concern with this, which is why I need your help.
If I had to do things like how I always had done in the past, I would structure my app like:
[AppName].ClientUI --> desktop client Presentation layer
[AppName].WebUI --> web Presentation layer
[AppName].DAL --> ADO.NET Data access layer
[AppName].BLL --> Business logic layer (validation, extra calcs, + manager classes)
[AppName].BE --> Business Entities containing business objects and collection classes
To be honest, I've used this always in web apps and had never done an n-layered Winforms app before.
If I want to replace my old DAL methods with LINQ ones, what challenges am I in for?
Finally, is it even recommended to use LINQ in a multi-layered app? Sometimes I think that using the old ADO.NET methods is still better... more work, but better. I mean - and correct me if I'm wrong - at the core of all these new technologies (that are meant to make our work as developers better), are they not all still making use of traditional ADO.NET?
Hope someone can shed some strong light on this! :)
For a straightforward Winforms client app that connects directly to the database, Linq is a good replacement for ADO.NET; there are very few gotchas that you will come up against. Use a tool like SqlMetal (or the designer built into VS2008) to generate your Linq data objects and database connection classes. If you want to use the generated objects as your "BE" objects, you can just copy this code and move it into whatever assembly you want (if you want to separate the assemblies). Alternatively, you can provide separate business entities and a translation layer that copies data from the BEs to the Linq-generated objects and back again.
The one gotcha that I have come across with Linq a few times is that it doesn't have very good support for disconnected architectures. For example, if you wanted to have your DAL on a server and have all your client apps connect to it, you will hit problems if you just let your Linq objects be transferred across the wire.
If you do choose to have separate business entities (or have a disconnected architecture) you will find you have to carefully manage disconnecting the Linq objects from the data context, and then reattaching them when you are ready to save/update. It's worth doing some prototyping in this area first to make sure you understand how it works.
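
For example, reattaching an entity that was loaded (and then modified) outside the current DataContext looks roughly like this; MyDataContext, Customers, and the surrounding variables are illustrative names, and Attach with the second argument true requires a version/timestamp column or suitable UpdateCheck settings on the mapping:

using (var db = new MyDataContext(connectionString))
{
    // 'customer' came from another, now-disposed DataContext.
    db.Customers.Attach(customer, true); // true = treat as modified
    db.SubmitChanges();
}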
The other thing that often trips people up is that Linq queries are not executed immediately against the database; they are only executed as the data is needed. Watch out for this, as it can catch you out if you aren't expecting it (and it's hard to spot when debugging, because when you look at your Linq query in the debugger it will execute to get the data).
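
A tiny illustration of that deferred execution, assuming a generated DataContext db with a Users table:

// No SQL has run yet; 'query' is only a description of the work.
var query = db.Users.Where(u => u.IsActive);

// The SQL executes here, when the results are actually enumerated.
var activeUsers = query.ToList();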
It's also worth considering the Entity Framework as an alternative to Linq to SQL (you can still write Linq to Entities queries). The EF is a more complete ORM and has better support for mapping related tables to multiple objects, but it still suffers from poor support for disconnected apps. The EF in .NET 4.0 is supposed to have better support for disconnected architectures.
I've done it both ways.
You are right, rolling an ADO.NET DAL by hand is more work, so do you get a performance benefit for that additional work? Yes: when you use Linq to SQL classes, you will take about 7% to 10% off the top as overhead. However, for that small overhead, you get a lot of benefits:
- Lazy loading (which provides performance increases by deferring execution of queries until they are actually needed)
- CRUD for free, with built-in concurrency handling
- Linq, IQueryable, and all of the goodness that provides
- Partial classes allow you to insert business logic and validation into your Linq to Sql classes, without worrying about the Linq to SQL code generator overwriting them
Of course, if you don't need any of these things, then Linq to SQL might seem like a luxury. However, I find that Linq to SQL is easier to maintain, especially since the DAL classes can be regenerated if the database changes.
And yes, Linq to SQL uses ADO.NET under the hood.
Linq to SQL's multi-tier story is less clear, in large part due to the way the DataContext object needs to be handled. I suggest checking out this CodePlex project:
An Example of a Multi Tier Architecture for Linq to Sql
http://www.codeplex.com/MultiTierLinqToSql

An Erlang database schema generator

Is there a way I can generate a database schema from an Erlang application, like I can with Hibernate?
I assume you mean Mnesia, and if that is the case, you don't really understand the nature of the Mnesia database. It is, by its very design and implementation, "schemaless". You could write some really messy, ugly code that walks a Mnesia database and tries to document the various records in it, but that would be a pretty futile exercise. If you are storing records in Mnesia, you already have the "schema" in the .hrl files the records are defined in.
There's nothing like NHibernate for SQL databases in Erlang.
Check out SumoDB
Overview
sumo_db gives you a standard way to define your db schema, regardless of the db implementation (mongo, mysql, redis, elasticsearch, etc.).
Your entities encapsulate behavior in code (i.e. functions in a module) and state in a sumo:doc() implementation.
sumo is the main module. It translates between sumo's internal records and your own state.
Each store is managed by a worker pool of processes, each one using a module that implements sumo_store and calls the actual db driver (e.g. sumo_store_mnesia).
Some native domain events are supported; they are dispatched automatically through gen_event:notify/2 when an entity is created, updated, or deleted, as well as when a schema is created and when all entities of a given type are deleted.
Full conditional logic support when using the find_by/2 and delete_by/2 functions.
Support for sorting (asc or desc) on multiple fields using the find_by/5 and find_all/4 functions. For example, [{age, desc}, {name, asc}] will sort descending by age and ascending by name.
Support for docs/models validations through sumo_changeset (check out the Changeset section).
If you are looking for a Java Hibernate-style object-to-SQL mapping framework in Erlang, you may have to write your own mapping module. One option is to map Erlang records to SQL. Any such framework has to take care of the type mapping. Here is the link to Erlang's ODBC type mapping: http://erlang.org/doc/apps/odbc/databases.html#type
Erlang's ETS, and Mnesia (which is an extension of ETS), are very flexible and efficient for managing records. If these two databases cannot be your choice, you might have to implement the record mapping yourself.
