Dapper GetList(predicate) vs Query(sql condition)

In Dapper I can get data with
Query<Person>("sql condition")
and with Dapper Extensions I can do
GetList<Person>(predicate)
Both return an IEnumerable<Person>.
Which approach should I choose for which situation?
As the extensions library is newer, it seems to me that the author did not like building SQL
conditions as strings; instead I get a bit of IntelliSense and a strongly typed
condition.
But is that the only advantage? Are there disadvantages you are aware of from experience
with both libraries?

DapperExtensions is there to help with the simple task of CRUD against a single table. If you need more, then Query is the way to handle it. Admittedly, SELECT is less useful since it only works against a single table, but the library has simplified many of the write operations.
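As a rough sketch of the two styles (the `Person` class, the column names, and the method wrappers here are made up for illustration; connection setup and disposal are elided):

```csharp
using System.Collections.Generic;
using System.Data;
using Dapper;
using DapperExtensions;

public class Person
{
    public int Id { get; set; }
    public string LastName { get; set; }
}

public static class PersonQueries
{
    // Plain Dapper: you write the SQL condition as a string.
    public static IEnumerable<Person> BySql(IDbConnection conn, string lastName)
    {
        return conn.Query<Person>(
            "SELECT Id, LastName FROM Person WHERE LastName = @lastName",
            new { lastName });
    }

    // DapperExtensions: the condition is a strongly typed predicate,
    // so you get IntelliSense and compile-time checking on the property name.
    public static IEnumerable<Person> ByPredicate(IDbConnection conn, string lastName)
    {
        var predicate = Predicates.Field<Person>(p => p.LastName, Operator.Eq, lastName);
        return conn.GetList<Person>(predicate);
    }
}
```

The predicate form is convenient for single-table CRUD; once you need joins or projections, the string-SQL Query form is the one that scales.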

DQL function that ignores accents?

Is it possible in DQL to do a LIKE query that ignores accents? Searching for a word that has an accent should return both accented and non-accented versions.
My question is similar to one in the EMC forums. Any solutions?
I am not familiar with a solution for this kind of requirement at the DQL/translated-SQL level.
You could create views, or write procedures at the DB level, to achieve this, but that wouldn't be a clean solution.
If you specify which client you use to execute your DQL, it would be clearer what can be done at the client level.
Remember that DQL queries depend on translation to the underlying database's SQL flavor. If there are known solutions at the DB level, maybe you can use them for your specific situation. Try adding tags to your question about the underlying DB.

Is there any functionality similar to Criteria API in MyBatis, or any wrapper like QueryDSL to provide that functionality over it?

In JPA/Hibernate, we can write type-safe queries and accumulate our query predicates one step at a time. I believe that there is no equivalent of that in MyBatis, but is there any abstraction framework (like QueryDSL) that provides a layer above MyBatis and can enable us to write criteria-like queries? My basic reason for wanting the Criteria API is that I need to construct a query, each of whose predicates comes from a separate piece of logic.
MyBatis Generator seems pretty unmaintained to me. Also, needing to generate classes from an existing DB during the build is quite cumbersome.
Isn't MyBatis's SQL Builder class a better solution?
You can use MBG (MyBatis Generator) with example classes.
Though I doubt the example classes are as powerful as Hibernate's Criteria queries, they should suit your task of constructing queries well.

ICursor implementation for ORM in MonoDroid

I'm trying to figure out how to implement the Android database Cursor interface to wrap an "ORMed" database layer.
For ORM in MonoDroid we can use the sqlite-net project (a very lightweight ORM) or ServiceStack.OrmLite.
My idea is to implement the ICursor interface and "wrap" the ORM.
For now I just can't work out in my mind how it should work, or whether it can work at all.
Should it load a "framed" window of results, or fetch them one by one?
Which is better for performance, and how should column values be retrieved: reflection, or something else?
So the actual question is: is it possible at all?
Any thoughts will be appreciated.
Thanks.
I'm not sure what "problem" you're trying to solve with an ICursor implementation; perhaps you should be a little more specific about the task you're trying to accomplish. The entire point of an ORM (and you missed this one, which also supports SQLite on Android) is to abstract away the whole RDBMS paradigm from the code and give you an object-oriented paradigm instead.
An ICursor gives you back an updatable resultset from a SQL query - which means you have to know about rows, resultsets, queries and all of that. An ORM gives back an object, or a collection of objects. If you want to update one, you update the object and send it back to the ORM.
Now I fully admit that there are times when an ORM might not provide the cleanest mechanism for something that a SQL query does well. For example, say you logically wanted to "delete all parts built yesterday during second shift". A lightweight ORM might give you all parts, leaving you to use LINQ or similar to filter down to the right day and shift and then iterate the resulting collection to delete each one, whereas with SQL you just pass in DELETE FROM Parts WHERE BornOnDate BETWEEN #start AND #end. But that's one of the trade-offs you face.
In some cases the ORM might provide a facility to do what you want. For example, in the OpenNETCF ORM linked above you can cast your DataStore (if it isn't one already) to a SQLDataStore, which gives you access to the ExecuteNonQuery method, allowing you to pass in a direct SQL statement. It still doesn't have a means to hand you back a record set because, as I said, returning database rows is really the antithesis of an ORM.
There's also some inherent risk in using something like ExecuteNonQuery. If you want to change your backing store, say from an RDBMS like SQLite to something totally different like an object database or an XML file, then the code that builds and uses a SQL query breaks. Admittedly this might not be common, but if code portability and extensibility are on your radar, it's at least something to keep in mind.
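To make the trade-off concrete, here is a sketch. The store type below is a tiny in-memory stand-in for a lightweight ORM (hypothetical, not a real library API) just to make the shape of the code visible:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Part
{
    public int Id { get; set; }
    public DateTime BornOnDate { get; set; }
}

// Hypothetical minimal ORM store: Select returns objects, Delete removes one.
public class FakeStore
{
    private readonly List<Part> _parts = new List<Part>();
    public void Insert(Part p) => _parts.Add(p);
    public IEnumerable<Part> Select() => _parts.ToList();
    public void Delete(Part p) => _parts.RemoveAll(x => x.Id == p.Id);
    public int Count => _parts.Count;
}

public static class ShiftCleanup
{
    // ORM style: fetch the objects, filter with LINQ, delete one by one.
    public static void DeleteBuiltBetween(FakeStore store, DateTime start, DateTime end)
    {
        var toDelete = store.Select()
            .Where(p => p.BornOnDate >= start && p.BornOnDate < end)
            .ToList();
        foreach (var part in toDelete)
            store.Delete(part);
        // With a SQL escape hatch this whole method would be one statement:
        //   DELETE FROM Parts WHERE BornOnDate BETWEEN @start AND @end
    }
}
```

The ORM version materializes every matching object and issues one delete per row; the SQL version is a single round trip. That difference is exactly the trade-off the answer describes.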

How to create a proper database layer?

Currently I use a lot of the ADO.NET classes (SqlConnection, SqlCommand, SqlDataAdapter, etc.) to make calls to our SQL Server. Is that a bad idea? I mean, it works. However, I see many articles that use an ORM (NHibernate, SubSonic, etc.) to connect to SQL. Why are those better? I am just trying to understand why I would need to change this at all.
Update:
I did check the following tutorial on using nHibernate with stored-procedures.
http://ayende.com/Blog/archive/2006/09/18/UsingNHibernateWithStoredProcedures.aspx
However, it looks to me like this is overkill. Why would I have to create a mapping file? Even if I create one, if my table later changes then my code won't work anymore, whereas if I use ADO.NET to return a simple DataTable my code will still work. Am I missing something here?
There's nothing wrong with using the basic ADO.NET classes.
You might just have to do a lot more manual work than necessary. If you e.g. select your top 10 customers from a table with SqlCommand and SqlDataReader, it's up to you to iterate over the results, pull out each and every single item of data (like customer number, customer name, and so forth), and you're dealing very closely with the database structures, e.g. rows and columns. That's fine for some scenarios, but too much work in others.
What an ORM gives you is a lot of this "grunt work" handled for you. You just tell it to get a list of your top 10 customers as "Customer" objects. The ORM will go off and grab the data (most likely using SqlCommand and SqlDataReader), pull out the bits and pieces, and assemble nice, easy-to-use "Customer" objects, which are what your code actually deals with.
So there's definitely nothing wrong with using ADO.NET, and it's a good thing to know how it works, but an ORM can save you a lot of tedious, repetitive, boring grunt work and let you focus on your real business problems at the object level.
Marc
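The manual mapping described above looks roughly like this (the table, columns, and connection string are made up for illustration):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class Customer
{
    public int Number { get; set; }
    public string Name { get; set; }
}

public static class CustomerData
{
    public static List<Customer> TopTenCustomers(string connectionString)
    {
        var customers = new List<Customer>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT TOP 10 CustomerNumber, CustomerName FROM Customers ORDER BY Revenue DESC",
            conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // The "grunt work": map each row and column to the object by hand.
                while (reader.Read())
                {
                    customers.Add(new Customer
                    {
                        Number = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return customers;
    }
}
```

With an ORM, this whole method collapses to something on the order of `session.Query<Customer>().Take(10).ToList()` (the exact call depends on the ORM).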
First of all, the ORMs are likely to do a much better job at producing the SQL queries than your normal non-SQL specialized Joe :)
Secondly, ORMs are a great way to somewhat "standardize" your DALs, increasing flexibility over different projects.
And lastly, with a good ORM you're likely to have an easier time substituting your underlying data source, since a good ORM will support many different dialects. Of course, this is just a side bonus :)
ORMs are great for avoiding code repetition. You'll often find that your object model and database model are extremely close to each other, and whenever you add a field you end up adding it to the database, your objects, your SQL statements, and everywhere else. If you use an ORM, you change your code in one place and it builds the rest for you.
As for performance, this can go either way. You will probably find that a lot of the simple SQL written for you is extremely well tailored, with shortcuts you would have been too lazy to write yourself, such as returning only the data that's absolutely required. On the other hand, if you have some extremely complex queries and joins that an automated system could not possibly build, you're better off writing those yourself.
In summary though, they're fantastic for fast builds!
You don't need to change. If SqlConnection, SqlCommand, etc. work for you then that's great.
They work just peachy fine for the DB app I'm developing, and I have dozens of concurrent users with no problems.
There's nothing wrong with using straight-up ADO.NET, but using an ORM will save you time, both in development and maintenance. That's the biggest benefit.
One thing to consider: will a future "new developer" be more inclined to know or learn a well documented and widely adopted OR/M or your custom data access layer?
The number one thing for me though is the time. Minutes with my favorite OR/M, nHibernate vs. hours/days writing a custom data access layer using ADO.NET.
I also favor OR/Ms because maintaining declarative XML mappings is way easier than maintaining potentially thousands of lines of imperative code... or worse thousands of lines of C# data access code on top of thousands of lines of stored procedure code. In my current project I have 58 objects mapped in 58 XML mapping files, each with less than 50 lines. I cringe when I think about writing/maintaining CRUD code for 58 entities in ADO.NET.
I must warn you to read the documentation. Many, dare I say most, folks with whom I've worked will jump on a tool like mice on cheese, but they'll never read the documentation and learn the technology. I recommend reading the docs BEFORE moving to a new technology like nHibernate. A good cup o' joe and an hour or two of hard reading beforehand will pay dividends.
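For a sense of scale, a declarative mapping of the kind mentioned above is short. A minimal NHibernate `.hbm.xml` sketch might look like this (class, table, and column names are illustrative, not from the project described):

```xml
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="MyApp" namespace="MyApp.Domain">
  <class name="Customer" table="Customers">
    <id name="Id" column="CustomerId">
      <generator class="native" />
    </id>
    <property name="Name" column="CustomerName" />
  </class>
</hibernate-mapping>
```

A dozen or so lines like these replace the hand-written CRUD statements and row-to-object mapping code for the entity.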
I haven't found any answer pointing to the general idea of an ORM. The general idea is to perform object-relational mapping and provide your business classes with persistence. It means that you think only about your business logic and let the ORM tool save its state for you. Sure, there are a lot of different scenarios. As was already said, there is nothing bad about using pure ADO.NET, and an application that is already written in this style may not gain any benefit, but using an ORM tool in new projects is a very good idea. As for the rest, I totally agree with the other answers.

Why would a Database developer use LINQ

I am curious to know why a database person should learn LINQ. How can it be useful?
LINQ has one big advantage over most other data access methods: it centers around the "query expression" metaphor, meaning that queries can be passed around like objects and be altered and modified, all before they are executed (iterated). In practice, that means code can be modular and better isolated. The data access repository returns the "orders" query, then an intermediate filter in the request-processing pipeline decorates the query with a filter, then it gets passed on to a display module that adds sorting and pagination, etc. In the end, when it's iterated, the expression has been transformed into a very specific SELECT ... WHERE ... ORDER BY ... LIMIT ... (or the back-end-specific pagination, like ROW_NUMBER). For application developers this is priceless, and there simply isn't any viable alternative. This is why I believe LINQ is here to stay and won't vanish in 2 years. It is definitely more than just a fad. And I'm specifically referring to LINQ as a database access method.
The advantage of manipulating query-expression objects alone is enough to make LINQ a winning bid. Add to this the multiple container types it can work with (XML, arrays and collections, objects, SQL) and the uniform interface it exposes over all these disparate technologies, consider the parallel-processing changes coming in .NET 4.0 that I'm sure will be integrated transparently into LINQ's processing of arrays and collections, and there is really no way LINQ will go away. Sure, today it sometimes produces unreadable, poorly performing, undebuggable SQL, and it is every dedicated DBA's nightmare. It will get better.
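The staged composition described above can be seen with plain LINQ to Objects (the same idea applies to `IQueryable` providers that translate to SQL). This sketch layers a filter, sorting, and paging onto a base query, and nothing executes until the final iteration:

```csharp
using System.Linq;

public static class QueryCompositionDemo
{
    public static int[] Run()
    {
        int[] orders = { 42, 7, 19, 3, 56, 11 };

        // The "repository" returns a query; nothing has executed yet.
        var query = orders.AsEnumerable();

        // A filter stage decorates the query...
        query = query.Where(o => o > 5);

        // ...and a display stage adds sorting and pagination on top.
        var page = query.OrderBy(o => o).Skip(1).Take(3);

        // Only now, on iteration, does the whole composed pipeline run.
        return page.ToArray();
    }
}
```

With a database-backed provider, the same chain would be translated into a single SELECT with WHERE, ORDER BY, and pagination, rather than running each stage separately.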
Linq provides "first class" access to your data, meaning that your queries are now part of the programming language.
Linq provides a common way to access data objects of all kinds. For example, the same syntax that is used to access your database queries can also be used to access lists, arrays, and XML files.
Learning Linq will provide you with a deeper understanding of the programming language. Linq was the driver for all sorts of language innovations, such as extension methods, anonymous types, lambda expressions, expression trees, type inference, and object initializers.
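The uniform syntax mentioned above can be shown over two very different sources: the same query shape works on an in-memory array and on an XML document (via LINQ to XML):

```csharp
using System.Linq;
using System.Xml.Linq;

public static class UniformLinqDemo
{
    public static (string[] fromArray, string[] fromXml) Run()
    {
        // Source 1: a plain array.
        string[] names = { "Ada", "Grace", "Alan" };
        var fromArray = (from n in names
                         where n.StartsWith("A")
                         select n).ToArray();

        // Source 2: an XML document, queried with the same syntax.
        var doc = XDocument.Parse(
            "<people><person name='Ada'/><person name='Grace'/><person name='Alan'/></people>");
        var fromXml = (from p in doc.Descendants("person")
                       let n = (string)p.Attribute("name")
                       where n.StartsWith("A")
                       select n).ToArray();

        return (fromArray, fromXml);
    }
}
```

Both queries use the identical from/where/select shape even though one walks an array and the other walks an XML tree; swap in a database context and the shape stays the same again.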
I think a better question is why would one replace SQL with LINQ when querying a database?
"Know your enemy"?
OTOH, out of interest: I'm learning more about XML, XSLT, XSDs, etc. because I can see a use for them as a database developer.
Is this a real question? It can't be answered with a code snippet. Most of these discussion type questions get closed quickly.
I think one of the many reasons to consider LINQ is that it is a big productivity boost and a huge time saver.
This question also prompts me to ask, if LINQ becomes immensely popular, will server side developers know SQL as well as their predecessors did?
