What percent of your codebase is represented by Data Access code?

An interesting question came up on Twitter tonight and I thought I'd post it here.
Basically, I am wondering what you are using to persist data to your database, and an estimate of the percentage of your codebase that is data access code.
-- Edit --
Other interesting metrics (as noted in the comments) include the number of business classes and the overall size of your codebase.
I just looked at a project that I did a long time before discovering NHibernate. Quickly looking at some of the data access code showed about 10 lines of code for persistence/hydration for every persistable property in a class.
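For a concrete sense of what that hand-rolled code looks like, here is a minimal sketch of the pattern (the Customer entity and column names are hypothetical, not from the project in question): every persistable property needs its own reader/null-check plumbing, which is how the line count balloons.

using System.Data;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerDal
{
    public static Customer Load(SqlConnection conn, int id)
    {
        using (var cmd = new SqlCommand(
            "SELECT Id, Name FROM Customers WHERE Id = @Id", conn))
        {
            cmd.Parameters.Add("@Id", SqlDbType.Int).Value = id;
            using (IDataReader reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new Customer
                {
                    // ...and a block like this for every property.
                    Id = reader.GetInt32(reader.GetOrdinal("Id")),
                    Name = reader.IsDBNull(reader.GetOrdinal("Name"))
                        ? null
                        : reader.GetString(reader.GetOrdinal("Name"))
                };
            }
        }
    }
}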

In one project we did, we used LLBLGen Pro as our OR/M. But since we didn't want to litter our application with LLBL entities, we mapped those to our BOs before they hit the client. That meant mapping them back to LLBL entities before hitting the DB again. The DAL code ended up being a very big portion of our application. Not 50%, but sizable.
In a project I am working on now, I started with db4o, hoping to minimize the DAL footprint. And it was very small, but I ran into problems with db4o and had to abandon it. I switched to ADO.NET for a few days just to get something working and was amazed at how much freakin' ADO I had to write just to get a simple repository working. That was a headache, so I finally opted for NHibernate.
My DAL code with NHibernate (2.0) is probably 5% or less of my code base. That's including the XML mapping files. I mean, the DAL footprint is so small and such a pleasure to work with. I had problems with NHibernate 1.2 in the past in a distributed environment where I needed to work with detached objects, but NHibernate 2.0 seems to have solved that issue. I know it's the way I'm doing DAL from now on, until something better comes along.
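For contrast, a minimal sketch of what an NHibernate 2.0-era repository can look like, assuming a hypothetical Customer entity mapped in a Customer.hbm.xml file (the mapping file plus this handful of lines is essentially the whole DAL for the entity):

using NHibernate;
using NHibernate.Cfg;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class CustomerRepository
{
    // Built once per application from hibernate.cfg.xml.
    private static readonly ISessionFactory Factory =
        new Configuration().Configure().BuildSessionFactory();

    public Customer GetById(int id)
    {
        using (ISession session = Factory.OpenSession())
        {
            return session.Get<Customer>(id); // hydration comes from the mapping
        }
    }

    public void Save(Customer customer)
    {
        using (ISession session = Factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(customer);
            tx.Commit();
        }
    }
}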
Interestingly, my colleagues and I had a similar conversation a couple of months ago. But ours was focused not so much on how much code our DAL accounted for, but how much of our application was simply data querying/manipulation. We estimated that probably 90% of our application was just a matter of fetching the correct subset of data and allowing users to edit it.

We use NHibernate most of the time, and since it is generated by a tool (dunno if that counts), we still end up adding a lot of code to the NHibernate layer for each of the entities, and yes, it does go on to be a large part of the code.
Also, the DAL code grows as you add more functionality and more entities to your app.
So, more or less, I guess around 20-30% of our codebase is composed of the DAL.

Related

Entity Framework performance difference with many includes

In my application I have a big nasty query that uses 25 includes. I know that might be a bit excessive, but it hasn't given us many problems and has been working fine. If I take the query generated by EF and run it manually in the database, it takes around 500ms, and from the code, EF uses around 700ms to get the data from the database and build up the object structure, and that is perfectly acceptable.
The problem, however, is on the production server. If I run the query manually there, I see the same roughly 500ms to fetch the data, but Entity Framework now uses around 11000ms to get the data and build the objects, and that is of course not good by any measure.
So my question is: what can be the cause of these extreme differences when the query fired manually on the database performs roughly the same?
I ended up using Entity Framework in a more "manual" way.
So instead of using dbContext.Set<T> and a lot of includes, I had to manually use a series of dbContext.Database.SqlQuery<T>("select something from something else"). After a bit of painful coding binding all the objects together I tested it on the machines that had the problem, and now it worked as expected on all machines.
So I don't know why it worked on some machines and not others, but it seems that EF has problems on some machine setups when there are a great deal of includes.
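For illustration, here is a minimal sketch of the "manual" approach described above, assuming EF 5/6 where Database.SqlQuery<T> materializes plain objects from raw SQL. The Order/OrderLine entities and column names are hypothetical, not the poster's actual model:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public List<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public int Id { get; set; }
    public int OrderId { get; set; }
    public string Product { get; set; }
}

public static class OrderLoader
{
    public static List<Order> LoadOrders(DbContext db)
    {
        // One raw query per entity set instead of a single statement
        // with many Include() joins.
        var orders = db.Database
            .SqlQuery<Order>("SELECT Id, CustomerName FROM Orders")
            .ToList();
        var lines = db.Database
            .SqlQuery<OrderLine>("SELECT Id, OrderId, Product FROM OrderLines")
            .ToList();

        // The "painful coding binding all the objects together":
        // stitch children onto parents in memory.
        var byOrder = lines.ToLookup(l => l.OrderId);
        foreach (var order in orders)
            order.Lines = byOrder[order.Id].ToList();
        return orders;
    }
}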

Disadvantages of the Force.com platform [closed]

We're currently looking at using the Force.com platform as our development platform and the sales guys and the force.com website are full of reasons why it's the best platform in the world. What I'm looking for, though, is some real disadvantages to using such a platform.
Here are 10 to get you started.
Apex is a proprietary language. Other than the force.com Eclipse plugin, there's little to no tooling available such as refactoring, code analysis, etc.
Apex was modeled on Java 5, which is considered to be lagging behind other languages, and without tooling (see #1), can be quite cumbersome.
Deployment is still fairly manual with lots of gotchas and manual steps. This situation is slowly improving over time, but you'll be disappointed if you're used to having automated deployments.
Apex lacks packages/namespaces. All of your classes, interfaces, etc. live in one folder on the server. This makes code much less organized and class/interface names necessarily long to avoid name clashes and to provide context. This is one of my biggest complaints, and I would not freely choose to build on force.com for this reason alone.
The "force.com IDE", aka force.com eclipse plugin, is incredibly slow. Saving any file, whether it be a class file, text file, etc., usually takes at least 5 seconds and sometimes up to 30 seconds depending on how many objects, data types, class files, etc. are in your org. Saving is also a blocking action, requiring not only compilation, but a full sync of your local project with the server. Orders of magnitude slower than Java or .NET.
The online developer community does not seem very healthy. I've noticed lots of forum posts go unanswered or unsolved. I think this may have something to do with the forum software salesforce.com uses, which seems to suck pretty hard.
The data access DSL in Apex leaves a lot to be desired. It's not even remotely competitive with the likes of (N)Hibernate, JPA, etc.
Developing an app on Apex/VisualForce is an exercise in governor limits engineering. Easily half of programmer time is spent trying to optimize to avoid the numerous governor limits and other gotchas like visualforce view state limits. It could be argued that if you write efficient code to begin with you won't have this problem, which is true to an extent. However there are many times that you have valid reasons to make more than x queries in a session, or loop through more than x records, etc.
The save->compile->run cycle is extremely slow, esp. when it involves zipping and uploading the entire static resource bundle just to do something like test a minor CSS or javascript change.
In general, the pain of a young, fledgling platform without the benefits of it being open source. You have no way to validate and/or fix bugs in the platform. They say to post it to their IdeaExchange. Yeah, good luck with that.
Disclaimers/Disclosures: There are lots of benefits to a hosted platform such as force.com. Force.com does regularly enhance the platform. There are plenty of things about it I like. I make money building on force.com.
I see you've gotten some answers, but I would like to reiterate how much time is wasted getting around the various governor limits on the platform. As much as I like the platform on certain levels, I would very strongly, highly, emphatically recommend against it as a general application development platform. It's great as a super configurable and extensible CRM application if that's what you want. While their marketing is exceptional at pushing the idea of Force.com as a general development platform, it's not even remotely close yet.
The efficiency of having a stable platform and avoiding big performance and stability problems is easily wasted in trying to code around the limits that people refer to. There are so many limits to the platform, it becomes completely maddening. These limits are not high-end limits you'll hit once you have a lot of users, you'll hit them almost right away.
While there are usually techniques to get around them, it's very hard to figure out strategies for avoiding them while you're also trying to develop the business logic of your actual application.
To give you a simple sense of how developer un-friendly the environment is, take the "lack of debugging environment" referred to above. It's worse than that. You can only see up to 20 of the most recent requests to the server in the debug logs. So, as you're developing inside the application you have to create a "New" debug request, select your name, hit "Save", switch back to your app, refresh the page, click back to your debug tab, try to find the request that will house your debug log, hit "find" to search for the text you're looking for. It's like ten clicks to look at a debug output. While it may seem trivial, it's just an example of how little care and consideration has been given to the developer's experience.
Everything about the development platform is a grafted-on afterthought. It's remarkable for what it is, but a total PITA for the most part. If you don't know exactly what you are doing (as in you're certified and have a very intimate understanding of Apex), it will easily take you upwards of 10-20x the amount of time that it would in another environment to do something that seems like it would be ridiculously simple, if you can even succeed at all.
The governor limits are indeed that bad. You have a combination of various limits (database queries, rows returned, "script statements", future calls, callouts, etc.) and you have to know exactly what you are doing to avoid these. For example, if you have a calculated rollup "formula" field on an object and you have a trigger on a child object, it will execute the parent object triggers and count those against your limits. Things like that aren't obvious until you've gone through the painful process of trying and failing.
You'll try one thing to avoid one limit, and hit another in a never ending game of "whack a limit". In the process you'll have to drastically re-architect your entire app and approach, as well as rewrite all of your test code. You must have 75% test code coverage to deploy into production, which is actually very good thing, but combined with all of the other limits, it's very burdensome. You'll actually hit governor limits writing your test code that wouldn't come up in normal user scenarios, but that will prevent you from achieving the coverage.
That is not to mention a whole host of other issues. Packaging isn't what you expect. You can't package up your app and deliver it to users without significant user intervention and configuration on the part of the administrator of the org. The AppExchange is a total joke, and they've even started charging 5K just to get your app listed. Importing with the data loader sucks, especially if you have any triggers.
You can't export all of your data in one step that includes your relationships in such a way that it can easily be re-imported into another org (for example a dev org) in a single step. You can only refresh a sandbox once a month from production, no exceptions, and you can't include your data in a refresh by default unless you have called your account executive to get that feature unlocked. You can't mass delete data in custom objects. You can't change your package names.
Certain things can take numerous days to complete after you have requested them, such as a data backup before you want to deploy an app, with no progress report along the way and not much sense of when exactly the export occurred. Given the synchronicity issues when there are relationships between the data, there are serious data integrity problems, because there is no such thing as a "transaction" that can export numerous objects in a single step. There are probably some commercial tools to facilitate some of this, but these are not within reach of normal developers who may not have a huge budget.
Everything else the other people said here is true. It can take anywhere from five seconds to a minute sometimes to save a file.
I don't mean to be so negative because the platform is very cool in some ways and they're trying to do things in a multi-tenant environment that no one else is doing. It's a very innovative environment and powerful on some levels (I actually like VisualForce a lot), but give it another year or two. They're partnering with VMware, maybe that will lead to giving developers a bit more of a playpen rather than a jail cell to work in.
Here are a few things I can give you after spending a fair bit of time developing on the platform in the last fortnight or so:
There's no RESTful API. They have a SOAP-based API that you can call, but there is no way of making truly RESTful calls.
There's no simple way to take their SObjects and convert them to JSON objects.
The Visualforce pages are OK until you want to customize them, and then it's a whole world of pain.
Visualforce pages need to be bound to SObjects, otherwise there's no way to get the standard input fields like the date picker or select list to work.
The eclipse plugin is ok if you want to work by yourself, but if you want to work in a large team with the eclipse plugin forget it. It doesn't handle synchronizing to and from the server, it crashes and it isn't really helpful at all.
THERE IS NO DEBUGGER! If you want to debug, you're literally limited to System.debug statements. This is probably the biggest problem I've found.
Their "MVC" model isn't really MVC. It's a lot closer to ASP.NET Web Forms. Your views are tightly coupled not only to the models but to the controllers as well.
Storing a large number of documents is not feasible. We need to store over 100 GB of documents and we were quoted some ridiculous figure. We've decided to implement our document storage on Amazon's S3 infrastructure.
Even though the language is Java-based, it's not Java. You can't import any external packages or libraries. Also, the base libraries that are available are severely limited, so we've found ourselves implementing a bunch of stuff externally and then exposing those bits as services that are called by force.com.
You can call external SOAP or REST based services, but the message body is limited to 100 KB, so it's very restrictive in what you can call.
In all honesty, whilst there are potential benefits to developing on something like the force.com platform, for me you couldn't use the force.com platform for true enterprise-level apps. At best you could write some basic CRUD-style applications, but once you move into anything remotely complicated I'd be avoiding it like the plague.
Wow, there's a lot here that I didn't even know were limitations, even after working on the platform for a few years.
But just to add some other things...
The reason you don't have a line-by-line debugger is precisely because it's a multi-tenant platform. At least that's what SFDC says - it seems like in this age of thread-rich programming, that isn't much of an excuse, but that's apparently the reason. If you have to write code, you have "System.debug(String)" as your debugger - I remember having more sophisticated server debugging tools in Java 1.2 about 12 years ago.
Another thing I really hate about the system is version control. The Spring framework is not used for what Spring is usually used for; it's really more of a configuration tool in SFDC than version control. SFDC provides ZERO version control.
You can find yourself stuck for days doing something that should seem ridiculously easy, like, say, scheduling an SFDC report to export to a CSV file and email it to a list of recipients... Well, about the easiest way to do that is to create a custom object with a custom field, with a workflow rule and a Visualforce email template... and then for code you need to write a Visualforce component that streams the report data to the Visualforce email template as an attachment, and you write anonymous Apex code to schedule a field update of the custom object... For SFDC developers, this is almost a daily task: trying to put about five different technologies together to do tasks that seem so simple. And this can cause management headaches and tensions too. Typically, you'd find this out after getting a suggestion to do something that doesn't work in the user community (like someone already said), and then trying many things that, after you developed them, you'd find just don't work for some odd-ball reason, like "you can't schedule a Visualforce page" or "you can't call getContent from a schedulable context".
There are so many, many maddening little gotchas on the SFDC platform that once you know WHY they're there, it makes sense... but they're still very bad limitations that keep you from doing what you need to do. Here are some of mine:
You can't get record owner information "out of the box" on pretty much any kind of record - you have to write a trigger that links the owner on create of the record to the record you're inserting. Why? Short answer because an owner can be either a "person" or a "queue", and the two are drastically different entities... Makes sense, but it can turn a project literally upside down.
Maddening security model. Example: "Manage Public Reports" permission is vastly different from "Create and Customize Reports" and that basically goes for everything on the platform... especially folders of any kind.
As mentioned, support is basically non-existent. If you are an extremely self-sufficient individual, or have a lot of SFDC resources, or have a lot of time and/or a very forgiving manager, or are in charge of a SFDC system that's working fine, you're in pretty good shape. If you are not in any of these positions, you can find yourself in deep trouble.
SFDC is a very seductive business proposition... no equipment footprint, pretty good security, fixed price, no infrastructure, AND you get web-based CRM with batchable and schedulable processing... But as the other posters said, it is really quite a ramp-up in development learning, and if you go with consulting, I think the lowest price I've seen was $200/hour.
Salesforce tends to integrate with other things years after some technologies become commonplace (JSON and jQuery come to mind), and if you have other common infrastructure that you want to integrate with, like JIRA, expect to pay a lot extra, and it can be quite buggy.
And as one of the other posters mentioned, you are constantly fighting governor limits that can just drive you nuts... an attachment can NOT be > 5MB. Period. And sometimes < 3MB (if base64 encoded). Ten HTTP callouts in a class. Period. There are dozens of published governor limits, and many that are not which you will undoubtedly find and just want to run out of your office screaming.
I really, REALLY like the platform, but trust me - it can be one really cruel mistress.
But in fairness to SFDC, I'd say this: the biggest problem I find with the platform is not the platform itself, but the gargantuan expectations that almost anyone who sees the platform, but hasn't developed on it has.... and those people tend to be in positions of great authority in business organizations; marketing, sales, management, etc. Huge disconnects occur and heads roll, or are threatened to roll daily - all because there's this great platform out there with weird gotchas and thousands of people struggling daily to get their heads around why things should just work when they just don't and won't.
EDIT:
Just to add to lomaxx's comments about the MVC: in SFDC terminology, this is closely related to what's known as the "viewstate", and it can be really buggy, in that what is on the VF page is not what is in the controller class for the page. So you have to go through weird gyrations to sync what's on the page with what the controller is going to write to SF when you click your "save" button (or make your HTTP callout or whatever)... man, it's annoying.
I think other people have covered the disadvantages in more depth but to me, it doesn't seem to use the MVC paradigm or support much in the way of code reuse at all. To do anything beyond simple applications is an exercise in frustration compared to developing an application using something like ASP.Net MVC.
Furthermore, the tools, the data layer and the frustration of trying to refactor code or rename fields during the development process doesn't help.
I think as a CMS it's pretty cool, but as a platform for non-CMS applications, it doesn't make sense to me.
The security model is also very very restrictive... but this isn't the worst part. You can't currently assert whether a user has the ability to perform a particular action.
You can check to see what their role is, but you can't check if that role has permissions to perform the current action.
Even worse is the response from tech support to "try the action and if there's an exception, catch it"
Considering Force.com is a "cloud" platform, its ability to act as a client to an external WSDL-defined service is pretty underwhelming. See http://force201.wordpress.com/2010/05/20/when-generate-from-wsdl-fails-hand-coding-web-service-calls/ for what you might end up having to do.
To all of the above, I am curious how the release of VMforce, allowing Java programmers to write code for Force.com, changes the disadvantages listed above.
http://www.zdnet.com/blog/saas/vmforcecom-redefines-the-paas-landscape/1071
I guess they are trying to address these issues. At Dreamforce they mentioned they were trying to drop the governor limits to only 4. I'm not sure what the details are. They have a REST API in early access, and they bought Heroku, which is Ruby development in the cloud. They also split out the database with database.com, so you can do your web development elsewhere and make your db calls against database.com.
I guess they are trying to make it as agnostic as possible. But right about now these are all announcements and early access, so, as their Safe Harbor statements say, don't purchase based on what they say, only on what they currently have.

Why should you use an ORM? [closed]

If you were to motivate the "pros" of an ORM, and why you would use one, to management or a client, what would those reasons be?
Try and keep one reason per answer so that we can see which one gets voted up as the best reason.
The most important reason to use an ORM is so that you can have a rich, object oriented business model and still be able to store it and write effective queries quickly against a relational database. From my viewpoint, I don't see any real advantages that a good ORM gives you when compared with other generated DAL's other than the advanced types of queries you can write.
One type of query I am thinking of is a polymorphic query. A simple ORM query might select all shapes in your database. You get a collection of shapes back. But each instance is a square, circle or rectangle according to its discriminator.
Another type of query would be one that eagerly fetches an object and one or more related objects or collections in a single database call. e.g. Each shape object is returned with its vertex and side collections populated.
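A hedged sketch of both query types, using NHibernate's criteria API since NHibernate comes up throughout this thread (the Shape hierarchy and the "Vertices" collection are hypothetical; other ORMs have equivalents):

using System.Collections.Generic;
using NHibernate;
using NHibernate.Transform;

public abstract class Shape
{
    public virtual int Id { get; set; }
    public virtual IList<Vertex> Vertices { get; set; }
}
public class Square : Shape { }
public class Circle : Shape { }
public class Vertex
{
    public virtual double X { get; set; }
    public virtual double Y { get; set; }
}

public static class ShapeQueries
{
    // Polymorphic query: one query against the base class returns
    // Square or Circle instances according to the discriminator.
    public static IList<Shape> AllShapes(ISession session)
    {
        return session.CreateCriteria(typeof(Shape)).List<Shape>();
    }

    // Eager fetch: shapes come back with their Vertices collections
    // populated in a single database call via a joined fetch.
    public static IList<Shape> AllShapesWithVertices(ISession session)
    {
        return session.CreateCriteria(typeof(Shape))
            .SetFetchMode("Vertices", FetchMode.Join)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .List<Shape>();
    }
}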
I'm sorry to disagree with so many others here, but I don't think that code generation is a good enough reason by itself to go with an ORM. You can write or find many good DAL templates for code generators that do not have the conceptual or performance overhead that ORM's do.
Or, if you think that you don't need to know how to write good SQL to use an ORM, again, I disagree. It might be true that from the perspective of writing single queries, relying on an ORM is easier. But, with ORM's it is far too easy to create poor performing routines when developers don't understand how their queries work with the ORM and the SQL they translate into.
Having a data layer that works against multiple databases can be a benefit. It's not one that I have had to rely on that often though.
In the end, I have to reiterate that in my experience, if you are not using the more advanced query features of your ORM, there are other options that solve the remaining problems with less learning and fewer CPU cycles.
Oh yeah, some developers do find working with ORM's to be fun so ORM's are also good from the keep-your-developers-happy perspective. =)
Speeding development. For example, eliminating repetitive code like mapping query result fields to object members and vice-versa.
Making data access more abstract and portable. ORM implementation classes know how to write vendor-specific SQL, so you don't have to.
Supporting OO encapsulation of business rules in your data access layer. You can write (and debug) business rules in your application language of preference, instead of clunky trigger and stored procedure languages.
Generating boilerplate code for basic CRUD operations. Some ORM frameworks can inspect database metadata directly, read metadata mapping files, or use declarative class properties.
You can move to different database software easily because you are developing to an abstraction.
Development happiness, IMO. ORM abstracts away a lot of the bare-metal stuff you have to do in SQL. It keeps your code base simple: fewer source files to manage and schema changes don't require hours of upkeep.
I'm currently using an ORM and it has sped up my development.
So that your object model and persistence model match.
To minimise duplication of simple SQL queries.
The reason I'm looking into it is to avoid the generated code from VS2005's DAL tools (schema mapping, TableAdapters).
The DAL/BLL I created over a year ago was working fine (for what I had built it for) until someone else started using it to take advantage of some of the generated functions (which I had no idea were there).
It looks like it will provide a much more intuitive and cleaner solution than the DAL/BLL solution from http://www.asp.net
I was thinking about creating my own SQL command C# DAL code generator, but the ORM looks like a more elegant solution.
Abstract the SQL away 95% of the time, so not everyone on the team needs to know how to write super-efficient, database-specific queries.
I think there are a lot of good points here (portability, ease of development/maintenance, focus on OO business modeling etc), but when trying to convince your client or management, it all boils down to how much money you will save by using an ORM.
Do some estimations for typical tasks (or even larger projects that might be coming up) and you'll (hopefully!) get a few arguments for switching that are hard to ignore.
Compilation and testing of queries.
As the tooling for ORM's improves, it is easier to determine the correctness of your queries faster through compile time errors and tests.
Compiling your queries helps developers find errors faster. Right? Right. This compilation is made possible because developers are now writing queries in code using their business objects or models instead of just strings of SQL or SQL-like statements.
If you use the correct data access patterns in .NET, it is easy to unit test your query logic against in-memory collections. This speeds up the execution of your tests because you don't need to access the database, set up data in the database, or even spin up a full-blown data context. [EDIT] This isn't as true as I thought, as unit testing in memory can present difficult challenges to overcome. But I still find these integration tests easier to write than in previous years. [/EDIT]
This is definitely more relevant today than a few years ago when the question was asked, but that may only be the case for Visual Studio and Entity Framework, where my experience lies. Plug in your own environment if possible.
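As a rough sketch of the testing point (names are hypothetical, and this assumes query logic written against IQueryable<T>): the same predicate runs against a database-backed query source in production and a plain in-memory list in tests.

using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public string Name { get; set; }
    public bool Discontinued { get; set; }
}

public static class ProductQueries
{
    // Works against a DbSet in production or a List in tests.
    public static IQueryable<Product> Discontinued(IQueryable<Product> products)
    {
        return products.Where(p => p.Discontinued);
    }
}

public static class ProductQueriesTest
{
    public static void Main()
    {
        // No database, no data context: an in-memory list stands in.
        var data = new List<Product>
        {
            new Product { Name = "A", Discontinued = true },
            new Product { Name = "B", Discontinued = false }
        }.AsQueryable();

        var result = ProductQueries.Discontinued(data).ToList();
        Console.WriteLine(result.Count == 1 && result[0].Name == "A"
            ? "pass" : "fail");
    }
}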
.netTiers using CodeSmith templates
http://nettiers.com/
Why code something that can be generated just as well.
Convince them of how much time and money you will save when changes come in and you don't have to rewrite your SQL, since the ORM tool will do that for you.
I think one con is that an ORM requires some updating of your POJOs, mainly related to schema, relations, and queries. So in scenarios where you are not supposed to make changes to the model objects, perhaps because they are shared among more than one project, or between client and server, you will need to split them into two levels, which requires additional effort.
I am an Android developer, and as you know, mobile apps are usually not huge in size, so this additional effort to segregate the pure model from the ORM-affected model doesn't seem worthwhile.
I understand that the question is a generic one, but mobile apps also fall under the generic umbrella.

What is a good balance in an MVC model to have efficient data access?

I am working on a few PHP projects that use MVC frameworks, and while they all have different ways of retrieving objects from the database, it always seems that nothing beats writing your SQL queries by hand as far as speed and cutting down on the number of queries.
For example, one of my web projects (written by a junior developer) executes over 100 queries just to load the home page. The reason is that in one place, a method will load an object, but later on deeper in the code, it will load some other object(s) that are related to the first object.
This leads to the other part of the question which is what are people doing in situations where you have a table that in one part of the code only needs the values for a few columns, and another part needs something else? Right now (in the same project), there is one get() method for each object, and it does a "SELECT *" (or lists all the columns in the table explicitly) so that anytime you need the object for any reason, you get the whole thing.
So, in other words, you hear all the talk about how SELECT * is bad, but if you try to use an ORM class that comes with the framework, that's usually exactly what it wants to do. Are you stuck choosing between an ORM with SELECT * and writing the specific SQL queries by hand? It just seems to me that we're stuck between convenience and efficiency, and if I hand-write the queries, then when I add a column, I'm most likely going to have to add it to several places in the code.
Sorry for the long question, but I'm explaining the background to get some mindsets from other developers rather than maybe a specific solution. I know that we can always use something like Memcached, but I would rather optimize what we can before getting into that.
Thanks for any ideas.
First, assuming you are proficient at SQL and schema design, there are very few instances where any abstraction layer that removes you from the SQL statements will exceed the efficiency of writing the SQL by hand. More often than not, you will end up with suboptimal data access.
There's no excuse for 100 queries just to generate one web page.
Second, if you are using the Object Oriented features of PHP, you will have good abstractions for collections of objects, and the kinds of extended properties that map to SQL joins. But the important thing to keep in mind is to write the best abstracted objects you can, without regard to SQL strategies.
When I write PHP code this way, I always find that I'm able to map the data requirements for each web page to very few, very efficient SQL queries if my schema is proper and my classes are proper. And not only that, but my experience is that this is the simplest and fastest way to implement. Putting framework stuff in the middle between PHP classes and a good solid thin DAL (note: NOT embedded SQL or dbms calls) is the best example I can think of to illustrate the concept of "leaky abstractions".
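The question is about PHP, but the 100-queries problem is the classic N+1 pattern, and it looks the same in any ORM. Here is a hedged sketch in C# with Entity Framework, since EF comes up elsewhere in this thread (the Blog/Post names are hypothetical):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Blog
{
    public int Id { get; set; }
    public virtual ICollection<Post> Posts { get; set; } // lazy-loaded
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public static class BlogReport
{
    public static void NPlusOne(BlogContext db)
    {
        // 1 query for the blogs, then 1 lazy-load query per blog:
        // this is how a page quietly ends up issuing 100+ queries.
        foreach (var blog in db.Blogs.ToList())
        {
            var count = blog.Posts.Count; // triggers a query each iteration
        }
    }

    public static void SingleQuery(BlogContext db)
    {
        // One joined query fetches blogs and posts together.
        var blogs = db.Blogs.Include(b => b.Posts).ToList();
    }
}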
I got a little lost with your question, but if you are looking for a way to do database access, you can do it a couple of ways. Your MVC can use the Zend Framework, which comes with database access abstractions; you can use that.
Also keep in mind that you should design your system well to ensure there is no contention in the database, as your queries are scattered across the PHP pages and may lock tables, resulting in the overall web application deteriorating in performance and becoming slower over time.
That is why it is sometimes preferable to use stored procedures, as the SQL is in one place and can be tuned when needed, though others may argue that it is easier to debug if query statements are in the front-end.
No ORM framework will even get close to hand-written SQL in terms of speed, although 100 queries seems unrealistic (maybe you are exaggerating a bit). Even if you had the creator of the ORM framework writing the code, it would always be far from the speed of good old SQL.
My advice is, look at the whole picture not only speed:
Does the framework improve code readability?
Is your team comfortable with writing SQL and mixing it with code?
Do you really understand how to optimize the framework queries? (I think a get() for each object is not the optimal way of retrieving them)
Do the queries (after optimization) of the framework present a bottleneck?
I've never developed anything with PHP, but I think you could mix both approaches (ORM and plain SQL): after a thorough profiling of the app you can determine the real bottlenecks, and only then replace that ORM code with hand-written SQL. (Usually in Ruby you use ActiveRecord, then you profile the application with something like New Relic, and finally, if you have a complicated AR query, you replace it with some SQL.)
Regards
Trust your experience.
To avoid repeating yourself so much in the code, you could write some simple model functions with your own SQL. This is what I do all the time, and I am happy with it.
Much of the "convenience" stuff was written for people who need magic because they cannot do it by hand or just don't have the experience.
And after all, it's a question of style.
Don't hesitate to add your own layer, or exchange or extend a given layer with your own stuff. Keep it clean, make a good design and some documentation, so you feel at home when you come back later.

What is a good choice of ORM for an eCommerce website?

I am using C# 3.0 / .NET 3.5 and planning to build an eCommerce website.
I've seen NHibernate, LLBLGEN, Genome, Linq to SQL, Entity Framework, SubSonic, etc.
I don't want to code everything by hand. If there is some specific bottleneck I'll manage to optimize the database/code.
Which ORM would be best? There is so much available these days that I don't even know where to start.
Which feature(s) should I be using?
Links, Screencast and Documentation are welcome.
I've been using NHibernate, which is a very good free solution. The one downside is the lack of documentation, which causes a slightly steep ramp-up time. But once you get the basics down, it really speeds up development.
I like Fluent NHibernate as a way to configure without the XML files. The one thing I suggest, though, is to abstract your data access away from your application. That way, should you choose wrong, you don't have to worry about re-coding the app tiers.
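For anyone who hasn't seen it, this is roughly what a Fluent NHibernate mapping looks like in place of the .hbm.xml file (a minimal sketch; the Product entity and columns are hypothetical):

using FluentNHibernate.Mapping;

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual decimal Price { get; set; }
}

public class ProductMap : ClassMap<Product>
{
    public ProductMap()
    {
        // The mapping is plain C#: refactor-friendly and compile-checked.
        Table("Products");
        Id(x => x.Id);
        Map(x => x.Name).Not.Nullable();
        Map(x => x.Price);
    }
}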
I can only really speak for LINQ-SQL and can say that it is:
Easy to use
Quick to get you up and running
Good for simple schemas and object models
but it starts to fall down if:
You're using a disconnected (tiered) architecture, because its datacontexts require the same object instances to perform tracking and concurrency (though there are ways around this; see the sketch after this answer).
You have a complex object model / database
Plus it has some other niggles and strange behaviour
I'm looking to try EF next myself and MS seem to be quietly dropping LINQ-SQL in favour of EF, which isn't exactly a ringing recommendation of LINQ-SQL :)
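Regarding the disconnected/tiered point above, a hedged sketch of one common workaround: re-attach the detached entity to a fresh DataContext as modified. This assumes the table carries a rowversion/timestamp column so LINQ to SQL can do optimistic concurrency without the original values; the Customer names are hypothetical.

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
    [Column(IsVersion = true)] public Binary Version; // rowversion column
}

public static class CustomerUpdater
{
    public static void Update(Customer detached, string connectionString)
    {
        using (var db = new DataContext(connectionString))
        {
            // Attach as modified: SubmitChanges issues an UPDATE
            // guarded by the version column.
            db.GetTable<Customer>().Attach(detached, true);
            db.SubmitChanges();
        }
    }
}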
That depends on the architecture of the data model. I can speak to the effectiveness of SubSonic, since I'm in the process of launching a web app that it backs.
I've run into problems with JOINs and DISTINCTs while using SubSonic. Both times, all I had to do was patch the source and rebuild the DLL. Now, I'm not at all averse to something like this, but you might be.
Other than those two problems, SubSonic is a joy to use. Selects are very easy and flowing. It maps fairly closely to SQL, much the same way LINQ does. Also, SubSonic comes with the scaffolding function that should be able to pre-build certain pages for you. I'm not sure how effective it is, since I like to do that stuff myself.
One more thing, selection of specific rows as opposed to * is slow, but only in debug mode. Once you compile for release, it's actually faster.
That's my two cents.
I started out using LINQ to SQL as the whole LINQ integration is awesome, but if you want to do Model First rather than Schema First, and you want a rich domain model, then NHibernate/Fluent NHibernate is really the way to go. We switched to this and it is far simpler and better supported than L2S. However, for straight dragging your schema into the dbml code generator, LINQ to SQL is great.
I have also heard very good things about Mindscape Lightspeed but have not used it.
