Comparing Database Platforms [closed]

My employer has a database committee and we've been discussing different platforms. Thus far we've had people present on SQLite, MySQL, and PostgreSQL. We currently use the Microsoft stack, so we're all quite familiar with Microsoft SQL Server.
As a part of this comparison I thought it would be interesting to create a small reference application for each database platform to explore the details involved in working with it.
First: does that idea make sense, or does the comparison require going beyond the scope of a trivial sample application?
Second: I would imagine each reference application having a discrete but small set of requirements that cover many of the scenarios we run into on a regular basis. Here is what I have so far; what else could be added to the list while still keeping the application small enough to be built in a very limited timespan? (A rough sketch of what I have in mind follows the list.)
Connectivity from the application layer
Tools for database administration
Process of creating a schema (small "s" schema: tables, views, functions, and other objects)
Simple CRUD (Create, Retrieve, Update, Delete)
Transaction support
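Here's roughly the shape I'm picturing, using Python's built-in sqlite3 module as a stand-in; the table and columns are made up, and each platform would get the equivalent via its own driver (ADO.NET, Connector/Net, Npgsql, and so on):

```python
# Minimal reference-app sketch: connectivity, schema creation, CRUD, and a
# transaction, against SQLite. Table/columns are illustrative placeholders.
import sqlite3

conn = sqlite3.connect("reference_app.db")   # connectivity from the app layer
conn.execute("""
    CREATE TABLE IF NOT EXISTS customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT
    )
""")

# Simple CRUD
conn.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))
row = conn.execute("SELECT id, name, email FROM customers WHERE name = ?",
                   ("Ada",)).fetchone()
conn.execute("UPDATE customers SET email = ? WHERE id = ?",
             ("ada@example.org", row[0]))
conn.execute("DELETE FROM customers WHERE id = ?", (row[0],))
conn.commit()

# Transaction support: both inserts commit together or not at all.
try:
    with conn:  # the connection acts as a transaction context manager
        conn.execute("INSERT INTO customers (name) VALUES (?)", ("Bob",))
        conn.execute("INSERT INTO customers (name) VALUES (?)", ("Carol",))
except sqlite3.Error:
    pass  # the with-block already rolled back on error

conn.close()
```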
Third: has anyone gone through this process? What were your findings?

Does that idea make sense, or does the comparison require going beyond the scope of a trivial sample application?
I don't think it's a good idea. Most of the things that will really affect you are long term database management issues, and how the database management system you choose can handle those things.
You could be tempted in the short term by things like "I found out in 3 seconds how to do this with XYZ database management system". Now, I'm not saying support is not important; quite the contrary. But finding an answer on Google in 3 seconds means that you got an answer to a simple question. How quickly, if ever, can you find an answer to a challenging problem?
A short list (not exhaustive) of important things to consider are:
backup and recovery -- at both logical level and physical level
good support for functions (or stored procedures), triggers, various SQL query constructs
APIs that allow real extensibility -- these things can get you out of tough situations and allow you to solve problems in creative ways. You'd be surprised what can be accomplished with user-defined types and functions. How do the user-defined types interact with the indexing system?
SQL standard support -- doesn't trump everything else, but if support is lacking in a few areas, really consider why it is lacking, what the workarounds are, and what are the costs of those workarounds.
A powerful executor that offers a range of fundamental algorithms (e.g. hash join, merge join, etc.) and indexing structures (btree, hash, maybe a full text option, etc.). If it's missing some algorithms or index structures, consider the types of questions that the database will be inefficient at answering. Note: I don't just mean "slow" here; the wrong algorithm can easily be worse by orders of magnitude (a quick sketch of this follows the list).
Can the type system reasonably represent your business? If the set of types available is incredibly weak, you will end up with a mess; representing everything as strings is the database equivalent of untyped assembly programming.
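To make the executor/indexing point concrete, here is a tiny sketch using SQLite (one of the engines under evaluation); the table and data are invented, and the interesting part is how the reported plan changes once an index exists. Running the same exercise against each candidate's EXPLAIN output tells you far more than timing trivial CRUD:

```python
# Compare the query plan for the same question with and without an index.
# Table and data are illustrative placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 1.5) for i in range(50_000)])

query = "SELECT count(*) FROM orders WHERE customer_id = ?"

# Without an index: a full scan of orders.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With an index: a search of idx_orders_customer instead.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```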
A trivial application won't show you any of those things. Simple things are simple to solve. If you have a "database committee" then your company cares about its data, and you should take the responsibility seriously. You need to make sure that you can develop applications on it easily with the results you and your developers expect; and when you run into problems you need to have access to a powerful system and quality support that can get you through it.

Actually, learning the capabilities of each RDBMS is more crucial, because the right choice depends on the application. If you need spatial data capabilities, PostGIS with PostgreSQL is better than MySQL. If you need easy replication and high-availability features, MySQL seems better. There are also licensing issues. A link for comparison is here. All have strengths and weaknesses. First gather the requirements of your project or projects, then compare them against the features of the RDBMSs you have picked and decide which one to go with.

I don't think you need to test the simple CRUD stuff, it's hard to imagine a vendor that doesn't support the basics.

Firstly, you're going beyond the scope of a sample app, in my humble opinion.
Secondly, I'd pick the one most appropriate to the tool or application you wish to develop. For example, are schemas and transactions relevant for a database that stores a single-user app configuration?
Thirdly, I've worked with Access, SQL Server, SQLite, MySQL, PostgreSQL and Oracle, and they all have their place. If you're in the MS space, go with SQL Server (and don't forget Express). There are also ADO.NET ways to talk to the others in my list. It depends on what you want.

Frankly, I doubt an arbitrarily-defined simple application would be likely to really highlight the differences between database engines. I think you'd be better to read the advertising literature for the various engines to see what they claim as their strong points. Then consider which of these matter to you, and construct tests specifically designed to verify claims that you care about.
For example, here are pros and cons of database engines I've used the most that have mattered to me. I don't claim this is an exhaustive list, but it may give you an idea of things to think about:
MySQL: Note: MySQL has two main storage engines internally, MyISAM and InnoDB. I have never used InnoDB.
Pros: Fast. Free to cheap depending on how you're using it. Very convenient and easy-to-use commands for managing the schema. Some very useful extensions to the SQL standard, like "insert ... on duplicate" (sketched below).
Cons: The MyISAM engine does not support transactions, i.e. there's no rollback, and it does not manage foreign keys for you. (InnoDB does not have these drawbacks, but as I say, I've never used it, so I can't comment much further.) Many deviations from SQL standards.
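A quick sketch of the "insert ... on duplicate" extension, using the MySQL Connector/Python driver; the connection details and the page_hits table are placeholders:

```python
# MySQL's non-standard INSERT ... ON DUPLICATE KEY UPDATE: insert a row,
# or update it in place if the key already exists, in a single statement.
# Connection parameters and the table are illustrative placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="demo")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS page_hits (
        url  VARCHAR(100) PRIMARY KEY,
        hits INT NOT NULL
    )
""")

# One statement instead of a SELECT-then-INSERT-or-UPDATE dance.
cur.execute("""
    INSERT INTO page_hits (url, hits) VALUES (%s, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1
""", ("/index.html",))

conn.commit()
cur.close()
conn.close()
```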
Oracle: Pros: Fast. Generally good conformance to SQL standards. My brother works for Oracle so if you buy there you'll be helping support my family. (Okay, maybe that's not an important pro for you ...)
Cons: Difficult to install and manage. Expensive.
Postgres: Pros: Very high conformance to SQL standards. Free. Very good "explain" plans.
Cons: Relatively slow. Optimizer is easily confused on complex queries. Some awkwardness in modifying existing tables.
Access: Pros: Easy to install and manage. Very easy to use schema management. Built-in data entry tools and query builder for quick-and-dirty stuff. Cheap.
Cons: Slow. Unreliable with multiple users.

I think you can investigate Firebird too.
This is an extract from Firebird-General on Yahoo Groups, and I find it quite objective:
Our natural audience is developers who want to package and sell proprietary applications. Firebird is easier to package and install than Postgres; more capable than SQLite; and doesn't charge a royalty like MySQL.

When should I use Datomic?

I'm intrigued by the database service Datomic, but I'm not sure if it fits the needs of the projects I work on. When is Datomic a good choice, and when should it be avoided?
With the proviso that I haven't used Datomic in production, I thought I'd give you an answer.
Advantages
Datalog queries are powerful (more so than non-recursive SQL) and very expressive.
Queries can be written with Clojure data structures, and it's NOT a weak DSL like many SQL libraries that allow you to query with data structures.
It's immutable, so you get the advantages that immutability gives you in Clojure and other languages as well.
a. This also lets you keep all past facts in your database while sharing structure between them; this is VERY useful for auditing and more
Disadvantages
It can be slow, as Datalog is just going to be slower than equivalent SQL (assuming an equivalent SQL statement can be written).
If you are writing a LOT, you could maybe need to worry about the single transactor getting overwhelmed. This seems unlikely for most cases, but it's something to think about (you could do a sort of shard, though, and probably save yourself; but this isn't a DB for e.g. storing stock tick data).
It's a bit tricky to get up and running with, and it's expensive; the licensing and price also make it difficult to use a hosted instance, so you'll need to handle sysadminning this yourself instead of using something like Postgres on Heroku or Mongo at MongoHQ.
I'm sure I'm missing some on each side, and though I have 3 listed under disadvantages, I think the advantages outweigh them in most circumstances where the disadvantages don't outright preclude its use. Price is probably the one that will prevent it being used in most small projects (that you expect to outlast the 1 year free trial).
Cf. this short post describing Datomic simply for some more information.
Expressivity (cf. Datalog) and immutability are awesome. It's SO much fun to work with Datomic in that regard, and you can tell it's powerful just by using it a bit.
One important thing when considering whether Datomic is the right fit for your application is to think about the shape of the data you are going to store and query. Since Datomic facts are actually very similar to RDF triples (plus a first-class notion of time), it lends itself very well to modeling complex relationships (linked graph data), something which is often cumbersome with traditional SQL databases.
I found this aspect to be one of the most appealing and important ones for me, and it worked really well. Of course this is not exclusive to Datomic; there are many other high-quality offerings for graph databases, and Neo4j deserves a mention when we are talking about JVM-based solutions.
Regarding the Datomic schema, I think it strikes just the right balance between flexibility and stability.
To complete the above answers, I'd like to emphasize that immutability and the ability to remember the past are not 'wizardry features' suited only to a few special cases like auditing. It is an approach that has several deep benefits compared to 'mutable cells' databases (which are 99% of databases today). Stuart Halloway demonstrates this nicely in this video: the Impedance Mismatch is our fault.
In my personal opinion, this approach is fundamentally more sane conceptually. Having used it for several months, I don't see Datomic as having crazy magical sophisticated powers, but rather as a more natural paradigm without some of the big problems the others have.
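To make the 'remembering the past' idea concrete without Datomic-specific syntax, here is a deliberately toy append-only fact store in Python. This is not Datomic's API, only the shape of the idea: facts are added with a transaction id and never updated in place, so "as of" reads fall out for free:

```python
# Toy append-only fact store with "as of" reads. NOT Datomic's API;
# just an illustration of why history is not a bolt-on audit feature.
from collections import namedtuple

Fact = namedtuple("Fact", "entity attribute value tx added")

class ToyDb:
    def __init__(self):
        self.log = []      # ever-growing log of facts; never edited in place
        self.next_tx = 0

    def transact(self, assertions, retractions=()):
        tx = self.next_tx
        self.next_tx += 1
        for e, a, v in assertions:
            self.log.append(Fact(e, a, v, tx, True))
        for e, a, v in retractions:
            self.log.append(Fact(e, a, v, tx, False))
        return tx

    def value(self, entity, attribute, as_of=None):
        """Current value of (entity, attribute), optionally as of an older tx."""
        current = None
        for f in self.log:
            if as_of is not None and f.tx > as_of:
                break
            if f.entity == entity and f.attribute == attribute:
                if f.added:
                    current = f.value
                elif current == f.value:
                    current = None
        return current

db = ToyDb()
t0 = db.transact([("user-1", "email", "ada@example.com")])
db.transact([("user-1", "email", "ada@example.org")],
            retractions=[("user-1", "email", "ada@example.com")])

print(db.value("user-1", "email"))            # ada@example.org
print(db.value("user-1", "email", as_of=t0))  # ada@example.com: the past is still queryable
```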
Here are some features of Datomic I find valuable, most of which are enabled by immutability:
because reading is not remote, you don't have to design your queries like an expedition over the wire. In particular, you can separate concerns into several queries (e.g. find the entities which are the input to my query, answer some business question about these entities, fetch associated data for presenting the result)
the schema is very flexible, without sacrificing query power
it's comfortable to have your queries integrated in your application programming language
the Entity API brings you the good parts of ORMs
the query language is programmable and has primitives for abstraction and reuse (rules, predicates, database functions)
performance: writers impede only other writers, and no one impedes readers. Plus, lots of caching.
... and yes, a few superpowers like travelling to the past, speculative writes or branching reality.
Regarding when not to use Datomic, here are the current constraints and limitations I see:
you have to be on the JVM (there is also a REST API, but you lose most of the benefits IMO)
not suited for write scale, nor huge data volumes
won't be especially integrated into frameworks, e.g. you won't currently find a library which generates CRUD REST endpoints from a Datomic schema
it's a commercial database
since reading happens in the application process (the 'Peer'), you have to make sure that the Peer has enough memory to hold all the data it needs to traverse in a query.
So my very vague and informal answer would be that Datomic is a good fit for most non-trivial applications whose write load is reasonable, as long as you don't have a problem with the license and with being on the JVM.
As an analogy, you can ask yourself the same question for Git as compared to other version control systems which are not based on immutability.
Just to tentatively add to the other answers:
It is probably fair to say Datomic presents the best conceptual framework for a queryable data store of all the current options out there, while being only partially scalable and not exceptionally performant.
I say only partially scalable because queries need to fit in the peer's RAM or fail. And not exceptionally performant, as top-notch SQL engines can optimize queries to fit in memory through sophisticated execution plans, something I've not yet seen mentioned as a feature in Datomic; Datomic's decoupling of transacting and querying might, on the whole, offset this.
Unlike many NoSQL engines though, transactions are a first-class citizen, which puts it at par with RDBMS systems in that key regard.
For applications where data is read more than it is written, transactions are needed, queries always fit in memory or memory is very cheap, and the overall size of accumulated data isn't too large, it might be a win where a commercial-only product can be afforded, for those who are willing to embrace the novel conceptual framework implied by its API.

Is Microsoft Access a good stepping stone to learning real database management? [closed]

My sister is going to start taking classes to try to learn how to become a web developer. She's sent me the class lists for a couple of candidate schools for me to help guide her decision.
One of the schools mentions Microsoft Access as the primary tool used in the database classes including relational algebra, SQL, database management, etc.
I'm wondering - if you learn Microsoft Access will you be able to easily pick up another more socially-acceptable database technology later like MySQL, Postgres, etc? My experience with Access was not pleasant and I picked up a whole lot of bad practices when I played around with it during my schooling years.
Basically: Does Microsoft Access use standards-compliant SQL? Do you learn the necessary skills for other databases by knowing how Microsoft Access works?
Access, I would say, has a lot more peculiarities than 'actual' database software. Access can easily be used as a front end for SQL databases, and that's part of the program.
Let's assume the class is using databases built in Access. Then let's break it down into the parts of a database:
Tables
Access uses a simplified model for variables. Basically you can have typical number fields, text fields, etc. You can fix the number of decimals, for instance, like you could with SQL. You won't see types like varchar(x), though; you will just pick a text field and set the field size to "8", etc. However, like a real database, it will enforce the limits you've put in. Access will support OLE objects, but it quickly becomes a mess. Access databases are stored as a single file and can become incredibly large and bloat quickly. Therefore, if you use it for more than storing address books, text databases, or linking to external sources via code, you have to be careful about how much information you store, simply because the file will get too big to use.
Queries
Access implements a lot of things along the lines of SQL. I'm not aware that it is SQL compliant; I believe you can just export your Access database into something SQL can use. In code, you interact with a SQL database via DAO, ADO, ADODB and the Jet or Ace engines (some are outdated but still work on older databases). However, once you get to just writing queries, many things are similar. Typical commands (select, from, where, order, group, having, etc.) are normal and work as you'd see them work elsewhere. The peculiar things happen when you get into calculated expressions and complicated joins (Access does not implement some kinds of joins, but you will see arguably the most important ones: inner join and union). For instance, the behavior of distinct is different in Access than in other database architectures. You are also limited in the way you use aggregate functions (sum/max/min/avg). In essence, Access works for a lot of tasks, but it is incredibly picky, and you will have to write queries just to work around problems that you wouldn't have in a real database.
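For what it's worth, driving an Access file from code feels much like any other database once you go through ODBC. A rough sketch in Python via pyodbc (the Access ODBC driver must be installed, and the file path, Orders table and Status column are placeholders):

```python
# Querying an Access .accdb file over ODBC. The driver name requires the
# Microsoft Access Database Engine to be installed; the file path and the
# Orders table/Status column are illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\school.accdb;"
)
cur = conn.cursor()

# Ordinary SELECT / WHERE / GROUP BY works much as it does elsewhere.
cur.execute("""
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM Orders
    WHERE Status = ?
    GROUP BY CustomerID
""", ("shipped",))

for customer_id, order_count in cur.fetchall():
    print(customer_id, order_count)

conn.close()
```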
Forms/Reports
I think the key feature of Access is that it is much more approachable to users that are not computer experts. You can easily navigate the tables and drag and drop to create forms and reports. So even though it's not a database in my book officially, it can be very useful...particularly if few people will be using the database and they highly prefer ease of use/light setup versus a more 'enterprise level' solution. You don't need crystal reports or someone to code a lot of stuff to make an Access database give results and allow users to add data as needed.
Why Access isn't a database
It's not meant to handle lots of concurrent connections. One person can hold the lock and there's no negotiating about it: if one person is editing certain parts of the database, it will lock all other users out, or at least limit them to read-only. Also, if you try to use Access with a lot of users or send it many requests via code, it will break after about 10-20 concurrent connections. It's just not meant for the kinds of things Oracle and MySQL are built for. It's meant for the 'everyman' computer user, if you will, but it has a lot of useful things programmers can exploit to make the user experience much better.
So will this be useful for you to learn more about?
I don't see how it would be a bad thing. It's an environment that you can more easily see the relational algebra and understand how to organize your data appropriately. It's a similar argument to colleges that teach Java, C++, or Python and why each has its merits. Since you can immediately move from Access to Access being the front-end (you load links to the tables) for accessing a SQL database, I'm sure you could teach a very good class with it.
MS Access is a good sandpit in which to build databases and learn the basic (elementary) design and structure of a database.
MS Access's SQL implementation is just about equivalent to SQL-1.x syntax. Again, Access is a great app for learning the interaction between queries, tables, and views.
Make sure she doesn't get used to the macros available in Access, as their structure doesn't translate to mainstream RDBMSs. The closest equivalent is stored procedures (sprocs) in professional RDBMSs, but sprocs have a thousandfold more utility and functionality than any Access macro could provide.
Have her play with MS Access to get a look and feel for a DBMS, but once she gets comfortable with database design, have her migrate to either SQL Server Express or MySQL, or both. SQL Server Express is as close to the real thing as you can get without paying for SQL Server Standard. MySQL is good for LAMP web infrastructures.

NoSQL movement - justification for SQL RDBMS

In the past year I've built numerous projects with NoSQL JSON-based databases (the rich kind, not the key/value stores) such as CouchDB, MongoDB, and RavenDB. I often talk to fellow programmers about my adoption, and I notice I'm always quick to add "of course SQL RDBMS systems still have a place; it's always about what's best for the particular project/task" as a little disclaimer, so as not to be seen as a Kool-Aid drinker. However, it's a pretty shallow statement. Outside of legacy projects that already have an investment in an RDBMS, or corporate mandates insisting on Oracle, I'm struggling to think of any future greenfield project where I'd opt for a SQL database. It's CouchDB all the way for me as far as I can see, with its rich map/reduce, changes feed, replication support, and RESTful API (sorry, that's starting to sound like a plug).
I'd like to hear from those who do "get" (beyond screencasts) NoSQL JSON map/reduce databases such as CouchDB: for what type of projects do you think you'll use MS SQL, Oracle, PostgreSQL, etc. in the future?
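For concreteness, this is roughly the map/reduce-over-REST workflow I mean, sketched against a local CouchDB with Python's requests library; the database name, credentials and document shape are placeholders:

```python
# CouchDB workflow sketch: store JSON documents, define a map/reduce view in
# a design document, then query it over HTTP. Assumes a local CouchDB on the
# default port; the "orders" database and credentials are placeholders.
import requests

base = "http://localhost:5984"
auth = ("admin", "secret")

requests.put(f"{base}/orders", auth=auth)  # create the database

# Documents are plain JSON; no schema to migrate first.
requests.post(f"{base}/orders", auth=auth,
              json={"type": "order", "customer": "ada", "total": 42.0})

# The map function is JavaScript stored in a design document; the reduce
# here is the built-in _sum.
design = {
    "views": {
        "total_by_customer": {
            "map": "function (doc) { if (doc.type === 'order') emit(doc.customer, doc.total); }",
            "reduce": "_sum",
        }
    }
}
requests.put(f"{base}/orders/_design/stats", auth=auth, json=design)

# Query the view, grouped by key (customer).
resp = requests.get(f"{base}/orders/_design/stats/_view/total_by_customer",
                    params={"group": "true"}, auth=auth)
print(resp.json())
```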
Thanks
One of the biggest strengths of SQL is that there is a standard way of modelling just about anything - for any given project it may not provide an optimal solution, but it does provide a reasonable one. This means that in a corporate environment you can decide that everything will be stored in Oracle to get the maintenance benefits of having a single system without the risk that it will be completely inappropriate for future projects.
This ability to handle different requirements without needing a lot of planning is also relevant at the start of projects where the design doesn't get signed off until six months into development - again, something that applies mostly in corporate development.
Using NoSQL properly generally requires better developers than you need for SQL development. One of the many SQL code samples available can be edited into a working system by someone who barely knows what they are doing. NoSQL does things like eliminating integrity checks for performance - a good developer produces well tested code that doesn't insert invalid records or understands why a few invalid records don't matter for a particular app. A bad developer thinks anything without error messages is good code. The average developer at a successful web startup can probably handle it. The average developer maintaining internal corporate apps probably can't.
On my own projects where I have complete control of the platform choice and know both the requirements and who will do the development, NoSQL is as good a complete solution as you suggest. Nothing really matters except the technical advantages - easier to code, easier to maintain, better performance, horizontal scaling.
In the corporate environment it's the other realities that dominate - incompetent developers, unknown/changing requirements, long application life cycles, the need to minimize the number of systems - NoSQL wasn't designed for that problem set.
Off-hand, I'd simply say that if the data is inherently relational, an RDBMS is the optimal choice. For instance, an order management system is a good fit for an RDBMS. If the data is not inherently relational (like Google's search index, for example), then NoSQL is a better choice. They both have their place.
My latest application is massively hierarchical/relational with objects containing sub-objects containing sub-objects, all related by natural keys. It's a perfect fit for an RDBMS, and would have been much trickier in a NoSQL DB.

Database independence

We are in the early stages of design of a big business application that will have multiple modules. One of the requirements is that the application should be database independent, it should support SQL Server, Oracle, MySQL and DB2.
From what I have read on the web, database independence is a very bad idea: it would result in hard-to-maintain code, a database design limited to the least-common features of all supported DBMSs, bad performance, and bad scalability. My personal gut feeling is that the complexity of this feature, more than any other feature, could increase the development cost and time exponentially. The code will be dreadful.
But I cannot persuade anybody to ignore this feature. The problem is that most data on this issue are empirical data, lacking numbers to support the case. If anyone can share any numbers-supported data on the issue I would appreciate it.
One possible design option is to use Entity Framework for the database tier, with a provider for each DBMS. My personal feeling is that writing SQL statements manually, without any ORM, would be a must, since you have no control over the SQL generated by Entity Framework, and a database-independent scenario will need some SQL tweaking based on the DBMS the code is targeting. I also think that third-party Entity Framework providers will have a significant number of bugs that only appear in the complex scenarios this application will have. I would like to hear from anyone who has experience using Entity Framework in a database-independent scenario.
Also, one of the possibilities discussed by the team is to support one DBMS (SQL Server, for example) in the first iteration and then add support for other DBMSs in successive iterations. I think that since we will need a database design with the least common features, this development strategy is bad, because we would need to know the features of all the databases before we start writing code for the first DBMS. I need to hear from you about this possibility, too.
Have you looked at the Comparison of different SQL implementations?
This is an interesting comparison; I believe it is reasonably current.
Designing a good relational data model for your application should be database agnostic, for the simple reason that all RDBMSs are designed to support the features of relational data models.
On the other hand, implementation of the model is normally influenced by the personal preferences of the people specifying the implementation. Everybody has their own slant on doing things, for instance you mention autoincremented identity in a comment above. These personal preferences for implementation are the hurdles that can limit portability.
Reading between the lines, the requirement for database independence has been handed down from above, with the instruction to make it so. It also seems likely that the application is intended for sale rather than in-house use. In context, the database preference of potential clients is unknown at this stage.
Given such requirements, then the practical questions include:
who will champion each specific database for design and development? This is important, inasmuch as the personal preferences for implementation of each of these people need to be reconciled to achieve a database-neutral solution. If a specific database has no champion, chances are that implementing the application on this database will be poorly done, if at all.
who has the depth of database experience to act as moderator for the champions? This person will have to make some hard decisions at times, but horsetrading is part of the fun.
will the programming team be productive without all of their personal favourite features? Stored procedures, triggers, etc. are the least transportable features between RDBMSs.
The specification of the application itself will also need to include a clear distinction between database-agnostic and database specific design elements/chapters/modules/whatever. Amongst other things, this allows implementation with one DBMS first, with a defined effort required to implement for each subsequent DBMS.
Database-agnostic parts should include all of the DML, or ORM if you use one.
Database-specific parts should be more-or-less limited to installation and drivers.
Believe it or not, vanilla-flavoured SQL is still a very powerful programming language, and personally I find it unlikely that you would be unable to create a performant application without database-specific features, if you set out to.
In summary, designing database-agnostic applications is an extension of a simple precept:
Encapsulate what varies
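As a rough sketch of that precept (the repository interface, the queries, and the two dialect-specific implementations below are invented for illustration), the database-specific bits sit behind one seam while the application code stays agnostic:

```python
# "Encapsulate what varies": application code depends on a small interface,
# and each supported RDBMS implements only the parts that genuinely differ
# (pagination syntax and the driver's parameter marker, in this toy example).
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    def __init__(self, conn):
        self.conn = conn  # any DB-API connection supplied by the caller

    @abstractmethod
    def recent_customers(self, limit):
        ...

class PostgresCustomerRepository(CustomerRepository):
    def recent_customers(self, limit):
        cur = self.conn.cursor()
        cur.execute(
            "SELECT id, name FROM customers ORDER BY created_at DESC LIMIT %s",
            (limit,))
        return cur.fetchall()

class SqlServerCustomerRepository(CustomerRepository):
    def recent_customers(self, limit):
        cur = self.conn.cursor()
        # Same question, different dialect: TOP instead of LIMIT, '?' markers.
        cur.execute(
            "SELECT TOP (?) id, name FROM customers ORDER BY created_at DESC",
            (limit,))
        return cur.fetchall()

# Adding DB2 or Oracle support then means adding one class,
# not touching every call site.
```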
I work with Hibernate, which gives me the benefits of the ORM plus database independence. Database-specific features are out of the question, and this usually improves my design. Everything (domain model, business logic and data access methods) is testable, so development is not painful.
Hello, Muhammed!
Database independence is neither "good" nor "bad". It is a design decision; it is a trade-off.
Let's talk about the choices:
It would result in hard-to-maintain code
This is the choice of your programmers. If you make your code database-independent, then you should use a layer between your code and the database. The best kind of layer is one that someone else has written.
...Database design with the least common features in all supported DBMSs
This is, by definition, true. Luckily, the common features in all supported databases are fairly broad; they should all implement the SQL-99 standard.
...bad performance and bad scalability
This should not be true. The layer should add minimal cost to the database.
...the complexity of this feature, more than any other feature, could increase the development cost and time exponentially. The code will be dreadful.
Again, I recommend that you use a layer between your code and the database.
You didn't specify which language or platform you're writing for. Luckily, many languages have already abstracted out databases:
Java has JDBC drivers
Python has the Python Database API (a sketch of a layer built on it follows this list)
.NET has ADO.NET
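For example, in Python, a layer such as SQLAlchemy Core (my example here; the connection URL and table are placeholders) keeps the database-specific part down to the connection string, while the query code below it stays the same:

```python
# A "layer someone else has written": SQLAlchemy Core. Swapping databases is
# mostly a matter of the URL; the DML below is unchanged across engines.
# (DDL such as CREATE TABLE IF NOT EXISTS is less portable, which is exactly
# where a schema/migration tool earns its keep.)
from sqlalchemy import create_engine, text

# Per deployment, swap the URL: "sqlite:///app.db",
# "postgresql+psycopg2://user:pw@host/app", "mssql+pyodbc://...", etc.
engine = create_engine("sqlite:///app.db")

with engine.begin() as conn:  # begin() wraps the block in a transaction
    conn.execute(text("CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO customers (id, name) VALUES (:id, :name)"),
                 {"id": 1, "name": "Ada"})
    rows = conn.execute(text("SELECT id, name FROM customers")).fetchall()
    print(rows)
```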
Good luck.
Database independence is an overrated application feature. In reality, it is very rare for a large business application to be moved onto a new database platform after it's built and deployed. You can also miss out on DBMS-specific features and optimisations.
That said, if you really want to include database independence, you might be best to write all your database access code against interfaces or abstract classes, like those used in the .NET System.Data.Common namespace (DbConnection, DbCommand, etc.) or use an O/RM library that supports multiple databases like NHibernate.

To what extent should a developer learn specifics about database systems? [closed]

Modern database systems today come with loads of features, and you would agree with me that to learn one database you must unlearn the concepts you learned in another. For example, each database implements locking differently from the others, so carrying the concepts of one database over to another would be a recipe for failure. And there could be other examples where two databases perform very, very differently.
So while developing database-driven systems, do programmers need to know the database in detail so that they can code for performance? I don't think it would be appropriate to call in the DBA for performance work later, as his job is only to maintain the database and help out the developer in an emergency, not on a regular basis.
What do you think should be the extent the developer needs to gain an insight into the database?
I think these are the most important things (from most important to least, IMO):
SQL (obviously) - It helps to know how to at least do basic queries, aggregates (sum(), etc), and inner joins
Normalization - DB design skills are a major requirement
Locking Model/MVCC - It's nice to have at least a basic grasp of how your database manages row locking (or uses MVCC to accomplish similar goals with optimistic locking)
ACID compliance, Txns - Please know how these work and interact
Indexing - While I don't think that you need to be an expert in tablespaces, placing data on separate drives for optimal performance, and other minutiae, it does help to have a high-level knowledge of how index scans work vs. table scans. It also helps to be able to read a query plan and understand why the database might be choosing one over the other.
Basic Tools - You'll probably find yourself wanting to copy production data to a test environment at some point, so knowing the basics of how to restore/backup your database will be important.
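As a tiny example of that last point, here is what copying a database into a test environment looks like with SQLite's online backup API (the file names are placeholders; on a server RDBMS the equivalent chore is pg_dump/pg_restore, mysqldump, or a native BACKUP DATABASE command):

```python
# Copy a live SQLite database into a fresh file for a test environment,
# using the online backup API. File names are placeholders.
import sqlite3

source = sqlite3.connect("production.db")
target = sqlite3.connect("test_copy.db")

with target:
    source.backup(target)  # consistent copy, even while the source is in use

source.close()
target.close()
```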
Fortunately, there are some great FOSS and free commercial databases out there today that can be used to learn quite a bit about db fundamentals.
I think a developer should have a fairly good grasp of how their database system works, no matter which one it is. When making design and architecture decisions, they need to understand the possible implications when it comes to the database.
Personally, I think you should know how databases work as well as the relational model and the rhetoric behind it, including all forms of normalization (even though I rarely see a need to go beyond third normal form). The core concepts of the relational model do not change from relational database to relational database - implementation may, but so what?
Developers that don't understand the rationale behind database normalization, indexes, etc. are going to suffer if they ever work on a non-trivial project.
I think it really depends on your job. If you are a developer in a large company with dedicated DBAs then maybe you don't need to know much, but if you are in a small company then it may be really helpful knowing more about databases. In small companies you may wear more than one hat.
It cannot hurt to know more in any situation.
It certainly can't hurt to be familiar with relational database theory, and have a good working knowledge of the standard SQL syntax, as well as knowing what stored procedures, triggers, views, and indexes are. Obviously it's not terribly important to learn the database-specific extensions to SQL (T-SQL, PL/SQL, etc) until you start working with that database.
I think it's important to have a basic understanding of databases when developing database applications, just like it's important to have an understanding of the hardware your software runs on. You don't have to be an expert, but you shouldn't be totally ignorant of anything your software interacts with.
That said, you probably shouldn't need to do much SQL as an application developer. Most of the interaction with the database should be done through stored procedures developed by the DBA; I'm not a big fan of including SQL code in your application code. If your queries are in stored procedures, then the DBA can change the implementation of the stored procedure, or even the database schema, and as long as the result is the same it doesn't require any changes to your application code.
If you are uncertain about how to best access the database you should be using tried and tested solutions like the application blocks from Microsoft - http://msdn.microsoft.com/en-us/library/cc309504.aspx. They can also prove helpful to you by examining how that code is implemented.
Basic knowledge of SQL queries is a must; with that you can develop a simple system. But when you are going to implement complex systems, you should know normalization, procedures, functions, etc.
