What's the "best" database for embedded? [closed] - database

I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places.
The embedded device is based around an ARM9 processor running at 220 MHz.
There will be a database of 50k entries (which may increase to 250k), each with 1 KB of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary.
They are currently using SQLite 2 and planning to move to SQLite 3.
Without starting a flame war - I am a complete d/b newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but I just hoped that you could get me off to a flying start. Thanks.
P.S. Again, this is a total rewrite: we might not even stick with embedded Linux, but switch to eCos, so don't worry too much about a one-time conversion between d/b formats. Oh, and accesses should be infrequent - at most one every few seconds.
Edit: OK, it seems they have 30k entries (which may reach 100k or more) of only 5 or 6 fields each, but at least 3 of those fields can serve as a search key for a record. They are toying with "having no d/b at all, since the data are so simple", but it seems to me that with multiple keys we couldn't just keep the records sorted and binary-search them. Any thoughts on "no d/b", just data structures?
Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no d/b" I'd have to hash that 800k key down to something smaller?)
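One workable "no d/b" pattern: keep the records in a single array and maintain one sorted index (an array of record positions) per search key, then binary-search whichever index matches the query. A minimal sketch in C - the record layout and field names below are made up:

#include <stdlib.h>
#include <string.h>

/* Hypothetical record layout - three of the five or six fields are keys. */
struct record {
    unsigned id;         /* key 1 */
    char     name[32];   /* key 2 */
    unsigned serial;     /* key 3 */
    char     payload[64];
};

static struct record db[100000];
static size_t n_records;

/* One sorted index per key: each is an array of positions into db[]. */
static size_t idx_by_name[100000];

static int cmp_name(const void *a, const void *b) {
    const size_t ia = *(const size_t *)a, ib = *(const size_t *)b;
    return strcmp(db[ia].name, db[ib].name);
}

/* Rebuild after (re)loading the records; O(N log N), done once. */
static void build_name_index(void) {
    for (size_t i = 0; i < n_records; i++)
        idx_by_name[i] = i;
    qsort(idx_by_name, n_records, sizeof idx_by_name[0], cmp_name);
}

/* O(log N) lookup; the same pattern repeats for the other two keys. */
static struct record *find_by_name(const char *name) {
    size_t lo = 0, hi = n_records;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        int c = strcmp(db[idx_by_name[mid]].name, name);
        if (c == 0) return &db[idx_by_name[mid]];
        if (c < 0) lo = mid + 1;
        else       hi = mid;
    }
    return NULL;
}

The same build/search pair repeats for each key, and the 800k key could be hashed down to a fixed-size digest (with a collision check against the full value) and indexed the same way, as you suggest.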

Also, SQLite is the database chosen by virtually all mobile operating systems. Android, iPhone OS and Symbian ship with SQLite, which makes me think that real manpower was spent optimizing it for the processors in those phones (nearly always ARM).

I would stick with SQLite, it's widely supported and pretty rich in features.
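For a sense of how little code SQLite needs on the device side, here is a minimal hedged sketch using its C API (the file, table and column names are made up):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;

    if (sqlite3_open("/var/data/device.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS entries("
                     "id INTEGER PRIMARY KEY, name TEXT, payload BLOB)",
                 NULL, NULL, NULL);

    /* Prepared statements avoid re-parsing the SQL on every lookup. */
    sqlite3_prepare_v2(db, "SELECT payload FROM entries WHERE name = ?", -1,
                       &stmt, NULL);
    sqlite3_bind_text(stmt, 1, "sensor42", -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        printf("found %d bytes\n", sqlite3_column_bytes(stmt, 0));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}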

Firebird (previously Interbase) claims to work well embedded.
HypersonicQL (HQL) is small and fast and also claims to be suitable for embedded use.
Alas, I have no personal experience to back up either claim.

SQLite is probably a pretty safe bet. However, if performance is really important for your application and you do not need a relational database, I would suggest you take a look at Berkeley DB. Berkeley DB is not a relational database, though: if your data is grouped in different tables and you constantly need to query result sets that require relating data from more than one table, you probably do need a relational database. Berkeley DB is better suited for something like lookup tables (i.e., the data is organized in a few tables and you don't need to query across them to produce the result sets you want). Berkeley DB is very fast, but it will require more work on your end to get the most out of it.

If you want an alternative, then Berkeley DB is worth looking at. It used to be owned by Sleepycat Software but is now available from Oracle. It's a bare-bones database engine that is directly programmable rather than driven through a SQL frontend. It's used as part of the core engine in many major databases and as the database in many embedded devices - it used to be particularly popular for managing routing tables in routers.
It tends to get overlooked these days in favour of more fashionable setups, but I've found it to be decent and solid, and for the numbers you are talking about it can be lightning fast.
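To give a flavour of that directly-programmable style, here is a minimal put/get sketch against the BDB C API (the file name is made up and error checking is trimmed):

#include <string.h>
#include <stdio.h>
#include <db.h>   /* Berkeley DB */

int main(void) {
    DB *dbp;
    DBT key, val;
    unsigned id = 42;
    char buf[] = "hello";

    db_create(&dbp, NULL, 0);
    /* BDB 4.1+ open() signature: txn handle is the second argument. */
    dbp->open(dbp, NULL, "routes.db", NULL, DB_BTREE, DB_CREATE, 0664);

    memset(&key, 0, sizeof key);
    memset(&val, 0, sizeof val);
    key.data = &id;  key.size = sizeof id;
    val.data = buf;  val.size = sizeof buf;
    dbp->put(dbp, NULL, &key, &val, 0);           /* store */

    memset(&val, 0, sizeof val);
    if (dbp->get(dbp, NULL, &key, &val, 0) == 0)  /* fetch by key */
        printf("%.*s\n", (int)val.size, (char *)val.data);

    dbp->close(dbp, 0);
    return 0;
}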

I will suggest SQLite 3 too.
It is used by many famous applications.

SQLite is OK, but don't plan on using it if you need to insert, update and delete data involving more than 6 million rows (all at once, or in any partial chunk). The problem is that VACUUM has to be run every now and then, and it becomes a very severe performance bottleneck, even when it's automatic.

8 years late, but as an update: I've had a pretty good experience using Raima Database Manager. If you are looking for a small-footprint db, it can get down to 40 KB. One of the reasons I like RDM is its platform independence: it is portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures, and it supports most operating systems, meaning you can use it on embedded Linux and on eCos as mentioned in the first post. And its performance gets better as you add better hardware and more users, as opposed to SQLite.

I am not familiar with your embedded system, but note that the iPhone uses an ARM9, and SQLite as its DB.

The 01-11-10 Embedded.com Newsletter does a nice job of covering this topic. The newsletter can be found at Embedded.com: Embedded.com Tech Focus Newsletter (1-11-10): Embedding Databases.

Related

Best C language key/value database around for massive amounts of entries

I am trying to create a key/value database with 300,000,000 key/value pairs of 8 bytes each (both for the key and the value). The requirement is to have a very fast key/value mechanism which can query about 500,000 entries per second.
I tried BDB, Tokyo Cabinet, Kyoto Cabinet, and LevelDB, and they all perform very badly at databases of that size. (Their performance is not even close to their benchmarked rates at 1,000,000 entries.)
I cannot store my database in memory because of hardware limitations (32-bit software), so memcached is out of the question.
I cannot use external server software either (only a database module), and there is no need for multi-user support at all. Of course, server software could not sustain 500,000 queries per second from a single endpoint anyway, so that leaves out Redis, Tokyo Tyrant, etc.
David Segleau, here. Product Manager for Berkeley DB.
The most common problem with BDB performance is that people don't configure the cache size, leaving it at the default, which is pretty small. The second most common problem is that people write application-behavior emulators that do purely random look-ups (even though their real application is not completely random), which forces them to read data from outside the cache. The random I/O then leads them to conclusions about performance that are based on the simulated behavior rather than the actual application behavior.
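For reference, the cache is sized through the C API before the database is opened; a minimal sketch, with a purely illustrative 512 MB figure:

#include <db.h>

/* Hedged sketch: the cache must be sized before DB->open(), or BDB
   keeps its small default. The 512 MB figure is purely illustrative. */
static DB *open_with_cache(const char *file) {
    DB *dbp;
    db_create(&dbp, NULL, 0);
    dbp->set_cachesize(dbp, 0, 512 * 1024 * 1024, 1);  /* gbytes, bytes, ncache */
    dbp->open(dbp, NULL, file, NULL, DB_BTREE, DB_CREATE, 0664);
    return dbp;
}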
From your description, I'm not sure if you're running into these common problems or into something else entirely. In any case, our experience is that Berkeley DB tends to perform and scale very well. We'd be happy to help you identify any bottlenecks and improve your BDB application throughput. The best place to get help in this regard is the BDB forums at: http://forums.oracle.com/forums/forum.jspa?forumID=271. When you post to the forum, it would be useful to show the critical query segments of your application code and the db_stat output showing the performance of the database environment.
It's likely that you will want to use BDB HA/Replication in order to load-balance the queries across multiple servers. 500K queries/second is probably going to require a larger multi-core server or a series of smaller replicated servers. We've frequently seen BDB applications do 100-200K queries/second on commodity hardware, but 500K queries per second on 300M records in a 32-bit application is likely going to require some careful tuning. I'd suggest focusing on optimizing the performance of the queries on a BDB application running on a single node, and then using HA to distribute that load across multiple systems in order to scale your queries/second throughput.
I hope that helps.
Good luck with your application.
Regards,
Dave
I found a good benchmark comparison web page that basically compares 5 renowned databases:
LevelDB
Kyoto TreeDB
SQLite3
MDB
BerkeleyDB
You should check it out before making your choice: http://symas.com/mdb/microbench/.
P.S. - I know you've already tested them, but you should also consider that your configuration in each of your tests may not have been optimized - the benchmark results suggest these engines can do much better.
Try ZooLib.
It provides a database with a C++ API that was originally written as a high-performance multimedia database for an educational product called Knowledge Forum. It could handle 3,000 simultaneous Mac and Windows clients (also written in ZooLib - it's a cross-platform application framework), all of them streaming audio and video and working with graphically rich documents created by the teachers and students.
It has two low-level APIs for actually writing your bytes to disk. One is very fast but is not fault-tolerant. The other is fault-tolerant but not as fast.
I'm one of ZooLib's developers, but I don't have much experience with ZooLib's database component. There is also no documentation - you'd have to read the source to figure out how it works. That's my own damn fault, as I took on the job of writing ZooLib's manual over ten years ago, but barely started it.
ZooLib's primary developer Andy Green is a great guy and always happy to answer questions. What I suggest you do is subscribe to ZooLib's developer list at SourceForge, then ask on the list how to use the database. Most likely Andy will answer you himself, but maybe one of our other developers will.
ZooLib is Open Source under the MIT License, and is really high-quality, mature code. It has been under continuous development since 1990 or so, and was placed in Open Source in 2000.
Don't be concerned that we haven't released a tarball since 2003. We probably should, as this leads lots of potential users to think it's been abandoned, but it is very actively used and maintained. Just get the source from Subversion.
Andy is a self-employed consultant. If you don't have time but you do have a budget, he would do a very good job of writing custom, maintainable top-quality C++ code to suit your needs.
I would too, if it were any part of ZooLib other than the database, which as I said I am unfamiliar with. I've done a lot of my own consulting work with ZooLib's UI framework.
300M * 8 bytes = 2.4 GB. That will probably fit into memory (if the OS does not restrict the address space to 31 bits).
Since you'll also need to handle overflow (either by a rehashing scheme or by chaining), memory gets even tighter. For linear probing you probably need > 400M slots; chaining will increase the size of each item to 12 bytes (bit fiddling might gain you a few bits back). That would increase the total footprint to circa 3.6 GB.
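To make the arithmetic concrete, here is a tiny C sketch of a 12-byte chained item (the packed layout and the 32-bit next index are just one way to hit 12 bytes):

#include <stdint.h>
#include <stdio.h>

/* 8 bytes of key/value payload plus a 4-byte chain link. Using an array
   index instead of a pointer keeps the link at 4 bytes on any platform. */
#pragma pack(push, 1)
struct kv_item {
    uint64_t payload;   /* packed key + value */
    uint32_t next;      /* index of next item in the bucket chain */
};
#pragma pack(pop)

int main(void) {
    const double n = 300e6;
    printf("item size: %zu bytes\n", sizeof(struct kv_item));          /* 12 */
    printf("items alone: %.1f GB\n", n * sizeof(struct kv_item) / 1e9); /* ~3.6 */
    return 0;
}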
In any case you will need a specially crafted kernel that restricts its own "reserved" address space to a few hundred MB. Not impossible, but a major operation. Escaping to a disk-based scheme would be too slow, in all cases. (PAE could save you, but it is tricky.)
IMHO your best choice would be to migrate to a 64-bit platform.
500,000 entries per second without holding the working set in memory? Wow.
In the general case this is not possible using HDDs, and it is difficult even with SSDs.
Have you any locality properties that might help to make the task a bit easier? What kind of queries do you have?
We use Redis. Written in C, it's only slightly more complicated than memcached by design. We've never tried that many rows, but for us latency is very important; it handles those latencies well and lets us store the data on disk.
Here is a benchmark blog entry comparing Redis and memcached.
Berkeley DB could do it for you.
I achieved 50,000 inserts per second about 8 years ago, and a final database of 70 billion records.

Looking for a local database for D2009+ [closed]

I'm trying to update a legacy app that does all its data storage in a hacked-together system of BDE Paradox files. The program works pretty well, under certain narrow conditions, but it has serious performance issues.
I'd like to try and improve things by updating to a better database system. What I need is a local database, preferably one where I can store the whole thing in one file instead of the current "one or more files per table" system. It has to support foreign-key relationships and table indexing, and it has to be able to return a result quickly from a query of a table with hundreds of thousands of elements.
This last one is important. The current system is indexed, but that doesn't seem to matter much. All the queries seem to run in O(N) time where N is the total size of the table, and it gets horrifically slow when the tables start to get large. I'm not really sure why, but that has to go away.
And it has to work under D2009 and later. Can anyone provide some recommendations?
Another vote here for embedded Firebird (and Firebird in general)!
I've just had an awesome experience porting an Interbase 6.0 app to embedded Firebird 1.5; after a short while reading the docs, the actual conversion took literally 20 minutes and now my app runs happily in Vista and Windows 7. If you don't need multi-user support then I'd seriously look at embedded Firebird (and if you do need multi-user support then why not look at regular Firebird anyway).
It's a single file for the db and a couple of small DLLs for the engine, and it's easy to deploy, maintain and backup. There are any number of tools to help during development and the technical support in the Delphi community for IB and Firebird is second-to-none.
The SQL support is excellent with constraints, triggers and stored procedures (we also have UDFs to help augment the language - DLLs which can be written in Delphi and used as in-line functions etc in your database. Very fast, very flexible).
Your final point about performance - well Interbase was always pretty snappy anyway, and my experience with embedded Firebird thus far is that it 'screams' - really, really impressed.
I've used this SQLite wrapper with good success under D2009. I had it up and running in a matter of minutes. It has indexing and very low overhead. (This one is free, and you don't need anything else besides the SQLite DLL.)
There is also a commercial SQLite wrapper from Delphi Inspiration and the site says that they have a free for non-commercial and educational use license as well. I haven't used that one.
I've also used the Firebird embedded, but you then also need to have connectivity components to talk to it. I have IBObjects and that's what I use for both the server and embedded versions. I have tried other free Firebird database components but haven't really found any that I like or that I felt confident in.
[EDIT]
Since the majority of people are suggesting Firebird, here are some connectivity components for Firebird that I've tried in the past or that I've heard of:
Mercury Database Objects - Free/Opensource
IBObjects - Commercial (I've bought this one myself)
FIBPlus - Commercial
Firebird's ODBC Driver - Free/Opensource
ZeosLib - Free/Opensource
There's some good information available in this question - SQLite3 and Firebird Embedded seem to be good options.
Concurrency?
I used SQLite in one (non-Delphi) project and was very happy with it.
Otherwise, I think the embedded single-file DBMS of choice for Delphi seems to be Firebird.
Try Advantage Database, offered by Sybase (purchased from Extended Systems)
http://marketing.ianywhere.com/forms/ADS91-30-Day
It's free if you don't need client/server or internet functionality.
The downside is that it's not 100% VCL: the included VCL components statically link to DLLs.
If the app ever needs to scale, you won't have to change databases again.
I would recommend PostgreSQL as the database; we use it in all the projects we work on and have tested it with over 4 million records in one table, and it worked pretty well.
Another option would be to use ADO and a Microsoft Access database. The only disadvantage is that the user has to have the Jet engine and MDAC installed... which most machines do. The advantage is that it makes upsizing to MSSQL easy: just change the connection string to point to the SQL Server database and make a few minor query changes.
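For example, the upsizing switch might be little more than changing (the paths and server names below are placeholders):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\app.mdb

to

Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=appdb;Integrated Security=SSPI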
I've used NexusDB for years and it's a small, reliable, flexible database. It's written in Delphi, comes with full source and can be compiled completely into your application (no DLLs to distribute) or run as a client server system.
It's hard to know whether it will meet your performance requirements but I haven't had a problem with my SQL query performance provided I was indexing the right fields. It's a one file per table product but don't let that stop you taking a look.
It's a commercial product but they offer a DCU only version that can only be used in single user/embedded applications for free.
I'm working at finishing up a conversion of a large application that has used BDE/Paradox for a local database and Oracle 8i for a remote db.
I'm using UniDAC from DevArt. That allows me a single component set (completely free from the old BDE) that can hit MSSQLServer as a local db and continue to hit Oracle as my remote. I have the prospect of being able to switch databases at either end much more easily now, just by changing providers.
I like this approach, and the components seem to be quite well done.
Jay
(D2007)
PostgreSQL is very good, but it is heavy machinery - it is closer to Oracle, so you can build very heavy apps on it, but it's a bit of a pain to maintain.
Firebird is fantastic, embedded or not.
For connectivity in D2009 you can use FIBPlus from devrace.com; they have a trial version which just shows a nag screen, so if it's not a commercial app that is OK.
Otherwise, if it is a commercial app, you can spend the $300 and buy it. I have also used the DevArt components for InterBase/Firebird and they are very good too.
If you want free, use ZeosLib, but you get what you pay for: http://sourceforge.net/projects/zeoslib/
SQLite is not a single file, and if it is multi-user it sucks.

Which database if learning from scratch in 2010? [closed]

If someone knew little about databases and wanted to learn about them from scratch, which database would you recommend learning with and why?
MySQL seems ubiquitous, but are there others that are more modern that have learned lessons from the past, or others that are simply nicer or more logical to work with?
Universal compatibility/libraries is not a big concern, unless it is something truly obscure. Mac (Unix) compatibility is a must.
If you just want to learn the SQL language, and not database administration, I would recommend working with SQLite. If you're on a Mac, it should already be installed. It is a much simpler system than most RDBMSes: there is no server to set up and no client to connect to a server. There are no directories of cryptic files, or anything of the sort. To get started, you can just type:
sqlite3 mydatabase.db
And start working with it. It's so much lighter weight and easier to set up and use than the other database systems that I think it's a good choice for a beginner.
Now, SQLite implements a fairly small and lightweight dialect of SQL. If you need to get into any kind of really complex queries and data mining, I would recommend PostgreSQL. It has a fairly advanced query optimizer and a pretty long list of SQL features.
And if you want to learn a database as something to use for back-end storage for web programming or something of the sort, MySQL is what I'd choose. It's ubiquitous, supported by almost any web host, and it's pretty fast for very simple queries and updates, which is generally what you need for a web system. It has some real gotchas to avoid when setting it up; you have to choose between several different storage engines, and it can take a lot of work to convince it to actually work with Unicode data. But it's good to learn mostly for its ubiquity.
From what I've seen (at least on the web), MySQL and PostgreSQL are the most ubiquitous free database systems. If you're considering learning one of them, check this comparison out.
You may also want to consider learning SQLite, a "self-contained, embeddable, zero-configuration SQL database engine." It's really easy to get up and going, is stored in a single file, and, as its description says, has no complicated configuration. SQLite has proved enormously popular as a persistent data store for local apps on the desktop/iPhone. If you're going down this route on a Mac/iPhone, you may also want to check out Core Data, an abstraction layer Apple developed on top of SQLite (though it can work with pretty much any DB) to simplify working with a database. As a bonus, Core Data includes a nice GUI for modeling relationships and entities. You can check out this tutorial for more information.
If you really, truly want to "learn from scratch", then theory is the first thing to learn. And that means: NOT products, not any of them. Not DB2, not MySQL, not Oracle, not any of them.
Hugh Darwen has a freely available e-book entitled "An Introduction to Relational Database Theory". The material is quite "accessible" and quite unlike most other theory textbooks. It's also the accompanying textbook for his university course on database technology.
Chris Date has several books, of which "An Introduction to Database Systems" is the most comprehensive; it is also the standard textbook in the field, but maybe a little too abstract for some.
If you think that all you need is "just to know a product" and that you can do equally well "without all that theory", then please disregard this response, because the wording of your question is dishonest.
The sad thing about databases is that each and every one works a bit differently. I would most likely pick MySQL first and play with it a bit, then get PostgreSQL and do the same.
If you need to use databases in a corporate environment, then I would aim to test Oracle and SQL Server as well; both have express versions that can be installed for free.
http://www.microsoft.com/express/sql/download/
http://www.oracle.com/technology/software/products/database/index.html
At the start all databases are more or less confusing, but I would pick MySQL first because it can do most of the basic functionality and has a lot of help available.
I'd go with MySQL. It's easy to set up, easy to mess around with via the mysql client, and it's well documented. If you're just starting out, you probably won't need most of the features offered by other databases, like stored procedures and the like.
First of all, MySQL is both ubiquitous and modern.
ANSI SQL is more or less the same in all RDBMSs, so you can learn any of them and you'll be good.
Once you've mastered ANSI SQL, all you've got left are the localized extensions of each product, which won't be portable to other systems - and are therefore totally discouraged, unless they simplify your tasks in a way that justifies it.
MySQL, PostgreSQL, SQLite - pick one. PostgreSQL is more like Oracle, and in my opinion a bit more mature. It's had stored procedures, triggers, and referential integrity longer than MySQL has. I'll admit that I have both installed, but I use MySQL more often because it's quick and easy.
But do be aware that non-SQL alternatives are out there and growing in importance. BigTable, and object databases like db4o, are worth knowing about. "No SQL" is out there.
If you are just getting your feet wet, MySQL is a great one to start with. Easy install on any platform, great community support and lots of free tools to work with (SQLyog is a favorite of mine).
I agree that theory is very important. Depending on how you learn best, digging in and tinkering may be the thing to do before you try to absorb 40 years of thought on relational systems.
Codd and Date are legends in the field and can help you understand the broader points of relational theory, but are hard to absorb before you have context for the topic.
If you are looking for more pragmatic/immediate guidance, I'd suggest a book like "Databases for Mere Mortals" and anything written by Joe Celko.
Once you get comfortable with the basics, there are lots of other platforms to explore as well. As mentioned above, SQLite and Postgres are two other great choices for Mac OS.
If you want to learn SQL: the best way is to choose a database that implements more of the SQL standard. So I would recommend Firebird or PostgreSQL.
I might be "sidetracking" a bit with this answer, but I think we're in the same situation!
Check out "The Manga Guide to Databases"! I haven't read it myself yet, but it's on the way in the mail as we speak! I've heard good things about it from friends and colleagues, and it's got some good reviews as well. Albeit a bit "controversial", it's supposed to be a fun and surprisingly in-depth introduction to fundamental techniques and principles!
Alex wrote: "Reading a textbook without incrementally testing your knowledge on an actual database is not going to produce good results for the majority of people."
My book and my university course both use Dave Voorhis's Rel for that very purpose.
Hugh

Can you recommend a database that scales horizontally? [closed]

Generally the database server is the biggest, most expensive box we have to buy as scaling vertically is the only option. Are there any databases that scale well horizontally (i.e. across multiple commodity machines) and what are the limitations in this approach?
Oracle RAC is not horizontally scalable at all, because all Oracle instances share the same data storage. Yes, with SAN gear you can get a very large DB, but it just isn't scalable. In other words, Oracle RAC is still a scale-up approach. So for scaling out, or horizontal scaling, you have to partition your data by function - that means putting different groups of tables in different databases - or partition your data per table - that means splitting one table into multiple sub-tables with the same schema but stored in different databases. That way you get a scale-out solution. There are many resources on this; sharding has been a buzzword for a while in the web 2.0 architecture blogosphere.
Because sharding is not directly supported by the database itself, you have to build your own solution, but as I said, there are many lessons learned already. For Oracle, partitioning tables is possible. For MySQL, check this question.
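The shard-routing code in the application can be tiny; here is a hedged sketch in C (the FNV-1a hash and the shard count are arbitrary choices - the hard parts of sharding are resharding and cross-shard queries, not this function):

#include <stdint.h>
#include <stdio.h>

#define N_SHARDS 4

/* FNV-1a; any stable hash works, as long as every application
   server agrees on it. */
static uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

/* All reads and writes for a given user land on the same shard,
   so single-user queries never cross databases. */
static unsigned shard_for_user(const char *user_id) {
    return fnv1a(user_id) % N_SHARDS;
}

int main(void) {
    printf("alice -> shard %u\n", shard_for_user("alice"));
    printf("bob   -> shard %u\n", shard_for_user("bob"));
    return 0;
}

Note that naively changing N_SHARDS remaps almost every key, which is why real deployments tend to use consistent hashing or a directory table instead of a bare modulo.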
Oracle RAC -- Real Application Cluster
This works nicely: you just add boxes to your cluster, and you can fail over from one box to another. It's not replication; all the boxes are part of the same logical unit.
It's pretty spendy, of course.
Don't worry, good solutions are coming!
CouchDB and Hypertable are open source and still in alpha, but they are clearly designed to make scaling on commodity hardware simple. They work pretty well, and may change how you think about databases.
Also, if it's okay to let someone else do the distributing for you, Google AppEngine and Amazon SimpleDB are extremely cheap distributed database services, though they're both in beta right now so strict limitations are imposed.
There are storage techniques such as JavaSpaces (or a commercial implementation such as Gigaspaces) which provide highly scalable, fast & secure access to objects.
There are also distributed caching systems such as memcached, which offer a similar approach.
Of course, neither of these are true databases, but they are things that can work in conjunction with databases to offer a large amount of horizontal scalability, given a suitable architecture. The real problem is that if you want all of the ACID goodness that comes with a database, there are certain unavoidable performance penalties. The only way out is to figure out the bits where you don't need ACID, and use other technologies to service those bits.
Oracle RAC is the Rolls-Royce of databases, allowing extra hardware nodes to be added relatively easily, with hardware failover.
However, your commodity hardware costs will be dwarfed by the licence costs.
Why do you feel you need horizontal scaling? A multi-CPU-core server with 40 GB RAM and SAN storage can support a very sizeable DB installation.
Can you provide any sizing and expected activity information to allow a better understanding of your problem?
If you do go down the RAC route, it's worth remembering that it doesn't scale horizontally forever. Even the sales guys admit 90% of RAC customers are on 4 nodes or fewer. Once you go beyond that, you get diminishing returns. So RAC may work for you, but it's not guaranteed to be the answer.
MySQL: http://www.mysql.com/why-mysql/scaleout.html
The limitation is that it works best with read-mostly workloads. You typically have one 'master' that receives all the writes and many 'slaves' that replicate them; you then distribute the reads over all the databases.
MySQL replication is asynchronous, so you will probably have to deal with time-lag problems (you write to the master, then read from a slave before the write has been replicated).
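In application code the split can be as simple as picking a connection per statement; a hedged sketch against libmysqlclient (host names, credentials and the table are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <mysql.h>   /* libmysqlclient */

static MYSQL *master;
static MYSQL *slaves[2];

static MYSQL *connect_to(const char *host) {
    MYSQL *c = mysql_init(NULL);
    if (!mysql_real_connect(c, host, "app", "secret", "mydb", 0, NULL, 0)) {
        fprintf(stderr, "connect %s: %s\n", host, mysql_error(c));
        exit(1);
    }
    return c;
}

/* Writes must hit the master; reads may hit any slave, accepting
   that replication lag can return slightly stale rows. */
static MYSQL *conn_for(int is_write) {
    return is_write ? master : slaves[rand() % 2];
}

int main(void) {
    master    = connect_to("db-master.example");
    slaves[0] = connect_to("db-slave1.example");
    slaves[1] = connect_to("db-slave2.example");

    mysql_query(conn_for(1), "UPDATE counters SET n = n + 1 WHERE id = 1");
    mysql_query(conn_for(0), "SELECT n FROM counters WHERE id = 1");
    return 0;
}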
Netezza and other data warehouse appliances scale this way, but they are not good for OLTP and web app workloads.
The Oracle route for scaling across multiple machines is called Real Application Clusters (Oracle RAC). There's no end of documentation on this elsewhere; you might try starting at http://www.oracle.com/database/rac_home.html.
MongoDB is one of the best databases that scale horizontally.
Oracle Real Application Clusters. If you want the best then take a look at it.
If you seriously think you will outscale a decent multicore "big iron" box, then think about partitioning your data. This is a good, database-agnostic way to scale out.
All databases that scale horizontally come at a serious cost.
Unless you have mega $$$ to throw at the problem, forget about RAC. While it's very good, it's VERY expensive once you scale beyond 2 nodes.
You might look at DashDB for OLAP -- IBM pairs it with Cloudant for OLTP.
https://www.ibm.com/developerworks/community/blogs/5things/entry/5_things_to_know_about_dashdb_placeholder?lang=en

To what extent should a developer learn specifics about database systems? [closed]

Modern database systems come with loads of features, and you would agree with me that to learn one database you must unlearn concepts you learned in another. For example, each database implements locking differently from the others, so carrying the concepts of one database over to another is a recipe for failure. And there are other examples where two databases perform very, very differently.
So, when developing database-driven systems, should programmers know the database in detail so that they can code for performance? I don't think it's appropriate to call the DBA in for performance later, as his job is only to maintain the database and help out the developer in an emergency, not on a regular basis.
What do you think should be the extent the developer needs to gain an insight into the database?
I think these are the most important things (from most important to least, IMO):
SQL (obviously) - it helps to know how to at least do basic queries, aggregates (sum(), etc.) and inner joins
Normalization - DB design skills are a major requirement
Locking model/MVCC - it's nice to have at least a basic grasp of how your database manages row locking (or uses MVCC to accomplish similar goals with optimistic locking)
ACID compliance, transactions - please know how these work and interact
Indexing - while I don't think you need to be an expert in tablespaces, placing data on separate drives for optimal performance, and other minutiae, it does help to have a high-level knowledge of how index scans work vs. table scans. It also helps to be able to read a query plan and understand why the database might choose one over the other (see the sketch after this list)
Basic Tools - You'll probably find yourself wanting to copy production data to a test environment at some point, so knowing the basics of how to restore/backup your database will be important.
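To make the indexing point concrete, here is a small hedged sketch (using SQLite's C API only because it runs anywhere; the table and column names are made up) that prints the query plan before and after an index is added:

#include <stdio.h>
#include <sqlite3.h>

/* Print SQLite's plan for a query; the 'detail' text is column 3
   of EXPLAIN QUERY PLAN output. */
static void show_plan(sqlite3 *db, const char *sql) {
    sqlite3_stmt *st;
    char buf[256];
    snprintf(buf, sizeof buf, "EXPLAIN QUERY PLAN %s", sql);
    sqlite3_prepare_v2(db, buf, -1, &st, NULL);
    while (sqlite3_step(st) == SQLITE_ROW)
        printf("  %s\n", (const char *)sqlite3_column_text(st, 3));
    sqlite3_finalize(st);
}

int main(void) {
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(a INT, b TEXT)", NULL, NULL, NULL);

    puts("no index:");    /* expect a full table scan: SCAN t */
    show_plan(db, "SELECT b FROM t WHERE a = 1");

    sqlite3_exec(db, "CREATE INDEX t_a ON t(a)", NULL, NULL, NULL);
    puts("with index:");  /* expect: SEARCH t USING INDEX t_a (a=?) */
    show_plan(db, "SELECT b FROM t WHERE a = 1");

    sqlite3_close(db);
    return 0;
}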
Fortunately, there are some great FOSS and free commercial databases out there today that can be used to learn quite a bit about db fundamentals.
I think a developer should have a fairly good grasp of how their database system works, no matter which one it is. When making design and architecture decisions, they need to understand the possible implications for the database.
Personally, I think you should know how databases work, as well as the relational model and the rationale behind it, including all forms of normalization (even though I rarely see a need to go beyond third normal form). The core concepts of the relational model do not change from relational database to relational database - implementations may, but so what?
Developers that don't understand the rationale behind database normalization, indexes, etc. are going to suffer if they ever work on a non-trivial project.
I think it really depends on your job. If you are a developer in a large company with dedicated DBAs then maybe you don't need to know much, but if you are in a small company then it may be really helpful knowing more about databases. In small companies you may wear more than one hat.
It cannot hurt to know more in any situation.
It certainly can't hurt to be familiar with relational database theory, and have a good working knowledge of the standard SQL syntax, as well as knowing what stored procedures, triggers, views, and indexes are. Obviously it's not terribly important to learn the database-specific extensions to SQL (T-SQL, PL/SQL, etc) until you start working with that database.
I think it's important to have a basic understanding of databases when developing database applications, just like it's important to have an understanding of the hardware your software runs on. You don't have to be an expert, but you shouldn't be totally ignorant of anything your software interacts with.
That said, you probably shouldn't need to write much SQL as an application developer. Most of the interaction with the database should be done through stored procedures developed by the DBA; I'm not a big fan of including SQL code in application code. If your queries are in stored procedures, the DBA can change the implementation of the stored procedure, or even the database schema, and as long as the result is the same, it doesn't require any changes to your application code.
If you are uncertain about how to best access the database, you should use tried and tested solutions like the application blocks from Microsoft - http://msdn.microsoft.com/en-us/library/cc309504.aspx. It can also be instructive to examine how that code is implemented.
Basic knowledge of SQL queries is a must; with that you can develop a simple system. But when you are going to implement complex systems, you should know about normalization, procedures, functions, etc.
