Cheat sheet giving relative 'time costs' of various database operations?

This might be considered an odd question, but I'm looking for a "cheat sheet" that describes (for example) how "expensive" various database operations are.
I'm looking for a theoretical understanding (perhaps backed by statistical data). For example, I can guess that a SELECT with an ORDER BY clause is slower than a plain SELECT.
I've also heard that JOINs are the most "expensive" operations, but I'm not sure of that, so it would be good to see some evidence.

Not sure I've ever seen such an item. It's quite a complex field, and a locally cheap operation might not be globally optimal if it precludes a cheaper strategy from being used across the whole query.
For example, table scans are supposed to be expensive, but are often the cheapest operation on a large table if more than a few percent of records need to be selected. This is because a table scan is largely composed of sequential I/O, which can cover a large volume of streamed data relatively efficiently in comparison to a large number of random I/Os that (for example) a bookmarking operation might use.
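A rough illustration of the sequential-versus-random trade-off (a sketch, not a rigorous benchmark: the record size and count are arbitrary, and on a small, fully cached file the gap is far smaller than on real disks):

```python
# Compare a "table scan" (sequential reads) with "bookmark lookups"
# (random seeks) over the same file. Sizes here are arbitrary assumptions.
import os
import random
import tempfile
import time

RECORD_SIZE = 512
NUM_RECORDS = 20_000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(RECORD_SIZE * NUM_RECORDS))
    path = f.name

# "Table scan": read every record in file order
start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(RECORD_SIZE):
        pass
sequential = time.perf_counter() - start

# "Bookmark lookups": read the same records in random order, one seek each
offsets = [i * RECORD_SIZE for i in range(NUM_RECORDS)]
random.shuffle(offsets)
start = time.perf_counter()
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(RECORD_SIZE)
random_io = time.perf_counter() - start

print(f"sequential: {sequential:.4f}s  random: {random_io:.4f}s")
os.unlink(path)
```

On spinning disks the random pass can be orders of magnitude slower; a query planner weighs exactly this kind of trade-off when choosing between a scan and an index.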
A query optimisation tutorial is really too big to fit into a Stack Overflow post (at least in the time I have available), but there are many resources on the web, and quite a few print books give it a good treatment. It is also somewhat platform-dependent, although the principles often apply across platforms.
SQL Server is a good system to experiment with as the front-end tool shipped with it has a query plan visualisation feature, even on the free 'express' versions.

Related

NoSQL or RDBMS for audit data

I know that similar questions have been asked on this subject, but I still haven't seen one that completely covers all my requirements.
I'll start by saying that I only have experience with RDBMSs, so I'm sorry if I get anything about NoSQL wrong.
I'm creating a database that would hold a large amount of audit logs (about 1TB).
I'm using it for:
Fast data writing (a massive amount of audit logs is written all the time)
Search - search over the audit data (search actions performed by a certain user, at a certain time or a certain action... the database should support searching any of the 'columns' very quickly)
Analytics & Reporting - Generate daily, weekly, and monthly reports of the data (they are predefined at the moment; if they become more dynamic, does that affect the solution I should choose?)
Reliability (support for fail-over or any similar feature), Scalability (if I grow beyond 1 TB to 2 TB, 10 TB or 100 TB - are any of the solutions unable to support this amount of data?) and of course Performance (in the use cases I specified) are very important to me.
I know RDBMS and that would be my easy way of starting, but I'm really concerned that after a while, the DB would simply not keep up with the pace.
My question is should I pick an RDBMS or NoSQL solution and why? If a NoSQL solution, since they are so different, which of them do you think fits my needs?
Generally there isn't a right or wrong answer here.
Fast data writing: either solution will be OK, although you didn't say what volume per second you are storing. Both solutions have things to watch out for.
Search (very quick) over all columns: for smaller volumes, say a few hundred GB, either solution will be OK (assuming skilled people put it together). You didn't actually say how fast or how often you search, so if it is many times per minute this consideration becomes more important. Fast search often slows down the ability to write high volumes quickly, since the indexes required for search must be updated on every write.
Audit records typically have a time component, so a time-constrained search, e.g. within the last 7 days, will significantly speed up search times compared to searching all records.
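A minimal sketch of that time-bounded pattern, with SQLite standing in for whichever store gets chosen (the table and column names are invented): an index on the timestamp lets a "last 7 days" query search a slice of the data rather than scan every record.

```python
# Index the timestamp column, then query only a recent time window.
import sqlite3
import time

DAY = 86_400
now = int(time.time())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (user_id TEXT, action TEXT, ts INTEGER)")
conn.execute("CREATE INDEX idx_audit_ts ON audit_log (ts)")

# 90 days of fake events, one per hour
conn.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?)",
    [(f"user{i % 5}", "login", now - i * 3600) for i in range(90 * 24)],
)

# The range predicate on ts can be answered from the index
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM audit_log WHERE ts >= ?",
    (now - 7 * DAY,),
).fetchall()
print(plan)  # the plan mentions idx_audit_ts rather than a full scan

recent = conn.execute(
    "SELECT COUNT(*) FROM audit_log WHERE ts >= ?", (now - 7 * DAY,)
).fetchone()[0]
print(recent)  # 169 hourly events (i = 0..168) fall inside the window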
Reporting: when you get up to 100 TB, you are going to need some real tricks, or a big budget, to get fast reporting. For static reporting, you will probably end up creating one program that generates multiple reports at once to save I/O. Dynamic reports will be the tricky ones.
My opinion? Since you know RDBMSs, I would start with one and ship the solution. This buys you time to learn the real problems you will encounter (the "no premature optimization" that many on SO are keen on). During this initial timeframe you can start to evaluate NoSQL solutions and learn them. I am assuming here that you want to run your own hardware and database; if you want to use cloud-type solutions, go to them straight away.

Best Database for remote sensor data logging

I need to choose a database for storing data remotely from a large number (thousands to tens of thousands) of sensors that would each generate around one entry per minute.
The data needs to be queried in a variety of ways, from counting data with certain characteristics for statistics to simple output for plotting.
I am looking around for the right tool. I started with MySQL, but I feel it lacks the scalability needed for this project, and this led me to NoSQL databases, which I don't know much about.
Which Database, either relational or not would be a good choice?
Thanks.
There is usually no "best" database since they all involve trade-offs of one kind or another. Your question is also very vague because you don't say anything about your performance needs other than the number of inserts per minute (how much data per insert?) and that you need "scalability".
It also looks like a case of premature optimization because you say you "feel like [MySQL] lacks the scalability needed for this project", but it doesn't sound like you've run any tests to confirm whether this is a real problem. It's always better to get real data rather than base an important architectural decision on "feelings".
Here's a suggestion:
Write a simple test program that inserts 10,000 rows of sample data per minute
Run the program for a decent length of time (a few days or more) to generate a sizable chunk of test data
Run your queries to see if they meet your performance needs (which you haven't specified -- how fast do they need to be? how often will they run? how complex are they?)
You're testing at least two things here: whether your database can handle 10,000 inserts per minute and whether your queries will run quickly enough once you have a huge amount of data. With large datasets these will become competing priorities since you need indexes for fast queries, but indexes will start to slow down your inserts over time. At some point you'll need to think about data archival as well (or purging, if historical data isn't needed) both for performance and for practical reasons (finite storage space).
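The insert side of that test might be sketched like this (a hedged sketch: SQLite stands in for whichever engine you actually evaluate, and the schema is invented):

```python
# Insert one simulated minute's batch of sensor rows and time it.
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id INTEGER, value REAL, ts INTEGER)")

def insert_batch(n=10_000):
    """Insert one minute's worth of sample rows; return elapsed seconds."""
    now = int(time.time())
    rows = [(random.randrange(10_000), random.random(), now) for _ in range(n)]
    start = time.perf_counter()
    with conn:  # one transaction per batch -- per-row commits would be far slower
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    return time.perf_counter() - start

elapsed = insert_batch()
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(f"{count} rows in {elapsed:.3f}s")
```

If one batch takes anywhere near a minute, the engine (or the batching strategy) fails the test before you even get to query performance.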
These will be concerns no matter what database you select. From what little you've told us about your retrieval needs ("counting data with certain characteristics" and "simple outputting for plotting") it sounds like any type of database will do. It may be that other concerns are more important, such as ease of development (what languages and tools are you using?), deployment, management, code maintainability, etc.
Since this is sensor data we're talking about, you may also want to look at a round robin database (RRD) such as RRDTool to see if that approach better serves your needs.
Found this question while googling for "database for sensor data".
One of the most helpful search results (along with this SO question) was this blog:
Actually, I started a similar project (http://reatha.de) but realized too late that I was not using the best technologies available. My approach was similar: MySQL + PHP. Eventually I realized that this was not scalable, and I stopped the project.
Additionally, a good starting point is looking at the list of databases on Heroku:
If they offer one, it is probably not the worst choice.
I hope this helps.
You can try the Redis NoSQL database.

Which DBMS is suitable for my needs?

I'm working on a project aimed at analyzing biometric data collected from various terminals. The process is not very performance-critical; rather, it's I/O-bound. The amount of data is huge (hundreds of millions of records per table). Unfortunately, the database is relational, and there are 20 foreign keys. Changing the values of referenced keys is very common during a job, so there will be lots of UPDATEs and SET NULLs while collecting data.
Currently, the semantics of the database are designed. All programs are almost complete, and a MySQL prototype of the database has been created. It works fine with sample (small-scale) data.
I searched for a suitable DBMS for the project. Googling around "DBMS comparisons" didn't help; people say contradictory things. Some say MySQL will perform faster inserts and updates; some say Oracle 9 is better.
I can't find any reliable, benchmark-based comparison between DBMSs. I use MySQL in everyday projects, but this one looks more critical.
What we need:
License and cost of the DBMS are not important, but of course open source (GPL or LGPL) is preferred (since the whole project will be published under the LGPL).
Very fast inserts, very fast updates, and support for a lot of foreign keys are needed.
The DBMS should handle 0-100 connections at a time.
Terminals are connected to server by a local network (LAN).
What I'm actually looking for is a benchmark of various DBMSs. It might contain charts and separate comparisons of different operations (insert, update, delete) in various situations (on a table with referenced fields, or a normal table).
For this sort of workload, I would recommend PostgreSQL, Informix, or Oracle. PostgreSQL is open source (a BSD-style licence, GPL-compatible, as everyone agrees). The reasons have to do with some aspects of data modelling that may be extremely helpful in your case. In general you have two important questions:
1) How far can I tune my db for what I am doing? How far can I scale it?
and
2) How can I model my data?
On the first, Oracle and PostgreSQL are more complex but more flexible. That flexibility may come in handy. On the second, the flexibility may save you a lot of effort later. Moreover it opens up new doors regarding optimization which are not possible in a straight relational model. First I would recommend looking at this: http://db.cs.berkeley.edu/papers/Informix/www.informix.com/informix/corpinfo/zines/whitpprs/illuswp/wave.htm as it will give you some background as to what I am thinking. Additionally, if you look at what Stonebraker is talking about you will see that straight benchmarks are really an apples to oranges comparison here.
The idea of going with an ORDBMS means a few important things:
You can model data that is functionally dependent on your other data. For example, you can have a function in Java or Python which manipulates your data and returns a result. You can index the output of those functions, trading insert performance for select performance where you need to.
Less data being stored means faster inserts.
An ability to extend your data with custom types and functions, providing higher performance access to your data.
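The "index the output of a function" idea can be sketched with SQLite's expression indexes (PostgreSQL's functional indexes are the fuller version of this; the table and expression here are invented examples):

```python
# Build an index over lower(name) so case-insensitive lookups avoid a scan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)", [("Alice",), ("BOB",), ("carol",)])

# The index materialises lower(name): a little extra work per insert
# buys index-assisted case-insensitive lookups later.
conn.execute("CREATE INDEX idx_people_lower ON people (lower(name))")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM people WHERE lower(name) = 'bob'"
).fetchall()
print(plan)  # the plan mentions idx_people_lower

row = conn.execute("SELECT name FROM people WHERE lower(name) = 'bob'").fetchone()
print(row)  # ('BOB',)
```

This is exactly the insert-for-select trade described above: every write now also maintains the computed index entry.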
PostgreSQL 9.2 will support up to approximately 14,000 writes per second on sufficient hardware, which is nothing to sneeze at. Of course this depends on the width of the write, hardware performance on the server, etc. PostgreSQL is used by Afilias to manage the .org and .info top-level domains (web-scale!) and also in Skype's infrastructure (still, even after Microsoft bought them).
Finally as a part of your information pipeline, if you are processing huge amounts of data and need to do some preprocessing before sending to PostgreSQL, you might look at array-native db's (for a NoSQL approach common in scientific work) or VoltDB (for an in-memory store for high-throughput processing). Despite the fact that they are extremely different systems, VoltDB and Postgres were actually started by the same individual.
Finally regarding benchmark charts, the major db vendors more or less ban publication of such in their license agreements so you won't find them.

Database Joins Done On The Webserver

Today I found an article online discussing Facebook's architecture (though it's a bit dated). While reading it, I noticed that under the section "Software that helps Facebook scale", the third bullet point states:
Facebook uses MySQL, but primarily as a key-value persistent storage,
moving joins and logic onto the web servers since optimizations are
easier to perform there (on the “other side” of the Memcached layer).
Why move complex joins to the web server? Aren't databases optimized to perform join logic? This methodology seems contrary to what I've learned up to this point, so maybe the explanation is just eluding me.
If possible, could someone explain this (an example would help tremendously) or point me to a good article (or two) for the benefits (and possibly examples) of how and why you'd want to do this?
I'm not sure about Facebook, but we have several applications where we follow a similar model. The basis is fairly straightforward.
The database contains huge amounts of data. Performing joins at the database level really slows down any queries we make on the data, even if we're only returning a small subset. (Say 100 rows of parent data, and 1000 rows of child data in a parent-child relationship for example)
However, using .NET DataSet objects, if we select just the rows we need and then create DataRelation objects within the DataSet, we see a dramatic boost in performance.
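The shape of that pattern, sketched in Python rather than .NET (the sample rows are invented stand-ins for the two query results): two narrow fetches, then a hash-based relation built in application memory.

```python
# Fetch parents and children in two cheap queries, then relate them
# in application memory with a hash lookup instead of a server-side JOIN.
parents = [{"id": 1, "name": "Order A"}, {"id": 2, "name": "Order B"}]
children = [
    {"parent_id": 1, "item": "widget"},
    {"parent_id": 1, "item": "gadget"},
    {"parent_id": 2, "item": "sprocket"},
]

# Build the "DataRelation": group children by their foreign key
by_parent = {}
for child in children:
    by_parent.setdefault(child["parent_id"], []).append(child)

# Walk the joined structure entirely in memory
joined = {p["name"]: [c["item"] for c in by_parent.get(p["id"], [])] for p in parents}
print(joined)  # {'Order A': ['widget', 'gadget'], 'Order B': ['sprocket']}
```

The hash build and probe cost O(parents + children) in memory, which is why it pays off when the selected subset is small relative to the tables it came from.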
I can't answer why this is, as I'm not knowledgeable about the internal workings of either, but I can venture a guess...
The RDBMS (SQL Server in our case) has to deal with data that lives in files. These files are very large, and only so much of them can be loaded into memory, even on our heavy-hitter SQL Servers, so there is a disk I/O penalty.
When we load a small portion of the data into a DataSet, the join happens entirely in memory, so we lose the I/O penalty of going to disk.
Even though I can't explain the reason for the performance boost completely (and I'd love to have someone more knowledgeable tell me if my guess is right), I can tell you that in certain cases, when there is a VERY large amount of data but your app only needs to pull a small subset of it, there is a noticeable boost in performance by following the model described. We've seen it turn apps that just crawl into lightning-quick apps.
But done improperly, there is a penalty: if you overload the machine's RAM by doing this inappropriately or in every situation, you'll see crashes or performance problems as well.

Is it worth trying MonetDB?

Has anyone had any experience with MonetDB? Currently, I have a MySQL database that is growing too large, and queries are getting too slow. According to the column-oriented paradigm, insertions will be slower (which I don't mind at all), but data retrieval becomes very fast. Do I stand a chance of getting better data-retrieval performance just by switching to MonetDB? Is MonetDB mature enough?
You have a chance of improving the performance of your application. The gain is, however, largely dependent on your workload, the size of your database and your hardware. MonetDB is developed/tuned under two main assumptions:
Your workload is analytical, i.e., you have lots of (grouped) aggregations and the like.
Even more important: your hot dataset (the data that you actually work with) fits into the main memory of your system. MonetDB does not have its own buffer manager but relies on the OS to handle disk I/O. Since the OS (especially Windows, but Linux too) is sometimes very dumb about disk swapping, that may become a problem (especially for joins that run out of memory).
As for the maturity, there are probably more opinions on that than people inhabiting this planet. Personally, I find it mature enough but I am a member of the development team and, thus, biased. But MonetDB is a research project so if you have an interesting application we'd love to hear about it and see if we can help.
The answer of course depends on your workload, but my experience so far would indicate that just about everything is faster in MonetDB than I've seen in MySQL. The exception would be joins, which not only seem slow but seem completely inept at pipelining, so you end up needing gobs of memory to process large ones. That said, my experience with joins in MySQL hasn't exactly been stellar either, so I'm guessing your expectations may be low. If you really want good join performance, I'd probably recommend SQL Server or the like; for the other queries you mention in the follow-up comments, MonetDB should be awesome.
For instance, given a table with about 2 million rows, I was able to apply a range on one column (about 800K rows fell within the range), order by another column, and the limited result was processed and returned in 25 ms. Performance of those types of queries does seem to degrade with scale, but that should give you a taste of what to expect.
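That query shape (range on one column, order by another, with a limit) looks like the following, sketched against SQLite just to make the SQL concrete; the table and row counts are invented and say nothing about MonetDB's actual timings.

```python
# Range-filter on column a, sort by column b, return the top 10.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)", [(i, (i * 37) % 1000) for i in range(5000)]
)

rows = conn.execute(
    "SELECT a, b FROM t WHERE a BETWEEN ? AND ? ORDER BY b LIMIT 10",
    (1000, 3000),
).fetchall()
print(len(rows))  # 10
```

A column store only has to touch columns a and b here, which is the main reason this class of query tends to fly in MonetDB regardless of how wide the table is.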
I should caution that the optimistic concurrency model might throw off those that have only been exposed to pessimistic concurrency (most people). I'd research it before wondering why some of your commits fail under concurrent load.