Database/NoSQL - Lowest latency way to retrieve the following data

I have a real estate application and a "house" contains the following information:
house:
- house_id
- address
- city
- state
- zip
- price
- sqft
- bedrooms
- bathrooms
- geo_latitude
- geo_longitude
I need to perform an EXTREMELY fast (low latency) retrieval of all homes within a geo-coordinate box.
Something like the SQL below (if I were to use a database):
SELECT * FROM houses
WHERE geo_latitude BETWEEN xxx AND yyy
AND geo_longitude BETWEEN www AND zzz
Question: What would be the quickest way for me to store this information so that I can perform the fastest retrieval of data based on latitude & longitude? (e.g. database, NoSQL, memcache, etc)?

This is a typical query for a Geographical Information System (GIS) application. Many of these are solved by using quad-tree or similar spatial indices. The tiling mentioned elsewhere in the answers is how these often end up being implemented.
If an index containing the coordinates could fit into memory and the DBMS had a decent optimiser, then a table scan could provide a Cartesian distance from any point of interest with tolerably low overhead. If this is too slow, then the query could be pre-filtered by comparing each coordinate axis separately before doing the full distance calculation.
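A minimal sketch of that pre-filter idea, assuming the rows are already in memory: each coordinate axis is compared first (a cheap bounding-box test), and the full great-circle distance is computed only for the survivors. Field names follow the question's schema; the radius-to-degrees conversion is an approximation.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (spherical Earth approximation)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def houses_near(houses, lat, lon, radius_km):
    """Cheap per-axis pre-filter first, exact distance only for the survivors."""
    # ~1 degree of latitude is ~111 km; widen the longitude window by cos(latitude).
    dlat = radius_km / 111.0
    dlon = radius_km / (111.0 * max(math.cos(math.radians(lat)), 1e-6))
    candidates = [
        h for h in houses
        if abs(h["geo_latitude"] - lat) <= dlat
        and abs(h["geo_longitude"] - lon) <= dlon
    ]
    return [h for h in candidates
            if haversine_km(lat, lon, h["geo_latitude"], h["geo_longitude"]) <= radius_km]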

MongoDB supports geospatial indexes, but there are other ways to reduce the computation time for things like this. Depending on how your data is arranged, you can place houses in identifiable 'tiles' and then fetch all houses for a given tile and, from that reduced dataset, sort based on distance from whatever coordinates you have.
Depending on how many tiles there are, you can use bitmasks to find houses that may be near or overlap multiple tiles.
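For the built-in geospatial route mentioned above (the tile/bitmask scheme would be a hand-rolled alternative), here is a hedged pymongo sketch of the bounding-box case using a legacy 2d index; the database, collection, and field names are illustrative.

from pymongo import MongoClient, GEO2D

db = MongoClient()["realestate"]          # connection and database name are illustrative
db.houses.create_index([("loc", GEO2D)])  # 'loc' stored as [longitude, latitude]

def houses_in_box(min_lon, min_lat, max_lon, max_lat):
    # $geoWithin + $box uses the 2d index, so the scan is limited to the box.
    return list(db.houses.find({
        "loc": {"$geoWithin": {"$box": [[min_lon, min_lat], [max_lon, max_lat]]}}
    }))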

I'm going to assume that you're doing lots more reads than writes, and you don't need to have your database distributed across dozens of machines. If so, you should go for a read-optimized database like SQLite (my personal preference) or MySQL, and use exactly the SQL query you suggest.
Most (not all) NoSQL databases end up being overly complicated for queries of this sort, since they're better at looking up exact values in their indexes rather than ranges.
It's nice that you're looking for a bounding box instead of Cartesian distance; the latter would be harder for a SQL database to optimize (although you could narrow it to a bounding box, then do the slower Cartesian distance calculation).
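A minimal sqlite3 sketch of that route, with a composite index on the two coordinate columns so the planner can narrow on latitude and then filter longitude; column names follow the question's schema, and SQLite's optional R*Tree module would be the next step if this isn't fast enough.

import sqlite3

conn = sqlite3.connect("houses.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS houses (
        house_id INTEGER PRIMARY KEY,
        address TEXT, city TEXT, state TEXT, zip TEXT,
        price REAL, sqft REAL, bedrooms INTEGER, bathrooms REAL,
        geo_latitude REAL, geo_longitude REAL
    )
""")
# Composite index: narrow on latitude first, then filter longitude within it.
conn.execute("CREATE INDEX IF NOT EXISTS idx_houses_lat_lon "
             "ON houses (geo_latitude, geo_longitude)")

def houses_in_box(lat_min, lat_max, lon_min, lon_max):
    return conn.execute(
        "SELECT * FROM houses "
        "WHERE geo_latitude  BETWEEN ? AND ? "
        "  AND geo_longitude BETWEEN ? AND ?",
        (lat_min, lat_max, lon_min, lon_max),
    ).fetchall()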

Related

database design for large streaming data with minimal latency

Following is the scenario:
Customer places an order.
Order has type: Physical / Downloadable.
Order is placed from: Web / App.
Order is placed from a location: UK, AUS, etc.
Can have more dimensions in future.
Consider that all of the dimensions change frequently with every order, and the data is quite huge: approximately 1.3 million records per hour.
I want to design this in a way that reports are able to drill down by any requested dimension for each customer.
Example:
- Customer 'A' has placed how many orders of type 'Physical' from 'AUS'
- Customer 'A' has placed how many orders in all.
- Customer 'A' has placed how many orders of type 'Downloadable' from 'App'.
etc.
These reports are needed in real time, hence low-latency writes and reads are a must. Which NoSQL database would be a good fit, and how can this data be structured so that it can be sliced and diced along any required dimension, as well as combinations of more than one dimension?
If you need high performance then I would recommend ScyllaDB, which can handle over 1M ops/s per node (on good hardware). It shares its data model with Cassandra, so you can model and query your data using CQL. You can give it a free test drive with just a couple of clicks here.
Regarding modeling: a useful technique is to model around your queries. If you have a particular query, you should prepare a table that will serve that query in the most efficient way. With this technique you duplicate data, creating as many tables with the same data as you have different types of queries. Duplicating data comes with a price, so you need to trade off performance and cost depending on your needs. You can read more about it here.
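A hedged sketch of that query-first modeling, written as CQL driven from the Python driver; the contact point, keyspace, table, and column names are all invented, and the keyspace is assumed to exist.

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("orders_ks")  # endpoint/keyspace are placeholders

# The same order data is duplicated into two tables, each laid out for one query pattern.
session.execute("""
    CREATE TABLE IF NOT EXISTS orders_by_customer_type_location (
        customer_id text, order_type text, location text,
        order_id timeuuid, channel text,
        PRIMARY KEY ((customer_id, order_type, location), order_id))
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS orders_by_customer (
        customer_id text, order_id timeuuid, order_type text,
        location text, channel text,
        PRIMARY KEY ((customer_id), order_id))
""")

# "How many 'Physical' orders did customer A place from 'AUS'?"
row = session.execute(
    "SELECT count(*) FROM orders_by_customer_type_location "
    "WHERE customer_id = %s AND order_type = %s AND location = %s",
    ("A", "Physical", "AUS"),
).one()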

Choosing the right PartitionKey for a date-based worldwide DocumentDB app

I'm developing a worldwide application in which most searches are based on geospatial data (nearest records given coordinates) and date ranges.
So, basically, the main searches are similar to those of applications like Airbnb, Booking, etc.
Which Partition Key should I choose for a DocumentDB Partitioned Collection in this context?
Thank you!
UPDATE: as I told Matias (see answers), my friend and I were thinking about something like the country.
The app is all about searches. And another important thing is that we have dates. Tons of dates.
Since we are new to DocumentDB, our question is: what happens if we choose Country as the Partition Key and our queries must search across different countries? (e.g. a geo-radius search near country borders)
Like Matias mentioned, some more information will help us provide a better recommendation. I've added some ideas/options for partition key selection below:
Use a generic partition key like user ID or product ID. In this model, your geospatial queries will be executed across partitions, but since DocumentDB locally builds a spatial index within partitions, this might meet your performance needs.
Use a partitioning scheme based on the GeoHash of the location. This will ensure that data points in similar locations get placed on the same partitions. This will require some additional work in your app to add "GeoHash > abcdef and GeoHash < abcfff" clauses to narrow down query execution to a few partitions (see the sketch after these options).
Partition based on a property like country, if most of your queries fall within a single country. Rare queries that need to span countries will also perform well (though not as low latency as queries against a single partition/country), as they can use the local index within each partition. You might need to handle special cases separately. For example if US has >30-40% of your data, you might want to choose a hybrid approach where US data uses state as the partition key, and countries with lesser data use the country as the partition key. A composite key of country + day/month/year might also work depending on the data distribution.
If your queries are spread evenly across time ranges, you can consider using dates as the partition key. But for most applications, since recent data is more frequently accessed, this is not a good option.
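As a rough illustration of the GeoHash option above (a sketch, not DocumentDB SDK code): the geohash is computed client-side, a short prefix of it is stored as the partition key property, and a range clause narrows queries to one prefix. The property names (geoHash, partitionKey), the precision, and the prefix length are all assumptions.

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    """Standard geohash encoding (interleaves longitude/latitude bits)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, use_lon = [], True
    while len(bits) < precision * 5:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        use_lon = not use_lon
    return "".join(_BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
                   for i in range(0, len(bits), 5))

# Partition key: a coarse prefix so nearby points land in the same partition
# (the precision and prefix length here are guesses to tune against real data).
doc = {"id": "listing-1", "geoHash": geohash(51.5074, -0.1278)}
doc["partitionKey"] = doc["geoHash"][:4]

# Range clause narrowing a query to one prefix; '{' sorts just after 'z',
# so it caps the base32 range for that prefix.
prefix = doc["partitionKey"]
query = ("SELECT * FROM c WHERE c.geoHash >= '%s' AND c.geoHash < '%s'"
         % (prefix, prefix + "{"))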
Without knowing a bit more it's hard to say, but I'd start with the official partitioning guide: Partitioning and scaling, especially the section about designing.
The main points are probably throughput distribution (you don't want "hot spots") and transaction atomicity. Remember that when you issue a query it can span multiple partitions, and DDB will distribute throughput evenly (you can use this feature with the EnableCrossPartitionQuery option).
So what truly determines the best partition key really depends on how your data is distributed and how your queries are built.
Since the app is worldwide, maybe the best partitioning approach is to divide by country/continent/region (one of those), but it really depends on the amount of data; it should be evenly distributed to avoid having a really hot partition/zone.
Finally, you can also check the Performance and scale test example and DocumentDB performance tips to work on improving performance.
If you are using partitioning because you have a lot of data but expect queries to return one or a few records only based on the geospatial criteria alone then something like country may work as it will cut out a lot of irrelevant data immediately and indexes within the partition will allow the required documents to be found quickly. This will likely cause irregular partition sizes though - imagine if Russia and China end up in the same partition.
If, however, your queries will return a lot of documents based on the geospatial criteria and you wish to either extract all of those records or apply further filtering or other functions over them then you will want to spread that processing over as many partitions as possible. In this case you want a partition key that will spread data evenly over the partitions. If you expect queries to combine multiple document types for the same coordinates, user id or site id etc then it is best to have a key based on that value so that all related documents can be processed together within the same partition.
In practical applications I have found using an incrementing value as partition key to be the best general purpose solution as it allows queries to be processed evenly over all partitions.

Difference between SQL query aggregation and querying an OLAP cube

I have a question about the advantages of building an OLAP cube vs. aggregating data in a database table for querying, say, 6 months of data, and then archiving the SQL table later for analytics purposes.
Which one is better, a table or an OLAP cube? And why, given that I can also aggregate and keep data in my tables and query the aggregated data as and when needed?
Short version: Like many development decisions, it depends.
Long version: I wouldn't say that one is "better" than the other - it's just that the two have separate uses and one or the other might be the better solution depending on what the requirements are.
If you have a few specific reports which require specific aggregations, then it might be simpler and easier for everyone involved to just aggregate that information in a table or a view, and point your reports at that.
As an example, if you know your users only want reports at a monthly level for a particular set of parameters - maybe your sales department want the monthly value of each salesperson's sales, for example - then your best bet might be to aggregate this up and pop it into a report where they can select the month and the salesperson, and get the number that they want.
The benefits of this might be that it's quick to develop and provide to your users, there's not too much time spent testing as only a few figures need checking, etc. Your users also don't need to spend time being trained/learning to use a cube - reports are generally pretty easy for people to pick up and use.
But if your users want to be able to carry out much more open-ended analysis on their own terms then it's not much use if you need to go away and develop a report every time they have a new requirement. Your database might start getting very full of similar-but-different tables full of aggregated amounts. You could run into issues where one report ends up not agreeing with another for some reason - you might find you're dealing with the same data quality issues over and over again in each report.
In this case, it might make more sense to develop a cube over the top of data held at the lowest grain which your users want to analyse. In this way, they can essentially self-serve, rather than getting back in touch with you every time they need a new set of aggregated data. They can slice and dice through the data using multiple different "parameters" (dimensions in the OLAP world), rather than being limited by the nature of the reports.
Aggregated data still sometimes plays a role even when you have a cube in place, though. Sometimes performance gains can be found by aggregating data up to certain levels and holding it in a physical table, and getting your OLAP tool to use the physically aggregated data at that level instead of using its own aggregations - but this is an optimization step which would need careful consideration to see whether it's beneficial in terms of performance, whether the space vs. performance payoff is worthwhile, etc. I wouldn't worry about this aspect if you're just starting to look at OLAP, but wanted to note it for the sake of completeness.
To add to Jo's great answer, consider the grain of the facts that need to be aggregated and compared. If you have daily sales by product, but budgets by month and product category, you're going to need an aggregate fact table based on sales in order to compare budgets. That would be further represented as two cubes in your OLAP database - Sales cube, and Budget cube.
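To make the grain-matching in the previous answer concrete, here is a hedged sketch (generic SQL driven from sqlite3; every table and column name is invented for illustration) of rolling a daily, product-level sales fact up to the month x category grain of a budget fact so the two can be joined.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Minimal stand-ins for the two facts at different grains (names invented).
    CREATE TABLE fact_sales_daily (sale_date TEXT, product_id INTEGER, amount REAL);
    CREATE TABLE dim_product (product_id INTEGER, category TEXT);
    CREATE TABLE fact_budget_month_category (budget_month TEXT, product_category TEXT, budget_amount REAL);

    -- Roll the daily, product-grain sales up to the budget's month x category grain.
    CREATE TABLE agg_sales_month_category AS
    SELECT strftime('%Y-%m', s.sale_date) AS sales_month,
           p.category                     AS product_category,
           SUM(s.amount)                  AS sales_amount
    FROM fact_sales_daily s
    JOIN dim_product p ON p.product_id = s.product_id
    GROUP BY 1, 2;
""")

# Budget and (aggregated) sales now share a grain and can be compared directly.
rows = conn.execute("""
    SELECT a.sales_month, a.product_category, a.sales_amount, b.budget_amount
    FROM agg_sales_month_category a
    JOIN fact_budget_month_category b
      ON b.budget_month = a.sales_month AND b.product_category = a.product_category
""").fetchall()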
If there are very regular use cases which involve specific aggregated data, and this aggregated data would take a while to return from SQL database tables, then a cube might help.
If there are lots of potential ways in which your db table data needs to be sliced and diced at an aggregated level, then there is definitely a good argument to start playing around with OLAP cubes.
In terms of sums of data, OLAP is a great aggregation tool. I'm not convinced that it is the best tool for distinct counts though, so if your requirements include lots of distinct counts then maybe look elsewhere. Do you have the option of Tabular/PowerPivot/DAX?

database table design for storing different sized datasets

I am designing a Microsoft Access database to store results from lab equipment. They are in the form of hundreds of lists of frequency vs. response curves which I have previously stored rather easily, but inefficiently in Excel.
The difficulty comes from the fact that the frequency can vary from 1 to 50E9 Hz, the step size between data points can vary from 1 to 1E9 Hz, and the number of points can vary from ~100 to 40,000. This has brought up a challenge when it comes to table design because everything I try seems to be very inefficient.
I have considered using links to external text files to store the data points which solves the table design, but seems to violate good database design. I've considered using tables of arrays (i.e. Start Freq, Stop Freq, Freq Step Size, and Array of Responses), but the array sizes could vary greatly which seems just as inefficient.
Is there a recommended practice for storing this type of data? It seems like a common task when storing instrument data, but I can't seem to find anything in web searches. Any assistance will be greatly appreciated.
Looks like a classic 1:N relationship to me. "1" is the measurement session and "N" is all the measurements (i.e. data points) taken in that session. This is modeled by two tables and one foreign key between them, similar to this:
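The diagram from the original answer isn't reproduced here, so the following is a hedged sketch of the shape it describes, using generic SQL (via sqlite3) with illustrative field names; the same two-table layout can be built directly in Access.

import sqlite3

conn = sqlite3.connect("lab_results.db")
conn.executescript("""
    -- "1" side: one row per measurement session / sweep (field names illustrative).
    CREATE TABLE IF NOT EXISTS MeasurementSession (
        session_id        INTEGER PRIMARY KEY,
        instrument        TEXT,
        device_under_test TEXT,
        measured_at       TEXT
    );

    -- "N" side: one row per data point, however many points the sweep produced.
    CREATE TABLE IF NOT EXISTS MeasurementPoint (
        session_id   INTEGER NOT NULL REFERENCES MeasurementSession(session_id),
        frequency_hz REAL NOT NULL,
        response     REAL NOT NULL,
        PRIMARY KEY (session_id, frequency_hz)
    );
""")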
Tweak the fields to suit your needs, but this general design should be more than able to handle large amounts of data and varying numbers of measurements per session.
That being said, MS Access has historically had significant limitations on the size of the data that can be stored in a single database. If you hit these limits, consider using a "real" DBMS.

Storing time-series data, relational or non?

I am creating a system which polls devices for data on varying metrics such as CPU utilisation, disk utilisation, temperature etc. at (probably) 5 minute intervals using SNMP. The ultimate goal is to provide visualisations to a user of the system in the form of time-series graphs.
I have looked at using RRDTool in the past, but rejected it as storing the captured data indefinitely is important to my project, and I want higher level and more flexible access to the captured data. So my question is really:
What is better, a relational database (such as MySQL or PostgreSQL) or a non-relational or NoSQL database (such as MongoDB or Redis) with regard to performance when querying data for graphing.
Relational
Given a relational database, I would use a data_instances table, in which would be stored every instance of data captured for every metric being measured for all devices, with the following fields:
Fields: id fk_to_device fk_to_metric metric_value timestamp
When I want to draw a graph for a particular metric on a particular device, I must query this singular table filtering out the other devices, and the other metrics being analysed for this device:
SELECT metric_value, timestamp FROM data_instances
WHERE fk_to_device=1 AND fk_to_metric=2
The number of rows in this table would be:
d * m_d * f * t
where d is the number of devices, m_d is the cumulative number of metrics being recorded across all devices, f is the frequency at which data is polled, and t is the total amount of time the system has been collecting data.
For a user recording 10 metrics for 3 devices every 5 minutes for a year, we would have just under 5 million records.
Indexes
Without indexes on fk_to_device and fk_to_metric scanning this continuously expanding table would take too much time. So indexing the aforementioned fields and also timestamp (for creating graphs with localised periods) is a requirement.
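For concreteness, a minimal sqlite3 sketch of the table and indexing described above; names follow the question, the date literals are placeholders, and a single composite index is shown here as one illustration (separate single-column indexes are another option).

import sqlite3

conn = sqlite3.connect("metrics.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS data_instances (
        id           INTEGER PRIMARY KEY,
        fk_to_device INTEGER NOT NULL,
        fk_to_metric INTEGER NOT NULL,
        metric_value REAL    NOT NULL,
        timestamp    TEXT    NOT NULL
    );
    -- One composite index covers the (device, metric, time-range) access path.
    CREATE INDEX IF NOT EXISTS idx_device_metric_time
        ON data_instances (fk_to_device, fk_to_metric, timestamp);
""")

points = conn.execute(
    "SELECT metric_value, timestamp FROM data_instances "
    "WHERE fk_to_device = ? AND fk_to_metric = ? "
    "  AND timestamp BETWEEN ? AND ?",
    (1, 2, "2014-01-01", "2014-02-01"),
).fetchall()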
Non-Relational (NoSQL)
MongoDB has the concept of a collection; unlike tables, these can be created programmatically without setup. With these I could partition the storage of data for each device, or even each metric recorded for each device.
I have no experience with NoSQL and do not know whether these databases provide any query-performance-enhancing features such as indexing; however, the previous paragraph proposes doing most of the traditional relational query work in the structure by which the data is stored under NoSQL.
Undecided
Would a relational solution with correct indexing reduce to a crawl within the year? Or does the collection based structure of NoSQL approaches (which matches my mental model of the stored data) provide a noticeable benefit?
Definitely Relational. Unlimited flexibility and expansion.
Two corrections, both in concept and application, followed by an elevation.
Correction
It is not "filtering out the un-needed data"; it is selecting only the needed data. Yes, of course, if you have an Index to support the columns identified in the WHERE clause, it is very fast, and the query does not depend on the size of the table (grabbing 1,000 rows from a 16 billion row table is instantaneous).
Your table has one serious impediment. Given your description, the actual PK is (Device, Metric, DateTime). (Please don't call it TimeStamp, that means something else, but that is a minor issue.) The uniqueness of the row is identified by:
(Device, Metric, DateTime)
The Id column does nothing, it is totally and completely redundant.
An Id column is never a Key (duplicate rows, which are prohibited in a Relational database, must be prevented by other means).
The Id column requires an additional Index, which obviously impedes the speed of INSERT/DELETE, and adds to the disk space used.
You can get rid of it. Please.
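A minimal DDL sketch of the recommended shape (generic SQL via sqlite3; names are illustrative): the natural composite key is the primary key, there is no surrogate Id, and a single index serves the access path. WITHOUT ROWID is an SQLite-specific detail that clusters the rows on that key.

import sqlite3

conn = sqlite3.connect("metrics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS metric_reading (
        device    INTEGER NOT NULL,
        metric    INTEGER NOT NULL,
        date_time TEXT    NOT NULL,
        value     REAL    NOT NULL,
        PRIMARY KEY (device, metric, date_time)  -- the natural key; no surrogate Id
    ) WITHOUT ROWID
""")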
Elevation
Now that you have removed the impediment, you may not have recognised it, but your table is in Sixth Normal Form. Very high speed, with just one Index on the PK. For understanding, read this answer from the What is Sixth Normal Form ? heading onwards.
(I have one index only, not three; on the Non-SQLs you may need three indices).
I have the exact same table (without the Id "key", of course). I have an additional column Server. I support multiple customers remotely.
(Server, Device, Metric, DateTime)
The table can be used to Pivot the data (ie. Devices across the top and Metrics down the side, or pivoted) using exactly the same SQL code (yes, switch the cells). I use the table to erect an unlimited variety of graphs and charts for customers re their server performance.
Monitor Statistics Data Model.
(Too large for inline; some browsers cannot load inline; click the link. Also that is the obsolete demo version, for obvious reasons, I cannot show you commercial product DM.)
It allows me to produce Charts Like This, six keystrokes after receiving a raw monitoring stats file from the customer, using a single SELECT command. Notice the mix-and-match; OS and server on the same chart; a variety of Pivots. Of course, there is no limit to the number of stats matrices, and thus the charts. (Used with the customer's kind permission.)
Readers who are unfamiliar with the Standard for Modelling Relational Databases may find the IDEF1X Notation helpful.
One More Thing
Last but not least, SQL is an IEC/ISO/ANSI Standard. The freeware is actually Non-SQL; it is fraudulent to use the term SQL if they do not provide the Standard. They may provide "extras", but they are absent the basics.
I found the above answers very interesting.
Let me try to add a couple more considerations here.
1) Data aging
Time-series management usually needs aging policies. A typical scenario (e.g. monitoring server CPU) requires storing:
1-sec raw samples for a short period (e.g. for 24 hours)
5-min detail aggregate samples for a medium period (e.g. 1 week)
1-hour detail over that (e.g. up to 1 year)
Although relational models can certainly manage this appropriately (my company implemented massive centralized databases for some large customers with tens of thousands of data series), the new breed of data stores adds interesting functionality to be explored, like:
automated data purging (see Redis' EXPIRE command; a sketch follows after this list)
multidimensional aggregations (e.g. map-reduce jobs a-la-Splunk)
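A hedged redis-py sketch of the automated-purging idea from the list above: each sample is written with a TTL matching its retention tier, so Redis expires it automatically. The key naming scheme, the use of plain string keys, and the TTL values are assumptions taken from the example retention policy.

import json
import time

import redis

r = redis.Redis()  # connection details are illustrative

RAW_TTL = 24 * 3600           # 1-sec raw samples kept for 24 hours
FIVE_MIN_TTL = 7 * 24 * 3600  # 5-min aggregates kept for 1 week
HOURLY_TTL = 365 * 24 * 3600  # 1-hour aggregates kept for up to 1 year

def store_raw_sample(device, metric, value, ts=None):
    ts = int(ts or time.time())
    key = f"raw:{device}:{metric}:{ts}"
    # SETEX attaches the TTL atomically; Redis purges the key when it expires.
    r.setex(key, RAW_TTL, json.dumps({"ts": ts, "value": value}))

def store_aggregate(device, metric, bucket_start, avg_value, ttl):
    key = f"agg:{device}:{metric}:{bucket_start}"
    r.setex(key, ttl, json.dumps({"bucket": bucket_start, "avg": avg_value}))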
2) Real-time collection
Even more importantly, some non-relational data stores are inherently distributed and allow for much more efficient real-time (or near-real-time) data collection, which can be a problem for an RDBMS because of the creation of hotspots (managing indexing while inserting into a single table). This problem in the RDBMS space is typically solved by reverting to batch import procedures (we managed it this way in the past), while NoSQL technologies have succeeded at massive real-time collection and aggregation (see Splunk for example, mentioned in previous replies).
Your data lives in a single table, so relational vs. non-relational is not really the question; basically, you need to read a lot of sequential data. If you have enough RAM to store a year's worth of data, then there is nothing like using Redis/MongoDB, etc.
Mostly, NoSQL databases will store your data in the same location on disk and in compressed form to avoid multiple disk accesses.
NoSQL does the same thing as creating an index on device id and metric id, but in its own way. With a relational database, even if you do this, the index and the data may be in different places, and there would be a lot of disk I/O.
Tools like Splunk use NoSQL backends to store time-series data and then use map-reduce to create aggregates (which might be what you want later). So in my opinion NoSQL is an option, as people have already tried it for similar use cases. But will a few million rows bring the database to a crawl? Maybe not, with decent hardware and proper configuration.
Create a file and name it 1_2.data. Weird idea? Here is what you get:
You save up to 50% of space because you don't need to repeat the fk_to_device and fk_to_metric value for every data point.
You save even more space because you don't need any indices.
Save pairs of (timestamp, metric_value) to the file by appending the data, so you get ordering by timestamp for free (assuming that your sources don't send out-of-order data for a device).
=> Queries by timestamp run amazingly fast because you can use binary search to find the right place in the file to read from (see the sketch after this answer).
If you want it even more optimized, start thinking about splitting your files like this:
1_2_january2014.data
1_2_february2014.data
1_2_march2014.data
Or use kdb+ from http://kx.com, because they do all this for you :) Column-oriented storage is what may help you.
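A hedged Python sketch of the append-only file idea from this answer: fixed-size (timestamp, value) records are appended in arrival order, so a time-range read can binary-search for its starting offset. The 16-byte record layout is an assumption.

import os
import struct

RECORD = struct.Struct("<dd")  # (unix_timestamp, metric_value), 16 bytes per record

def append_sample(path, ts, value):
    # Appending keeps the file ordered by timestamp as long as the
    # source emits samples in order for this device/metric pair.
    with open(path, "ab") as f:
        f.write(RECORD.pack(ts, value))

def read_range(path, ts_from, ts_to):
    """Binary-search the first record >= ts_from, then scan forward to ts_to."""
    n = os.path.getsize(path) // RECORD.size
    with open(path, "rb") as f:
        lo, hi = 0, n
        while lo < hi:                      # find the first record >= ts_from
            mid = (lo + hi) // 2
            f.seek(mid * RECORD.size)
            ts, _ = RECORD.unpack(f.read(RECORD.size))
            if ts < ts_from:
                lo = mid + 1
            else:
                hi = mid
        f.seek(lo * RECORD.size)
        out = []
        while True:
            chunk = f.read(RECORD.size)
            if len(chunk) < RECORD.size:
                break
            ts, val = RECORD.unpack(chunk)
            if ts > ts_to:
                break
            out.append((ts, val))
        return out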
There is a cloud-based column-oriented solution popping up, so you may want to have a look at: http://timeseries.guru
You should look into time-series databases; they were created for this purpose.
A time series database (TSDB) is a software system that is optimized for handling time series data, arrays of numbers indexed by time (a datetime or a datetime range).
A popular example of a time-series database is InfluxDB.
I think that the answer to this kind of question should mainly revolve around the way your database utilizes storage.
Some database servers use RAM and disk, some use RAM only (optionally with disk for persistence), etc.
The most common SQL database solutions use memory + disk storage and write the data in a row-based layout (every inserted row is written in the same physical location).
For time-series stores, in most cases the workload looks like this: a relatively low interval of a massive number of inserts, while reads are column-based (in most cases you want to read a range of data from a specific column, representing a metric).
I have found that columnar databases (google it, you'll find MonetDB, Infobright, ParAccel, etc.) do a terrific job for time series.
As for your question, which personally I think is somewhat invalid (as are all discussions using the faulty term NoSQL, IMO):
You can use a database server that speaks SQL on one hand, making your life very easy since everyone has known SQL for many years and that language has been perfected over and over again for data queries, but that still utilizes RAM, CPU cache, and disk in a column-oriented way, making your solution a best fit for time series.
5 million rows is nothing for today's torrential data. Expect data to be in the TB or PB range in just a few months. At that point an RDBMS does not scale to the task, and you need the linear scalability of NoSQL databases. Performance is achieved through the columnar partitioning used to store the data: a "more columns, fewer rows" kind of concept to boost performance. Leverage the OpenTSDB work done on top of HBase or MapR-DB, etc.
I face similar requirements regularly, and have recently started using Zabbix to gather and store this type of data. Zabbix has its own graphing capability, but it's easy enough to extract the data out of Zabbix's database and process it however you like. If you haven't already checked Zabbix out, you might find it worth your time to do so.
