I'm designing a project where I'll be storing (potentially hundreds of thousands of) lat/lon pairs in a database. The pairs are associated with other data. The catch is that in addition to users manipulating this data, I also want the locations to change over time. My initial instinct was to set up a cron job that will adjust every lat/lon by a certain amount every day, but I realize that such an operation would be insanely inefficient.
So, any ideas on how to efficiently adjust a bunch of lat/lon pairs over time? My best thought so far is associating a "last changed" timestamp with each pair and have a process running that fires every few seconds, grabs n (maybe order 100? 1000?) pairs with the oldest timestamps, adjusts those pairs and updates the times. This way I'm constantly moving small amounts of data, instead of moving an overwhelming amount once a day. I'm still not convinced this is the best way to go, though.
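Roughly what I have in mind, as a pseudo-SQL sketch (PostgreSQL-style; the table, columns, and drift amount are all placeholders):

    -- Fires every few seconds: nudge the n least-recently-adjusted pairs.
    UPDATE locations
    SET    lat = lat + 0.0001,
           lon = lon + 0.0001,
           last_changed = NOW()
    WHERE  id IN (
        SELECT id
        FROM   locations
        ORDER  BY last_changed ASC
        LIMIT  1000
    );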
Thanks in advance!
Store the amount that is added to each pair somewhere else, and rather than using the values in the database directly, add this stored offset amount whenever you retrieve and subtract it whenever you insert.
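A rough sketch of that idea in SQL (all of the names here are made up; the point is that only the single offset row ever changes):

    -- One tiny table holds the global offset.
    CREATE TABLE drift (
        lat_offset DOUBLE PRECISION NOT NULL,
        lon_offset DOUBLE PRECISION NOT NULL
    );

    -- Reads apply the offset on the fly:
    SELECT p.id,
           p.lat + d.lat_offset AS lat,
           p.lon + d.lon_offset AS lon
    FROM   points p
    CROSS  JOIN drift d;

    -- Inserts subtract it, so new rows stay consistent with old ones:
    INSERT INTO points (lat, lon)
    SELECT :new_lat - d.lat_offset,
           :new_lon - d.lon_offset
    FROM   drift d;

The daily cron job then becomes a single-row UPDATE on the offset table instead of an UPDATE touching hundreds of thousands of rows.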
Your instinct is right; that timestamp-based approach probably is the best way of adjusting latitude/longitude gradually. The update query shouldn't hammer the server too hard, though, and you're not really moving data, just changing it, right?
I store sensor readings in a database for a temperature monitoring system.
There are two types of reading: air and product. The product temperature represents the slow temperature change of an item of food, as opposed to the actual air temperature.
The two temperatures are taken from different sensors (different locations within the environment, usually a large controlled environment), so they are not related (i.e. I cannot derive the product temperature from the air temperature).
Initially, the product temperature I was provided with was already damped by the sensor. However, whoever wrote the firmware made a mistake, so the damped value is incorrect; I now have to take the undamped reading from the product sensor and apply the damping myself, based on the last few readings in the database.
When a new reading comes in, I look at the last few undamped readings, and the last damped reading, and determine a new damped reading from that.
My question is: Should I store this calculated reading as well as the undamped reading, or should I calculate it in a view leaving all physically stored readings undamped?
One thing that might influence this: the readings are critical. Alarm rows are generated against the readings when they go out of tolerance; this is to prevent food poisoning, and people can lose their jobs over it. People sign off on the values they see, so those values must never change.
Normally I would use a view and put the calculation in the view, but I'm a little nervous about doing that this time. If the calculation gets "tweaked", I then have to make the view more complicated to use the old calculation before a certain timestamp, etc. (which is fine; I just have to be careful wherever I query the reading values - I don't like nesting views in other views, as that can sometimes slow the query).
What would you do in this case?
Thanks!
The underlying idea from the relational model is "logical data independence". Among other things, SQL views implement logical data independence.
So you can start by putting the calculation in a view. Later, when it becomes too complex to maintain that way, you can move the calculation to a SQL function or SQL stored procedure, or you can move the calculation to application code. You can store the results in a base table if you want to. Then update the view definition.
The view's clients should continue to work as if nothing had changed.
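As a sketch (the table and column names are assumptions, and the windowed average is only a stand-in for your real damping formula):

    -- Clients query damped_readings; if the calculation is tweaked, only this
    -- definition changes, not the callers.
    CREATE VIEW damped_readings AS
    SELECT sensor_id,
           read_at,
           undamped_value,
           AVG(undamped_value) OVER (
               PARTITION BY sensor_id
               ORDER BY read_at
               ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
           ) AS damped_value
    FROM   readings;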
Here's one problem with storing this calculated value in a base table: you probably can't write a CHECK constraint to guarantee it was calculated correctly. This is a problem regardless of whether you display the value in a view. That means you might need some kind of administrative procedure to periodically validate the data.
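For example, a periodic check could recompute the value and flag rows where the stored figure disagrees (names and tolerance here are purely illustrative):

    SELECT s.sensor_id,
           s.read_at,
           s.damped_value AS stored_value,
           v.damped_value AS recomputed_value
    FROM   stored_readings s
    JOIN   damped_readings v
           ON  v.sensor_id = s.sensor_id
           AND v.read_at   = s.read_at
    WHERE  ABS(s.damped_value - v.damped_value) > 0.01;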
I've got a Highstock chart going where I query my database for data. My dataset isn't big yet, but I foresee it growing to hundreds of thousands of data points. That's a lot of data! There's no way I'm going to pass all of that back through a callback.
I figure there's got to be a way to handle this. Perhaps you can pass arguments to the query function/.php? I've looked at #plotOptions.series.dataGrouping, but that still requires an original full data set.
How do the big cats do it? How do Yahoo & Googs store discrete stock ticker data for XX years? After N time, do they take the historic data, approximate it, and reduce the resolution? If someone could point me in the right direction, I'm sure it's already been covered before, but a few searches didn't turn anything up.
Thanks,
-CD
Usually it doesn't make sense to know the values from one year ago at minute resolution. It depends on your application, but if the purpose is to provide financial information, I think it is OK to assume that fine resolution is only interesting for recent dates. The same consideration applies to logging performance counters (as MRTG does), and many other things.
For instance, you could maintain:
A "last week" series, with 1-minute resolution.
A "last quarter" series, with hourly resolution.
A historical series, with daily resolution.
All but the finest series should store not only the average, but also the minimum and maximum values observed in each period, as well as any other statistical indicators that might be of interest.
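One possible layout for those series, sketched as tables (names and types are only illustrative):

    -- 1-minute resolution, kept for roughly a week.
    CREATE TABLE ticks_minute (
        symbol    VARCHAR(12) NOT NULL,
        bucket    TIMESTAMP   NOT NULL,
        avg_price NUMERIC(12,4),
        min_price NUMERIC(12,4),
        max_price NUMERIC(12,4),
        PRIMARY KEY (symbol, bucket)
    );

    -- ticks_hour (kept ~3 months) and ticks_day (kept indefinitely) have the
    -- same shape; only the width of the time bucket changes.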
I'm thinking about building a web-based data logging and visualization service. The basic idea is that at some timed interval something (e.g. a sensor) reports a value (e.g. temperature) to the server. The server records this value into a database. There would be a web-based UI that allows me to view this data on a time-based graph. Ideally this graph would have various resolutions (last 30 seconds, last week, last year, etc). In a super ideal world, I would be able to zoom into the data for any point in time.
The problem is that the sensors are going to generate enormous amounts of data. For example, a sensor that reports a value every 5 seconds generates about 17k values a day (86,400 / 5 = 17,280). I'm imagining a system that has thousands of sensors. Over time, this becomes lots of data.
The naive solution is to throw this data into a relational database and retrieve it in the various ways I want, but that won't scale.
The simple solution is to reduce the amount of data by performing periodic roll-ups of the data. New data might go into a table that has data points every 5 seconds. Every hour, some system pumps this data into another table that has data points every minute and the original data is deleted. This repeats for a few levels. The downside to this is that the further back in time you go, the less detailed the data is. That's probably fine. I would imagine that I would need enormous amounts of hardware to support full resolution of data over all time as compared to a system with this sort of rollup.
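Something like this is what I picture for the hourly roll-up job (PostgreSQL-ish sketch, all names invented):

    -- Roll 5-second samples older than an hour up into 1-minute aggregates...
    INSERT INTO samples_minute (sensor_id, bucket, avg_value, min_value, max_value)
    SELECT sensor_id,
           date_trunc('minute', ts),
           AVG(value), MIN(value), MAX(value)
    FROM   samples_5s
    WHERE  ts < NOW() - INTERVAL '1 hour'
    GROUP  BY sensor_id, date_trunc('minute', ts);

    -- ...then drop the raw rows that were just rolled up.
    DELETE FROM samples_5s
    WHERE  ts < NOW() - INTERVAL '1 hour';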
Is there a better way to do this? Is there an existing solution? I have to imagine this is a fairly common problem.
You probably want a fixed-size, round-robin database like RRDTool: http://oss.oetiker.ch/rrdtool/
Also Graphite is built on top of a similar datastore implementation: http://graphite.wikidot.com/
I need some inspiration for a solution...
We are running an online game with around 80,000 active users. We are hoping to expand this and are therefore setting a target of up to 500,000 users.
The game includes a highscore for all the users, which is based on a large set of data. This data needs to be processed in code to calculate the values for each user.
After the values are calculated we need to rank the users, and write the data to a highscore table.
My problem is that in order to generate a highscore for 500,000 users we need to load data from the database on the order of 25-30 million rows, totalling around 1.5-2 GB of raw data. Also, in order to rank the values we need to have the complete set of values.
Also, we need to generate the highscore as often as possible - preferably every 30 minutes.
Now we could just use brute force - load the 30 million records every 30 minutes, calculate the values, rank them, and write them into the database - but I'm worried about the strain this would put on the database, the application server and the network, and whether it's even possible.
I'm thinking the solution might be to break up the problem somehow, but I can't see how. So I'm looking for some inspiration on possible alternative solutions, based on this information:
We need a complete highscore of all ~500,000 teams - we can't (and won't, unless absolutely necessary) shard it.
I'm assuming that there is no way to rank users without having a list of all users' values.
Calculating the value for each team has to be done in code - we can't do it in SQL alone.
Our current method loads each user's data individually (3 calls to the database) to calculate the value - it takes around 20 minutes to load the data and generate the highscore for 25,000 users, which is too slow if this is to scale to 500,000.
I'm assuming that hardware size will not be an issue (within reasonable limits).
We are already using memcached to store and retrieve cached data
Any suggestions, links to good articles about similar issues are welcome.
Interesting problem. In my experience, batch processes should only be used as a last resort. You are usually better off having your software calculate values as it inserts/updates the database with the new data. For your scenario, this would mean that it should run the score calculation code every time it inserts or updates any of the data that goes into calculating the team's score. Store the calculated value in the DB with the team's record. Put an index on the calculated value field. You can then ask the database to sort on that field and it will be relatively fast. Even with millions of records, it should be able to return the top n records in O(n) time or better. I don't think you'll even need a high scores table at all, since the query will be fast enough (unless you have some other need for the high scores table other than as a cache). This solution also gives you real-time results.
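In other words, something along these lines (table and column names assumed):

    CREATE INDEX idx_teams_score ON teams (score DESC);

    -- Ranking becomes a cheap indexed read instead of a batch job:
    SELECT team_id, score
    FROM   teams
    ORDER  BY score DESC
    LIMIT  100;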
Assuming that most of your 2 GB of data is not changing that frequently, you can calculate and cache (in the DB or elsewhere) the totals each day and then just add the difference based on the new records provided since the last calculation.
In PostgreSQL you could cluster the table on the column that represents when the record was inserted and create an index on that column. You can then run calculations on recent data without having to scan the entire table.
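A rough sketch of that (PostgreSQL; names are made up):

    CREATE INDEX idx_score_events_inserted_at ON score_events (inserted_at);
    CLUSTER score_events USING idx_score_events_inserted_at;

    -- Each run only aggregates rows added since the previous run; the deltas
    -- are then added to the cached totals.
    SELECT team_id, SUM(points) AS delta
    FROM   score_events
    WHERE  inserted_at > :last_run_time
    GROUP  BY team_id;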
First and foremost:
The computation has to take place somewhere.
User experience impact should be as low as possible.
One possible solution is:
Replicate (mirror) the database in real time.
Pull the data from the mirrored DB.
Do the analysis on the mirror or on a third, dedicated, machine.
Push the results to the main database.
Results are still going to take a while, but at least performance won't be impacted as much.
How about saving those scores in the database, and then simply querying it for the top scores (so that the computation is done on the server side, not on the client side, and thus there is no need to move the millions of records)?
It sounds pretty straightforward... unless I'm missing your point; let me know.
Calculate and store the score of each active team on a rolling basis. Once you've stored the score, you should be able to do the sorting/ordering/retrieval in the SQL. Why is this not an option?
It might prove fruitless, but I'd at least take a gander at the way sorting is done on a lower level and see if you can't manage to get some inspiration from it. You might be able to grab more manageable amounts of data for processing at a time.
Have you run tests to see whether or not your concerns with the data size are valid? On a mid-range server throwing around 2GB isn't too difficult if the software is optimized for it.
Seems to me this is clearly a job for caching, because you should be able to keep the half-million score records semi-local, if not in RAM. Every time you update data in the big DB, make the corresponding adjustment to the local score record.
Sorting the local score records should be trivial. (They are nearly in order to begin with.)
If you only need to know the top 100-or-so scores, then the sorting is even easier. All you have to do is scan the list and insertion-sort each element into a 100-element list. If the element is lower than the smallest element currently in that list, which it is 99.98% of the time, you don't have to do anything.
Then run a big update from the whole DB once every day or so, just to eliminate any creeping inconsistencies.
I'm designing an application that receives information from roughly 100k sensors that measure time-series data. Each sensor measures a single integer data point once every 15 minutes, saves a log of these values, and sends that log to my application once every 4 hours. My application should maintain about 5 years of historical data. The packet I receive once every 4 hours is of the following structure:
Date and time of the sequence start
Number of samples to arrive (assume this is fixed for the sake of simplicity, although in practice there may be partials)
The sequence of samples, each of exactly 4 bytes
My application's main usage scenario is showing graphs of composite signals at certain dates. When I say "composite" signals I mean that for example I need to show the result of adding Sensor A's signal to Sensor B's signal and subtracting Sensor C's signal.
My dilemma is how to store this time-series data in my database. I see two options, assuming I use a relational database (rough sketches of both below):
Store every sample in a row of its own: when I receive a signal, break it into samples, and store each sample separately with its timestamp. Assume the timestamps can be normalized across signals.
Store every 4-hour signal as a separate row with its starting time. In this case, whenever a signal arrives, I just add it as a BLOB to the database.
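Sketched as tables, the two options look roughly like this (names and types are placeholders):

    -- Option 1: one row per sample.
    CREATE TABLE samples (
        sensor_id INT       NOT NULL,
        ts        TIMESTAMP NOT NULL,
        value     INT       NOT NULL,
        PRIMARY KEY (sensor_id, ts)
    );

    -- Option 2: one row per 4-hour packet; the samples stay an opaque blob
    -- (16 samples x 4 bytes = 64 bytes per row).
    CREATE TABLE packets (
        sensor_id INT       NOT NULL,
        start_ts  TIMESTAMP NOT NULL,
        samples   BLOB      NOT NULL,
        PRIMARY KEY (sensor_id, start_ts)
    );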
There are obvious pros and cons for each of the options, including storage size, performance, and complexity of the code "above" the database.
I wondered if there are best practices for such cases.
Many thanks.
Storing each sample in its own row sounds simple and logical to me. Don't be too hasty to optimize unless there is actually a good reason for it. Maybe you should do some tests with dummy data to see whether any optimization is really necessary.
I think storing the data in the form that makes it easiest to carry out your main goal is likely the least painful overall. In this case, it's likely the more efficient as well.
Since your main goal appears to be to display the information in interesting and flexible ways, I'd go with separate rows for each data point. I presume most of the effort required to write this program well is on the display side, so you should minimize the complexity on that side as much as possible.
Storing data in BLOBs is good if the content isn't relevant and you would never want to run queries against it. In this case, your data will be the contents of the database and therefore very relevant.
I think you should:
1. Store every sample in a row of its own: when I receive a signal, break it into samples, and store each sample separately with its timestamp. Assume the timestamps can be normalized across signals.
I see two database operations here: the first is to store the data as it comes in, and the second is to retrieve the data in a (potentially large) number of ways.
As Kieveli says, since you'll be using discrete parts of the data (as opposed to all of the data all at once), storing it as a blob won't help you when it comes time to read it. So for the first task, storing the data line by line would be optimal.
This might also be "good enough" when querying the data. However, performance may become an issue given the sheer volume involved [100,000 sensors x 4 samples per hour x 24 hours = 9,600,000 rows per day, which over 5 years is 17,529,600,000 or so rows]. To my mind, if you want to write flexible queries against that kind of data, you'll want some form of star schema structure (as gets used in data warehouses).
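A very rough star-schema sketch of what I mean (all names invented):

    -- Dimension tables describe sensors and time buckets...
    CREATE TABLE dim_sensor (
        sensor_key  INT PRIMARY KEY,
        location    VARCHAR(100),
        sensor_type VARCHAR(50)
    );

    CREATE TABLE dim_time (
        time_key    INT PRIMARY KEY,
        ts          TIMESTAMP,
        day         DATE,
        hour        SMALLINT
    );

    -- ...and the fact table holds one narrow row per reading.
    CREATE TABLE fact_reading (
        sensor_key INT NOT NULL REFERENCES dim_sensor (sensor_key),
        time_key   INT NOT NULL REFERENCES dim_time (time_key),
        value      INT NOT NULL,
        PRIMARY KEY (sensor_key, time_key)
    );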
Whether you load the data directly into the warehouse, or let it build up "row by row" to be added to the warehouse every day/week/month/whatever, depends on time, effort, available resources, and so on.
A final suggestion: when you set up a test environment for your new code, load it with several years of (dummy) data, to see how it will perform.