I have databases across geographical locations, and there is a need to synchronize them in near real time.
As I understand it, SSIS ETL is suitable only for batch updates, while real-time updates can be achieved with web services or a service bus.
Further, my understanding is that only SSIS ETL can handle large volumes.
I am looking for the limits on data velocity or volume beyond which web services or a service bus are no longer an option, along with a trade-off analysis.
What approach is suitable when the requirement is both large volumes and near-real-time updates?
I'd suggest that you take a look at the SqlBulkCopy class. It lets you do fast, high-volume inserts (just inserts, not updates) from .NET code. So your code could grab a bunch of messages off the bus and then insert them very quickly.
We are prototyping solutions to a problem similar to yours. SqlBulkCopy appears to be at least 10 times faster than normal insert statements, quite possibly more. It was the main, but not the only, factor in speeding up our process from taking 8 hours to taking only 15 minutes.
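For illustration, a minimal sketch of that pattern might look like the following; the dbo.Messages destination table and the connection string are placeholders, not something from your system:

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkMessageWriter
{
    // Sketch: rows pulled off the bus are buffered into a DataTable,
    // then written to SQL Server in one round trip via SqlBulkCopy.
    public static void Flush(DataTable buffer, string connectionString)
    {
        if (buffer.Rows.Count == 0) return;

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "dbo.Messages"; // placeholder table name
                bulkCopy.BatchSize = 5000;
                bulkCopy.WriteToServer(buffer);
            }
        }
        buffer.Clear();
    }
}
```

The calling code would keep appending rows to the DataTable as messages arrive and call Flush whenever the buffer reaches a few thousand rows.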
I am working on an eCommerce website designed to present a large number of SKUs. The SQL Server schema describing these products is normalized to the extent that, a few years ago, it became unreasonably slow to retrieve the necessary information to present to customers, so we changed our infrastructure such that we would bear the cost of loading the data for each product once and then store that data in an AppFabric cache (previously Velocity).
Over time, the complexity of requirements placed on our AppFabric infrastructure has grown (imagine that), forcing us to spend a considerable amount of time writing code for handling data retrieval from our cache, data updates including incremental updates, etc.
We happen to have much of our product data stored in a denormalized form in a side database, so for experimentation's sake I wrote a console app to randomly select one of our ~150K SKUs at a time, and then retrieve the record for that product from our denormalized table.
I was surprised to find that I was able to select these records in about the same average time that I could select a record from our AppFabric cache, about 2.5 ms average in both cases. I'm sure in both cases the data is coming from an in-memory cache of one sort or another, be it AppFabric or disk cache, and the 2.5 ms is bumping against a bare minimum amount of time for a network round trip.
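The console app's timing loop boils down to something like the sketch below (the connection string and table name are simplified placeholders for what we actually use):

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class SkuLatencyProbe
{
    static void Main()
    {
        const string connectionString = "Server=...;Database=...;Integrated Security=true"; // placeholder
        const int iterations = 1000;
        var random = new Random();
        var stopwatch = Stopwatch.StartNew();

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            for (int i = 0; i < iterations; i++)
            {
                // Pick a random SKU and fetch its denormalized record.
                using (var command = new SqlCommand(
                    "SELECT * FROM dbo.DenormalizedProducts WHERE SkuId = @id", connection)) // placeholder table
                {
                    command.Parameters.AddWithValue("@id", random.Next(1, 150000));
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read()) { /* materialize the row */ }
                    }
                }
            }
        }

        stopwatch.Stop();
        Console.WriteLine("Average lookup: {0:F2} ms",
            stopwatch.Elapsed.TotalMilliseconds / iterations);
    }
}
```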
This makes me think we might be better off just using denormalized data in SQL Server for our high-load/high-performance needs. The management tools for SQL Server-based data are so much better. All of the devs on our team are adept at using Management Studio, whereas with AppFabric we have one dev who can use PowerShell to (a) give us a count of records stored in the cache and (b) dump the cache. Any other management functionality we have to create ourselves.
This makes me ask why anyone would want to use AppFabric at all. We are not concerned with cost, because the cost of the development effort we have to apply to an AppFabric-related solution vastly outweighs even the cost of SQL Server licensing.
Thank you for whatever feedback you can provide to help our team decide the best direction to move forward.
Deciding to use a caching mechanism should be a well-thought-out process, and it isn't always the right choice. However, the primary reason for using caching over a durable persistence model is to manage an extremely high transaction load.
In AppFabric Cache I can set up a distributed set of servers to work off one logical repository, with built-in load balancing. So, unlike Microsoft SQL Server, which has no way of providing clustered instances for the purpose of load balancing, if I'm reading and writing 50 to 100 million times a day the cache is a more viable way of sharing those resources. Those writes can then be queued to the durable persistence model over time, ensuring there are no real peaks in usage, because the load is spread out across both the caching fabric and the durable store.
Using AppFabric rather than a dedicated cache-aside database containing a denormalised schema also provides the benefit of fine-grained control over cache key expiry, eviction, and tuned region policies. You would have to roll this yourself if you used SQL Server. I also agree with #mperrenoud03's comments about load balancing and high transaction rate support. Also, if you use a good ORM tool like NHibernate, it can be configured to use AppFabric (or other distributed cache platforms) as a second-level cache. We are leveraging this in our project and getting good results.
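For context, a basic cache-aside read with the AppFabric client looks roughly like the sketch below; the cache name, key format, Product type, and LoadProductFromDatabase helper are all assumptions made for the example:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

[Serializable]
public class Product
{
    public int SkuId { get; set; }
    // other denormalized product fields would go here
}

public static class ProductCacheExample
{
    private static readonly DataCacheFactory CacheFactory = new DataCacheFactory();

    // Cache-aside sketch: try AppFabric first, fall back to the database
    // on a miss, then populate the cache with a 30-minute lifetime.
    public static Product GetProduct(int skuId)
    {
        DataCache cache = CacheFactory.GetCache("ProductCache"); // assumed cache name
        string key = "product:" + skuId;

        var product = cache.Get(key) as Product;
        if (product == null)
        {
            product = LoadProductFromDatabase(skuId); // assumed data-access helper
            cache.Put(key, product, TimeSpan.FromMinutes(30));
        }
        return product;
    }

    private static Product LoadProductFromDatabase(int skuId)
    {
        // Placeholder for the denormalized-table lookup discussed above.
        throw new NotImplementedException();
    }
}
```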
I'm developing a project which gets some data from a hardware device every 100 milliseconds. I'm using Visual Studio 2010 and C#. The data size is about 50KB in each round. The customer wants to log all the data in the database for statistical purposes.
I prefer using SQL Server 2005+ since I'm familiar with it, and the project should be done in about 15 days; it's a small project.
Is this a reasonable speed for such a data size to be inserted into the db? Do you suggest any generic approaches to speed up the interactions (plain SQL commands, EF, or other technologies which could have a positive effect on speed)?
If this is way too fast for SQL Server to handle, what do you suggest I should use which:
1. has a quick learning curve,
2. can accept queries for statistical data, and
3. can satisfy my speed requirements?
I'm thinking about System.Data.SQLite if SQL Server is a no-go, but I don't know about its learning curve or speed.
500KB per second is nothing. I work with SQL databases that do gigabytes per second; it all depends on the hardware and server configuration underneath, but let's say you were to run this on a standard office desktop: you will be fine. Even then, I would say you can start thinking about new hardware when you are looking at 20MB per second or more.
For the second part of your question: since you are using C#, I suggest you use SQL Server 2008 and a table-valued parameter (TVP). Buffer the data in the application, in a DataSet or DataTable, until you have, say, 10K rows, then call a stored procedure to do the insert and pass it the DataTable as a parameter. This will save hundreds or thousands of ad hoc inserts.
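As an illustrative sketch (the table type, procedure, and column names below are assumptions, not something from your schema), the C# side of a TVP call looks roughly like this:

```csharp
using System.Data;
using System.Data.SqlClient;

class TvpBufferedInsert
{
    // Assumed (illustrative) server-side objects:
    //   CREATE TYPE dbo.SampleRowType AS TABLE (ReadingTime DATETIME, Payload VARBINARY(MAX));
    //   CREATE PROCEDURE dbo.InsertSamples @Rows dbo.SampleRowType READONLY
    //   AS INSERT INTO dbo.Samples (ReadingTime, Payload) SELECT ReadingTime, Payload FROM @Rows;

    // Flush the buffered DataTable to the server in a single call.
    public static void Flush(DataTable buffer, string connectionString)
    {
        if (buffer.Rows.Count == 0) return;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.InsertSamples", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            var parameter = command.Parameters.AddWithValue("@Rows", buffer);
            parameter.SqlDbType = SqlDbType.Structured;
            parameter.TypeName = "dbo.SampleRowType";

            connection.Open();
            command.ExecuteNonQuery();
        }
        buffer.Clear();
    }
}
```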
Hope this is clear; if not, ask and I will try to explain further.
50KB every 100 milliseconds is 500KB a second. These days networks run at gigabit speeds (many megabytes per second) and hard drives can cope with hundreds of MB per second. 500KB is a tiny amount of data, so I'd be most surprised if SQL Server can't handle it.
If you have a slow network connection to the server or some other problem that means it struggles to keep up, then you can try various strategies to improve things. Ideas might be:
Buffer the data locally (and/or on the server) and write it into the database with a separate thread/process (a sketch of this appears after this list). If you're not continually logging 24 hours a day, then a slow server would catch up when you finish logging. Even if you are logging continuously, this would smooth out any bumps (e.g. if your server has periods of "busy time" where it is doing so much else that it struggles to keep up with the data from your logger).
Compress the data that is going to the server so there's less data to send/store. If the packets are similar you may find you can get huge compression ratios.
If you don't need everything in each packet, strip out anything "uninteresting" from the data before uploading it.
Possibly batching the data might help - by collecting several packets and sending them all at once you might be able to minimise transmission overheads.
Store the raw data directly on disk and use the database just to index it.
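As a rough sketch of the first idea, a local buffer drained by a background writer thread could look like this (WriteBatch is a placeholder for whatever insert mechanism you end up using, e.g. a TVP or SqlBulkCopy):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class BufferedLogger
{
    private readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>();

    public BufferedLogger()
    {
        // Background task drains the queue and writes batches to the database,
        // so the acquisition loop never waits on the network or disk.
        Task.Factory.StartNew(WriterLoop, TaskCreationOptions.LongRunning);
    }

    // Called every 100 ms by the acquisition code; never blocks on the database.
    public void Enqueue(byte[] packet)
    {
        _queue.Add(packet);
    }

    private void WriterLoop()
    {
        var batch = new List<byte[]>();
        foreach (var packet in _queue.GetConsumingEnumerable())
        {
            batch.Add(packet);
            if (batch.Count >= 100) // tune batch size to taste
            {
                WriteBatch(batch);
                batch.Clear();
            }
        }
        // A real implementation would also flush on a timer and on shutdown.
    }

    private void WriteBatch(List<byte[]> batch)
    {
        // Placeholder: insert the batch with SqlBulkCopy, a TVP, or plain commands.
    }
}
```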
So I'd advise writing a prototype and seeing how much data you can chuck at your database over your network before it struggles.
I'm starting to design a new application that will be used by about 50,000 devices. Each device generates about 1,440 registries a day, which means over 72 million registries will be stored per day. These registries keep coming every minute, and I must be able to query this data from a Java (J2EE) application. So it needs to be fast to write, fast to read, and indexed to allow report generation.
Devices only insert data, and the J2EE application will need to read them occasionally.
Now I'm looking at software alternatives to support this kind of operation.
Putting this data in a single table would be catastrophic, because I wouldn't be able to use it given the amount of data accumulated over a year.
I'm using Postgres, and database partitioning seems not to be an answer, since I'd need to partition tables by month, or maybe take a more granular approach, by day for example.
I was thinking of a solution using SQLite. Each device would have its own SQLite database, so the information would be granular enough for good maintenance and fast insertions and queries.
What do you think?
Record only changes of device position. Most of the time a device will not move: a car will be parked, a person will sit or sleep, a phone will be on an unmoving person or on a charger, etc. This would give you an order of magnitude less data to store.
You'll be generating at most about 1TB a year (even without implementing point 1), which is not a very big amount of data. It works out to only a few tens of KB/s of sustained writes, which a single SATA drive can handle easily.
Even a simple unpartitioned Postgres database on modest hardware should manage to handle this. The only problem could come when you need to query or back up; that can be resolved by using a Hot Standby mirror with Streaming Replication, a new feature in the soon-to-be-released PostgreSQL 9.0. Just query against, or back up, the mirror; if it is busy it will temporarily and automatically queue changes and catch up later.
When you really do need to partition, partition for example on device_id modulo 256 instead of on time. This way writes are spread out across every partition; if you partition on time, just one partition will be very busy at any moment and the others will be idle. Postgres supports partitioning this way very well. You can then also spread load to several storage devices using tablespaces, which are also well supported in Postgres.
Time-interval partitioning is a very good solution, even if you have to roll your own. Maintaining separate connections to 50,000 SQLite databases is much less practical than a single Postgres database, even for millions of inserts a day.
Depending on the kind of queries that you need to run against your dataset, you might consider partitioning your remote devices across several servers, and then query those servers to write aggregate data to a backend server.
The key to high-volume tables is: minimize the amount of data you write and the number of indexes that have to be updated; don't do UPDATEs or DELETEs, only INSERTS (and use partitioning for data that you will delete in the future—DROP TABLE is much faster than DELETE FROM TABLE!).
Table design and query optimization becomes very database-specific as you start to challenge the database engine. Consider hiring a Postgres expert to at least consult on your design.
Maybe it is time for a database that you can shard over many machines? Cassandra? Redis? Don't limit yourself to SQL databases.
Database partition management can be automated; time-based partitioning of the data is a standard way of dealing with this type of problem, and I'm not sure that I can see any reason why this can't be done with PostgreSQL.
You have approximately 72 million rows per day. Assuming a device ID, a datestamp, and two floats for coordinates, you will have (say) 16-20 bytes per row plus some minor page metadata overhead. A back-of-the-envelope capacity plan suggests around 1-1.5GB of data per day, or 400-500GB per year, plus indexes if necessary.
If you can live with periodically refreshed data (i.e. not completely up to date) you could build a separate reporting table and periodically update this with an ETL process. If this table is stored on separate physical disk volumes it can be queried without significantly affecting the performance of your transactional data.
A separate reporting database for historical data would also allow you to prune your operational table by dropping older partitions, which would probably help with application performance. You could also index the reporting tables and create summary tables to optimise reporting performance.
If you need low latency data (i.e. reporting on up-to-date data), it may also be possible to build a view where the lead partitions are reported off the operational system and the historical data is reported from the data mart. This would allow the bulk queries to take place on reporting tables optimised for this, while relatively small volumes of current data can be read directly from the operational system.
Most low-latency reporting systems use some variation of this approach - a leading partition can be updated by a real-time process (perhaps triggers) and contains relatively little data, so it can be queried quickly, but contains no baggage that slows down the update. The rest of the historical data can be heavily indexed for reporting. Partitioning by date means that the system will automatically start populating the next partition, and a periodic process can move, re-index or do whatever needs to be done for the historical data to optimise it for reporting.
Note: If your budget runs to PostgreSQL rather than Oracle, you will probably find that direct-attach storage is appreciably faster than a SAN unless you want to spend a lot of money on SAN hardware.
That is a bit of a vague question you are asking, and I think you are not facing a choice of database software but an architectural problem.
Some considerations:
How reliable are the devices, and how well are they connected to the querying software?
How failsafe do you need the storage to be?
How much extra processing power do the devices have to process your queries?
Basically, your idea of spatial partitioning is a good one. That does not exclude a temporal partition, if necessary. Whether you do that in Postgres or SQLite depends on other factors, like the processing power and available libraries.
Another consideration would be whether your devices are reliable and powerful enough to handle your queries. Otherwise, you might want to work with a centralized cluster of databases instead, which you can still query in parallel.
I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.)
I need to log around 200 million rows (or more) per 8 hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, eg. "How many transactions of type A did each user run this month?" (could be more complex, but that's the general idea).
I can spread the database amongst several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real-time for an end-user (they could run overnight, if needed).
Does anyone have any suggestions as to which databases would be a good fit?
P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?
I'm working on a very similar process at present (a web domain crawling database) with the same significant transaction rates.
At these ingest rates, it is critical to get the storage layer right first. You're going to be looking at several machines connecting to the storage in a SAN cluster. A single database server can support millions of writes a day; what matters is the amount of CPU used per "write" and the speed at which the writes can be committed.
(Network performance is also often an early bottleneck.)
With clever partitioning, you can reduce the effort required to summarise the data. You don't say how up-to-date the summaries need to be, and this is critical. I would try to push back from "realtime" and suggest overnight (or if you can get away with it monthly) summary calculations.
Finally, we're using a 2-CPU, 4GB RAM Windows 2003 virtual SQL Server 2005 machine and a single-CPU, 1GB RAM IIS web server as our test system, and we can ingest 20 million records in a 10-hour period (the storage is RAID 5 on a shared SAN). We get ingest rates of up to 160 records per second batched in blocks of 40 records per network round trip.
Cassandra + Hadoop does sound like a good fit for you. 200M/8h is 7000/s, which a single Cassandra node could handle easily, and it sounds like your aggregation stuff would be simple to do with map/reduce (or higher-level Pig).
Greenplum or Teradata would be a good option. These databases are MPP and can handle peta-scale data. Greenplum is a distributed PostgreSQL database and also has its own MapReduce. Hadoop may solve your storage problem, but it wouldn't be helpful for performing summary queries on your data.
We have a 500GB database that performs about 10,000 writes per minute.
This database has a requirement for real-time reporting. To service this need we have 10 reporting databases hanging off the main server.
The 10 reporting databases are all fed from the one master database using transactional replication.
The issue is that the server and replication are starting to fail with PAGEIOLATCH_SH errors; these seem to be caused by the master database being overworked. We are upgrading the server to a quad-proc / quad-core machine.
As this database and the need for reporting are only going to grow (20% growth per month), I wanted to know if we should start looking at hardware (or a third-party application) to manage the replication (and if so, what should we use), or whether we should change the topology: instead of the master database replicating to each of the reporting databases, the master replicates to reporting server 1, reporting server 1 replicates to reporting server 2, and so on.
Ideally the solution will cover us up to a 1.5TB database with 100,000 writes per minute.
Any help greatly appreciated
One common model is to have your main database replicate to 1 other node, then have that other node deal with replicating the data out from there. It takes the load off your main server and also has the benefit that if, heaven forbid, your reporting system's replication does max out it won't affect your live database at all.
I haven't gone much further than a handful of replicated hosts, but if you add enough nodes that your distribution node can't replicate it all it's probably sensible to expand the hierarchy so that your distributor is actually replicated to other distributors which then replicate to the nodes you report from.
How many databases you can have replicated off a single node will depend on how up to date your reporting data needs to be (e.g. whether it's fine to have it replicate only once a day or whether you need it up to the second) and how much data you're replicating at a time. It might be worth some experimentation to find out exactly how many nodes one distributor could power if it didn't have the overhead of actually running your main services.
Depending on what you're inserting, a load of 100,000 writes/min is pretty light for SQL Server. In my book, I show an example that generates 40,000 writes/sec (2.4M/min) on a machine with simple hardware. So one approach might be to see what you can do to improve the write performance of your primary DB, using techniques such as batch updates, multiple writes per transaction, table valued parameters, optimized disk configuration for your log drive, etc.
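For instance, a minimal sketch of the "multiple writes per transaction" idea from .NET might look like this (the table and column are placeholders):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

class TransactionalBatchInsert
{
    // Wrap a batch of inserts in a single transaction so the log is flushed
    // once per batch rather than once per row.
    public static void InsertBatch(IEnumerable<string> messages, string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var command = new SqlCommand(
                    "INSERT INTO dbo.Events (Payload) VALUES (@payload)", connection, transaction))
                {
                    var payload = command.Parameters.Add("@payload", System.Data.SqlDbType.NVarChar, -1);
                    foreach (var message in messages)
                    {
                        payload.Value = message;
                        command.ExecuteNonQuery();
                    }
                }
                transaction.Commit();
            }
        }
    }
}
```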
If you've already done as much as you can on that front, the next question I have is what kind of queries you are doing that require 10 reporting servers? That seems unusual, even for pretty large sites. There may be a bunch you can do to optimize on that front, too, such as offloading aggregation queries to Analysis Services, or improving disk throughput. While you can, scaling up is usually a better way to go than scaling out.
I tend to view replication as a "solution of last resort." Once you've done as much optimization as you can, I would look into horizontal or vertical partitioning for your reporting requirements. One reason is that partitioning tends to result in better cache utilization, and therefore higher total throughput.
If you finally get to the point where you can't escape replication, then the hierarchical approach suggested by fyjham is definitely a reasonable one.
In case it helps, I cover most of these issues in depth in my book:
Ultra-Fast ASP.NET.
Check that your publisher and distributor's transaction log files don't have too many VLFs (Virtual Log Files) as detailed here (step 8):
http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
If your distribution database is co-located with you publisher database, consider moving it to its own dedicated server.