I'm about to write an application for Android, and it will use MySQL.
I know that database access is really expensive in terms of time, and I would like to know how often applications like instant messengers or online games access their databases.
For example, in a game we would like to save the player's position in the world while he is moving all the time.
Is database access actually not that expensive, so that there is a way to stay connected to it all the time and just issue requests that are cheap?
Or is it really expensive either way, and are there techniques to access it only every X interval of time, saving the data locally in the meantime?
I know that my question is really general, and that it always depends on what we need and want.
My question came up because I made a really simple login application that connects and makes one request to the database, and it takes one second (a lot!) to get the result, so how can online applications be so fast?
Thank you
Before answering this I would recommend simulating the process as much as possible and benchmarking, so you can work towards the best solution for your use case.
For example, if I have an application submitting data to a database, I simulate the submission so I can easily run multiple submissions at the same time, see what the bottleneck is, and see how it compares when using caching, replication, indexes, etc.
Reading company engineering blogs can also be helpful, as they often share success stories that support the use of a particular approach.
How expensive is database access?
Accessing a database can be a pretty quick operation
SELECT 1; // 0.005 Secs :D
However, there are situations that can lead to poor performance (slow reads, writes and updates), but there are some relatively simple ways to combat this:
Indexes
The best way to improve the performance of SELECT operations is to
create indexes on one or more of the columns that are tested in the
query. The index entries act like pointers to the table rows, allowing
the query to quickly determine which rows match a condition in the
WHERE clause, and retrieve the other column values for those rows.
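As a hedged illustration, here is a minimal JDBC sketch that adds an index to a hypothetical players table; the table, column and connection details are assumptions for the example, not something taken from the question.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddIndex {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; adjust for your own MySQL instance.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/game", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Index the column used in the WHERE clause so lookups no longer scan the table.
            stmt.executeUpdate("CREATE INDEX idx_players_name ON players (name)");
        }
    }
}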
Replication
spreading the load among multiple slaves to improve performance. In
this environment, all writes and updates must take place on the master
server. Reads, however, may take place on one or more slaves. This
model can improve the performance of writes (since the master is
dedicated to updates), while dramatically increasing read speed across
an increasing number of slaves.
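A minimal Java sketch of that read/write split, assuming the MySQL Connector/J driver and invented host names for the master and one slave: writes go to the master, reads go to the replica.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReadWriteSplit {
    // Hypothetical hosts: all writes go to the master, reads go to a replica.
    private static final String MASTER_URL = "jdbc:mysql://db-master:3306/game";
    private static final String SLAVE_URL  = "jdbc:mysql://db-slave1:3306/game";

    public static void savePosition(int playerId, double x, double y) throws Exception {
        try (Connection master = DriverManager.getConnection(MASTER_URL, "user", "password");
             PreparedStatement ps = master.prepareStatement(
                     "UPDATE players SET x = ?, y = ? WHERE id = ?")) {
            ps.setDouble(1, x);
            ps.setDouble(2, y);
            ps.setInt(3, playerId);
            ps.executeUpdate();
        }
    }

    public static double[] loadPosition(int playerId) throws Exception {
        try (Connection slave = DriverManager.getConnection(SLAVE_URL, "user", "password");
             PreparedStatement ps = slave.prepareStatement(
                     "SELECT x, y FROM players WHERE id = ?")) {
            ps.setInt(1, playerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? new double[] {rs.getDouble("x"), rs.getDouble("y")} : null;
            }
        }
    }
}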
How often do we access it?
If you are solely using a database, you will access it every time you need to save a player's position and every time you need to find out their position.
This is where you would explore options to avoid hitting the database:
Memory caches such as redis or memcache (see the sketch after this list)
Replication - Only read from slaves
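redis and memcache are out-of-process servers, so as a stand-in here is a deliberately simplified in-process sketch of the same idea in Java (the class and the TTL are made up for the example): keep recently read values in memory with an expiry time and fall back to the database only on a miss.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExpiringCache<K, V> {
    private static final class Entry<T> {
        final T value;
        final long expiresAtMillis;
        Entry(T value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public V get(K key) {
        Entry<V> e = entries.get(key);
        if (e == null || e.expiresAtMillis < System.currentTimeMillis()) {
            entries.remove(key);
            return null; // miss: the caller falls back to the database
        }
        return e.value;
    }

    public void put(K key, V value) {
        entries.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }
}

Used cache-aside: check the cache first, query the database only on a miss, then populate the cache so the next read is served from memory.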
It depends on your design and requirements.
1) Most applications manage connection pools to minimize connection initialization time (see the pooling sketch after this list).
2) Most ORM frameworks have an external cache to improve read performance. So if you do heavy reading in your application, don't worry about storing data locally; the cache will be effective in this case.
3) Storing data locally, whether in a file or some other format, also adds its own performance cost.
4) If you keep the data in main memory, game performance will obviously be better; that's why gamers prefer high-end graphics cards and lots of RAM.
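A minimal pooling sketch using the HikariCP library; the JDBC URL, credentials and pool size are placeholders. The pool is created once at startup, and connections are borrowed and returned instead of being opened for every request.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class Pool {
    private static final HikariDataSource DATA_SOURCE;

    static {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/game"); // placeholder URL
        config.setUsername("user");
        config.setPassword("password");
        config.setMaximumPoolSize(10); // tune for your workload
        DATA_SOURCE = new HikariDataSource(config);
    }

    public static void savePosition(int playerId, double x, double y) throws Exception {
        // Borrowing from the pool skips the expensive connection handshake.
        try (Connection conn = DATA_SOURCE.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "UPDATE players SET x = ?, y = ? WHERE id = ?")) {
            ps.setDouble(1, x);
            ps.setDouble(2, y);
            ps.setInt(3, playerId);
            ps.executeUpdate();
        }
    }
}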
For most databases there is the option of batch insertion. Even a small per-request overhead will obviously accumulate if you make too many round trips over time, and single insertions carry a greater overhead per row than a batch. The only question is how often... You should test how often you want to flush a batch and how much data you should buffer locally before doing a batch insertion.
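As a hedged sketch of that idea with plain JDBC (the table, columns and connection details are invented for the example): buffer position updates locally and flush them in one round trip with addBatch/executeBatch.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchInsert {
    public static void flushPositions(List<double[]> buffered) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/game", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO position_log (player_id, x, y) VALUES (?, ?, ?)")) {
            conn.setAutoCommit(false);            // one transaction for the whole batch
            for (double[] p : buffered) {
                ps.setInt(1, (int) p[0]);
                ps.setDouble(2, p[1]);
                ps.setDouble(3, p[2]);
                ps.addBatch();                    // queue the row locally
            }
            ps.executeBatch();                    // single round trip to the server
            conn.commit();
        }
    }
}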
I have inherited a legacy Delphi application that uses ADO to connect to SQL Server.
The application has a notion of a "Global Connection" -- that is a single connection that it opens at the start, and then keeps open all throughout the running of the application (which can be days, weeks, or longer....)
So my question is this: Should I keep this way of doing things or should I switch to a "connect-query-disconnect" mode of doing things? Does it matter?
Switching would be a non-trivial task, but I'll do it if it means better performance, data management, etc.
Well, it depends on what you're expecting to get out of it, and what kind of application it is.
There's nothing in particular wrong with using a single long-running connection, as long as the application can gracefully handle disconnections and recover or log/notify when it can't reconnect.
The problem with a connect-query-disconnect setup is that you're adding the overhead of connecting and disconnecting on every query. That's going to slow things down, and in an interactive GUI application users may notice the additional overhead. You also have to make sure that authorization is transparently handled if it isn't already.
At the same time, there may be interactive performance gains to be had if you can push all the queries off onto background threads and asynchronously update the GUI. If contention appears because the queries are serialized, you can migrate to a connection-pool system fairly readily as well and improve things even more. This has a fairly high complexity cost to it though, so now you're looking at balancing the gains against the work involved.
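The question here is about Delphi/ADO, but the background-query idea can be sketched in Java for illustration (the SQL Server URL, query and thread count are assumptions): run the query on a worker thread so the GUI thread never blocks on the database, and hand only the finished result back to the UI.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BackgroundQueries {
    // A small pool of worker threads keeps the GUI thread free of database waits.
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);

    public static CompletableFuture<String> fetchCustomerName(int customerId) {
        // The caller updates the GUI in a thenAccept(...) callback on the UI thread.
        return CompletableFuture.supplyAsync(() -> {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://db-host;databaseName=app", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, WORKERS);
    }
}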
Right now, my ultimate response is "if it ain't broke, don't fix it." Changes along the lines you propose are a lot of work -- how much do the users of this application stand to gain? Are there other problems to solve that might benefit them more?
Edit: Okay, so it's broke. Well, slow at least, which is all the same to me. If you've ruled out problems with the SQL Server itself, and the queries are performing as fast as they can (i.e. DB schema is sane, the right indexes are available, queries aren't completely braindead, server has enough RAM and fast enough I/O, network isn't flaky, etc.), then yes, it's time to find ways to improve the performance of the app itself.
Simply moving to a connect-query-disconnect is going to make things worse, and the more queries you're issuing the bigger the drop off is going to be. It sounds like you're going to need to rearchitect the app so that you can run fewer queries, run them in the background, cache more aggressively on the client, or some combination of all 3.
Don't forget that making the clients perform better means server-side performance gets more important, since the server will probably be handling a higher load if clients start making multiple connections and issuing multiple queries in parallel.
As Mr. Frazier said before, the single global connection is not bad per se.
If you intend to change it, first determine WHAT the problem is. Let's look at some scenarios:
1
Some screens (IOW: a set of 1..n forms operating on a business entity) are slow. Possible causes:
insufficient filtering, resulting in a plethora of records being pulled from the database unnecessarily.
the number of records is OK, but it takes too long to render them. Solution: faster controls or intelligent rendering (e.g. virtual list views).
too many queries each time you open a screen. Possible solutions: use TClientDatasets (or any in-memory dataset) to hold infrequently modified lookup tables. A more sophisticated cache for larger tables, or opening those datasets in other threads, can improve response times.
Scrolling through datasets with bound controls can be slow (just a reminder, because those little details are easily forgotten).
2
The whole app simply slows down. Checklist:
Are the network cards OK? A few malfunctioning network cards can wreak havoc even on well-structured networks, as they create unnecessary noise on the line.
[MSSQL DBA HAT ON] Next in the line of attack is SQL Server. Ask the DBA to trace blocks and deadlocks. Log slow queries and work on speeding them up. This relates directly to #1.1 and #1.3.
Detect whether some naive developer has done SELECTs inside transactions. In read committed isolation it's just overhead, as it creates more network traffic. Open the query, retrieve the data, and close the dataset.
Review the database schema, if you can.
Are any data-bound operations on a bulk of records (say, repricing some/most/all products) being done in the app? Move them into a stored procedure or refactor the operation into a single query; it will be much faster and will reduce the load on the entire server.
Extensive operations on a group of records? Learn how to do those operations at once on the server instead of record by record. See an examination of the most-used alternatives in MSSQL MVP Erland Sommarskog's article on arrays and lists in MSSQL.
Beware of queries with a WHERE like: WHERE SomeFunction(table1.blabla) = @SomeParam (see the sketch after this list). Most of the time these will not use an index, forcing a read of the entire table to select the desired data - painful if it is a big table. Indexing a persisted computed column can work miracles... [MSSQL HAT OFF]
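To make that last point concrete, here is a hedged Java/JDBC sketch with invented table and column names: the first query wraps the column in a function and defeats the index, the second compares the bare column (or a persisted, indexed computed column) and lets SQL Server seek.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SargablePredicates {
    // Non-sargable: wrapping the column in a function forces SQL Server to evaluate it for
    // every row, so an index on Name is ignored and the whole table is read.
    static Integer slowLookup(Connection conn, String name) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT Id FROM Products WHERE UPPER(Name) = UPPER(?)")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt("Id") : null;
            }
        }
    }

    // Sargable: the bare column (or a persisted, indexed computed column holding UPPER(Name))
    // is compared directly, so the optimizer can use an index seek. With the default
    // case-insensitive collation the comparison is equivalent anyway.
    static Integer fastLookup(Connection conn, String name) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT Id FROM Products WHERE Name = ?")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt("Id") : null;
            }
        }
    }
}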
That's what I can think of without a little more detail... ;-)
Database connections are time-consuming resources to create, and the rule of thumb should be: create as few as possible and reuse as much as possible. That's why some other technologies have database connection pools, which are typically established at application/service startup and then kept as long as possible and shared among threads.
From your comment, the application has performance issues, but without more details it's difficult to make any recommendation.
You should try to nail down what is slow - are all queries slow, or just some specific ones?
If just some specific ones, is there some correlation between them?
My 2 cents.
I am developing an application which involves multiple user interactivity in real time. It basically involves lots of AJAX POST/GET requests from each user to the server - which in turn translates to database reads and writes. The real time result returned from the server is used to update the client side front end.
I know optimisation is quite a tricky, specialised area, but what advice would you give me to get maximum speed of operation here - speed is of paramount importance, but currently some of these POST requests take 20-30 seconds to return.
One way I have thought about optimising it is to club POST requests together and send them out to the server as a group of 8-10, instead of firing individual requests. I am not currently using caching on the database side, and don't really have much knowledge of what it is or whether it would be beneficial in this case.
Also, do the AJAX POST and GET requests incur the same overhead in terms of speed?
Rather than continuously hitting the database, cache frequently used data items (with an expiry time based upon how infrequently the data changes).
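One hedged way to sketch that in Java is Guava's CacheBuilder with an expireAfterWrite window; the key type, loader query and five-minute expiry below are placeholders, not anything from the question. Entries older than the expiry are re-read from the database automatically.

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class FrequentData {
    // Entries expire five minutes after being written; tune this to how often the data changes.
    private static final LoadingCache<Integer, String> USER_NAMES = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .build(new CacheLoader<Integer, String>() {
                @Override
                public String load(Integer userId) {
                    return queryUserNameFromDatabase(userId); // taken only on a miss or after expiry
                }
            });

    public static String userName(int userId) throws Exception {
        return USER_NAMES.get(userId);
    }

    private static String queryUserNameFromDatabase(int userId) {
        // ... the real JDBC lookup would go here ...
        return "user-" + userId;
    }
}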
Can you reduce your communication with the server by caching some data client side?
The purpose of GET is as its name implies - to GET information. It is intended to be used when you are reading information to display on the page. Browsers will cache the result from a GET request and if the same GET request is made again then they will display the cached result rather than rerunning the entire request. This is not a flaw in the browser processing but is deliberately designed to work that way so as to make GET calls more efficient when the calls are used for their intended purpose. A GET call is retrieving data to display in the page and data is not expected to be changed on the server by such a call and so re-requesting the same data should be expected to obtain the same result.
The POST method is intended to be used where you are updating information on the server. Such a call is expected to make changes to the data stored on the server and the results returned from two identical POST calls may very well be completely different from one another since the initial values before the second POST call will be different from the initial values before the first call because the first call will have updated at least some of those values. A POST call will therefore always obtain the response from the server rather than keeping a cached copy of the prior response.
Ref.
The optimization tricks you'd use are generally the same tricks you'd use for a normal website, just with a faster turn around time. Some things you can look into doing are:
Prefetch GET requests that have high odds of being loaded by the user
Use a caching layer in between as Mitch Wheat suggests. Depending on your technology platform, you can look into memcache; it's quite common and there are libraries for just about everything (see the sketch after this list)
Look at denormalizing data that is going to be queried at a very high frequency. Assuming that reads are more common than writes, you should get a decent performance boost if you move the workload to the write portion of the data access (as opposed to adding database load via joins)
Use delayed inserts to give priority to writes and let the database server optimize the batching
Make sure you have intelligent indexes on the table and figure out what benefit they're providing. If you're rebuilding the indexes very frequently due to a high write:read ratio, you may want to scale back the queries
Look at retrieving data in more general queries and filtering the data when it makes it to the business layer of the application. MySQL (for instance) uses a very specific query cache that matches against a specific query. It might make sense to pull all results for a given set, even if you're only going to be displaying x%.
For writes, look at running asynchronous queries to the database if it's possible within your system. Data synchronization doesn't have to be instantaneous, it just needs to appear that way (most of the time)
Cache common pages on disk/memory in a fully formatted state so that the server doesn't have to do much processing of them
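As a hedged illustration of the caching-layer and page-caching points above, here is a minimal sketch using the spymemcached client (the server address, key and 300-second expiry are assumptions): read from memcache first and rebuild from the database only on a miss.

import net.spy.memcached.MemcachedClient;
import java.io.IOException;
import java.net.InetSocketAddress;

public class PageCache {
    // One client per application; assumes a memcached daemon on the default local port.
    private static final MemcachedClient CACHE;

    static {
        try {
            CACHE = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String renderedPage(String pageKey) {
        String html = (String) CACHE.get(pageKey);
        if (html == null) {
            html = buildPageFromDatabase(pageKey); // expensive path, taken only on a miss
            CACHE.set(pageKey, 300, html);         // cache the formatted page for 300 seconds
        }
        return html;
    }

    private static String buildPageFromDatabase(String pageKey) {
        // ... queries and templating would go here ...
        return "<html>...</html>";
    }
}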
All in all, there are lots of things you can do (and they generally come down to general development practices on a more bite sized scale).
The common tuning tricks would be:
- use more indexing
- use less indexing
- use more or less caching on filesystem, database, application, or content
- provide more bandwidth or more cpu power or more memory on any of your components
- minimize the overhead in any kind of communication
Of course an alternative would be to:
0 develop a set of tests, preferably automated, that can determine if your application works correctly.
1 measure the 'speed' of your application.
2 determine how fast it has to become
3 identify the source of the performance problems:
typical problems are: network throughput, file i/o, latency, locking issues, insufficient memory, cpu
4 fix the problem
5 make sure it is actually faster
6 make sure it is still working correctly (hence the tests above)
7 return to 1
Have you tried profiling your app?
Not sure what framework you're using (if any), but frankly from your questions I doubt you have the technical skill yet to just eyeball this and figure out where things are slowing down.
Bluntly put, you should not be messing around with complicated ways to try to solve your problem, because you don't really understand what the problem is. You're more likely to make it worse than better by doing so.
What I would recommend you do is time every step. Most likely you'll find that either
you've got one or two really long running bits or
you're running a shitton of queries because of an n+1 error or the like
When you find what's going wrong, fix it. If you don't know how, post again. ;-)
All things being equal, and in the most simple form, which is faster?
1.) A call to a web service method
2.) A call to a database
For example, assume that you have a simple web service that just returns an integer that is calculated in X time. You also have a database that, when queried in the right way, also takes X time to calculate the answer. (So the compute time is the same in both cases.) In both cases, assume the amount of data in both directions is the same, say, a single 32-bit integer, for simplicity.
Thus far, the calculation times of both the web service and the database are exactly the same.
The environment is 1 application server, where the app resides, and 1 other server that is holding both the web service and the database. There is nothing else going on in the environment other than the application calling either the web service or database repeatedly. This all within one single LAN, so any network latency is equal.
From an application, which will be faster, the call to the database, or the call to the web service?
What I am trying to isolate, I guess, is which is more heavy-weight. Does the set up, open, close, tear down of a database connection end up slower than that for a web service, or is it the same? Additionally, if there are other things, such as parsing the result from a web service, how do they affect the speed?
O(1) doesn't refer to any length of time. A single operation could take .001 ms on a webservice and 100 seconds in a database and they both could be using O(1) functions:
http://en.wikipedia.org/wiki/Big_O_notation
It's hard to know quite what you're asking. If you're asking whether accessing a local database is generally faster than accessing a similar service over the internet, then I expect that, generally, the answer is that the local database will be faster. The call over the internet to the web service has a lot of overhead, and communication over the internet is relatively slow. Even on a slow computer a database can perform many thousands of simple queries per second. Contrast that with access over the internet, where you'd be lucky to get 50 round-trip requests per second, not even accounting for the time it takes to perform the requested operation on the server.
If you're asking whether a server on the web can serve data faster by avoiding a database and calculating results directly, then the answer is it depends. The call to the database in this case adds unnecessary overhead if the data in it can be easily calculated in a stand-alone function. The answer to this question doesn't really have anything to do with a "web service". Is it faster to calculate an answer in a function or to access the answer using a query on a database? As I said, the answer would depend on the complexity of the particular function you had to use, and weighing its computation time against the overhead of accessing the answer (or part of the answer) directly from a database.
In short, the answer to your question depends on what exactly you're asking. It would also probably help to know why you're asking the question. I have a suspicion that the real answer is that this probably isn't something you need to worry about, not really a practical concern unless you have a particular situation requiring optimization.
If you're concerned about comparing speed when the web service and database are both on a LAN, I'm pretty sure the overhead of the db is less than that of the web service. The application typically maintains stateful connection(s) to the db, while requests to a web service go over HTTP, which is stateless, has relatively higher overhead, and is slower. Could be wrong, though. The best answer would be to whip up a simple web service and query, and (1) measure the time it takes to retrieve results using both methods and compare, and/or (2) create an app that opens a lot of threads and do some load testing.
A caveat: If your app doesn't maintain an open connection or have access to a pool of connections with the db, then the db alternative may well be slower. Initial creation of a db connection can be relatively slow. But that shouldn't figure into things, since you should write your app so that an open connection is always maintained.
Based on practical experience, I would say that the database call is significantly faster.
It all depends on the network topology and languages you're using. If you're talking C#...my money would be on the database call being faster almost every time.
Your calls to the database server are going to be made over the native protocol. Everything is going to be optimized.
If you're calling a web service, you're going to need some mechanism to send the request to the web server, wait for the web server to respond, and then something to parse the result of the web service call back into your code.
One could say that generally, latency of the network in a web service (which will typically be over the internet) is going to be slower than the call to a database (which is typically on a LAN or something, which is faster than one's connection to the internet).
Of course, this makes a LOT of assumptions about setups/software/etc, etc which effectively reduces it to an apples and oranges comparison, which there is never a good answer for.
O(1) doesn't specify the speed, it specifies the 'growth' in time required as the underlying data gets larger. The constants are dropped from the equation. What this means is that O(N^2) can be less than O(N) for some really small N: for example, if one routine takes 2*N^2 steps and another takes 1000*N steps, the quadratic one is actually cheaper for every N below 500.
A web service is a way to connect to some functionality. Besides the network latency, the real time is bound by what the service is actually doing. There could be a database underneath for example. If it is something that just returns an Integer, the computational time is mostly trivial, the request is bounded by the network.
A database needs to parse the query, build a query tree, optimize it, then apply some search algorithms against a series of caches and files. If you just plopped an Integer into a trivial table, or made a table-less SQL call, then fetching the data is probably trivial; it's the whole transactional packaging that will eat CPU.
Can you get a packet back and forth to a server before you can parse trivial SQL and punch back a tabled result? Mostly, these days, I'd say it's a toss-up. Some networks are faster than others, while some databases and servers are pretty good. Nothing is certain.
In general, is a web service faster than a database? Yes, if and only if the service is trivial (if it's hiding a database, then it's obviously just additional time). Databases are big bulky engines, and while they've gotten much faster over the years, their base level of transactional integrity demands an awful lot of minimum CPU usage. They're slower because they are doing so much more work. Contrast that with some explicit minimal computation hidden behind network access. A fiber or gigabit network can move data rapidly. It's just so much less work to get accomplished.
Of course, the reason we don't replace databases with custom-written web services is time. It takes too long to write them, and then keep them up to date. Way more effort than just slamming the data into a database and accepting its performance.
Paul.
IMHO I would say the database call would be faster hands down. I say this because there is much less overhead. With the verbosity of the HTTP protocol and the SOAP markup incurred, you have a lot more bloat in your data, and that bloat has an extra cost for packaging and unpacking. With a stored procedure call you could use an output parameter to return a single int instead of a result set, to make it even lighter.
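A hedged JDBC sketch of that output-parameter idea; the procedure name GetActiveUserCount and the connection details are made up for illustration. The server returns one int through an OUT parameter rather than a result set.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class OutputParamCall {
    public static int activeUserCount() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://db-host;databaseName=app", "user", "password");
             CallableStatement call = conn.prepareCall("{call GetActiveUserCount(?)}")) {
            call.registerOutParameter(1, Types.INTEGER); // a single int out, no result set
            call.execute();
            return call.getInt(1);
        }
    }
}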
Algorithmic complexity is just one variable that impacts the overall performance of a system. Other factors might include network latency or network bandwidth, especially when the size of the returned data is different.
If you run the same O(1) algorithm on a local machine, you will get the results faster than if you run the algorithm on a machine on another continent and need the same results sent over the network.
Other factors might include raw CPU speed if the calls are done on physically different machines.
That's why premature optimisation is the root of all evil.
EDIT:
I'd say it depends even more now on the details of the system, i.e., what database software you are using, or whether or not your web service is reading data from a static web page or dynamically generating the data.
But I am beginning to lose sight of why you are asking the question. You seem to say that both methods take the same amount of time. So if they take the same amount of time, how can you ask which is faster? Clearly they are equally fast. You need to tell us more about how and when they stop taking the same amount of time.
If we are assuming that you are communicating to a different server for both the web and database calls, wouldn't they be pretty much the same, since both requests are transferred through TCP/IP? The only thing then that could be compared is how big the actual results are that are sent back in terms of bits across the wire.
Further to my previous question about the Optimal RAID setup for SQL server, could anyone suggest a quick and dirty way of benchmarking the database performance on the new and old servers to compare them? Obviously, the proper way would be to monitor our actual usage and set up all sorts of performance counters and capture the queries, etc., but we are just not at that level of sophistication yet and this isn't something we'll be able to do in a hurry. So in the meanwhile, I'm after something that would be a bit less accurate, but quick to do and still better than nothing. Just as long as it's not misleading, which would be worse than nothing. It should be SQL Server specific, not just a "synthetic" benchmark. It would be even better if we could use our actual database for this.
Measure the performance of your application itself with the new and old servers. It's not necessarily easy:
Set up a performance test environment with your application on (depending on your architecture this may consist of several machines, some of which may be able to be VMs, but some of which may not be)
Create "driver" program(s) which give the application simulated work to do
Run batches of work under the same conditions - remember to reboot the database server between runs to nullify effects of caching (Otherwise your 2nd and subsequent runs will probably be amazingly fast)
Ensure that the performance test environment has enough hardware machines in it to be able to load the database heavily - this may mean swapping out some VMs for real hardware.
Remember to use production-grade hardware in your performance test environment - even if it is expensive.
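A deliberately crude driver sketch in Java (the connection string, query and iteration count are placeholders): it fires the same batch of work at a server and reports the elapsed time, which is enough for a rough old-versus-new comparison when run against each box in turn.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LoadDriver {
    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0]
                : "jdbc:sqlserver://old-or-new-server;databaseName=app"; // placeholder
        int iterations = 10_000;

        long start = System.nanoTime();
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COUNT(*) FROM Orders WHERE CustomerId = ?")) { // representative query
            for (int i = 0; i < iterations; i++) {
                ps.setInt(1, i % 1000);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                }
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(iterations + " queries against " + url + " took " + elapsedMs + " ms");
    }
}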
Our database performance test cluster contains six hardware machines, several of which are production-grade, one of which contains an expensive storage array. We also have about a dozen VMs on a 7th simulating other parts of the service.
you can always insert, read, and delete a couple of million rows - it's not a realistic mix of operations but it should strain the disks nicely...
Find at least a couple of the queries that are taking some time, or at least that you suspect are taking time, insert a lot of data if you don't have it already, and run the queries having set:
SET STATISTICS IO ON
SET STATISTICS TIME ON
SET STATISTICS PROFILE ON
Those should give you a rough idea of the resources being consumed.
You can also run SQL Server Profiler to get a general idea of what queries are taking a long time and how long they are taking plus other statistics. It outputs a lot of data so try to filter it down a little bit, possibly by long duration or one of the other performance statistics.