Querying a huge database using cfquery - sql-server

Well, I am going to query about 4 GB of data using a cfquery. It's going to be painful to query
the whole database, as it takes a very long time to get the data back.
I tried a stored procedure when the data was 2 GB, and it wasn't really fast then either.
The data will be pulled based on a date range the user selects from an HTML page.
It has been suggested that I use data archiving in order to speed up querying the database.
Do you think I'll have to create a separate table with only the required fields and then query that newly created table?
Well, the current table is 4 GB and it is growing day by day; basically, it's a response database (it stores information coming in from somewhere
else). After doing some research, I am wondering whether writing a trigger could be an option. If I do this, then as soon as a new entry (row) is added
to the current 4 GB table, the trigger will run a SQL statement that copies the contents of the required fields into the newly created table.
This will keep happening as long as new values arrive in my original 4 GB database. A rough sketch of the trigger I have in mind is below.
Does the approach above sound good enough to tackle my problem? I have one more concern: even though I am only copying the required fields into
a new table, at some point the size of my new table will also grow, and that could likewise slow down querying the new table.
Please correct me if I am wrong somewhere.
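Here is that sketch, with hypothetical table and column names (ResponseLog standing in for the existing 4 GB table, ResponseSummary for the new narrow table):

-- Hypothetical names; only the columns the reports actually need are copied.
CREATE TRIGGER trg_ResponseLog_CopyToSummary
ON dbo.ResponseLog
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ResponseSummary (ResponseId, ResponseDate, StatusCode)
    SELECT i.ResponseId, i.ResponseDate, i.StatusCode
    FROM inserted AS i;       -- "inserted" holds the new row(s) from this INSERT
END;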
Thanks
More Information:
I am using SQL Server. Indexing is currently done but it's not effective.
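For context, a simplified version of the kind of query the page runs (hypothetical table and column names; the real one is issued from a cfquery with the user's date range passed as parameters):

SELECT ResponseId, ResponseDate, StatusCode    -- hypothetical columns
FROM dbo.ResponseLog                           -- hypothetical name for the 4 GB table
WHERE ResponseDate >= @StartDate               -- start of the range chosen on the HTML page
  AND ResponseDate < @EndDate;                 -- end of the range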

Archiving the data will be farting against thunder. The data has to travel from your database to your application. Then your application has to process it to build the chart. The more data you have, the longer that will take.
If it is really necessary to chart that much data, you might want to simply acknowledge that your app will be slow and do things to deal with it. This includes code to prevent duplicate page requests, status displays for the user, and such.

Related

Realtime Database returning deleted values [EDITED]

I deleted everything in my database. Then, I queried the data that was just deleted and I get results! How is this possible?
I am using the Realtime Database Unity SDK. For testing purposes, I want to regularly purge the whole database and populate it with new data. Imagine my surprise when my queries returned some old, deleted data. It is as if the deleted data persists in some void that can still be accessed.
I have been tinkering with this issue for days now. Here are my steps:
I'm using GetReference(item).Push().Key; which auto-generates a unique key.
I write the new item to the database with GetReference(item).SetValueAsync().
I check my Firebase console, and indeed, the data was correctly recorded.
I create a query that returns the JSON value of item. It works fine.
I delete item from the database.
I run the query again and item is returned. ITEM IS NOT SUPPOSED TO EXIST ANYMORE!
Out of curiosity, I write a query to return all the data in my database (which should be empty) and it returns every object I have created over the last few days. This is literally hundreds of items... from an empty database.
It seems like data persists for a few days after it is deleted.
Realizing this I decided to test what would happen if I manually made an object that uses an existing key from one of the deleted objects.
My query returns the new object. Yay!
I take a break and come back 15 minutes later. I run the exact query again. I get the old, deleted object and not the new one. WHAT THE HECK IS GOING ON?
At this point I am questioning whether Realtime Database is even a real database. It seems to break the rules of both consistency and integrity.
I've also considered that I might be deleting the data incorrectly. I was mostly doing it manually, through the browser. I also have tried RemoveValueAsync() and SetRawJsonValueAsync(null). Nothing seems to make a difference.
Please, please, please can someone tell me what is going on? I will be forever grateful.
EDIT: It turns out that the phantom data was coming from the cache on my device. Turning persistence off solved the problem. Apparently, performing the same query multiple times only retrieves the data from the database the first time; subsequent queries look into the cache.

SQLite record update speed improvements

I currently have trouble updating SQLite database records at scale within a healthy amount of time.
I have a small database of about 70,000 records, and some office personnel use Navicat to filter the records and make bulk edits to commit. When they try to perform an UPDATE on a large number of records in one field, everything slows to a crawl. Looking at the raw SQL, I can see that the program is using an UPDATE .. SET .. WHERE to perform the operations.
My question is: what can I do to help this query run faster? I have a SQLite auto index on the column used for matching the record to update, but I have read and searched all over and have yet to see any resolution other than what I already have in place. Updating one field in 20,000 records is literally taking close to 3 hours, regardless of whether Navicat is used or not. The whole database is only 30 MB, so I have to be doing something wrong.
All other database operations run nice and speedy. I'm not sure what's wrong and am looking for some guidance from a SQLite vet.
There are very few reasons why SQLite can perform as poorly as you describe, but I've experienced pretty much the same thing.
First, if you are performing a table-wide update on an indexed column, you're going to thrash the whole index, all while making it less useful. Delete the index before you update, then recreate the index.
Additionally, you can get a serious perf boost by changing the synchronous PRAGMA, but it comes with a non-zero amount of extra risk of corrupt data on a crash (extremely unlikely in my experience):
PRAGMA main.synchronous = OFF;   -- 'main' is the default schema name; OFF is the same as 0
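Putting the pieces together, a minimal sketch of the whole sequence, assuming a table named records with an index on match_col (hypothetical names):

PRAGMA main.synchronous = OFF;                        -- trade some crash safety for speed
DROP INDEX IF EXISTS idx_records_match_col;           -- avoid thrashing the index during the update

BEGIN TRANSACTION;                                    -- one transaction instead of one per statement
UPDATE records
   SET some_field = 'new value'
 WHERE match_col = 'some criteria';
COMMIT;

CREATE INDEX idx_records_match_col ON records(match_col);   -- rebuild the index once, at the end
PRAGMA main.synchronous = FULL;                       -- restore the default durability setting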

Not able to find the right technique to increase the performance of database retrieval

I have 2 tables (out of 12) that have millions of records, and when I retrieve data from these tables it takes a very long time. I have heard about indexing, but I think indexing is not the right approach here, because each time I need to fetch the whole record instead of just 2-3 columns of it. I did apply indexing, but it took more execution time than without indexing because I was fetching whole records.
So, what would be the right approach to use here?
I'm basing my arguments on Oracle, but similar principles probably apply to other RDBMSs. Please tag your question with the system used.
For indexing, the number of columns is mostly irrelevant; what matters more is the number of rows. But I guess you need all, or almost all, of those as well. Indexing won't help in this case, since it would just add another step to the process without reducing the amount of work getting done.
So what you seem to be doing are large table scans. Those are normally not cached, because they would basically flush all the other useful data out of the cache. So every time you select this kind of data you have to fetch it from disk, probably sending it over the wire as well. This is bound to take some time.
From what you describe, probably the best approach is to cut down on disk reads and network traffic by caching the data as near to the application as possible. Try to set up a cache on the machine running your application, possibly as part of your application. Read the data once, put it in the cache, and read it from there afterwards. An in-memory database would allow you to keep your SQL-based access path, if that is of any value for you.
Possibly try to fill the cache in the background before anybody is trying to use it.
Of course this will eat up quite some memory and you have to judge if this is feasible.
A second approach would be to tune the caching settings to make the database cache those tables in memory. But be warned that this will affect the performance of the database as a whole, and not in a positive way.
A third option might be to move your processing logic into the database. It won't reduce the amount of disk I/O, but at least you would take the network out of the loop (assuming that is part of the issue).
There are a few things you can try:
Enable or increase the query cache size of the database.
memcached at the application level will increase your performance (for sure).
Tweak your queries to get the best performance, and configure the best working indexes.
Hope it helps. I have tested all three with a MySQL database and a Django (Python) application, and they showed good results.
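For the first suggestion, a quick sketch of inspecting and resizing the MySQL query cache (MySQL 5.x only; the query cache was removed in MySQL 8.0, and the value below is just illustrative):

SHOW VARIABLES LIKE 'query_cache%';          -- see the current query cache settings
SET GLOBAL query_cache_size = 67108864;      -- 64 MB; requires the SUPER privilege
-- Depending on the version, query_cache_type may have to be enabled at server startup (my.cnf) rather than at runtime.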

Versioning a dataset in an RDBMS using initials and deltas

I'm working on a system that mirrors remote datasets using initials and deltas. When an initial comes in, it mass deletes anything preexisting and mass inserts the fresh data. When a delta comes in, the system does a bunch of work to translate it into updates, inserts, and deletes. Initials and deltas are processed inside long transactions to maintain data integrity.
Unfortunately the current solution isn't scaling very well. The transactions are so large and long running that our RDBMS bogs down with various contention problems. Also, there isn't a good audit trail for how the deltas are applied, making it difficult to troubleshoot issues causing the local and remote versions of the dataset to get out of sync.
One idea is to not run the initials and deltas in transactions at all, and instead to attach a version number to each record indicating which delta or initial it came from. Once an initial or delta is successfully loaded, the application can be alerted that a new version of the dataset is available.
This just leaves the issue of how exactly to compose a view of the dataset up to a given version from the initial and deltas. (Apple's Time Machine does something similar, using hard links on the file system to create a "view" of a certain point in time.)
Does anyone have experience solving this kind of problem or implementing this particular solution?
Thanks!
Have one writer database and several reader databases. You send writes to the one writer database and have it propagate the exact same changes to all the other databases. The reader databases will be eventually consistent, and the time to update is very fast. I have seen this done in environments that get upwards of 1M page views per day. It is very scalable. You can even put a hardware router in front of all the read databases to load balance them.
Thanks to those who tried.
For anyone else who ends up here, I'm benchmarking a solution that adds a "dataset_version_id" and "dataset_version_verb" column to each table in question. A correlated subquery inside a stored procedure is then used to retrieve the current dataset_version_id when retrieving specific records. If the latest version of the record has dataset_version_verb of "delete", it's filtered out of the results by a WHERE clause.
This approach has an average ~ 80% performance hit so far, which may be acceptable for our purposes.
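A rough sketch of the versioned read described above, with hypothetical table and column names (the real logic lives in a stored procedure that takes the target version as a parameter):

-- Hypothetical names: items is a versioned table, item_key its natural key, @target_version the requested dataset version.
SELECT t.*
FROM items AS t
WHERE t.dataset_version_id = (
        SELECT MAX(t2.dataset_version_id)       -- latest version of this record up to the target version
        FROM items AS t2
        WHERE t2.item_key = t.item_key
          AND t2.dataset_version_id <= @target_version
      )
  AND t.dataset_version_verb <> 'delete';       -- drop records whose latest version is a delete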

Importing new database table

Where I'm at, there is a main system that runs on a big AIX mainframe. To facilitate reporting and operations, there is a nightly dump from the mainframe into SQL Server, such that each of our 50-ish clients is in their own database with identical schemas. This dump takes about 7 hours to finish each night, and there's not really anything we can do about it: we're stuck with what the application vendor has provided.
After the dump into SQL Server we use it to run a number of other daily procedures. One of those procedures imports data into a kind of management reporting sandbox table, which combines records from a particularly important table across the different databases into one table that managers who don't know SQL can use to run ad-hoc reports without hosing up the rest of the system. This, again, is a business thing: the managers want it, and they have the power to see that we implement it.
The import process for this table takes a couple of hours on its own. It filters down about 40 million records spread across 50 databases into about 4 million records, and then indexes them on certain columns for searching. Even at a couple of hours it's still less than a third as long as the initial load, but we're running out of time for overnight processing, we don't control the mainframe dump, and we do control this. So I've been tasked with looking for ways to improve the existing procedure.
Currently, the philosophy is that it's faster to load all the data from each client database and then index it afterwards in one step. Also, in the interest of avoiding bogging down other important systems in case it runs long, a couple of the larger clients are set to always run first (the main index on the table is by a clientid field). One other thing we're starting to do is load data from a few clients at a time in parallel, rather than each client sequentially.
So my question is, what would be the most efficient way to load this table? Are we right in thinking that indexing later is better? Or should we create the indexes before importing the data? Should we be loading the table in index order, to avoid massive re-ordering of pages, rather than loading the big clients first? Could loading in parallel make things worse by causing too much disk access all at once, or by removing our ability to control the order? Any other ideas?
Update
Well, something is up. I was able to do some benchmarking during the day, and there is no difference at all in the load time whether the indexes are created at the beginning or at the end of the operation, but creating them up front saves the time spent building the index itself (it of course builds nearly instantly with no data in the table).
I have worked with loading bulk sets of data in SQL Server quite a bit and did some performance testing on keeping the index in place while inserting versus adding it afterwards. I found that it was BY FAR more efficient to create the index after all the data was loaded. In our case it took 1 hour to load with the index added at the end, and 4 hours to load with the index still on.
I think the key is to get the data moved as quickly as possible. I am not sure whether loading it in order really helps; do you have any stats on load time vs. index time? If you do, you could start to experiment a bit on that side of things.
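A minimal sketch of that drop-and-recreate pattern, assuming a target table dbo.ReportSandbox with a nonclustered index on ClientId (names are hypothetical):

-- Hypothetical table and index names.
DROP INDEX IX_ReportSandbox_ClientId ON dbo.ReportSandbox;    -- drop before the bulk load

INSERT INTO dbo.ReportSandbox (ClientId, RecordId, SomeColumn)
SELECT ClientId, RecordId, SomeColumn
FROM Client01.dbo.ImportantTable;                             -- repeat (or loop) per client database

CREATE NONCLUSTERED INDEX IX_ReportSandbox_ClientId
    ON dbo.ReportSandbox (ClientId);                          -- rebuild once, after all the data is in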
Loading with the indexes dropped is better as a live index will generate several I/O's for every row in the database. 4 million rows is small enough that you would not expect to get a significant benefit from table partitioning.
You could get a performance win by using bcp to load the data into the staging area and running several tasks in parallel (SSIS will do this). Write a generic batch file wrapper for bcp that takes the file path (and table name if necessary) and invoke a series of jobs in half a dozen threads with 'Execute Process' tasks in SSIS. For 50 jobs it's probably not worth trying to write a data-driven load controller process. Wrap these tasks up in a sequence container so you don't have to maintain all of the dependencies explicitly.
You should definitely drop and re-create the indexes as this will greatly reduce the amount of I/O during the process.
If the 50 sources are being treated identically, try loading them into a common table or building a partitioned view over the staging tables.
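A rough sketch of the partitioned-view idea, with hypothetical names and a CHECK constraint on ClientId so the optimizer can prune branches (one staging table per client):

CREATE TABLE dbo.Staging_Client01 (
    ClientId   int NOT NULL CHECK (ClientId = 1),
    RecordId   int NOT NULL,
    SomeColumn varchar(50) NULL,
    CONSTRAINT PK_Staging_Client01 PRIMARY KEY (ClientId, RecordId)
);
CREATE TABLE dbo.Staging_Client02 (
    ClientId   int NOT NULL CHECK (ClientId = 2),
    RecordId   int NOT NULL,
    SomeColumn varchar(50) NULL,
    CONSTRAINT PK_Staging_Client02 PRIMARY KEY (ClientId, RecordId)
);
-- CREATE VIEW must be the first statement in its batch, hence the GO.
GO
CREATE VIEW dbo.AllClients AS
    SELECT ClientId, RecordId, SomeColumn FROM dbo.Staging_Client01
    UNION ALL
    SELECT ClientId, RecordId, SomeColumn FROM dbo.Staging_Client02;
    -- ...one UNION ALL branch per client staging table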
Index at the end, yes. Also consider setting the database recovery model to BULK_LOGGED to minimize writes to the transaction log. Just remember to set it back to FULL after you've finished.
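For reference, a minimal sketch of flipping the recovery model around the load (the database name is hypothetical):

ALTER DATABASE ReportingSandbox SET RECOVERY BULK_LOGGED;   -- minimally log the bulk inserts
-- ...run the bulk load here...
ALTER DATABASE ReportingSandbox SET RECOVERY FULL;          -- restore full logging when done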
To the best of my knowledge, you are correct - it's much better to add the records all at once and then index once at the end.
