I have a Spotfire project that references several large SQL Server based tables (one has 700,000 rows with 200 columns, another has 80,000,000 rows with 10 columns, and a few others are much smaller by comparison). Currently I use information links with prompts to narrow down the data before loading it into Spotfire. I still sometimes have issues with RAM usage creeping up and random CPU spikes after the data has been loaded.
My questions are if I add indexes to the SQL tables:
Will the amount of RAM/CPU usage by Spotfire get better (lower)?
Will it help speed up the initial data load time?
Should I even bother?
I'm using SQL Server 2016 and Tibco Spotfire Analyst 7.7.0 (build version 7.7.0.39)
Thanks
If you add indexes without a logical reason, you can actually make your system slower, because indexes are constantly updated after each INSERT, UPDATE, and DELETE. You can ignore this if your DB holds static data and you rarely change its content.
You need to understand which parts of your queries consume the most resources, then create indexes accordingly (a sketch of that follows the links below).
The following URLs will help you:
https://www.liquidweb.com/kb/mysql-performance-identifying-long-queries/
https://www.eversql.com/choosing-the-best-indexes-for-mysql-query-optimization/
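For SQL Server specifically, one hedged way to find those expensive parts is the query-stats DMV, and then to index the columns the Spotfire prompts filter on. Everything below is a sketch under assumptions: BigTable, LoadDate, and the INCLUDE list are placeholder names, not the asker's actual schema.

    -- Top cached statements by total CPU (requires VIEW SERVER STATE permission).
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.total_logical_reads,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

    -- If the prompt's filter column keeps showing up, an index like this may help;
    -- BigTable, LoadDate and the INCLUDE list are hypothetical names.
    CREATE NONCLUSTERED INDEX IX_BigTable_LoadDate
        ON dbo.BigTable (LoadDate)
        INCLUDE (Col1, Col2);   -- cover the columns the information link actually selects

Note that an index mainly speeds up the WHERE clause of the information link on the server side; once the rows are inside Spotfire, its RAM and CPU usage depend on how many rows and columns you loaded, not on the indexes.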
We have a web service that pumps data into 3 database tables and a web application that reads that data in aggregated format in a SQL Server + ASP.Net environment.
So much data arrives in the database tables, and so much data is read from them at such high velocity, that the system has started to fail.
The tables have indexes on them, one of them is unique. One of the tables has billions of records and occupies a few hundred gigabytes of disk space; the other table is a smaller one, with only a few million records. It is emptied daily.
What options do I have to eliminate the obvious problem of simultaneously reading from and writing to multiple database tables?
I am interested in every optimization trick, although we have tried every trick we came across.
We don't have the option to install SQL Server Enterprise edition to be able to use partitions and in-memory-optimized tables.
Edit:
The system is used to collect fitness tracker data from tens of thousands of devices and to display data to thousands of them on their dashboard in real-time.
The requirements and specifics here are way too broad to give a concrete answer, but one suggestion would be to set up a second database and do log shipping over to it. The original db would then be the "write" database and the new db would be the "read" database.
Cons
- Disk space
- The read db would be out of date by the length of time the log transfer takes
Pros
- You could possibly drop some of the indexes on the "write" db, which could increase performance
- You could then summarize the tables in the "read" database to increase query performance (sketched below)
https://msdn.microsoft.com/en-us/library/ms187103.aspx
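As a rough illustration of the summarizing idea above: a log-shipped standby is read-only, so the pre-aggregated table would sit in a separate, writable reporting database that a nightly job refreshes from the standby copy. All object names here (StandbyDb, ReportingDb, TrackerReadings, DailyActivitySummary) are invented for the sketch.

    -- Rebuild the small summary table the dashboards read from.
    TRUNCATE TABLE ReportingDb.dbo.DailyActivitySummary;

    INSERT INTO ReportingDb.dbo.DailyActivitySummary (DeviceId, ActivityDate, ReadingCount, TotalSteps)
    SELECT DeviceId,
           CAST(ReadingTime AS date) AS ActivityDate,
           COUNT(*)                  AS ReadingCount,
           SUM(Steps)                AS TotalSteps
    FROM StandbyDb.dbo.TrackerReadings   -- the log-shipped, read-only copy
    GROUP BY DeviceId, CAST(ReadingTime AS date);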
Here are some ideas, some more complicated than others; how useful they are depends heavily on the usage pattern, which isn't fully described in the question. Disclaimer: I am not a DBA, but I have worked with some great ones on my DB projects.
[Simple] More system memory always helps
[Simple] Use multiple files for tempdb (one filegroup, one file for each core on your system; even if a query runs entirely in memory, it can still block on the number of I/O threads)
[Simple] Use the SIMPLE recovery model instead of FULL
[Simple] Write transaction logs to a separate spindle from the rest of the data
[Complicated] Split your data into separate tables yourself, then union them in your queries.
[Complicated] Try and put data which is not updated into a separate table so static data indices don't need to be rebuilt.
[Complicated] If possible, make sure you are doing append-only inserts (auto-incrementing PK/clustered index should already be doing this). Avoid updates if possible, obviously.
[Complicated] If queries don't need the absolute latest data, change read queries to use WITH NOLOCK on tables and remove row and page locks from indices. You won't get incomplete rows, but you might miss a few rows if they are being written at the same time you are reading.
[Complicated] Create separate filegroups for table data and index data. Place those filegroups on separate disk spindles if possible. SQL Server has separate I/O threads for each file so you can parallelize reads/writes to a certain extent.
Also, make sure all of your large tables are in separate filegroups, on different spindles as well.
[Complicated] Remove inserts with transactional locks
[Complicated] Use bulk-insert for data
[Complicated] Remove unnecessary indices; prefer included columns over additional key columns if sorting isn't required on them (see the sketch after this list)
That's kind of a generic list of things I've done in the past on various DB projects I've worked on. Database optimizations tend to be highly specific to your situation... which is why DBAs have jobs. Some of the 'complicated' answers could be simple if your architecture already supports them.
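To make a few of the items above concrete (the multiple tempdb files, the included columns, and WITH (NOLOCK)), here is a hedged sketch; the file path, index name, and the TrackerReadings table are assumptions for illustration, not objects from the question.

    -- Add an extra tempdb data file (repeat per core as needed; path and sizes are made up).
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2,
                  FILENAME = 'T:\tempdb\tempdev2.ndf',
                  SIZE = 4GB,
                  FILEGROWTH = 512MB);

    -- Included columns live only in the index leaf level, so they can cover a query
    -- without widening the key that has to stay sorted.
    CREATE NONCLUSTERED INDEX IX_Readings_Device_Time
        ON dbo.TrackerReadings (DeviceId, ReadingTime)
        INCLUDE (Steps, HeartRate);   -- returned by the query, never sorted or filtered on

    -- Reads that can tolerate missing or extra in-flight rows can skip shared locks.
    SELECT DeviceId, SUM(Steps) AS TotalSteps
    FROM dbo.TrackerReadings WITH (NOLOCK)
    GROUP BY DeviceId;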
We use SQL Server 2008 Web Edition on a Windows 2012 R2 server (32 GB RAM) to store data for an ASP.NET based web application. There are several databases with news tables and different views which we query regularly (SqlDataReader, LINQ to SQL) with different joins and filter conditions. The queries themselves are long and domain-specific, so I'll skip an example.
So far everything worked fine.
Now we had to change such a query and extend it with a simple OR condition.
The result was that the number of reads and writes in the TempDB increased dramatically. Dramatically means 1,000 writes of more than 100 MB per minute, which results in a current total tempdb file size of 1.5 GB.
If we remove the OR filter statement from the original query the TempDB file I/O normalizes instantly.
However, we do not have a clue what's going on within the TempDB. We ran the Query Analyzer several times and compared the results, but its index optimization recommendations were only related to other databases' stats and did not have any effect.
How would you narrow down this issue? Has anyone else experienced such behavior in the past? Is it likely to be a problem with the news query itself, or is it possible that we simply have to change some TempDB database properties to improve its I/O performance, e.g. autogrowth?
Start by analyzing your execution plans and running your queries with statistics enabled (use the Profiler). The problem is not in the tempdb but in your queries. You will then see where you select too many rows, which are temporarily stored in tempdb. Then you can change the queries or add the index you are missing.
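A hedged sketch of what that looks like in practice; dbo.News and its columns are stand-ins for the real news query.

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Placeholder for the actual news query:
    SELECT n.Title, n.PublishedAt
    FROM dbo.News AS n
    WHERE n.CategoryId = 5 OR n.IsFeatured = 1;   -- the newly added OR condition

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

High reads against "Worktable" in the STATISTICS IO output, or Spool/Sort/Hash operators in the actual execution plan, point to intermediate results being written to tempdb; rewriting the OR as a UNION of two independently indexable predicates is a common fix worth testing.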
One of my applications has the following use case:
- the user inputs some filters and conditions about orders (delivery date ranges, ...) to analyze
- the application computes a lot of data and saves it in several support tables (potentially thousands of records for each analysis)
- the application starts a report engine that uses data from these tables
- when exiting, the application deletes the computed records from the support tables
I'm currently analyzing how to enhance query performance by adding indexes/statistics to the support tables, and the SQL Profiler suggests creating 3-4 indexes and 20-25 statistics.
The records in the support tables are constantly created and removed: is it correct to create all these indexes/statistics, or is there a risk that they will quickly become outdated (with the only result being a constant overhead for maintaining indexes/statistics)?
DB server: SQL Server 2005+
App language: C# .NET
Thanks in advance for any hints/suggestions!
First, this seems like a good situation for a data cube. Second, yes, you should update stats once the support tables are populated, before running your query. You should disable your indexes when inserting the data; then the rebuild command will bring your indexes and stats up to date in one go. Profiler these days is usually quite good at these suggestions, but test the combinations to see what actually gives the best performance gains. To look at open-source cubes, see: What are the open source tools and techniques to build a complete data warehouse platform?
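A hedged sketch of that disable/rebuild pattern; the index and table names are invented, and only nonclustered indexes should be disabled this way (disabling the clustered index makes the table unreadable).

    -- Before populating the support table:
    ALTER INDEX IX_Support_OrderDate ON dbo.SupportResults DISABLE;

    -- ... bulk insert of the computed analysis rows here ...

    -- After populating: rebuild, which also refreshes the index's statistics.
    ALTER INDEX IX_Support_OrderDate ON dbo.SupportResults REBUILD;

    -- Column statistics that are not tied to an index can be refreshed explicitly.
    UPDATE STATISTICS dbo.SupportResults WITH FULLSCAN;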
I am starting a new project using SQL Server for a medical office. Their current database (SQL Server 2008) has over 500,000 rows spanning 15+ tables. Currently they are complaining that their data entry application is very slow at generating reports and inserting new data.
For my new system I was thinking of developing a two-tiered database approach where the primary SQL Server 2012 instance would only contain 3 months' worth of rows and a second SQL Server 2012 instance would maintain all the data for the system. This way, when users insert new data it will be entered into a much smaller system, and when they query recent data the query should execute much faster. This system will also have reporting, but I think the reports will have to be generated from the larger data set.
My questions are as follows:
Will a solution like this improve the overall performance of the database?
Are there any scalability concerns with this solution?
What is the best way to transfer that data between the two servers each night?
If my solution makes no sense please feel free to offer any other solutions.
Don't do this. Splitting your app into multiple databases will be a management nightmare. Plus, 500k records isn't that many, assuming that the records are of reasonable size.
Instead, go after the low-hanging fruit. Turn on logging and look at the access patterns. Which queries are slow? Figure out why. Do they lack indexes? Can the queries be simplified? Debug the problem.
Keep in mind that sometimes throwing hardware at the problem is the right solution. If you can solve the problem with an $800 server, do it. That's a lot cheaper than your time.
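As one way to go after that low-hanging fruit on SQL Server, the missing-index DMVs can point at candidate indexes for the slow report and insert paths; treat the output as hints to review, not as commands to apply blindly.

    SELECT TOP (10)
           mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
         ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
         ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.avg_user_impact * migs.user_seeks DESC;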
To chime in: 500K records is not so big. You ought to be able to make the db work very speedily as is with some tuning.
What is the fastest method to fill a database table with 10 million rows? I'm asking about the technique, but also about any specific database engine that would allow doing this as fast as possible. I'm not requiring this data to be indexed during this initial data-table population.
Using SQL to load a lot of data into a database will usually result in poor performance. In order to do things quickly, you need to go around the SQL engine. Most databases (including Firebird I think) have the ability to backup all the data into a text (or maybe XML) file and to restore the entire database from such a dump file. Since the restoration process doesn't need to be transaction aware and the data isn't represented as SQL, it is usually very quick.
I would write a script that generates a dump file by hand, and then use the database's restore utility to load the data.
After a bit of searching I found FBExport, that seems to be able to do exactly that - you'll just need to generate a CSV file and then use the FBExport tool to import that data into your database.
The fastest method is probably running an INSERT ... SELECT statement. I've generated test data to populate tables from other databases, and even from the same database, a number of times. But it all depends on the nature and availability of your own data. In my case I had enough rows of collected data that a few SELECT/INSERT routines with random row selection, applied half-cleverly against real data, yielded decent test data quickly. In some cases where the table data was uniquely identifying, I used intermediate tables and frequency-distribution sorting to eliminate things like uncommon names (I dropped instances where a count with GROUP BY was less than or equal to 2).
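A rough, hedged sketch of that INSERT ... SELECT approach on SQL Server, multiplying an existing sample against a generated numbers set; every table and column name here is illustrative only.

    INSERT INTO dbo.TargetTable (CustomerName, OrderDate, Amount)
    SELECT s.CustomerName,
           DATEADD(DAY, n.number, s.OrderDate),   -- vary the dates so the copies differ
           s.Amount
    FROM dbo.SourceSample AS s
    CROSS JOIN (SELECT TOP (1000)
                       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS number
                FROM sys.all_objects) AS n;       -- 1,000 copies of the sample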
Also, Red Gate actually provides a utility to do just what you're asking. It's not free and I think it's SQL Server-specific, but their tools are top notch. Well worth the cost. There's also a free trial period.
If you don't want to pay for their utility, you could conceivably build your own pretty quickly. What they do is not magic by any means. A decent developer should be able to knock out a similarly featured (though alpha/hardcoded) version of the app in a day or two...
You might be interested in the answers to this question. It looks at uploading a massive CSV file to a SQL Server (2005) database. For SQL Server, it appears that an SSIS (DTS) package is the fastest way to bulk import data into a database.
It entirely depends on your DB. For instance, Oracle has something called direct path load (http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch09.htm), which effectively disables indexing, and if I understand correctly, builds the binary structures that will be written to disk on the -client- side rather than sending SQL over.
Combined with partitioning and rebuilding indexes per partition, we were able to load a 1 billion row (I kid you not) database in a relatively short order. 10 million rows is nothing.
Use MySQL or MS SQL and built-in functions to generate records inside the database engine. Or generate a text file (in a CSV-like format) and then use bulk copy functionality (a SQL Server example follows below).
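For the bulk copy suggestion, a hedged SQL Server example; the file path, table name, and options are assumptions, and bcp or an SSIS package would do the same job.

    BULK INSERT dbo.TargetTable
    FROM 'C:\data\rows.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        TABLOCK,               -- allows minimal logging under simple/bulk-logged recovery
        BATCHSIZE       = 100000
    );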