SQL Server Database Optimization Strategy - sql-server

I am starting a new project using SQL Server for a medical office. Their current database (SQL Server 2008) has over 500,000 rows spread across 15+ tables. They are currently complaining that their data entry application is very slow both to generate reports and to insert new data.
For my new system I was thinking of a two-tiered database approach: the primary SQL Server 2012 instance would contain only the most recent 3 months of rows, while a second SQL Server 2012 instance would maintain all the data for the system. That way, when users insert new data it goes into a much smaller database, and when they query recent data the query should execute much faster. The system will also have reporting, but I think the reports will have to be generated from the larger data set.
My questions are as follows:
Will a solution like this improve the overall performance of the database?
Are there any scalability concerns with this solution?
What is the best way to transfer that data between the two servers each night?
If my solution makes no sense please feel free to offer any other solutions.

Don't do this. Splitting your app into multiple databases will be a management nightmare. Plus, 500k records isn't that many, assuming that the records are of reasonable size.
Instead, go after the low-hanging fruit. Turn on logging and look at the access patterns. Which queries are slow? Figure out why. Do they lack indexes? Can the queries be simplified? Debug the problem.
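If you want a quick starting point, the plan cache DMVs will show you which statements are eating the most time. A minimal sketch using the standard DMVs (the TOP value is arbitrary):

-- Top 10 statements by total elapsed time since their plans were cached
SELECT TOP (10)
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;

From there, look at the execution plans of the worst offenders for missing indexes or scans.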
Keep in mind that sometimes throwing hardware at the problem is the right solution. If you can solve the problem with an $800 server, do it. That's a lot cheaper than your time.

To chime in: 500K records is not so big. You ought to be able to make the db work very speedily as is with some tuning.

Related

SQL Server In-Memory Tables performance for autocomplete search

I am using SQL Server and sometimes its performance is not enough.
I just want to know whether MS SQL Server can handle searches on tables with a couple of million rows with good performance (below 1 sec).
For example, I need to do an autocomplete search that should work fluently and quickly so that the user has a good experience. To achieve that, I need a fast query mechanism, and I am not sure SQL Server will deliver that performance.
I know that there are In-Memory tables which improve performance, but would that be enough?
I also thought about using Redis as the search engine, which I know is really fast, but that would mean running two databases (SQL Server and Redis), and I want to avoid that if possible.
Thanks in advance.
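(As an aside, the usual pattern for autocomplete is a prefix search that can use an index seek, which normally stays well under a second even on a few million rows. A minimal sketch, with hypothetical table and column names:

CREATE NONCLUSTERED INDEX IX_Products_Name ON dbo.Products (Name);

DECLARE @prefix nvarchar(50) = N'asp';
SELECT TOP (10) Name
FROM dbo.Products
WHERE Name LIKE @prefix + '%'   -- a leading-prefix LIKE can seek the index
ORDER BY Name;

A wildcard in the middle or at the start of the pattern forces a scan, so keep the pattern anchored to the left.)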

Index/Statistics on volatile tables

One of my applications has the following use case:
the user inputs some filters and conditions about orders (delivery date ranges, ...) to analyze
the application computes a lot of data and saves it to several support tables (potentially thousands of records for each analysis)
the application starts a report engine that uses data from these tables
when exiting, the application deletes the computed records from the support tables
I'm now analyzing how to enhance query performance by adding indexes/statistics to the support tables, and SQL Profiler suggests that I create 3-4 indexes and 20-25 statistics.
The records in the support tables are constantly created and removed: is it correct to create all these indexes/statistics, or is there a risk that they will quickly become outdated (with the only result being constant overhead to maintain them)?
DB server: SQL Server 2005+
App language: C# .NET
Thanks in advance for any hints/suggestions!
First, this seems like a good situation for a data cube. Second, yes, you should update statistics once the support tables are populated and before running your queries. You should also disable your indexes while inserting the data; a rebuild afterwards will then bring your indexes and statistics up to date in one go. Profiler these days is usually quite good at these suggestions, but test the combinations to see what actually gives the best performance gains. For a look at open source cubes, see: What are the open source tools and techniques to build a complete data warehouse platform?
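A minimal sketch of that load pattern, assuming a single support table and index whose names are purely illustrative:

-- Disable the nonclustered index so the bulk insert doesn't have to maintain it
ALTER INDEX IX_Support_OrderDate ON dbo.AnalysisSupport DISABLE;

-- ... bulk insert the computed rows for this analysis here ...

-- Rebuild re-enables the index and refreshes its statistics in one step
ALTER INDEX ALL ON dbo.AnalysisSupport REBUILD;

-- Optionally refresh column statistics (e.g. the ones Profiler suggested) as well
UPDATE STATISTICS dbo.AnalysisSupport;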

Hive vs SQL Server performance

1) I started using Hive about 2 months ago. I have the same task in Hive as in SQL. I found that Hive is slow and takes more time to execute queries, while SQL Server executes them in a few minutes or seconds.
After executing the task in Hive, when I cross-check the results in both (SQL and Hive), I find some differences in the results (not in all tables, but in some).
e.g.: I have one table with 2012 records; when I executed the task against the same table in Hive, I got 2007 records.
Why is this happening?
2) If I want to speed up execution in Hive, what should I do?
(Currently I am executing all of this on a single cluster only. If I increase the cluster size, how many nodes would I need to improve performance?)
Please suggest some solutions or good practices so that I can do this well.
Thanks.
Hive and SQL Server are not comparable in any way other than the similarity in the syntax of the query language.
While SQL Server is built to respond in real time from a single machine, Hive is for processing large data sets that may span hundreds or thousands of machines.
Hive (via Hadoop) has a lot of overhead for starting up a job.
Hive and Hadoop will not cache data in memory the way SQL Server does.
Hive has only recently added indexes, so most queries end up being a table scan.
If your dataset fits on a single computer you probably want to stick with SQL Server rather than Hive. Hive performance tuning is mostly a matter of Hadoop performance tuning, although depending on the types of queries you run there can be free performance from using the LazyBinarySerDe.
Hive does have some differences from regular SQL that may be affecting your query. Without more details I can't speculate as to why.
Ignore the "they aren't comparable in any way" comment. If it stores data, it is comparable to any other method of storing data.
But be aware that SQL Server, 13 years ago, already had 1000+ people being paid full-time to improve the product. So while that doesn't "prove" anything, it does increase one's confidence that more work = more results.
More importantly, look for any non-trivial benchmark of an open source and/or non-relational method of storing data against one of the mainstream relational databases. You won't find one. That says a lot to me. (Also, mainstream isn't strictly necessary, since the current world's fastest data engine isn't even mainstream. But if that level is needed, look at ExoSol.)
If your need is to learn to work with the technology at your job and that technology is Hive, my recommendation is to find someone who is really focused on getting the most out of Hive query performance. If there is a Hive query guru out there, find them. But if you need a lot more than what they can give you, you're using the wrong technology.
And if Hive isn't a requirement, I would avoid it and other technologies that lack the compelling business model needed to guarantee their survival past 5 years and move them out of the niche category they currently occupy (currently 20 times less popular than any mainstream data engine - https://db-engines.com/en/ranking).

Recommended database type to handle billions of records

I started working on a project that must reuse an old Microsoft SQL Server 2008 database that has a table with more than 7,000,000 records.
Queries against that table take minutes, and I was wondering whether a different type of database (i.e. not relational) would handle this better.
What do you recommend? In any case, is there a way to improve the performance of a relational database?
Thanks
UPDATE:
I am using Navicat to perform this simple query:
SELECT DISTINCT [NROCAJA]
FROM [CAJASE]
so complex stuff and subqueries are not the problem. I was also wondering whether a lack of indexes was the problem, but the table appears to be indexed.
EPIC FAIL:
The database was on a remote server!! The query actually takes 5 seconds (I still think that's a lot of time, but now the issue is different). 99% of the elapsed time was network transfer. Thanks for your answers anyway :)
7 million rows is tiny for SQL Server; it easily handles terabytes of data with proper design. More likely you have a poor design combined with missing indexes, poor hardware, and badly performing queries. Don't blame the incompetence of your database developers on SQL Server.
Profile your queries - 7 million records isn't that great a number, so chances are you're missing an index or performing complex sub-queries that are not performing well as the dataset scales.
I don't think you need to re-architect the entire system yet.
The fact that you are selecting DISTINCT could be a problem. Maybe move those distinct values into their own table to avoid the duplication.
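For the query shown above, a narrow nonclustered index on the DISTINCT column usually helps, because the engine can answer the DISTINCT from a small index scan instead of reading the whole table. A sketch (the index name is illustrative):

CREATE NONCLUSTERED INDEX IX_CAJASE_NROCAJA ON dbo.CAJASE (NROCAJA);

SELECT DISTINCT [NROCAJA] FROM [CAJASE];  -- can now be satisfied by reading only the index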

Can SQL Server 2008 handle 300 transactions a second?

In my current project, the DB is SQL 2005 and the load is around 35 transactions/second. The client is expecting more business and is planning for 300 transactions/second. Currently, even with good infrastructure, the DB is having performance issues. A typical transaction has at least one update/insert and a couple of selects.
Have you worked on any systems that handled more than 300 txn/s running on SQL 2005 or 2008? If so, what kind of infrastructure did you use, and how complex were the transactions? Please share your experience. Someone has already suggested using Teradata and I want to know if this is really needed or not. Not exactly my job, but I'm curious about how much SQL Server can handle.
It's impossible to tell without performance testing - it depends too much on your environment (the data in your tables, your hardware, the queries being run).
According to tpc.org it's possible for SQL Server 2005 to reach 1,379 transactions per second. Here is a link to a system that's done it. (There are SQL Server based systems on that site that have far more transactions... the one I linked was just the first one I looked at.)
Of course, as Kragen mentioned, whether you can achieve these results is impossible for anyone here to say.
Infrastructure needs for high-performance SQL Servers may be very different from your current setup.
But if you are currently having issues, it is very possible that the main part of your problem is bad database design and bad query design. There are many ways to write poorly performing queries, and in a high-transaction system you can't afford any of them. No SELECT *, no cursors, no correlated subqueries, no badly performing functions, no WHERE clauses that aren't sargable, and on and on. A sargability problem, for example, looks like the sketch below.
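Both queries return the same rows, but only the second one lets the optimizer seek an index on the date column (table and column names are hypothetical):

-- Non-sargable: the function wrapped around the column forces a scan
SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2012;

-- Sargable rewrite: a range predicate on the bare column can use an index seek
SELECT OrderId FROM dbo.Orders WHERE OrderDate >= '20120101' AND OrderDate < '20130101';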
The very first thing I'd suggest is to get yourself several books on SQL Server performance tuning and read them. Then you will know where your system's problems are likely to be and how to actually determine that.
An interesting article:
http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx
