SQL Server license choice optimization - sql-server

I am trying to figure out which SQL Server licensing option is best to use.
I have done my research but still cannot settle on the best option for my application; here is what I found.
Enterprise - Operating system maximum in all aspects, very costly.
Standard - Limited to lesser of 4 sockets or 24 cores, 128 GB RAM limitation. Much cheaper than Enterprise edition but still fairly pricey.
Web - Limited to lesser of 4 sockets or 16 cores, 64 GB RAM limitation. A bit cheaper than Standard.
I am just wondering about the CPU limitation: will Web or Standard be able to handle 20k+ total users and up to 5k concurrent users within those core limits,
or do I absolutely need to buy the Enterprise edition for this number of users?
The application is fairly heavy on database consumption, and the server has 32 cores.
In brief, what I am asking is: "Is 16 cores sufficient to handle 5k concurrent users in a database-heavy system, and if not, will 24 cores be sufficient?"
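Whether 16 or 24 cores is enough depends far more on query efficiency, indexing and IO than on raw core count, so the honest answer is "test with a representative load". As a small, hedged T-SQL sketch, this is one way to confirm which edition is running and how many schedulers it is actually allowed to use (the 24-core / 16-core caps would show up here):
-- Which edition is installed and how many logical CPUs the host exposes.
SELECT SERVERPROPERTY('Edition') AS edition,
       cpu_count                 AS logical_cpus_on_host
FROM sys.dm_os_sys_info;

-- Schedulers the engine is actually allowed to use; on a 32-core box,
-- an edition with a core cap leaves the rest VISIBLE OFFLINE.
SELECT status, COUNT(*) AS schedulers
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255   -- exclude hidden/internal schedulers
GROUP BY status;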

Related

MariaDB one instance vs. many having hundreds of databases on Ubuntu server

I'm finishing a web app that consumes a lot of data. Currently there are 150 tables per database, many tables with 1,000 or fewer records, some with 10,000 or fewer, and just a few with 100,000 or fewer, with one database per customer. I'm setting up a server with 64 GB RAM, a 1 TB NVMe SSD, a 4-core Xeon at 4 GHz, and 1 Gbps bandwidth. I'm planning to run 10 MariaDB instances with 6 GB RAM each, leaving the rest for the OS (Ubuntu 18, 64-bit), and putting 10-15 databases per instance.
Do you think this could be a good approach for the project?
Although 15000 tables is a lot, I see no advantage in setting up multiple instances on a single machine. Here's the rationale:
Unless you have lousy indexes, even a single instance will not chew up all the CPUs.
Having 10 instances means the ancillary data (tables, caches, other overhead) will be repeated 10 times.
When one instance is busy, it cannot rob RAM from the other instances.
If you do a table scan on 100K rows in a 6G instance (perhaps 4G for innodb_buffer_pool_size), it may push the other users in that instance out of the buffer_pool. Yeah, there are DBA things you could do, but remember, you're not a DBA.
150 tables? Could it be that there is some "over-normalization"?
I am actually a bit surprised that you are worried about 150 tables where the largest tables have 100k rows or fewer.
That certainly does not look like much of a problem for the modern hardware you are planning to use. I will, however, suggest allocating more RAM and using a large enough InnoDB buffer pool; 70% of your total RAM is the number MariaDB suggests.
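As a rough sketch of that guideline on the 64 GB box described above (the 45G figure is only illustrative and assumes the whole host is dedicated to a single MariaDB instance):
-- Current buffer pool size, in GB.
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;

-- Roughly 70% of 64 GB. On recent MariaDB versions the pool can be resized
-- online; on older ones, set innodb_buffer_pool_size = 45G in my.cnf and restart.
SET GLOBAL innodb_buffer_pool_size = 45 * 1024 * 1024 * 1024;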
I have seen clients running MariaDB with much larger data sets and a lot more tables + very high concurrency!
Anyway, the best way to proceed is, of course, to test during your UAT cycles with data loads similar to production. I am still sure it's not going to be a problem.
If you can provide a master/slave setup and use MariaDB MaxScale on top, MaxScale can give you automatic database server failover, connection failover and transaction replay, which provides the highest level of availability; it also takes care of load balancing with its "read/write" splitting service. Your servers' load will be balanced and the overall experience smooth :)
Not sure whether the above is something you have already planned for in your design, but I'm putting it here as a second thought your application would certainly benefit from.
Hope this helps.
Cheers,
Faisal.

most impactful Postgres settings to tweak when host has lots of free RAM

My employer runs Postgres on a decently "large" VM. It is currently configured with 24 cores and 128 GB physical RAM.
Our monitoring solution indicates that the Postgres processes never consume more than about 11 GB of RAM even during periods of heaviest load. Presumably all the remaining free RAM is used by the OS to cache the filesystem.
My question: What configuration settings, if tweaked, are most likely to provide performance gains given a workload that's a mixture of transactional and analytical?
In other words, given there's an embarrassingly large amount of free RAM, where am I likely to derive the most "bang for my buck" settings-wise?
EDITED TO ADD:
Here are the current values for some settings frequently mentioned in tuning guides. Note: I didn't set these values; I'm just reading what's in the conf file:
shared_buffers = 32GB
work_mem = 144MB
effective_cache_size = 120GB
"sort_mem" and "max_fsm_pages" weren't set anywhere in the file.
The Postgres version is 9.3.5.
The setting that controls Postgres memory usage is shared_buffers. The commonly recommended setting is 25% of RAM, with a maximum of about 8GB.
Since 11GB is close to 8GB, it seems your system is tuned well. You could use effective_cache_size to tell Postgres how much memory the OS has available for disk caching.
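For reference, a minimal sketch of how to inspect and change these on 9.3, where there is no ALTER SYSTEM yet and changes go into postgresql.conf (the example values are illustrative, not a prescription):
-- Inspect the current values from psql.
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW work_mem;

-- In postgresql.conf (reload for the last two, restart for shared_buffers):
--   shared_buffers       = 8GB      -- the ~25% / 8GB guideline from above
--   effective_cache_size = 100GB    -- planner hint: RAM the OS can use for caching
--   work_mem             = 144MB    -- per sort/hash operation, so keep it modest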
Two good places for starting Postgres performance tuning:
Turn on slow-query logging and EXPLAIN ANALYZE slow or frequent queries (a short sketch follows this list)
Use pg_activity (a "top" for Postgres) to see what keeps your server busy
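A minimal sketch of the first suggestion; log_min_duration_statement is a standard setting, and the orders/customer_id query below is a hypothetical stand-in for one of your own slow statements:
-- In postgresql.conf, log every statement slower than 1 second (reload to apply):
--   log_min_duration_statement = 1000   -- milliseconds
--   log_line_prefix = '%t [%p] %u@%d '  -- timestamp, pid, user, database

-- Then take a logged query and look at its actual plan; BUFFERS shows
-- how much of the work came from cache versus disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;   -- hypothetical example query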

SQL Server long running query taking hours but using low CPU

I'm running some stored procedures in SQL Server 2012 under Windows Server 2012 in a dedicated server with 32 GB of RAM and 8 CPU cores. The CPU usage is always below 10% and the RAM usage is at 80% because SQL Server has 20 GB (of 32 GB) assigned.
There are some stored procedures that take 4 hours on some days and, with almost the same data, 7 or 8 hours on others.
I'm using the least restrictive isolation level so I think this should not be a locking problem. The database size is around 100 GB and the biggest table has around 5 million records.
The processes have bulk inserts, updates and deletes (in some cases I can use truncate to avoid generating logs and save some time). I'm making some full-text-search queries in one table.
I have full control of the server so I can change any configuration parameter.
I have a few questions:
Is it possible to improve the performance of the queries using parallelism?
Why is the CPU usage so low?
What are the best practices for configuring SQL Server?
What are the best free tools for auditing the server? I tried one from Microsoft called SQL Server 2012 BPA, but the report is always empty, with no warnings.
EDIT:
I checked the log and I found this:
03/18/2015 11:09:25,spid26s,Unknown,SQL Server has encountered 82 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL11.HLSQLSERVER\MSSQL\DATA\templog.ldf] in database [tempdb] (2). The OS file handle is 0x0000000000000BF8. The offset of the latest long I/O is: 0x00000001fe4000
Bump up max memory to 24 GB.
Move tempdb off the C: drive and consider multiple tempdb files, with autogrowth of at least 128 MB or 256 MB (sketched in T-SQL after this answer).
Install the Performance Dashboard reports and run them to see what queries are running and to check waits.
If your user database data and log files use 10% autogrowth, change that to a fixed increment similar to the tempdb growth above.
Using the Performance Dashboard, check for obvious missing indexes that predict a 95% or higher improvement impact.
Disregard all the naysayers who say not to do what I'm suggesting. If you do these 5 things and you're still having trouble, post some of the results from the Performance Dashboard, which, by the way, is free.
One more thing that may be helpful: download and install the sp_whoisactive stored procedure, run it, and see what processes are running. Research the queries you find after running sp_whoisactive.
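A hedged T-SQL sketch of the first two steps; the D:\TempDB path, sizes and file count are placeholders to adapt, and the tempdb file moves only take effect after a restart:
-- 1. Cap SQL Server at 24 GB so the OS keeps some headroom.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;

-- 2. Move tempdb off C:, use fixed autogrowth, and add an extra data file.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf', FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf', FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'D:\TempDB\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);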
"Query taking hours but using low CPU"
You say that as if CPU mattered for most database operations. HINT: it does not.
Databases need IO. RAM in some cases helps mitigate this, but in the end it comes down to IO.
And you know what I see in your question? CPU and memory (somehow assuming 32 GB is impressive), but NO WORD ON DISC LAYOUT.
And that is what matters: discs, and the distribution of files to spread the load.
If you look at the performance counters you will see latency being super high on the discs, because whatever "pathetic" (in SQL Server terms) disc layout you have there, it simply is not up to the task.
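One way to see that latency from inside SQL Server itself; a sketch using the standard file-stats DMV (the averages are cumulative since the last restart, so read them as a rough signal):
-- Average read/write latency per database file, in milliseconds.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;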
Time to start buying. SSDs are a LOT cheaper than discs. You may say, "Oh, how are they cheaper?" Well, you do not buy GB, you buy IO. And last time I checked, SSDs did not cost 100 times the price of discs, but they have 100 times or more the IO, and we are always talking about random IO.
Then isolate tempdb on a separate SSD; tempdb either does very little or does a TON, and you want to see which.
Then isolate the log file.
Make multiple data files for the database and for tempdb (particularly tempdb: as many as you have cores).
And yes, this will cost money. But in the end you need IO, and like most developers you got CPU instead. Bad for a database.

MS-SQL Express 2005/2008 multi instance CPU and memory utilization

With SQL Express (either the 2005 or 2008 edition) there is a limit of 1GB of memory and 1 CPU that can be used. What I'm wondering is: if two instances are installed on the same machine, would they use the same CPU and the same 1GB of memory? Or would they potentially use two different CPUs and 2GB of memory?
The limitations are per-instance. Each instance is limited to its own 1 CPU and 1GB RAM.
You can have up to 16 instances of SQL Server Express Edition on a system.
Also in MSDE, the predecessor to SQL Server Express, the limitations were per-instance.
I can't speak for sure, but most likely they would be able to use separate CPUs and memory. It would be pretty tricky for them to coordinate to share memory like that. My suspicion is that each instance will run the same irrespective of other instances being present (at least in the respect you're talking about).
Suppose your system has 4 CPUs and 4 GB of memory, with 2 instances of SQL Express: each instance will then use one CPU and 1 GB of memory.
Hope it helps
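If you want to confirm what each instance actually sees, here is a small, hedged check you can run against each Express instance in turn (the per-instance caps apply regardless of what the host exposes):
-- Run against each Express instance separately.
SELECT SERVERPROPERTY('InstanceName') AS instance_name,
       SERVERPROPERTY('Edition')      AS edition,
       cpu_count                      AS logical_cpus_visible_on_host
FROM sys.dm_os_sys_info;
-- Even though cpu_count reports every CPU on the host, each Express
-- instance will still only schedule work and cache data within its own limits.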

High performance machine executes SQL queries slower than a normal one?

We have an 8-CPU 2.5 GHz machine with 8 GB of RAM that executes SQL queries more slowly than a dual-core 2.19 GHz machine with 4 GB of RAM.
Why is this the case, given that Microsoft SQL Server 2000 is installed on both machines?
Just check these links to work out where the bottleneck is situated:
http://www.brentozar.com/sql/
I think the disk layout and the location of the SQL Server database files are causing the trouble.
Our solution for multicore servers (our app executes many very complex queries, which tend to create many threads, and these start to interlock and sometimes even deadlock):
-- enable advanced options so MAXDOP can be changed
sp_configure 'show advanced options', 1
reconfigure
go
-- force serial plans: each query stays on a single scheduler
sp_configure 'max degree of parallelism', 1
reconfigure
This is not an ideal solution, but we haven't noticed any performance loss for other operations.
Of course you should optimize the disk layout too, and sometimes limit SQL Server memory on a 64-bit server.
Also, the two machines may have different SQL Server settings (memory assignments and AWE memory, threads, maximum query memory, processor affinity, priority boost).
Check the execution plans for the same query on both machines and, if possible, post them here.
Most probably that is where the difference will show up.
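On SQL Server 2000 you can capture comparable plans and IO/time statistics from any query window; a sketch, where dbo.Orders and CustomerID are hypothetical stand-ins for one of your own queries:
-- Run the same statement on both servers with these session options on,
-- then compare the plan rows and the IO/time output side by side.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SET STATISTICS PROFILE ON;   -- returns the actual plan as an extra result set

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;       -- hypothetical query

SET STATISTICS PROFILE OFF;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;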
Keep in mind that just because one machine has more CPUs running at a higher clock speed and more memory than another, it's not necessarily going to solve a given problem faster.
Though you don't provide details, it's possible that the 8-CPU machine has 8 sockets, each with a single-core CPU (say, a P4-era Xeon) and 1 GB of local RAM (say, RDRAM), while the second machine is a modern Core 2 Duo with 4 GB of DDR2 RAM.
While each CPU in machine #1 has a higher individual frequency, the NetBurst architecture is much slower clock-for-clock than the Core 2 architecture. Additionally, if you have a light CPU load but a memory-intensive load that doesn't fit in the 1 GB local to the CPU on the first machine, your memory accesses may be much more expensive there (as they have to go through the other CPUs). The DDR2 in the Core 2 machine is also much quicker than the RDRAM in the Xeon.
CPU frequency and total memory aren't everything; the CPU architecture, memory type, and CPU and memory hierarchy also matter.
Of course, it may be a much simpler answer as the other answers suggest -- SQL Server tripping over itself trying to parallelize the query.
