Everybody, I really need to know this: is it possible to reduce power consumption by increasing CPU utilization on modern multi-core servers? And why is it or isn't it possible?
Compiler options that provide CPU cache line alignment are supposed to achieve more cache hits. But in multi-threaded or multi-process systems, what guarantees that the optimized code will get the expected cache usage at run time while other processes or threads also use the same cache? This is especially relevant on multi-core systems, where multiple CPU threads run in parallel and share the CPU cache.
Cache line alignment can improve the hit rate because data is less likely to be split across lines, which reduces the number of lines whose invalidation could cause a cache miss for that data. That's true whether a single thread or multiple threads share the cache.
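As a rough sketch (assuming a 64-byte cache line, which is typical on current x86 CPUs, and a hypothetical per-thread counter), this is what aligning hot data to a cache line looks like in C, so each thread's data sits in its own line instead of being split across lines or sharing one with another thread:

```c
#include <stdalign.h>
#include <stdio.h>

/* Assumption: a 64-byte cache line, typical on current x86 CPUs. */
#define CACHE_LINE 64

/* Hypothetical per-thread counter. Aligning the member to the cache line
 * means the value is never split across two lines, and the alignment also
 * pads the struct out to a full 64 bytes, so two threads' counters never
 * share a line (no false sharing). */
struct counter {
    alignas(CACHE_LINE) unsigned long value;
};

static struct counter counters[4]; /* one per worker thread */

int main(void) {
    printf("sizeof(struct counter) = %zu bytes\n", sizeof(struct counter));
    printf("counters[0] at %p, counters[1] at %p\n",
           (void *)&counters[0], (void *)&counters[1]);
    return 0;
}
```

Even with this layout, the hardware does not reserve cache lines for your process; other threads and processes can still evict your data. Alignment only makes each access touch fewer lines, which is why it helps on average rather than guaranteeing hits.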
I am doing performance comparisons of ScyllaDB and Cassandra, specifically looking at the impact of memory. The machines I am using each have 16GB and 8 cores.
Based on the docs, Cassandra will default to 4GB Xmx and use the remaining 12GB as file system cache.
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
ScyllaDB instead will use all 16GB for itself.
http://docs.scylladb.com/faq/#scylla-is-using-all-of-my-memory-why-is-that-what-if-the-server-runs-out-of-memory
What I'm wondering is whether this is a fair comparison setup (4GB Xmx for Cassandra vs 16GB for Scylla)? I realize this is what each recommends, but would a fairer test be 8GB Xmx for Cassandra and --memory 8G for ScyllaDB? My workload is mostly write intensive and I don't expect file system caching to always be able to help Cassandra. It's odd to me that ScyllaDB barely relies on file system caching at all, compared to Cassandra's heavy reliance on it.
Cassandra will always use all of the system memory; the heap size (-Xmx) setting just determines how much is used by the heap and how much by other memory consumers (off-heap structures and the page cache). So if you limit Scylla's memory usage, it will be at a disadvantage compared to Cassandra.
Scylla will use ~1/2 of the memory for MemTable, and the other half for Key/Partition caching.
If your workload is mostly writes, more memory will have less of an effect on performance, which should be bounded by either I/O or CPU.
I would recommend reading:
http://www.scylladb.com/2017/10/05/io-access-methods-scylla/
to understand how Scylla writes data, and
http://www.scylladb.com/2016/12/15/sswc-part1/
to understand how Scylla balances I/O workloads.
I am configuring a new Dell server to run SQL Server 2012 Enterprise. I have the option to purchase a server that has 12 cores per physical socket (CPU) at 2.7GHz or a server that has 8 cores per CPU at 3.3GHz. I will have 2 physical sockets (CPUs).
The server runs both OLTP and OLAP processing. What would give me the best performance for SQL?
Thanks in advance,
Max
For licensing reasons your company will likely prefer fewer, faster cores, as licensing costs rise linearly with the number of cores; a core license costs >10x as much as the core itself.
Comparing total cycles per second:
2.7 GHz * 12 cores = 32.4 (more)
3.3 GHz * 8 cores = 26.4
The 12-core variant seems far superior on raw cycles. Note that some workloads do not scale linearly with the number of cores; they scale less, which might turn performance in favor of the CPU with fewer cores. You'd need to measure with a realistic workload; do not make the mistake of measuring a toy workload.
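To make that caveat concrete, here is a rough sketch using Amdahl's law. The serial fractions are made-up values for illustration, not measurements of SQL Server; the point is only that scaling losses can flip the comparison.

```c
#include <stdio.h>

/* Relative throughput of a CPU configuration under Amdahl's law:
 * speedup(n) = 1 / (serial + (1 - serial) / n), scaled by clock speed.
 * The serial fractions used below are hypothetical, purely for illustration. */
static double throughput(double ghz, int cores, double serial_fraction) {
    double speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
    return ghz * speedup;
}

int main(void) {
    const double fractions[] = { 0.0, 0.1, 0.3, 0.5 };

    for (int i = 0; i < 4; i++) {
        double s = fractions[i];
        double twelve_core = throughput(2.7, 12, s);
        double eight_core  = throughput(3.3, 8, s);
        printf("serial %.0f%%: 12 x 2.7GHz = %5.1f, 8 x 3.3GHz = %5.1f -> %s wins\n",
               s * 100.0, twelve_core, eight_core,
               twelve_core > eight_core ? "12-core" : "8-core");
    }
    return 0;
}
```

With these made-up numbers, the 12-core box only wins when the workload is nearly perfectly parallel; even a ~10% serial fraction is enough to hand the win to the higher clock, which is why measuring your real workload matters.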
It really depends on your workload: whether you expect to be calculation heavy (i.e. CPU intensive) or to be dealing with large datasets.
Generally it's easier for cores on a single CPU die to share cache and memory resources. This, of course, depends on which specific CPU make/model you are talking about. Some architectures have the memory controller on the die (Intel i7 and AMD Athlon 64, for example), which makes for less memory latency but more overhead when sharing memory between CPUs. Others have a controller separate from the die (a northbridge). However, the higher clock speed might make up for any memory-sharing overhead.
So, if you're expecting a more memory intensive workload, go with fewer CPU sockets and more cores (and more memory). If you're expecting a more CPU intensive workload, go with the higher clock speed.
This is going to depend largely on your hardware platform, so your mileage may vary. Honestly, I find that more memory and faster disks make a much larger impact than CPU anyway.
Unfortunately, the only way to really know is to test both against real-world workloads and measure the difference.
It is not in the question, but the answer is probably more RAM. Any chance you can push for that?
For Firebase-based mobile applications in which latencies of ~1 minute (or manual sync) are acceptable, will power consumption be optimal? Is it possible, and does it make sense, to adjust keep-alives, etc.?
Firebase is optimized for real-time communication (meaning as low latency as possible). While I have no reason to suspect it'll be a power hog, we haven't (yet) optimized for power consumption or done any in-depth testing.
Feel free to email support@firebase.com if you do any testing on your own and want to share your findings.
What level of CPU usage should be considered high for SQL Server? I.e., 80%, 90%, 100%?
If, under normal loads, the CPU averages above 40%, I start to get nervous. However, that's because I know the nature of our traffic and the spikes we get. Your mileage may vary.
We actually aim even lower, for about a 7-10% average, but we do sometimes get processing that will spike us to 60%.
We measure total CPU every 30 seconds and email a report daily, and if we see a move of more than 2% away from the average we expect for that day, we investigate.
It takes time, but it helps me sleep better at night :)
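For what it's worth, here is a minimal sketch of that kind of deviation check, assuming the CPU samples have already been collected; the sample values and the 2% threshold below are placeholders, not real numbers.

```c
#include <stdio.h>

/* Flag a day's CPU readings when their average drifts more than `threshold`
 * percentage points away from the average expected for that day. */
static int needs_investigation(const double *samples, int count,
                               double expected_avg, double threshold) {
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    double drift = sum / count - expected_avg;
    if (drift < 0.0)
        drift = -drift;
    return drift > threshold;
}

int main(void) {
    /* Placeholder samples (percent CPU), as if taken every 30 seconds. */
    double samples[] = { 8.0, 9.5, 7.0, 12.0, 10.5, 9.0 };
    int count = (int)(sizeof samples / sizeof samples[0]);

    if (needs_investigation(samples, count, 8.5, 2.0))
        printf("Average CPU drifted more than 2%% from expected - investigate\n");
    else
        printf("Average CPU within the expected range\n");
    return 0;
}
```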
Generally, you don't want a machine to sustain a constant CPU of over 40 or 50%, because it won't be able to handle spikes in activity.
It really depends on your machine. The best thing is to monitor the server using perfmon and see when things start to run slowly. It is normal for SQL to use a lot of CPU under load.