Big SQL query duration increase after cpu+mainboard upgrade - sql-server

After upgrading server hardware (CPU + mainboard), I'm seeing a big increase in query duration for really small and simple queries.
Software: Windows Server 2012 R2 + SQL Server 2014
Storage: Samsung SSD 850 EVO 2TB Disk
Old Hardware: i7-4790K 4.0 GHz 4-core CPU + Asus H97M-E mainboard + 32 GB DDR3
New Hardware: i9-7900X 3.6 GHz 10-core CPU + Asus Prime X299 mainboard + 32 GB DDR4
Query Sample:
UPDATE CLIE_PRECIOS_COMPRA SET [c_res_tr] = '0.0' WHERE eje ='18' AND mes =8 AND dia =27 AND hor =19 AND unipro='001'
SQL Profiler results:
Old Hardware - CPU: 0, Reads: 4, Writes: 0, Duration: 123
New Hardware - CPU: 0, Reads: 4, Writes: 0, Duration: 2852
I've checked that the network speed of both servers is the same, but in any case I'm running the queries directly on the server through Microsoft SQL Server Management Studio to avoid application or network issues.
I've checked storage speed too; it is the same for both reading and writing on the old and new hardware.
I've also played with parallelism and tried different scenarios, even disabling parallelism, with the same result.
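For reference, a minimal sketch of disabling parallelism instance-wide (the post doesn't detail the exact settings tried, so this is just one plausible scenario):

-- Cap MAXDOP at 1 so no parallel plans are produced
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
GO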
Of course the data is the same, with the same copy of the SQL database on both machines.
I've set the duration to be shown in microseconds instead of milliseconds to better appreciate the difference.
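As an aside, an Extended Events session reports statement duration in microseconds natively; a minimal sketch (the session and file names are my own, and the filter on the table name is only illustrative):

-- Capture completed statements touching CLIE_PRECIOS_COMPRA;
-- the duration column of sql_statement_completed is in microseconds
CREATE EVENT SESSION [QueryDuration] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
    ACTION (sqlserver.sql_text)
    WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%CLIE_PRECIOS_COMPRA%')))
ADD TARGET package0.event_file(SET filename = N'QueryDuration');
GO
ALTER EVENT SESSION [QueryDuration] ON SERVER STATE = START;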
The difference in duration for a single query is not really noticeable to a user, but the problem is that there are several thousand queries of this type, so the total time increase is significant.
Any hint or thing to investigate would be really appreciated.
Current Execution Plan New Server: https://www.brentozar.com/pastetheplan/?id=HJYDtQQD7
Current Execution Plan Old Server: https://www.brentozar.com/pastetheplan/?id=SynyW4mPQ
Thanks in advance.

Related

Can Snowflake work as an operational data store against which I can write REST APIs

I am researching the Snowflake database and have a data aggregation use case where I need to expose the aggregated data via a REST API. While the data ingestion and aggregation seem to be well defined, is Snowflake a system that can be used as an operational data store for servicing high-throughput APIs?
Or is this an anti-pattern for this system?
Updating based on your recent comment.
Here are some quick test results I ran on large tables we have in production. (Table names changed for display.)
vLookupView records = 175,760,316
vMainView records = 179,035,026
SELECT
    LP.REGIONCODE,
    SUM(L.VALUE)
FROM DBO.vLookupView AS LP
INNER JOIN DBO.vMainView AS L
    ON LP.PK = L.PK
GROUP BY LP.REGIONCODE;
Results:
SQL Server:
Production box - 2:04 minutes
Snowflake, by warehouse (compute) size:
XS - 17.1 seconds
Small - 9.9 seconds
Medium - 7.1 seconds
Large - 5.4 seconds
Extra Large - 5.4 seconds
When I added a WHERE condition
WHERE L.ENTEREDDATE BETWEEN '1/1/2018' AND '6/1/2018'
the results were:
SQL Server:
Production box - 5 seconds
Snowflake, by warehouse (compute) size:
XS - 12.1 seconds
Small - 3.9 seconds
Medium - 3.1 seconds
Large - 3.1 seconds
Extra Large - 3.1 seconds
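For anyone reproducing the test, the filtered variant of the query reads in full as follows (reconstructed from the fragments above; view and column names as given in the post):

SELECT
    LP.REGIONCODE,
    SUM(L.VALUE)
FROM DBO.vLookupView AS LP
INNER JOIN DBO.vMainView AS L
    ON LP.PK = L.PK
WHERE L.ENTEREDDATE BETWEEN '1/1/2018' AND '6/1/2018'
GROUP BY LP.REGIONCODE;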

How to check max-sql-memory and cache settings for an already running instance of CockroachDB?

I have a CockroachDB instance running in production and would like to know the settings for --max-sql-memory and --cache that were specified when the database was started. I am trying to enhance performance by following this production checklist, but I am not able to infer the settings either on the dashboard or in the SQL console.
Where can I check the values of max-sql-memory and cache?
Note: I am able to access the cockroachdb admin console and sql tables.
You can find this information in the logs, shortly after node startup:
I190626 10:22:47.714002 1 cli/start.go:1082 CockroachDB CCL v19.1.2 (x86_64-unknown-linux-gnu, built 2019/06/07 17:32:15, go1.11.6)
I190626 10:22:47.815277 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 31 GiB, using system memory
I190626 10:22:47.815311 1 server/config.go:386 system total memory: 31 GiB
I190626 10:22:47.815411 1 server/config.go:388 server configuration:
max offset 500000000
cache size 7.8 GiB <====
SQL memory pool size 7.8 GiB <====
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
If the logs have been rotated, the values depend on the flags the node was started with.
The defaults for v19.1 are 128 MB, with the recommended setting being 0.25 (a quarter of system memory) for each, i.e. starting nodes with --cache=.25 --max-sql-memory=.25.
The settings are not currently logged periodically or exported through metrics.

Almost empty plan cache

I am experiencing a strange situation - my plan cache is almost empty. I use the following query to see what's inside:
SELECT dec.plan_handle, qs.sql_handle, dec.usecounts, dec.refcounts,
       dec.objtype, dec.cacheobjtype, des.dbid, des.text, deq.query_plan
FROM sys.dm_exec_cached_plans AS dec
JOIN sys.dm_exec_query_stats AS qs ON dec.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(dec.plan_handle) AS des
CROSS APPLY sys.dm_exec_query_plan(dec.plan_handle) AS deq
WHERE dec.cacheobjtype = N'Compiled Plan'
  AND dec.objtype IN (N'Adhoc', N'Prepared');
One moment it shows me 82 rows, the next 50, then 40, then 55, and so on, while an hour before I couldn't reach the end of the plan cache issuing the same command. The point is that SQL Server keeps the plan cache very, very small.
The main reason for my investigation is high CPU compared to our baselines without any heavy loads, under a normal during-the-day workload: constantly 65-80%.
Perfmon counters show low values for Plan Cache Hit Ratio (around 30-50%), high compilations (400 out of 2000 batch requests per second), and high CPU (73% average). What could cause this behaviour?
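For context, the raw counters behind those numbers can be read with a query like this (a sketch; the cntr_value columns are cumulative, so you have to sample twice and diff to get per-second rates):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec');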
The main purpose of the question is to learn the possible reasons for an empty plan cache.
Memory is OK - min: 0, max: 245000.
I also didn't notice any signs of memory pressure - PLE, lazy writes, free list stalls, and disk activity were all OK, and the logs did not tell me a thing.
I came here for possible causes of this so I could proceed with the investigation.
EDIT: I have also considered this thread:
SQL Server 2008 plan cache is almost always empty
But none of the recommendations/possible reasons are relevant.
The main purpose of the question is to learn the possible reasons for an empty plan cache.
If it is to learn, the answer from Martin Smith in the thread you referred to will help you.
If you want to know in particular why the plan cache is getting emptied, I recommend using Extended Events and trying the extended event below.
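One illustrative possibility (not necessarily the event the answer intended; the session and file names are my own) is a session on sp_cache_remove, which fires whenever a plan is removed from the cache:

-- Hypothetical sketch: trace plan-cache removals via sp_cache_remove
CREATE EVENT SESSION [PlanCacheEvictions] ON SERVER
ADD EVENT sqlserver.sp_cache_remove(
    ACTION (sqlserver.sql_text, sqlserver.database_id))
ADD TARGET package0.event_file(SET filename = N'PlanCacheEvictions');
GO
ALTER EVENT SESSION [PlanCacheEvictions] ON SERVER STATE = START;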

Low plan cache memory

My question is: why would a SQL Server have a low amount of memory allocated to the plan cache? And, if a correction is needed, what might be done to correct this?
We have a SQL Server with an issue of compilations per second being high, indicating that not enough execution plans are cached for reuse (first detected when we ran sp_AskBrent @ExpertMode = 1, @Seconds = 30 [from brentozar.com/askbrent/]).
We have run the SQL Live Monitor application (https://sqlmonitor.codeplex.com/) on the server and the Plan Cache results show a very low amount of memory (355.27 MB) allocated to caching execution plans and therefore a low Plan Cache Hit Ratio (varying between 5 and 50 percent).
My research shows that the memory allocated to the plan cache is not a configurable amount, but a calculation based on the memory allocated to the SQL instance. So, for this server, which has 48 GB total and 40 GB allocated to SQL, the calculation (0.75 * 4 GB) + (0.1 * 36 GB) should allocate 6.6 GB for the plan cache. Did I calculate correctly?
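You can cross-check that estimate against what the plan cache stores are actually consuming; a quick sketch (the clerk types are standard, and pages_kb assumes SQL Server 2012 or later):

-- Memory actually used by each plan cache store, in MB
SELECT [type], SUM(pages_kb) / 1024.0 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN ('CACHESTORE_SQLCP',  -- ad hoc and prepared plans
                 'CACHESTORE_OBJCP',  -- stored procedure plans
                 'CACHESTORE_PHDR')   -- bound trees
GROUP BY [type];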
Of note, this server has only one production database and that database is 50GB in size. We have Optimize for Ad hoc Workloads set to True and just set Parameterization at the database level to Forced.
Compared to another SQL Server (which has 32 GB total and 26 GB allocated), the plan cache numbers look more reasonable (4 GB in size and a hit ratio above 80 percent).
Also, running the script below against both SQL servers consistently shows the problem server having a hit percentage in the mid 70% range and the other server showing a hit percentage in the high 90% range.
WITH cte1 AS (
    SELECT [dopc].[object_name],
           [dopc].[instance_name],
           [dopc].[counter_name],
           [dopc].[cntr_value],
           [dopc].[cntr_type],
           ROW_NUMBER() OVER (PARTITION BY [dopc].[object_name], [dopc].[instance_name]
                              ORDER BY [dopc].[counter_name]) AS r_n
    FROM [sys].[dm_os_performance_counters] AS dopc
    WHERE [dopc].[counter_name] LIKE '%Cache Hit Ratio%'
      AND ([dopc].[object_name] LIKE '%Plan Cache%'
           OR [dopc].[object_name] LIKE '%Buffer Cache%')
      AND [dopc].[instance_name] LIKE '%_Total%'
)
SELECT CONVERT(DECIMAL(16, 2), ([c].[cntr_value] * 1.0 / [c1].[cntr_value]) * 100.0) AS [hit_pct]
FROM [cte1] AS c
INNER JOIN [cte1] AS c1
    ON c.[object_name] = c1.[object_name]
   AND c.[instance_name] = c1.[instance_name]
WHERE [c].[r_n] = 1
  AND [c1].[r_n] = 2;
See:
... The maximum size for all caches is a function of the buffer pool size and cannot exceed the maximum server memory...
(https://technet.microsoft.com/en-us/library/ms181055%28v=sql.105%29.aspx)
I think the optimize for ad hoc workloads option helps if you mostly run ad hoc queries:
-- Enable advanced options so the setting below is visible
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
-- Cache only a small stub for an ad hoc plan on first execution;
-- the full plan is cached on the second execution
sp_configure 'optimize for ad hoc workloads', 1
GO
RECONFIGURE
GO
-- Clears the entire plan cache
DBCC FREEPROCCACHE
GO
Don't try it first on production servers, especially FREEPROCCACHE.

Apache2: server-status reported value for "requests/sec" is wrong. What am I doing wrong?

I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
1) The CPU load is reported as a ridiculously small number, whereas "uptime" reports a number between 0.05 and 0.15 at the same time.
2) The "requests/sec" value is also ridiculously low (0.06), when I know there are at least 10 requests coming in per second right now.
(You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what is going on. The "requests/sec" value is calculated over the lifetime of the server. So if your Apache server has been running for 3 months, this tells you nothing at all about the current load on your server. Instead, it reports the total number of requests divided by the total number of seconds since the server started.
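With the numbers above: uptime is 42*86400 + 22*3600 + 57*60 + 10 = 3,711,430 seconds, and 238,015 total accesses / 3,711,430 seconds ≈ 0.0641 requests/sec, which matches the reported value exactly.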
It would be nice if there was a way to see the current load on your server. Any ideas?
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; its per-second delta is exactly what we mean by "requests per second".
There is a way:
1) Use an Apache monitor script for Zabbix:
https://github.com/lorf/zapache/blob/master/zapache
2) Install & configure the Zabbix agent:
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) In Zabbix, create an Apache template and a monitored item:
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric (float)
Update interval: 20
Store value: Delta (speed per second) -- this is the key option
Zabbix will calculate the increment of the Apache request counter and store the delta value, which is the "requests per second".
