When I run the MPR test for Microsoft Azure, I get the following two issues:
1. Which parameters do I have to select in 'SQL Server 2014 Online Transaction Processing and Data Warehouse Gold Tests'?
Partitioning
In-Memory OLTP Tables
Clustered Columnstore Index
Resource Governor
Encrypted Backups
2. The check 'Default Trace should be turned on' shows a status of 'off'. How do I handle this?
1. Those tests are designed to verify whether your application passes them. One approach is to select all of them: if you are not using those features, you will see no issues; if you are using them, they will be tested to see whether they meet the criteria.
2. The following turns the default trace on:
sp_configure 'default trace enabled', 1
reconfigure
The default trace normally runs at all times; I am not sure why it is stopped in your infrastructure. If you want to stop it, you can use the command below:
sp_configure 'default trace enabled', 0
reconfigure
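To verify the setting before and after the change, you can also query sys.configurations; a minimal sketch:
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled';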
This question is related to: Debezium How do I correctly register the SqlServer connector with Kafka Connect - connection refused
In Windows 10, I have Debezium running against an instance of Microsoft SQL Server that is outside of a Docker container. I am getting the following warning every 390 milliseconds:
No maximum LSN recorded in the database; please ensure that the SQL
Server Agent is running
[io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
I checked Debezium's code on GitHub, and the only place I can find this warning indicates in the code comments that it should only be thrown if the Agent is not running. I have confirmed that the SQL Server Agent is running.
Why is this warning showing up and how do I fix it?
Note:
My current solution appears to work only in a non-production environment, per Docker's documentation.
An LSN (log sequence number) is a piece of information about a change in your SQL Server. If no LSN is being recorded, it is possible that CDC is not running or not configured properly. Debezium consumes LSNs to replicate changes, so your SQL Server needs to generate them.
Some approaches:
Did you check whether your tables have CDC enabled? This will list the tables with CDC enabled:
SELECT s.name AS Schema_Name, tb.name AS Table_Name
, tb.object_id, tb.type, tb.type_desc, tb.is_tracked_by_cdc
FROM sys.tables tb
INNER JOIN sys.schemas s on s.schema_id = tb.schema_id
WHERE tb.is_tracked_by_cdc = 1
Is CDC enabled on your database, and is it running? (see here)
Check if it is enabled:
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'MyDatabase'
And enable it if it is not:
EXECUTE sys.sp_cdc_enable_db;
GO
Is the CDC capture job running on SQL Server? See the docs:
EXEC sys.sp_cdc_start_job;
GO
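To inspect how the capture and cleanup jobs are configured, you can also run the standard CDC helper procedure:
EXEC sys.sp_cdc_help_jobs;
GO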
When enabling a table for CDC, I had some issues with the role name. In my case, setting it to NULL solved my problem (more details here):
EXEC sys.sp_cdc_enable_table
@source_schema=N'dbo',
@source_name=N'AD6010',
@capture_instance=N'ZZZZ_AD6010',
@role_name = NULL,
@filegroup_name=N'CDC_DATA',
@supports_net_changes=1
GO
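To verify that the capture instance was created, a quick check you might run in the CDC-enabled database:
SELECT capture_instance, source_object_id, role_name
FROM cdc.change_tables;
GO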
Adding more to William's answer.
For the case where the SQL Server Agent is not running
You can enable it as follows:
Control panel >
Administrative Tools >
Click "Services"
Look for SQL Server Agent
Right click and Start
Now you can run CDC job queries on your SQL Server.
PS: you need login access to the Windows server.
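If you prefer to check the Agent's status from T-SQL instead of the Services console, a minimal sketch (requires VIEW SERVER STATE permission):
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services;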
Another possibility for this error (I just ran into this warning myself this morning trying to bring a new DB online) is that the SQL login does not have the permissions it needs. Debezium runs the following SQL. Check that the SQL login you are using has permission to run this stored procedure and that it returns the tables you have set up in CDC. If you get an error or zero rows returned, work with your DBA to get the appropriate permissions set up.
EXEC sys.sp_cdc_help_change_data_capture
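If the tables were enabled with a gating role (the @role_name parameter shown earlier), the connector's login also needs membership in that role. A sketch with hypothetical database, role, and user names (SQL Server 2012 or later):
USE MyDatabase;
ALTER ROLE cdc_reader ADD MEMBER debezium_user; -- cdc_reader and debezium_user are placeholders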
I run the same Java application (Spring/Hibernate) on two different systems; both use the same SQL Server version.
I'm using SQL Server Profiler to trace a query which I run (exactly the same) on both systems.
This is my SQL Server version on both systems (screenshot omitted).
Trace System 1: slow-system2.trc - the query takes randomly between 100 and 300 ms
Trace System 2: fast.trc - the query takes randomly between 10 and 20 ms
In the slow-system screenshot, a "use database" query takes 331 ms, compared to 0 ms in fast.trc:
What can cause this difference just by running a "use database" query?
I tried a third system running SQL Express, which is also slow; here is the trace (screenshot):
On SQL Express, it seems to be due to the fact that I have two additional Audit Logout event classes that take time:
Maybe I missed some option in SQL Server?
The long duration of the USE statement indicates the database may be set to AUTO_CLOSE ON. Overhead is incurred during database startup when it must be opened.
The setting can be changed with:
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
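To check whether AUTO_CLOSE is currently on, a quick query against sys.databases:
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = 'YourDatabase';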
I have a very strange situation on SQL Server that I cannot fathom out.
Environment: SQL Server 2012 SP3 CU3 running on a 2-node Windows 2008 R2 cluster
In SQL Server Management Studio\Management\Maintenance Plans\ I am unable to create or edit existing plans.
I receive the error:
'Agent XPs' component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Agent XPs' by using sp_configure. For more information about enabling 'Agent XPs', see "Surface Area Configuration" in SQL Server Books Online. (ObjectExplorer)
Researching that error, I expected the following configuration would be required:
-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1;
GO
-- To update the currently configured value for advanced options.
RECONFIGURE;
GO
-- To enable the feature.
EXEC sp_configure 'Agent XPs', 1;
GO
-- To update the currently configured value for this feature.
RECONFIGURE;
GO
However, I noticed that SQL Agent was already running, so I thought I would also check the existing config options for 'Agent XPs'.
What was interesting was that config_value = 0 and run_value = 1, where I was expecting config_value = 1 and run_value = 1.
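For reference, those values can be viewed like this (once 'show advanced options' is enabled):
EXEC sp_configure 'Agent XPs';
GO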
I thought I'd try the sp_configure solution to 'force' the config, but when I ran it step by step, the first RECONFIGURE statement just hung; while it was running, I could not even run sp_who2 to see whether it was blocking or being blocked.
The only way I could kill the RECONFIGURE was to close the query window, which cancelled it. I am therefore unable to run EXEC sp_configure 'Agent XPs', 1, as the required RECONFIGURE cannot complete.
After a failover of the cluster, the config settings for 'Agent XPs' remain at config_value = 0, run_value = 1.
Has anyone got any ideas as to how to fix it?
I stumbled across an internet post describing a similar issue, and it contained a nugget of information that allowed me to ultimately fix the issue.
I documented the case over at SQLServerCentral
https://www.sqlservercentral.com/Forums/1927277/SQL-Server-2012-tells-me-Agent-XPs-component-is-turned-off-but-SQL-Agent-is-running
I am in the midst of evaluating default SQL Server 2008 R2 configuration settings.
I have been asked to run the script below on the production server:
sp_configure 'remote query timeout', 0
sp_configure 'max server memory (MB)', 28000
sp_configure 'remote login timeout', 300
go
reconfigure with override
go
Before proceeding, I have been trying to gauge the advantages and disadvantages of each line of SQL code.
Edited on 17-May-2016 14:19 IST:
A few Microsoft links that I have referred to are below:
https://msdn.microsoft.com/en-us/library/ms178067.aspx
https://msdn.microsoft.com/en-IN/library/ms175136.aspx
Edited on 23-May-2016 11:15 IST:
I have set the 'MAX SERVER MEMORY' based on feedback here and further investigation from my end. I have provided my inferences to the customer.
I have also provided my inferences on the other 2 queries based on views and answers provided here.
Thanks to all for your help. I will update this question after I get input from the customer.
The following query will set the remote query timeout to 0, i.e., no timeout:
sp_configure 'remote query timeout', 0
This value has no effect on queries received by the Database Engine.
To disable the time-out, set the value to 0. A query will wait until
it is canceled.
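If you later want to restore the documented default of 600 seconds, a sketch:
sp_configure 'remote query timeout', 600
go
reconfigure with override
go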
sp_configure 'max server memory (MB)', 28000
This sets the amount of memory (in megabytes) that is managed by the SQL Server Memory Manager for a SQL Server process used by an instance of SQL Server.
sp_configure 'remote login timeout', 300
If you have applications that connect remotely to the server, you can set the login timeout using the query above.
Note:
You can also set the server properties via SSMS (Management Studio), where you can set the maximum and minimum values, rather than using the code shown in your post.
You can certainly try these queries, but the settings you should opt for depend on the hardware and the type of application you are working with.
I would generally say that these statements are quite idiotic. Yes, seriously.
Line by line:
sp_configure 'remote query timeout', 0
Makes remote queries run for an unlimited time before aborting. While I accept there are long-running queries, those should be rare (the default timeout of 600 seconds handles 99.99% of queries), and the application programmer can set an appropriate timeout in the rare cases where a particular query needs it.
sp_configure 'max server memory (MB)', 28000
Sets max server memory to 28 GB. Well, that is nonsense - the DBA should have set that to a sensible value upon install, so it is not needed unless the DBA is incompetent. Whether roughly 28 GB makes sense I cannot comment on.
sp_configure 'remote login timeout', 300
Timeout for remote logins of 300 seconds. The default of 20 seconds is already plenty. I have serious trouble imagining a scenario where the server is healthy and does not process logins within less than a handful of seconds.
The only scenario I have seen where this whole batch would make sense is a server dying from overload - which is most often based on some brutal incompetence somewhere. Either the admin's (a 64 GB RAM machine configured to use only 2 GB for SQL Server, for example) or, most often, the programmers' (no indexes, ridiculously bad SQL making the server die from overload). Been there, seen that way too often.
Otherwise the timeouts really make little sense.
"Remote query timeout" sets how much time before a remote query times out.
"Remote login timeout" set how much time before a login attempt time out.
The values set here could make sense in certain conditions (slow, high-latency network, for example).
"Max server memory" is different. It's a very useful setting, and it should be set almost always to avoid possible performance problems. What value, it depends how much memory is on the server as whole and which other applications/service are running on it. If it's a dedicated server with 32 GB of memory, this value sounds about right.
None of these can really be tested in the test environment, I'm afraid, unless you have a 1:1 replica of the prod environment.
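Before running the script in production, it may help to capture the current values for comparison afterwards; a minimal sketch:
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'remote%timeout%'
OR name = 'max server memory (MB)';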
I have a problem running a query against a linked server. I use SQL stored procedures for the ETL process in my BI project (for some reason I cannot use SSIS). One of my queries, which has to read recently changed records and insert them into my warehouse, takes too long to execute and always fails with this error:
OLE DB provider 'SQLOLEDB' reported an error for linked server 'XXX'. Execution terminated by the provider because a resource limit was reached.
Other queries run successfully. I also ran the following script on my linked server (warehouse) to increase the timeout threshold:
sp_configure 'remote login timeout', 30
go
reconfigure with override
go
sp_configure 'remote query timeout', 0
go
reconfigure with override
go
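As an aside, the query timeout can also be overridden per linked server rather than instance-wide; a minimal sketch, assuming the linked server name 'XXX' from the error message above:
EXEC sp_serveroption 'XXX', 'query timeout', 0
go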
Hint: I've used the change tracking option on the source tables to track updates and inserts.
I would be really thankful if someone could help me out with this.