Permissions required to access remote perfmon counters from SQL CLR

So I'm trying to learn SQLCLR and have chosen to write a table-valued function that essentially gathers some perfmon counters. I'm able to run it successfully and gather counters from the server that hosts the database instance. However, when I try to access counters on another server, I get "access denied". I can get it to work if I add the account that runs SQL Server to the "Performance Monitor Users" group on the remote server, but I want to know: can the function run as a different Windows account? That is to say, can I create a Windows account specifically for this task and somehow have SQL Server run the function in that context?

No, you cannot have the SQLCLR function run as a specific user. You may hear about using the LogonUser API to impersonate a user inside the SQLCLR function, but that approach is fraught with problems, particularly because of the issue of password storage. The correct solution is exactly what you did: grant the SQL Server service account the needed privileges by adding it to the required security group. By the way, if your SQLCLR function impersonates the current Windows login, you will need to set up constrained delegation.
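For context, accessing counters from SQLCLR also requires the assembly to be cataloged with at least EXTERNAL_ACCESS (or UNSAFE, depending on the APIs used). A minimal deployment sketch, with assumed assembly, class, and function names:

    -- All names are assumptions; in production, sign the assembly with a
    -- certificate rather than flipping TRUSTWORTHY on.
    ALTER DATABASE CounterDb SET TRUSTWORTHY ON;

    CREATE ASSEMBLY PerfCounters
    FROM 'C:\Deploy\PerfCounters.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;

    -- Expose the CLR table-valued function to T-SQL:
    CREATE FUNCTION dbo.GetCounters (@machine NVARCHAR(128))
    RETURNS TABLE (CounterPath NVARCHAR(256), CounterValue FLOAT)
    AS EXTERNAL NAME PerfCounters.UserDefinedFunctions.GetCounters;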
That being said, using SQLCLR to connect to a remote machine for anything is not a smart thing to do. Stealing the precious SQL Server workers to have them wait on slow network access is going to grind your server to a halt under load. You can do this as a way to learn how to do it, but don't even think about deploying it in production. Have the counter collection done by an external process and save the counters in the database. In fact, there is already a tool that does exactly that: logman.exe.
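If you go the external-collector route, the database side can be as simple as a staging table the collector writes into. A hypothetical sketch (table and column names are made up for illustration):

    -- Hypothetical staging table populated by an external collector
    -- (a scheduled task wrapping logman.exe, or a small service):
    CREATE TABLE dbo.PerfCounterLog (
        CollectedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        MachineName  NVARCHAR(128) NOT NULL,
        CounterPath  NVARCHAR(256) NOT NULL,
        CounterValue FLOAT         NOT NULL
    );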
And finally: querying performance counters from the C# API is extremely inefficient. You will quickly discover that there is a much faster API, the PDH library. But PDH has no managed equivalent, so you'll be back at square one, namely using the tool that does leverage PDH out of the box: logman.exe.

Related

Move from a local single-user database to an online multi-user database

I have a calendar-type WPF program that is used to assign the workload to a team. The events are stored in an Access database, and the program is used by one person at a time via a remote connection to a single computer. The team has grown, and multiple people now need to access the program simultaneously. I can install the program on several computers, but where should I move the database? To a file-sync service like Dropbox/OneDrive, or to an online SQL host? Thanks.
You can use SQL Server on many cloud platforms (Dropbox is file storage and cannot host a SQL Server database). Azure (Microsoft's cloud) is a very mature solution. Now that multiple users will be managing data, you should also verify that the database is backed up on a regular basis and that any updates to data are done within transactions that your code is aware of. 'Aware of' means that if there is a conflict, your code should either resubmit or notify the user that the insert/update/delete failed.
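A minimal T-SQL sketch of what "aware of" can look like, assuming a hypothetical events table with a rowversion column used for optimistic concurrency:

    -- All object names are placeholders.
    CREATE PROCEDURE dbo.ReassignEvent
        @EventId         INT,
        @NewOwner        NVARCHAR(100),
        @ExpectedVersion BINARY(8)   -- rowversion read when the event was loaded
    AS
    BEGIN
        SET NOCOUNT ON;
        SET XACT_ABORT ON;
        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.CalendarEvents
               SET AssignedTo = @NewOwner
             WHERE EventId = @EventId
               AND VersionStamp = @ExpectedVersion;  -- fails if someone else changed the row
            IF @@ROWCOUNT = 0
                THROW 50001, 'Row changed by another user; reload and retry.', 1;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- let the application resubmit or notify the user
        END CATCH;
    END;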

"Fire and forget" T-SQL query in SSMS

I have an Azure SQL Database where I sometimes want to execute ad-hoc SQL statements that may take a long time to complete. For example, to create an index, delete records from a huge table, or copy data between tables. Due to the amounts of data involved, these operations can take anywhere from 5 minutes to several hours.
I noticed that if a SQL statement is executed in SSMS, the entire transaction is automatically rolled back if SSMS loses its connection to the server before execution completes. This is problematic for very long-running queries, for example in case of local wifi connectivity issues, or if I simply want to shut down my computer and leave the office.
Is there any way to instruct SQL Server or SSMS to execute a SQL statement without requiring an open connection? We cannot use SQL Server Agent jobs, as this is an Azure SQL DB, and we would like to avoid solutions based on other Azure services if possible, as this is just for simple ad-hoc needs.
We tried the "Discard results after execution" option in SSMS, but this still keeps an open connection until the statement finishes executing.
It is not an asynchronous solution I am looking for, as I don't really care about the execution result (I can always check whether the query is still running using, for example, sys.dm_exec_requests). In other words: a simple "fire and forget" mechanism for T-SQL queries.
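For reference, the kind of check mentioned above, run from a second session:

    -- Is the long-running statement still going?
    SELECT r.session_id,
           r.status,
           r.command,
           r.percent_complete,                       -- populated for some operations only
           r.total_elapsed_time / 1000 AS elapsed_seconds,
           t.text AS statement_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID;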
While my initial requirements stated that we didn't want to use other Azure services, I have found that using Azure Data Factory seems to be the most cost-efficient and simple way to solve the problem. The other solutions proposed here seem to suffer from either high cost (spinning up VMs) or timeout limitations (Azure Functions, Azure Automation runbooks), none of which apply to ADF when used for this purpose.
The idea is:
Put the long-running SQL statement into a stored procedure (see the sketch below)
Create a Data Factory pipeline with a Stored Procedure activity to execute the SP on the database. Make sure to set the Timeout and Retry values of the activity to sensible values.
Trigger the pipeline
Since no data movement is taking place in Data Factory, this solution is very cheap, and I have had queries running for 10+ hours using this approach, which worked fine.
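A sketch of step 1, wrapping the ad-hoc work in a procedure the Stored Procedure activity can invoke (object names are placeholders):

    -- Placeholder names; the body is whatever long-running work you need.
    CREATE PROCEDURE dbo.RunLongMaintenance
    AS
    BEGIN
        SET NOCOUNT ON;
        -- e.g. build the index that would otherwise tie up an SSMS session:
        CREATE INDEX IX_BigTable_Col1
            ON dbo.BigTable (Col1)
            WITH (ONLINE = ON);
    END;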
If you could put the ad-hoc query in a stored procedure, you could then schedule it to run on the server, assuming you have the necessary privileges.
Note that this may not be a good idea, but it should work.
Unfortunately, I don't think you will be able to complete the query without an open connection in SSMS.
I can suggest the following approaches:
Pass the query to an Azure Function / AWS Lambda to execute on your behalf (perhaps expose it as a service via REST) and have it store or send the results somewhere accessible.
Start up a VM in the cloud and run the query from the VM via RDP. When you are ready, re-establish your RDP connection to the VM and view the outcome of the query.
Use an Azure Automation runbook to execute the query on a scheduled trigger.

Alarm DB Logger (Intouch) configuration with SQL Server Mirroring

I have an installation with two SCADA (InTouch) HMIs, and I want to save the data in a SQL Server database on another computer. To be as sure as possible that I always have an operating database, I'm going to set up SQL Server database mirroring, so I will have two SQL Server databases plus a witness server. About this I don't have any doubt. To make it easy to understand, I've made an image with the architecture of the system.
[Architecture diagram]
My doubt is: how do I configure the Alarm DB Logger to point, automatically, to the secondary database in case the principal database goes down due to an unexpected failover?
PS: I don't know if it's even possible.
Configure the database for automatic failover; connections are handled automatically when a failover occurs. Read up on mirroring endpoints. The links below should have more than enough information.
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/role-switching-during-a-database-mirroring-session-sql-server
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server
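For reference, a hedged sketch of the high-safety setup that automatic failover requires (a witness is mandatory; server names and port are placeholders):

    -- On the mirror instance (database restored WITH NORECOVERY first):
    ALTER DATABASE AlarmDB SET PARTNER = 'TCP://principal.plant.local:5022';
    -- On the principal instance:
    ALTER DATABASE AlarmDB SET PARTNER = 'TCP://mirror.plant.local:5022';
    ALTER DATABASE AlarmDB SET WITNESS = 'TCP://witness.plant.local:5022';
    -- High-safety (synchronous) mode is the default once partners are set;
    -- with the witness in place, the mirror takes over automatically if the
    -- principal fails.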
The AlarmDBLogger reads its configuration from the registry, so you could try the following:
Stop AlarmLogger
Change ServerName in registry [HKLM].[Software].[Wonderware].[AlarmLogger].[SQLServer]
Start AlarmLogger
But what about the two InTouch nodes? What if one of those fails? You would have to make sure one of them logs alarms, and that they don't log duplicates!
The standard controls and ActiveX components for alarms use a specific view in the alarm database. You cannot change that behaviour, but you can script a server change in InTouch or System Platform.
Keep in mind that redundancy needs to be tested, and should only be implemented if 100% uptime is necessary. In many cases you will be creating new problems to solve instead of solving an actual problem.

Which user account to assign as owner when attaching an SQL Server database?

This is a simple database security & performance question, but I've always used either a special user (e.g. mydbuser) or Windows' built-in Network Service account as the owner when attaching databases to my SQL Server instances.
When deploying my database to a production server, is there a specific user I should stick to or avoid? I would think that using an account with a set password could open the database up to a potential security issue.
Edit: Corrected NETWORK SECURITY to Network Service
You must always follow the principle of "least privilege".
Part of this is taking into consideration which applications are using the database. Ideally, your database is inaccessible from the open internet (block port 1433!). An attacker would then be forced to access the database through the application, via SQL injection.
Worms/malware/exploits most often target cmd.exe (or another shell, like bash under Linux). MS-SQL gives attackers access to cmd.exe via the xp_cmdshell extended stored procedure, and worms rely on it to spread. To make matters worse, on old versions of MS-SQL the "sa" account had a null password by default, and worms actively exploited this to spread.
Whatever account you run MS-SQL under, disallow access to cmd.exe. Make sure no applications use the "sa" account; instead, give each one a specific account that has only the bare minimum it needs to work. Even if you don't give the application access to xp_cmdshell, it's possible to conduct a privilege escalation attack through SQL injection. For instance, SqlNinja will attempt to brute force the "sa" account via SQL injection.
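A short sketch of that lockdown, assuming a hypothetical app_user account:

    -- Ensure xp_cmdshell stays disabled (0 is the default):
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 0;
    RECONFIGURE;

    -- Bare-bones application account instead of "sa"
    -- (name and password are placeholders):
    CREATE LOGIN app_user WITH PASSWORD = '<strong password here>';
    CREATE USER app_user FOR LOGIN app_user;
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_user;
    -- No sysadmin membership, no access to xp_cmdshell.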
I also recommend following these steps laid out by Microsoft, especially testing to make sure your security measures hold water.

Practical limit to number of SQL Server logins?

Working with an application that needs to provide row- and column-level security for user reports. The logic for the row filtering and column masking is in place, but there are still decisions to be made about identifying users at report execution time.
The application uses a single SQL Server login to authenticate, as all rights are data-driven within the application itself. This mechanism does not carry over well to reports, as clients like Crystal and MS Office do not authenticate through the application (web and WinForms).
The traditional approach of using SQL Server logins and database users will work well, but may have one issue. In some implementations of the application, the number of users who run reports and need to be uniquely identified may run into the hundreds.
Are there any practical limits to the number of logins or users on a SQL Server database (v 2005+) where this approach may cause problems? Administration of the users on the database server can be automated by the application, but the potential number of credentials may be a concern.
We have looked into user impersonation techniques, but they become difficult to implement when a report client such as Excel authenticates directly to the server.
Edit: The concern is not concurrency or workload, but rather administration issues on remote instances where a local DBA is not available, especially when the server is not dedicated to the application. Interested in scenarios where the numbers of logins were problematic.
I've used your described approach (SQL Server accounts managed automatically by our application) and we didn't have any trouble. However, we only ever had a maximum of perhaps 200 SQL accounts. We didn't experience any administrative overhead except when "power users" restored databases without telling us, causing the SQL login accounts to become out of sync with the database users (see the edit below).
I think your approach is sound.
EDIT: Our solution for this was a proc that simply ran through the user accounts and called our procs that dropped/recreated the database users. When the power users called this proc, all was well, and it was reasonably fast.
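A sketch of that resync idea, assuming same-named logins still exist at the server level: find orphaned SQL users after a restore and remap them.

    -- Remap orphaned database users to same-named server logins.
    DECLARE @name SYSNAME, @sql NVARCHAR(400);
    DECLARE orphan_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT dp.name
        FROM sys.database_principals AS dp
        LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
        WHERE dp.type = 'S'            -- SQL users
          AND dp.principal_id > 4      -- skip dbo, guest, sys, INFORMATION_SCHEMA
          AND sp.sid IS NULL;          -- no matching login: orphaned
    OPEN orphan_cur;
    FETCH NEXT FROM orphan_cur INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'ALTER USER ' + QUOTENAME(@name)
                 + N' WITH LOGIN = ' + QUOTENAME(@name) + N';';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM orphan_cur INTO @name;
    END;
    CLOSE orphan_cur;
    DEALLOCATE orphan_cur;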
