SQLWatch - notifications not being sent (sql-server)

I’m wondering if someone with knowledge/experience of SQLWatch could help me out with something.
We have SQLWatch set up on two DEV servers and one central monitoring server. It's working fine and the data from the two DEV servers is coming over to the central server; I can see alerts being recorded in the table [dbo].[sqlwatch_logger_check].
However, our issue is that we are not being notified by any means (no email, no PowerShell script running).
What's interesting is that if we drop a row into the table [dbo].[sqlwatch_meta_action_queue], the alert notification does happen.
So our issue seems to be that alerts are being raised but, for some reason, the record is not being inserted into the queue table. I suspect some sort of mapping issue, but as it stands everything looks OK. I use the following to check:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
And it shows the failed-job check is set to run our PowerShell script, which it does when the row is manually inserted.
Any ideas on what the cause may be here?
Thanks,
Nic

I am the creator of SQLWATCH.
Firstly, just to clarify: the default notifications that come with SQLWATCH only work in a local scope, i.e. they fire on each monitored instance where @@SERVERNAME = sql_instance. If you are expecting the default notifications to fire from the central server for a remote instance, this will not happen. The default notifications on the central server will only fire for the central server itself, not for data imported from the remote instances. This is done to avoid a situation where the pull into the central repository is infrequent and notifications would therefore be significantly delayed.
However, there is nothing stopping you from creating Check Rules or Reports to fire on the back of the imported data.
Secondly, the checks are not alerts per se. Checks are just... well, checks... that run periodically and make sure everything is in order. Checks can trigger an action, such as sending an email. For this, as you have worked out, there is an association table that links checks and actions together.
As for your problem: is the actual action enabled? All actions that are not associated with a report are disabled by default, as they need to be configured first.
Add a column to your query to bring in the action_enabled column:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec, [action_enabled]
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
Or, there is already a view that should provide you with the complete mapping:
SELECT *
FROM [dbo].[vw_sqlwatch_report_config_check_action]
WHERE check_id = -1
The application log table [dbo].[sqlwatch_app_log] should also contain valuable information. Did you look in there for anything out of the ordinary?
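For example, a quick look at recent entries (a minimal sketch; the exact columns, and any timestamp column to sort on, vary by version, so this simply pulls everything):
SELECT TOP (200) *
FROM [dbo].[sqlwatch_app_log];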
Summarising
In order to enable alerts in a brand-new install of SQLWATCH, all that's needed is to set up action_exec with your email details and to set action_enabled to 1. If you have made some other changes, it may be easier to reinstall and go back to the defaults.
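For illustration, a minimal sketch of that change; the action_id and the exec command are placeholders, so check your own values with the mapping query above:
-- hypothetical example: enable an email action
UPDATE [dbo].[sqlwatch_config_action]
SET action_exec = '<your email command here>',
    action_enabled = 1
WHERE action_id = <your action_id>;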

Related

Logic Apps - SQL Connector returning cached data?

I have a Logic App that uses the "SQL Server - When an item is modified (V2)" trigger, monitoring an Azure SQL DB for updated rows. When running this LA, I noticed that the modified row that came as output for this trigger did NOT contain the updated data.
I thought this might be by design (don't really see why, but ok...) so I added a "Get Row" action directly after the trigger, to go fetch the most recent data for the row that triggered the LA. But even this step still returned the old, not-updated data for that row.
However, when I resubmit the run some seconds later, the "Get Row" action does get the updated data from the database.
Is this normal behavior? Is the SQL DB row version already updated even though the data update isn't committed yet, triggering the Logic App but not returning the updated data yet?
Thanks for pointing out that a timestamp column should be added to the table; after I added it I could find the table in the selection. I tested it on my side and the trigger works fine: it outputs the updated data. I provide my logic below for your reference.
My table looks like this:
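Roughly, in T-SQL (the column types are my assumption; id and name match the update statements below, and the rowversion column is what lets the trigger detect modified rows):
CREATE TABLE Table3 (
    id   int PRIMARY KEY,  -- ids used in the update statements below
    name nvarchar(50),     -- the column being updated
    ts   rowversion        -- the timestamp column the trigger keys on
);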
My logic app:
Please note that I disabled "Split On" under "Settings" on the trigger.
After running the update SQL:
update Table3 set name = 'hury1' where id = 1;
update Table3 set name = 'jim1' where id = 2;
I got the result: the updateItems variable contains both updated items.

Oracle DB: Two identical connections running the same query from the same user, and getting different results?

I have a test service account for an Oracle 11 database.
I can connect to the database, by creating a new connection in SQLDeveloper, and query the data.
I run the query, say,
select * from dw_my_data_dbo.vw_gftr_domain2_my_data;
when I open these views, they appear to be defined like this:
-- Unable to render VIEW DDL for object dw_my_data_dbo.vw_gftr_domain2_my_data with DBMS_METADATA attempting internal generator.
CREATE VIEW dw_my_data_dbo.vw_gftr_domain2_my_data AS
SELECT
my_data_category,
my_data_external_id,
my_data_internal_id,
my_data_desc,
my_data_rating
FROM gftr_domain2_my_data
WHERE active_my_data = 'Y'
I get 1000 rows back with the data I expect - great.
My colleague does step-for-step the same thing - same username, same credentials, same version of SQLDeveloper even: But he gets 0 rows back. No error messages or anything, just an empty results set.
This behaviour is the same for every object I have access to on the database (I only have access to views, as it turns out, the majority of which closely resemble the one above, with varying numbers of columns of course).
What gives?
I'm guessing it's something to do with how the database is handling multiple connections from the same 'user', but I'm certain I've been able to do this in the past without issue.
If I'm not the DBA, is there any way to debug this issue?
What I've tried so far:
Changing around the queries, in functional and non-functional ways, to see if the problem was some sort of result-caching thing - but the second session always returns no rows, no matter how I manipulate or pull the data.
Committing (everything). No effect.
Querying gv$session from both sessions to see if it's definitely the same server - unfortunately I don't have access to gv$session with this user (see the SYS_CONTEXT query below for what I can still check).
Disconnecting and reconnecting everything. Same result.
Making absolutely certain I'm using the same credentials/server name from both machines.
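One thing I can still run without gv$session access is SYS_CONTEXT; comparing its output from both connections might show a difference (assuming the attributes are populated on this database):
-- run from both SQL Developer connections and compare the output
SELECT sys_context('USERENV', 'SESSION_USER')   AS sess_user,
       sys_context('USERENV', 'CURRENT_SCHEMA') AS curr_schema,
       sys_context('USERENV', 'DB_NAME')        AS db_name,
       sys_context('USERENV', 'SERVICE_NAME')   AS service_name,
       sys_context('USERENV', 'SERVER_HOST')    AS server_host,
       sys_context('USERENV', 'SID')            AS session_id
FROM dual;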

How do I update columns via SQL when a column has been changed? Kind of like a log! Microsoft SQL Server Management Studio

Okay, just to clarify: I have a SQL table (with ID, School, Student ID, Name, Fee $, Fee Type, and Paid as the columns) that needs to be posted to a grid that will be uploaded to a website. The grid shows everything correctly and shows which fees need to be paid. The Paid column has a bit data type, 1 or 0 (basically a checklist). I am being asked to add two more columns: User and DateChanged. The reason is to log who changed the Paid column: the table would capture only the username of the staff member who changed it, and the time. So, to clarify even more, I need to create two columns, User and DateChanged, that would log when someone changed the Paid column.
For example: user Bob checks the Paid column for student X on 5/2/17 at 10pm.
In the same row of student X's info, Bob would appear under the User column, and under DateChanged it would show 2017-05-02 10pm.
What steps would I take to make this possible?
I'm currently an IT intern and all this SQL stuff is new to me. Let me know if you need more clarification. FYI, the two new columns (User, DateChanged) will not be on the grid.
The way to do this as you've described is to use a trigger. I have an example of some code below, but be warned: triggers can have unexpected side-effects, depending on how the database and app interface are set up.
If it is possible for you to change the application code that sends SQL queries to the database instead, that would be much safer than using a trigger. You can still add the new fields, you would just be relying on the app to keep them updated instead of doing it all in SQL.
Things to keep in mind about this code:
If any background processes or procedures make updates to the table, the trigger will overwrite the timestamp and username too, because it fires on any update to the row(s) in question.
If the users don't have any direct access to SQL Server (in other words, the app is the only thing connecting to the database), then it is possible that the app will only be using one database login username for everyone, and in that case you will not be able to figure out which user made the update unless you can change the application code.
If anyone changes something by accident and then changes it back, it will overwrite your timestamp and make it look like the wrong person made the update.
Triggers can potentially bog down the database system if there are a very large number of rows and/or a high number of updates being made to the table constantly, because the trigger code will be executed every time an update is made to a row in the table.
But if you don't have access to change the application code, and you want to give triggers a try, here's some example code that should do what you are needing:
create trigger TG_Payments_Update on Payments
after update
as
begin
    -- Stamp the updated rows with the current time and database user.
    -- Note: this fires on ANY update, not just changes to Paid (see the caveats above).
    update p
    set DateChanged = GetDate(),
        UserChanged = USER_NAME()
    from Payments as p
    inner join inserted as i
        on p.ID = i.ID
end
The web app already knows the current user working on the system, so your update would just include that user's ID and the current system time for when the action took place.
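A minimal sketch of that app-issued update; the parameter names here are hypothetical, supplied by the app:
-- hypothetical parameterized statement sent by the app
UPDATE Payments
SET Paid = @Paid,
    UserChanged = @CurrentUser, -- the app's logged-in user, not the SQL login
    DateChanged = GETDATE()
WHERE ID = @ID;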
I would not rely on SQL Server triggers since that hides what's going on within the system. Plus, as others have said, they have side effects to deal with too.

How to change the default organization selected for a user?

In the earlier versions of CRM, the default organization could be set from Deployment Manager. It's not the case anymore, though. Now, every user gets his own default depending on the first organization ever accessed on the server.
I have strong (and less than favorable) opinions on the subject but it seems that Microsoft cares very little what I think.
So, I'm going to do the following to the DB.
use MSCRM_CONFIG
update SystemUser
set DefaultOrganizationId = 'GUID of the main organization'
--where Id='GUID of a user'
However, I'm concerned that it'll break something and take an eternity to restore, so I'm verifying by asking the question here.
How can I ensure beyond any possible doubt that I've got the correct GUID for the organization?
Will it work well when commenting out the clause targeting individual users and hitting all of them in one swing?
What other considerations should I have, apart from backing up the whole system prior to the operation?
And if anybody can suggest a smoother and less intrusive way, I'll be jumping for joy.
You can utilize the script at http://complexitykills.blogspot.com/2009/09/default-organization-for-user-is.html, which is similar to yours but includes a little more logging. Note the comment there about adding a where clause condition checking that the organization has IsDeleted = 0, to prevent selecting an organization that has been deleted. If you issue your SQL command inside a SQL transaction, you can run the script, validate that users can still log in to Microsoft CRM, and if needed quickly issue a ROLLBACK TRAN to roll the transaction back, rather than having to perform a complete restore of the MSCRM_CONFIG database (although that should be quick to restore, as it is never very large as far as SQL Server databases are concerned).
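A sketch of that transactional approach, wrapped around your own script (validate before you commit; the GUID is still a placeholder):
USE MSCRM_CONFIG;
BEGIN TRAN;

UPDATE SystemUser
SET DefaultOrganizationId = 'GUID of the main organization';
--WHERE Id = 'GUID of a user'

-- validate that users can still log in to CRM, then run ONE of:
-- COMMIT TRAN;
-- ROLLBACK TRAN;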
To get the correct OrganizationID, you can use a SQL Query like this:
DECLARE @DefaultOrganization AS VARCHAR(100);
DECLARE @DefaultOrganizationId AS UNIQUEIDENTIFIER;
SET @DefaultOrganization = '<organizationname>';
SELECT @DefaultOrganizationId = Id
FROM MSCRM_CONFIG..Organization
WHERE UniqueName = @DefaultOrganization AND IsDeleted = 0;
If you don't include the where clause, it will indeed update all of the users to the OrganizationId you specify, and it should work well (see the query above for an example of how to retrieve the OrganizationId from the MSCRM_CONFIG..Organization table).
This is not necessarily a common operation, but I have seen it used at a few organizations to successfully update the default organization associated with a user, noting that precautions were taken beforehand to back up the databases and testing was performed afterwards to ensure everything worked for these users in Microsoft CRM.

How to properly implement "per field history" through triggers in SQL Server (2008)

So, I'm facing the challenge of having to log the data being changed for each field in a table. Now, I can obviously do that with triggers (which I've never used before, but I can imagine they're not that difficult), but I also need to be able to link the log to who performed the change, which is where the problem lies. The trigger wouldn't be aware of who is performing the change, and I can't pass in a user id either.
So, how can I do what I need to do? If it helps say I have these tables:
Employees {
EmployeeId
}
Jobs {
JobId
}
Cookies {
CookieId
EmployeeId -> Employees.EmployeeId
}
So, as you can see, I have a Cookies table which the application uses to verify sessions, and I can infer the user from it; but again, I can't make the trigger aware of that if I want to make changes to the Jobs table.
Help would be highly appreciated!
We use context_info to set the user making the calls to the DB. Then our application-level security can be enforced all the way down into DB code. It might seem like overhead, but really there is no performance issue for us.
make_db_call() {
    set context_info --some data representing the user--
    do sql incantation
}
Then, in the DB:
select @user = dbo.ParseContextInfo()
-- ...audit/log/security etc. can then determine who...
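A concrete sketch of the same pattern using only built-ins (dbo.ParseContextInfo is our own helper; CONTEXT_INFO() returns the raw varbinary(128), zero-padded, and the username here is just an example):
-- stamp the session with the calling user (done by the app before its queries)
DECLARE @ctx varbinary(128) = CAST('jsmith' AS varbinary(128)); -- 'jsmith' is a placeholder
SET CONTEXT_INFO @ctx;

-- later, e.g. inside a trigger: read it back and strip the zero-byte padding
SELECT REPLACE(CAST(CONTEXT_INFO() AS varchar(128)), CHAR(0), '') AS current_app_user;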
To get the previous value inside the trigger you select from the 'deleted' pseudo-table, and to get the values you are putting in you select from the 'inserted' pseudo-table.
Before you issue the linq2sql query, issue a command like this:
context.ExecuteQuery("exec some_sp_to_set_context " + userId);
Or, more preferably, I'd suggest an overloaded DataContext where the above is executed before each query. See here for an example.
We don't use multiple SQL logins, as we rely on connection pooling and also on locking down the DB caller to a limited user.
