Is this handled automatically somehow, or should I have a scheduled job that deletes these periodically? I suppose keeping them for audit purposes makes sense, but revocation does a hard delete, so I don't think that's the intent of this table/entity.
This works for me:
services.AddIdentityServer()
    .AddOperationalStore(options =>
    {
        options.EnableTokenCleanup = true;
        options.TokenCleanupInterval = 3600; // 1 hour (value is in seconds)
    });
I don't know whether the EF implementation of persisted grants comes with a scheduled job, but that should be easy to verify by looking at what EF creates. We are using a different ORM to manage our grants table, so in that case, yes - we would need to create a scheduled job to clean up that table.
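If you do end up writing your own cleanup job, the query itself is small. A minimal sketch, assuming the default IdentityServer4 EF schema where expired grants live in a PersistedGrants table with a nullable Expiration column (adjust the names to whatever your ORM actually maps):

-- Remove persisted grants whose expiration has passed.
-- Table/column names assume the default IdentityServer4 EF schema.
DELETE FROM PersistedGrants
WHERE Expiration IS NOT NULL
  AND Expiration < GETUTCDATE();

Scheduled through SQL Server Agent (or whatever scheduler you already use), that gives you roughly the same behaviour as EnableTokenCleanup.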
I want to use the sessionId in Snowflake's query_history to find all the queries executed in one session. It works fine on the Snowflake end when I have different worksheets, which create different sessions. But from other tools (which appear to reuse the same connection from the pool until the connection is recreated), multiple jobs end up with the same session id in Snowflake's query_history. Is there a way to have a new sessionId created on every execution? I am using the Control-M scheduling/job automation tool to execute multiple jobs, each of which executes a different Snowflake stored proc. I want to see if I can get a different sessionId for each execution of a procedure in the Snowflake query_history table.
Thanks
Djay
You can change the "idle session timeout"; see the documentation here.
You can set it as low as 5 minutes, which means any queries that are at least 5 minutes apart will have to reauthenticate and will get a new session.
CREATE [OR REPLACE] SESSION POLICY DO_NOT_IDLE SESSION_IDLE_TIMEOUT_MINS = 5
Though I believe this will affect any applications that use your account, and will make your applications need to reauthenticate every time the session expires.
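One thing to note: as I recall, a session policy only takes effect once you attach it, either to the whole account or to a specific user. A sketch, assuming the DO_NOT_IDLE policy above and a hypothetical CONTROL_M_SVC service user:

-- Attach at the account level (affects every user/application on the account)...
ALTER ACCOUNT SET SESSION POLICY DO_NOT_IDLE;
-- ...or only to the service user that Control-M logs in with.
ALTER USER CONTROL_M_SVC SET SESSION POLICY DO_NOT_IDLE;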
Another option, if you need a window smaller than 5 minutes, is to get the session id and explicitly run an ABORT_SESSION after your query has finished, which would look something like this:
SELECT SYSTEM$ABORT_SESSION(CURRENT_SESSION())
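Once each execution gets its own session, pulling its queries back out of query_history is just a filter on the session id. A sketch against the ACCOUNT_USAGE view (which has some latency; the column names below are the documented ones, and you can swap in the INFORMATION_SCHEMA.QUERY_HISTORY table function if you need near-real-time results):

-- All queries that ran in one particular session, newest first.
SELECT query_id, query_text, start_time, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE session_id = 1234567890123456  -- the session id you captured
ORDER BY start_time DESC;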
MaxScale distributes the requests to the MariaDB master/slave servers on which the database is located.
What I need is a script, running as a cron job or something similar, that verifies the GTID on the master and the slaves. If a slave's GTID differs from the master's GTID, I want to be informed/alarmed via email.
Unfortunately I have no idea whether this is possible and how to do it.
You can enable gtid_strict_mode to automatically stop replication if GTIDs from the same domain conflict with what is already in the binlogs. If you are using MaxScale, it will automatically detect this and stop using that server.
Note that this will not prevent transactions from other GTID domains from causing problems with your data. This just means you'll have to pay some attention if you're using multi-domain replication.
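For reference, enabling it is a one-liner on a running MariaDB server; a sketch:

-- Turn on GTID strict mode for the running server.
SET GLOBAL gtid_strict_mode = ON;
-- Add gtid_strict_mode=ON under [mysqld] in the server config so it survives a restart.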
If you want to be notified of this, you can use the script option in MaxScale to trigger a custom script to be launched whenever the server stops replicating.
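If you still want the cron-style check as well, the heart of it is just two queries; a sketch assuming the standard MariaDB GTID variables (your script runs the first on the master, the second pair on each slave, compares the values, and mails an alert when they differ):

-- On the master: the current GTID position.
SELECT @@gtid_current_pos;

-- On each slave: the position replicated so far, plus the full replication status.
SELECT @@gtid_slave_pos;
SHOW SLAVE STATUS;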
Our deployment process involves two db copy procedures, one where we copy the production db to our rc site for rc testing, and then one where we copy the production db to our staging deployment slot for rollback purposes. Both of these can take as long as ten minutes, even though our db is very small. Ah, well.
What I'd like to do is have a way to get notified when a db Copy operation is done. Ideally, I could link this to an SMS alert or email.
I know that Azure has a big Push Notification subsystem, but I'm not sure whether it can hook the completion of an arbitrary db copy, or whether there's a lighter-weight solution.
There is some information about copying a database on this page: http://msdn.microsoft.com/en-us/library/azure/ff951631.aspx. If you are using T-SQL, you can check the copy progress with a query like SELECT name, state, state_desc FROM sys.databases WHERE name = 'DEST_DB'. You can keep running this query and send an SMS when it shows the copy has finished.
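A rough sketch of that polling loop, run against the master database of the destination server (the COPYING state value and the 30-second interval are assumptions; sys.dm_database_copies also exposes a percent_complete column while the copy is in flight):

-- Poll until the copied database leaves the COPYING state.
DECLARE @state nvarchar(60);
WHILE 1 = 1
BEGIN
    SELECT @state = state_desc FROM sys.databases WHERE name = 'DEST_DB';
    IF @state IS NOT NULL AND @state <> 'COPYING'
        BREAK;  -- ONLINE means the copy finished; anything else likely means it failed
    WAITFOR DELAY '00:00:30';  -- check every 30 seconds
END
SELECT name, state, state_desc FROM sys.databases WHERE name = 'DEST_DB';
-- At this point your calling code can send the SMS or email.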
I had a package that worked perfectly until I decided to put some of its tasks inside a sequence container (more on why I wanted to do that - How to make a SSIS transaction in my case?).
Now I keep getting this error -
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remembered that I had met Matt Mason when I spoke at a SQL Saturday in New Hampshire. He is the Microsoft Program Manager for SSIS.
Since I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt had an article out there.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high-level summary of the approaches and of the error you found.
[ERROR]
The error you found is related to MSDTC having issues. MSDTC must be configured and working correctly; firewalls are a common cause of problems. Check out this post.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. Performance may also suffer, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection manager.
This forces all calls through one session (SPID); all transaction management is now done on the server.
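For reference, a sketch of what those Execute SQL Tasks would actually run, assuming an OLE DB connection manager pointing at SQL Server with RetainSameConnection = True:

-- Task A (Execute SQL): open the transaction on the shared session.
BEGIN TRANSACTION;

-- Task B: the Data Flow runs here, using the same connection manager.

-- Task C.1 (Execute SQL, success path): make the work permanent.
COMMIT TRANSACTION;

-- Task C.2 (Execute SQL, failure path): undo everything since BEGIN.
ROLLBACK TRANSACTION;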
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you have to go out and use checkpoints.
One solution is to always use UPSERTs: insert new data, update old data, and make deletes only a flag in a table. This pattern allows a failed job to be executed many times while still arriving at the same final state.
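A minimal T-SQL sketch of that idempotent UPSERT pattern (the table and column names are made up purely for illustration):

-- Re-runnable load: new keys are inserted, existing keys are refreshed,
-- and deletes are only a soft flag, so running the job twice is harmless.
MERGE dbo.DimCustomer AS tgt
USING staging.Customer AS src
    ON tgt.CustomerKey = src.CustomerKey
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name, tgt.IsDeleted = 0
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, Name, IsDeleted)
    VALUES (src.CustomerKey, src.Name, 0)
WHEN NOT MATCHED BY SOURCE THEN
    UPDATE SET tgt.IsDeleted = 1;  -- soft delete instead of a hard DELETE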
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (which tracks just the changed pages)? Take a snapshot before the ETL job. If an error occurs, restore the database from the snapshot. The last step is to remove the snapshot from the system to clean house.
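A sketch of that snapshot-based safety net (the database name and file path are placeholders; the snapshot's sparse file path must exist on the server):

-- 1. Before the ETL job: snapshot the target database.
CREATE DATABASE MyDb_PreEtl
ON (NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_PreEtl.ss')
AS SNAPSHOT OF MyDb;

-- 2. If the ETL job fails: revert the database to the snapshot.
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_PreEtl';

-- 3. After a successful run (or after the revert): clean up.
DROP DATABASE MyDb_PreEtl;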
In short, I hope this is enough ideas to help you out.
While the transaction option is nice, it does have some downsides. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to use Encrypt Sensitive with Password and enter a password. The password won't disappear.
Have you tried testing the connection to the database in the connection manager?
I have a SQL Server 2005 database that has been deleted, and I need to discover who deleted it. Is there a way of obtaining this user name?
Thanks, MagicAndi.
If there has been little or no activity since the deletion, then the out-of-the-box trace may be of help. Try running:
DECLARE @path varchar(256)
SELECT @path = path
FROM sys.traces
WHERE id = 1

SELECT *
FROM fn_trace_gettable(@path, 1)
[In addition to the out-of-the-box trace, there is also the less well-known 'black box' trace, which is useful for diagnosing intermittent server crashes. This post, SQL Server’s Built-in Traces, shows you how to configure it.]
I would first ask everyone who has admin access to the SQL Server if they deleted it.
The best way to recover the data is to restore the latest backup.
Now to discuss how to avoid such problems in the future.
First, make sure your backup process is running correctly and frequently. Take a transaction log backup every 15 minutes, or every half hour if it is a highly transactional database; then the most you lose is half an hour's worth of work. Practice restoring the database until you can easily do it under stress.
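For reference, the log backup itself is a one-liner you would schedule with SQL Server Agent (database name and path are placeholders; it assumes the database is in the FULL recovery model and already has a full backup):

-- Scheduled every 15-30 minutes by a SQL Server Agent job.
BACKUP LOG MyProductionDb
TO DISK = 'E:\Backups\MyProductionDb_log.trn';  -- in practice, use a timestamped file name per run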
In SQL Server 2008 you can add DDL triggers (they are available in SQL Server 2005 as well), which allow you to log who made changes to the structure. It might be worth your time to look into this.
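This won't tell you who already dropped the database, but it will catch the next one. A minimal sketch of a server-level DDL trigger (the DdlAuditLog table is hypothetical; create it with matching columns first):

-- Log every DROP DATABASE, including the login that issued it.
CREATE TRIGGER trg_AuditDropDatabase
ON ALL SERVER
FOR DROP_DATABASE
AS
BEGIN
    -- master.dbo.DdlAuditLog is an assumed audit table (EventTime datetime, LoginName sysname, EventXml xml).
    INSERT INTO master.dbo.DdlAuditLog (EventTime, LoginName, EventXml)
    VALUES (GETDATE(), ORIGINAL_LOGIN(), EVENTDATA());
END;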
Do NOT allow more than two people admin access to your production database - a DBA and a backup person for when the DBA is out. These people should deploy all changes to the database structure and code, and all of the changes should be scripted out, code reviewed, and tested first on QA. No unscripted, "run by the seat of your pants" code should ever be run on prod.
Here is a bit more precise T-SQL:
SELECT DatabaseID, NTUserName, HostName, LoginName, StartTime
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
     ( SELECT TOP 1 f.[value]
       FROM sys.fn_trace_getinfo(NULL) f
       WHERE f.property = 2
     )), DEFAULT) T
JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE TE.trace_event_id = 47  -- 47 is the Object:Deleted event
  AND T.DatabaseName = 'delete'  -- replace 'delete' with the name of the dropped database
This can be used whether or not you know the database/object name. The results include the DatabaseID, NTUserName, HostName, LoginName, and StartTime for each delete event.