Snowflake UI sessions are not killed after 4 hours

I know that there is a user parameter, CLIENT_SESSION_KEEP_ALIVE, which defines whether a session should stay alive indefinitely or be killed after 4 hours of inactivity.
Mine is set to False.
show parameters like 'CLIENT_SESSION_KEEP_ALIVE';
But in the Snowflake UI, on the Account > Sessions tab, I can see an old session of mine that started almost 2 days ago.
When I check this session in the QUERY_HISTORY table, I also see that it has run no queries for almost 2 days.
Why is my session not getting killed? Which settings should I change?
If there is no way to kill such sessions automatically, then I'd like to kill them manually using the select system$abort_session(<session_id>); command.
To do that, I first need to get the list of active sessions that I see on the UI tab Account > Sessions.
Is there any system table/view which can provide such data?

Per the Snowflake documentation, CLIENT_SESSION_KEEP_ALIVE only applies to the ODBC, JDBC, Python, and Node.js client connectors. It does not affect the UI.
https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive
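For the second part of the question (getting that list from SQL rather than the UI), here is a minimal sketch, assuming your account can query the SNOWFLAKE.ACCOUNT_USAGE share and that its SESSIONS view is available; note that the view lags by a few hours and also lists sessions that have already ended, so treat it as a starting point rather than a live list:

-- list recent sessions (column names as documented for ACCOUNT_USAGE.SESSIONS)
select session_id, user_name, created_on, client_application_id
from snowflake.account_usage.sessions
order by created_on desc
limit 100;

-- then abort a specific session manually (placeholder left as in the question)
select system$abort_session(<session_id>);

Aborting someone else's session generally requires a sufficiently privileged role (e.g. ACCOUNTADMIN).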

Related

Hooks for Active Directory

We have a process during new hire onboarding that requires managers and/or ops teams to spend time creating accounts and giving permissions to employees. Recently, we have been thinking that it would be nice to automate this process, e.g. through a script of sorts.
A good indication that someone has recently joined our team (under some organizational unit) would be, if such a mechanism exists, for our Active Directory to post an event to some server.
So my question is, does AD have support for hooks or any sort of automation that developers can tap into?
See this about Active Directory creating an event upon user creation.
Then you can attach a scheduled task to the event. This blog entry explains in detail how to pass parameters to the PowerShell script defined in the task: it involves manipulating the XML export of the task itself to insert the XPath query of an event detail.
Or, depending on the size of your organization, you could query a dynamic group in which all user objects are retrieved, and work on the delta from a previous run.

SSRS Job Custom steps being deleted

I have added some custom steps to some of my SSRS jobs; however, they are being removed after a couple of days, every time. I know that if you add custom steps and then change the report or the subscription in the UI, it overwrites the jobs. However, these are not being touched, yet they still disappear.
Has anyone else come across this problem?
Although I often customize jobs that run subscriptions, no, I haven't come across that problem. I did not customize the jobs that were created automatically; instead, I created my own.
For a subscription to fire successfully, the name of the job isn't important. Instead, the SQL code to execute the subscription (more specifically, the SubscriptionID) is what you need to know. Since you were able to find the jobs that execute specific subscriptions, I think you don't have a problem finding this information either. The code you need looks like this:
exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='<YourSubscriptionID>'
You can use this code in your own jobs as well, and it will work as long as the subscription is there.
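If finding the SubscriptionID ever becomes the sticking point, a query along these lines can be run against the report server catalog. This is only a sketch, assuming the default [ReportServer] database name with its standard Subscriptions and Catalog tables:

-- look up subscription IDs per report (assumes the default ReportServer catalog database)
select c.[Path] as ReportPath,
       s.SubscriptionID,
       s.Description,
       s.LastStatus
from [ReportServer].dbo.[Subscriptions] as s
join [ReportServer].dbo.[Catalog] as c on c.ItemID = s.Report_OID
order by c.[Path];

The SubscriptionID returned there is the value to paste into the AddEvent call above.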
The name of the SSRS-generated job is the ID of the report schedule that you define for the subscription. SSRS needs this name to know which job's schedule to change when you change the subscription's schedule. As you found out, SSRS resets these jobs not only when a subscription is changed. But you don't need all of these jobs once you create your own jobs that run the subscriptions.
To get rid of the auto-generated job with that cryptic name, don't just delete it yourself (SSRS would re-create it); instead, change the schedule of the subscription to a shared schedule that will never run. For this, I created a shared schedule (under Site Settings) named "Disabled Schedule" and disabled that schedule.

Access Database slow opening process when multiple users

Situation: I made an Access database which runs 4 update queries + 1 SQL UPDATE statement in an AutoExec macro as soon as a user opens it, before displaying all the updated information in a form with conditional formatting applied to it.
This database will be used (edited/read) by multiple users at the same time. So far, if only 1 user opens it, it's fine.
But if multiple users do, the first one who opens it sees the form after about 10 seconds, whereas the second sees it after 60 seconds or more, and navigation through it is really slowed down.
Any ideas on how I can fix this?
I currently think that this can be fixed with the Advanced client settings in the Access Options; these are my settings at the moment (screenshot not included).
EDIT: The database in question is NewVersion.accdb, and I asked all users to open this one.

Clearing Access Cache

I am developing a system in Access talking to a SQL Server backend. I can connect with two separate accounts, A and B, so that I can control permissions. In particular, I have a view which is accessed via a pass-through query that is denied to A but allowed for B.
Normally the selection of A or B as the login depends on which Access security group the user belongs to, but I have set it up so that people in the Admins group (i.e. me) read the login from an internal Access table. I have also created a form (and associated code) that allows an Admin to change this value.
This all works great and does its job perfectly, provided I start up Access from scratch.
It detects that I am an admin, reads the last value I set in the internal table, connects to the server with the correct login string (I loop through deleting and re-creating all the TableDefs using this new connection string) and then displays my first form. I navigate to a button that runs the pass-through query. When I click that button it re-creates the pass-through query, by deleting the one with the same name and re-creating it with the correct connection string (A or B login), before running it to output the results. If I am A, it fails with a permission error (which I display and inform the user about); if I am B, it works and I get the results.
I have added a system to attempt to change this on the fly for testing purposes. Having changed who Admin should log in as (by writing to an internal table), it re-calls the startup code, which loops through deleting and re-creating the TableDefs and then puts me back at the initial form.
HOWEVER: if I now navigate to the button that runs my permission-controlled query, it still deletes and re-creates the QueryDef from scratch, but when I run it, it seems to run in the context of the SQL Server login that was set when I first started Access, and not the new SQL Server login I have just re-created everything with. So the query will run when it shouldn't (or vice versa).
If I exit Access and try again, it starts working properly again.
The only conclusion I can draw from this is that somewhere inside Access it is caching the ODBC connection string, and instead of using the new one it is using the old.
So my question is: is my conclusion correct, and if so, how can I tell Access to clear its cache?
I am developing in Access 2010, for a system that will ultimately be running in an Access 2000 environment, so the file format is an .mdb in the Access 2000 format.
I came to this topic because I had the same question: "How to clear the cache in Access 2010?"
In my case, the problem was that my application somehow "remembered" the entire path to my linked photos, even though I referenced only the file name. One of the links above led me to search under "File > Current Database > Caching Web Service and SharePoint tables." The option to "Use the cache format that is compatible with MS Access 2010" was already checked, but I enabled the check box for "Clear Cache on Close" and closed the database.
Voila! All previously cached values, including the values for my linked photos, were cleared out. It doesn't appear that this setting affected my ODBC DSN-less connections, but I haven't confirmed this.
TO CLEAR CACHING, go to File --> Options --> Current Database, and scroll down to Caching Web Service and SharePoint tables.
Could this page about ODBC linked Access password reset be what you're looking for?
As far as I know, there is no way to clear this cache. If you execute a query and supply a different UID/password for that query, then the permissions you obtained from that act will remain in effect until such time as you close down Access.
Thus if you execute another query and supply a "different" UID/password, and then later on execute another query with "lower" permissions, the other cached UID/password will be used. So you can (and will) have multiple UID/passwords cached at this point in time, and you have no control over which one is used.
The only way around this would be to adopt a separate ADO query; to my knowledge, ADO does not cache the credentials the way DAO queries do.

Drupal website blocked because of many connection errors - website goes offline

From time to time, the number of database connections from our Drupal 6.20 system to our MySQL database reaches 100-150, and after a while the website goes offline. The error message when trying to connect to MySQL manually is "blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts'". Since the database is hosted on Amazon RDS, I don't have permission to issue this command, but I can reboot the database, and once rebooted the website works normally again. Until next time.
Drupal reports multiple errors prior to going offline, of two types:
Duplicate entry '279890-0-all' for key 'PRIMARY' query: node_access_write_grants /* Guest : node_access_write_grants */ INSERT INTO node_access (nid, realm, gid, grant_view, grant_update, grant_delete) VALUES (279890, 'all', 0, 1, 0, 0) in /var/www/quadplex/drupal-6.20/modules/node/node.module on line 2267.
Lock wait timeout exceeded; try restarting transaction query: content_write_record /* Guest : content_write_record */ UPDATE content_field_rating SET vid = 503621, nid = 503621, field_rating_value = 1212 WHERE vid = 503621 in /var/www/quadplex/drupal-6.20/sites/all/modules/cck/content.module on line 1213.
The nids in these two queries are always the same and refer to two nodes that are frequently and automatically updated by a custom module. I can see a correlation between these errors and unusually high numbers of web requests in the Apache logs. I would understand the website becoming slower because of this. But:
Why do these errors occur, and how can they be solved? It seems to me it's to do with several web requests trying to update the same node at the same time. But surely Drupal should deal with this by locking the tables etc? Or should I deal with it in some special way?
Despite the higher web load, why does the database lock up completely and require a reboot? Wouldn't it be better if the website still had access to MySQL, so that once the load drops it can serve pages again? Is there some setting for this?
Thank you!
This can often be solved by one or all of these three things to check:
Are you out of disk space? From SSH, run df -h and make sure you still have free disk space.
Are the tables damaged? Repair the tables in phpMyAdmin, or see the CLI instructions here: http://dev.mysql.com/doc/refman/5.1/en/repair-table.html
Have you performance-tuned your MySQL server with an /etc/my.cnf? See this for more ideas: http://drupal.org/node/51263
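On the "blocked because of many connection errors" message itself, rebooting the RDS instance is not the only way out. Here is a sketch, assuming your RDS master user has the RELOAD privilege (so FLUSH HOSTS is allowed) and that the affected tables are the ones named in the errors:

-- reset the blocked-host counters without rebooting (needs the RELOAD privilege)
FLUSH HOSTS;

-- see how many failed connections are tolerated before a host gets blocked;
-- on RDS this is raised through the DB parameter group rather than SET GLOBAL
SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';

-- sanity-check the tables named in the errors; note that REPAIR TABLE (the repair
-- suggestion above) only applies to MyISAM, while CHECK TABLE also works for InnoDB
CHECK TABLE node_access, content_field_rating;

If FLUSH HOSTS is not permitted on your instance, raising max_connect_errors in the parameter group at least makes the lock-out much less likely to recur.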
