How to find out the connection limit per user on PostgreSQL?

I need to find out if a connection limit has been set on a Postgresql database on a per user basis.
I know you can set such a limit using:
ALTER USER johndoe WITH CONNECTION LIMIT 2;
Can you check this in the pg_user table?

While connected to the database, you can get this information with:
SELECT rolname, rolconnlimit
FROM pg_roles
WHERE rolconnlimit <> -1;
More details are available at http://www.postgresql.org/docs/current/static/view-pg-roles.html

This information is available in the column rolconnlimit in the view pg_roles
http://www.postgresql.org/docs/current/static/view-pg-roles.html
For roles that can log in, this sets the maximum number of concurrent connections this role can make. -1 means no limit.
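To check a single role rather than listing all limited ones, a minimal sketch (assuming the role from the question is named johndoe):
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'johndoe';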

Related

How to find the count of total connections in Snowflake

We know we can use SHOW TRANSACTIONS to see the transactions currently running against the database.
But I am interested in:
- the count of active users for each warehouse
- the history of connection counts for each warehouse
Is there a way to get the above information using SQL commands (not the web UI)?
If I understood correctly, you want to see the mapping between warehouses and active users. As far as I know there is no direct view for this, but you can leverage the query provided below: by keeping warehouse_size != '0' you can tie warehouse and user together. You can check the link below:
https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html
Before that, note the following:
- Snowflake sessions are not tagged with a user name or account; they carry system-generated IDs.
- The user-to-warehouse relationship is zero-or-many: an active user can use multiple warehouses in parallel, and a warehouse can be used by multiple users at the same point in time.
- A user can have an active session without a running warehouse.
- It is not mandatory to have an active user to keep your warehouse running.
- Finally, queries can also be executed without spinning the warehouse up.
SELECT TO_CHAR(DATE_TRUNC('minute', query_history.START_TIME), 'YYYY-MM-DD HH24:MI') AS "query_history.start_time",
       query_history.WAREHOUSE_NAME AS "query_history.warehouse_name",
       query_history.USER_NAME AS "query_history.user_name"
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY AS query_history
WHERE query_history.WAREHOUSE_SIZE != '0'
GROUP BY DATE_TRUNC('minute', query_history.START_TIME), 2, 3
ORDER BY 1 DESC;
Note: the SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view above is refreshed with a latency of up to 45 minutes.
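For the two counts asked about, a hedged sketch against the same view (the hourly bucket and the non-null warehouse filter are choices, not requirements; the same latency caveat applies):
-- distinct active users per warehouse per hour, plus query volume as a rough
-- proxy for connection activity
SELECT DATE_TRUNC('hour', start_time) AS hour,
       warehouse_name,
       COUNT(DISTINCT user_name)      AS active_users,
       COUNT(*)                       AS queries_run
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE warehouse_name IS NOT NULL
GROUP BY 1, 2
ORDER BY 1 DESC, 2;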

How does warehouse size change automatically in Snowflake?

I have a Small warehouse in Snowflake, with minimum clusters = 1, maximum clusters = 5, and the scaling policy set to Standard. However, when viewing the query history profile, I saw that for some queries the size column was set to Large, while the cluster number remained 1.
Now, I know that autoscaling helps increase the number of clusters, but how did the warehouse size change for some queries without manual intervention?
I referred to the official Snowflake documentation here, but couldn't find any way to change the warehouse size automatically.
Snowflake has no feature that automatically alters the size of your warehouse.
It is likely that a tool in use (or a user) ran an ALTER WAREHOUSE ... SET WAREHOUSE_SIZE = LARGE, perhaps to prepare for a larger operation by temporarily ensuring adequate performance.
Use the various history views to find out who or what ran such a change, and when. For example, the QUERY_HISTORY view could be useful in finding the user name and role that were used to alter the warehouse size, with the following query:
SELECT DISTINCT user_name, role_name, query_text, session_id, start_time
FROM snowflake.account_usage.query_history
WHERE query_text ILIKE 'ALTER%SET%WAREHOUSE_SIZE%=%LARGE%'
AND start_time > CURRENT_TIMESTAMP() - INTERVAL '7 days';
Then you could use LOGIN_HISTORY view to find which IP the user authenticated from during the time (or use the history UI for precise client information), check all other queries executed in the same session, etc.
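A sketch of that follow-up (SUSPECT_USER stands in for the user_name returned by the query above):
SELECT event_timestamp, client_ip, reported_client_type
FROM snowflake.account_usage.login_history
WHERE user_name = 'SUSPECT_USER'
  AND event_timestamp > CURRENT_TIMESTAMP() - INTERVAL '7 days'
ORDER BY event_timestamp DESC;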
To prevent unauthorized users from modifying warehouse sizes, consider restricting warehouse-level grants on their roles (the role name in use can be found with the query above).
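For example, a minimal sketch (warehouse and role names are placeholders): resizing requires the MODIFY privilege on the warehouse, so leaving a role with only USAGE prevents it:
REVOKE MODIFY ON WAREHOUSE my_wh FROM ROLE analyst_role;
GRANT USAGE ON WAREHOUSE my_wh TO ROLE analyst_role;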

data capture using CDC

To capture the name of the user who deleted a row, I added a new column to my CDC change table (e.g. cdc.dbo_testCDC_CT) to record the logged-in SQL user name, i.e.:
ALTER TABLE cdc.dbo_testCDC_CT ADD username VARCHAR(20) DEFAULT(SUSER_SNAME());
The value coming into that column is always "sa", but I am logged in with Windows authentication. Why is this happening?
First of all, you should never be modifying the system tables generated by cdc. This table was generated when you enabled cdc on your dbo.testCDC table and will include the columns of your source table, plus 5 additional columns, whose meaning is described here: http://msdn.microsoft.com/en-us/library/bb500305(v=sql.110).aspx. It will be deleted automatically when you disable cdc from your table.
I recommend reading up on cdc and the intended usage patterns first. A good start could be this article:
http://technet.microsoft.com/en-us/magazine/2008.11.sql.aspx
To answer your question why sa was always assigned to your column: all rows in the *_CT tables are filled by the log reader process which happens to run under the sa account in your case. This is not the way to add auditing to your system. The previously mentioned article can give you some pointers on better ways to implement auditing too.
Your solution should capture the 'changed by' or 'inserted by' user name and persist it on the underlying base table that is the subject of the capture instance itself. That way your CDC instance will also capture the logged-in user name for you.
As already mentioned, you should NEVER change the system-generated tables, for two simple reasons:
1. When they are restored for any reason, your changes will be lost
2. Changing system tables can have quite unintended consequences.
Hope this might assist.
Rather than changing the CDC table, change the base table (the main source table): add a column there with DEFAULT(SUSER_SNAME()) as its default value, and you will get the user who deleted, inserted, or updated the row.
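A sketch of that approach with placeholder names (SYSNAME is swapped in for VARCHAR(20) since SUSER_SNAME() returns sysname; note that a capture instance only tracks columns that existed when it was created, so CDC may need to be disabled and re-enabled on the table afterwards):
ALTER TABLE dbo.testCDC
    ADD username SYSNAME NOT NULL
        CONSTRAINT DF_testCDC_username DEFAULT (SUSER_SNAME());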

If my database user is read only, why do I need to worry about sql injection?

Can they (malicious users) describe tables and get vital information? What about if I lock down the user to specific tables? I'm not saying I want sql injection, but I wonder about old code we have that is susceptible but the db user is locked down. Thank you.
EDIT: I understand what you are saying, but if I have no Response.Write for the other data, how can they see it? The bringing-to-a-crawl and DoS scenarios make sense, as do the others, but how would they actually see the data?
Someone could inject SQL to cause an authorization check to return the equivalent of true instead of false to get access to things that should be off-limits.
Or they could inject a join of a catalog table to itself 20 or 30 times to bring database performance to a crawl.
Or they could call a stored procedure that runs as a different database user that does modify data.
'); SELECT * FROM Users
Yes, you should lock them down to only the data (tables/views) they should actually be able to see, especially if it's publicly facing.
Only if you don't mind arbitrary users reading the entire database. For example, here's a simple, injectable login sequence:
select * from UserTable where userID = 'txtUserName.Text' and password = 'txtPassword.Text'
if(RowCount > 0) {
// Logged in
}
I just have to log in with any username and a password of ' or '1' = '1 to log in as that user.
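With that input the WHERE clause becomes a tautology (a sketch of the resulting statement, using the names from the snippet above; 'anyname' is a placeholder):
select * from UserTable where userID = 'anyname' and password = '' or '1' = '1'
Since AND binds tighter than OR, this returns every row, so RowCount > 0 and the login succeeds.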
Be very careful. I am assuming that you have revoked DROP TABLE, ALTER TABLE, CREATE TABLE, and TRUNCATE TABLE rights, right?
Basically, with good SQL Injection, you should be able to change anything that is dependent on the database. This could be authorization, permissions, access to external systems, ...
Do you ever write data to disk that was retrieved from the database? In that case, they could upload an executable like perl and a perl file and then execute them to gain better access to your box.
You can also determine what the data is by leveraging a situation where a specific return value is expected. That is, if the SQL returns true, execution continues; if not, it stops. You can then run a binary search through your SQL: SELECT count(*) FROM users WHERE user_password > 'H'; if the count is > 0, execution continues. This way you can recover the exact plain-text password without it ever being printed on the screen.
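A hedged illustration of those probes (users and user_password are hypothetical names; each probe halves the remaining range):
SELECT count(*) FROM users WHERE user_password > 'H';  -- nonzero? the range is I..Z, else A..H
SELECT count(*) FROM users WHERE user_password > 'D';  -- the next probe if the first returned 0
Repeat, narrowing the range one comparison at a time, then move on to the next character.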
Also, if your application is not hardened against SQL errors, there might be a case where they can inject an error in the SQL or in the SQL of the result and have the result display on the screen during the error handler. The first SQL statement collects a nice list of usernames and passwords. The second statement tries to leverage them in a SQL condition for which they are not appropriate. If the SQL statement is displayed in this error condition, ...
Jacob
I read this question and its answers because I was in the process of creating a SQL tutorial website with a read-only user that would allow end users to run any SQL.
Obviously this is risky and I made several mistakes. Here is what I learnt in the first 24 hours (yes most of this is covered by other answers but this information is more actionable).
Do not allow access to your user table or system tables:
Postgres:
REVOKE ALL ON SCHEMA PG_CATALOG, PUBLIC, INFORMATION_SCHEMA FROM PUBLIC;
Ensure your read-only user only has access to the tables you need in the schema you want:
Postgres:
GRANT USAGE ON SCHEMA X TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA X TO READ_ONLY_USER;
Configure your database to drop long-running queries:
Postgres:
Set statement_timeout in the PG config file, /etc/postgresql/(version)/main/postgresql.conf
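It can also be scoped to just the read-only role instead of the whole server (a sketch; the 10-second value is arbitrary):
ALTER ROLE READ_ONLY_USER SET statement_timeout = '10s';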
Consider putting the sensitive information inside its own schema:
Postgres:
GRANT USAGE ON SCHEMA MY_SCHEMA TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA MY_SCHEMA TO READ_ONLY_USER;
ALTER USER READ_ONLY_USER SET SEARCH_PATH TO MY_SCHEMA;
Take care to lock down any stored procedures and ensure they cannot be run by the read-only user.
Edit: note that by completely removing access to the system catalogs you no longer allow the user to make calls like cast(). So you may want to run this again to allow that access:
GRANT USAGE ON SCHEMA PG_CATALOG to READ_ONLY_USER;
Yes, continue to worry about SQL injection. Malicious SQL statements are not just about writes.
Imagine as well if there were linked servers, or the query was written to access cross-database resources, e.g.:
SELECT * from someServer.somePayrollDB.dbo.EmployeeSalary;
There was an Oracle bug that allowed you to crash the instance by calling a public (but undocumented) method with bad parameters.

Date of last login or read operation on a SQL Server database?

Let's say I have a SQL Server with 100 databases on it. How can I find out which ones are actually being used?
(without turning them all off and waiting for the complaints to come in)
So 'have been accessed in the last week' or something like that.
I've tried the data file dates but they don't seem to represent that and databases do not seem to have a property that reflects this either.
This SQL query was useful for me:
select max(login_time) as last_login_time, login_name
from sys.dm_exec_sessions
group by login_name;
Look at sys.dm_db_index_usage_stats. The columns last_user_seek/last_user_scan/last_user_lookup/last_user_update represent the last time the respective index (heap or b-tree) was used. These values reset after a server restart, so you must check them after the server has been up and running for a sufficient time.
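A hedged sketch of how that DMV can answer the per-database question (DB_NAME() and the MAX aggregation are the only additions to what the view exposes):
SELECT DB_NAME(database_id)  AS database_name,
       MAX(last_user_seek)   AS last_seek,
       MAX(last_user_scan)   AS last_scan,
       MAX(last_user_lookup) AS last_lookup,
       MAX(last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats
GROUP BY database_id
ORDER BY database_name;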
You might be able to get this information by using some of the system views related to performance.
http://technet.microsoft.com/en-us/library/ms187743.aspx
