pg_largeobject access on Heroku - database

I'm trying to clean up my Postgres database on Heroku, where some large objects have gotten out of control, and I want to remove the large objects that aren't used anymore.
On my dev machine, I can do:
select distinct loid from pg_largeobject
where loid not in (select id from table)
Then I run:
SELECT lo_unlink(loid)
on each of those IDs.
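For reference, the two steps can be combined into a single statement; a sketch, still using the placeholder table name:
select lo_unlink(l.loid)
from (select distinct loid
      from pg_largeobject
      where loid not in (select id from table)) as l;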
But on Heroku, I can't run any SELECTs on pg_largeobject, as even
select * from pg_largeobject limit 1;
gives me an error:
ERROR: permission denied for relation pg_largeobject
Any suggestions on how to work around this, or even an explanation of why we don't have read access on pg_largeobject on Heroku?

Since PostgreSQL 9.0, a non-superuser can't access pg_largeobject. This is documented in the release notes:
Add the ability to control large object (BLOB) permissions with
GRANT/REVOKE (KaiGai Kohei)
Formerly, any database user could read or modify any large object.
Read and write permissions can now be granted and revoked per large
object, and the ownership of large objects is tracked.
If it works on your development instance, it's either because it's version 8.4 or lower, or because you're logged in as a superuser.
If you can't log in as a superuser on Heroku, I guess you could dump the remote database with pg_dump, then reload it locally, identify the leaked OIDs as a local superuser, put them in a script with the lo_unlink commands, and finally run this script against the Heroku instance.
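For the script-generation step, a sketch run in psql as a local superuser (still using the placeholder table name) could emit one lo_unlink call per orphaned OID into a file:
\t on
\o unlink_orphans.sql
select 'SELECT lo_unlink(' || loid || ');'
from (select distinct loid
      from pg_largeobject
      where loid not in (select id from table)) as orphans;
\o
\t off
The resulting unlink_orphans.sql can then be run against the Heroku database with psql -f.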
Update:
Based on how the psql \dl command queries the database, it appears that pg_catalog.pg_largeobject_metadata can be used to retrieve the OIDs and ownership of all large objects, through this query:
SELECT oid as "ID",
pg_catalog.pg_get_userbyid(lomowner) as "Owner",
pg_catalog.obj_description(oid, 'pg_largeobject') as "Description"
FROM pg_catalog.pg_largeobject_metadata ORDER BY oid
So your initial query for finding the leaked large objects could be changed, for non-superusers on 9.0+, into:
select oid from pg_largeobject_metadata
where oid not in (select id from table)
and if necessary, a condition on lomowner could be added to filter on the large objects owned by a specific user.
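For example, a sketch that additionally restricts the check to large objects owned by the current user (again with the placeholder table name):
select oid
from pg_largeobject_metadata
where pg_catalog.pg_get_userbyid(lomowner) = current_user
  and oid not in (select id from table);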

Related

Azure SQL Database - change user permissions on a read-only database for cross-database queries

We use Azure SQL Database, and therefore had to jump through some hoops to get cross-database queries set up. We achieved this by following this great article: https://techcommunity.microsoft.com/t5/azure-database-support-blog/cross-database-query-in-azure-sql-database/ba-p/369126
Things are working great for most of our databases.
The problem comes in with one of our databases, which is read-only. It's read-only because it is synced from another Azure SQL server to derive its content, which is achieved via the Geo-Replication function in Azure SQL Database. When attempting to run the query GRANT SELECT ON [RemoteTable] TO RemoteLogger as seen in the linked article, I of course get the error "Failed to update because the database is read-only."
I have been trying to come up with a workaround for this. It appears user permissions are one of the things that do NOT sync as part of the geo-replication, as I've created this user and granted the SELECT permission on the origin database, but it doesn't carry over.
Has anyone run into this or something similar and found a workaround/solution? Is it safe/feasible to temporarily set the database to read/write, update the permission, and then put it back to read-only? I don't know if this is even possible; a colleague told me they think it will throw an error along the lines of "this database can't be set to read/write because it's syncing from another database..."
I figured out a workaround: create a remote connection to the database on the ORIGIN server. So simple, yet it escaped me until now. Everything is working great now.
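Concretely, that presumably means pointing the elastic-query external data source at the origin (primary) server, which is writable and therefore accepts the GRANT, rather than at the read-only secondary. A minimal sketch with hypothetical server, database, credential, and column names:
-- run on the database that issues the cross-database queries
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL OriginCred
    WITH IDENTITY = 'RemoteLogger', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE OriginSource WITH (
    TYPE = RDBMS,
    LOCATION = 'origin-server.database.windows.net',  -- hypothetical origin (primary) server
    DATABASE_NAME = 'OriginDb',                        -- hypothetical database name
    CREDENTIAL = OriginCred
);

CREATE EXTERNAL TABLE dbo.RemoteTable (
    Id int,                  -- hypothetical columns; must match the remote table's schema
    Payload nvarchar(200)
) WITH (DATA_SOURCE = OriginSource);
The GRANT SELECT ON [RemoteTable] TO RemoteLogger from the article can then be executed on the origin database, since it is writable.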

How to configure Snowflake to send logs to Azure Log Analytics

Expert,
How can we configure Azure/Snowflake so that all Snowflake logs are available in Azure Log Analytics, where we can use Kusto queries and alerts?
Rana
It depends on what data you want to unload from Snowflake to log files, as there is lots of information available in ACCOUNT_USAGE and the information schema. But it's easy enough to write that data out to files on Azure storage for ingestion and use in Azure Log Analytics. Here's an example, pushing errors recorded in the login_history view to JSON files:
copy into @~/json_error_log.json from
(select object_construct(*) from (
  select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
  from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
  where error_code is not null
  order by event_timestamp))
file_format = (type = 'JSON');
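To land the files in Azure storage rather than the user stage, the COPY target could be a named external stage; a sketch with a hypothetical storage account, container, and SAS token:
create stage azure_log_stage
  url = 'azure://myaccount.blob.core.windows.net/snowflake-logs'  -- hypothetical account/container
  credentials = (azure_sas_token = '<sas token>');

copy into @azure_log_stage/json_error_log.json from
(select object_construct(*) from (
  select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
  from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
  where error_code is not null
  order by event_timestamp))
file_format = (type = 'JSON');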
And you can find more information here:
https://docs.snowflake.com/en/user-guide/data-unload-azure.html
Can't comment on the Azure Log Analytics side of things, but hopefully this gives you some idea of what to do on the Snowflake side.

Deploying a DACPAC with SqlPackage from an Azure Pipeline ignores arguments and drops users

I have a release pipeline in Azure where I pass a .dacpac artifact (created with Visual Studio) to be deployed to an on-prem SQL Server 2016. It works well, but now I want to drop tables, views, functions, and stored procedures on my server via the .dacpac. So I did that in my VS project and then tried to deploy with the option /p:DropObjectsNotInSource=True along with /p:DoNotDropObjectType and even /p:ExcludeObjectType to exclude the things that I didn't want dropped.
But regardless of these filters, once the job starts, the tables are dropped (as expected), but then it drops the DB users. It also removes the db_owner mapping of the login that I used to install the Azure agent and then tries to drop that login (which is the same user that is configured to authenticate during the deploy). Then it fails with this error:
Error SQL72014: .Net SqlClient Data Provider: Msg 15151, Level 16, State 1, Line 1 Cannot drop the user '<Domain>\<User>', because it does not exist or you do not have permission.
Which is obvious, since the user's permission on the DB was just removed. Any suggestions to avoid this, or is my only choice to re-create the users/permissions once finished? Or should I not use that flag at all and do all the "drop objects plumbing" via a post-deployment script?
For reference, the additional arguments set on my release task are as follows (it looks bizarre, but there is no way to specify ONLY the objects that I want to drop):
/p:DropObjectsNotInSource=True /p:BlockOnPossibleDataLoss=false /p:DoNotDropObjectType="Aggregates,ApplicationRoles,Assemblies,AsymmetricKeys,BrokerPriorities,Certificates,ColumnEncryptionKeys,ColumnMasterKeys,Contracts,DatabaseRoles,DatabaseTriggers,Defaults,ExtendedProperties,ExternalDataSources,ExternalFileFormats,ExternalTables,Filegroups,FileTables,FullTextCatalogs,FullTextStoplists,MessageTypes,PartitionFunctions,PartitionSchemes,Permissions,Queues,RemoteServiceBindings,RoleMembership,Rules,SearchPropertyLists,SecurityPolicies,Sequences,Services,Signatures,SymmetricKeys,Synonyms,UserDefinedDataTypes,UserDefinedTableTypes,ClrUserDefinedTypes,Users,XmlSchemaCollections,Audits,Credentials,CryptographicProviders,DatabaseAuditSpecifications,DatabaseScopedCredentials,Endpoints,ErrorMessages,EventNotifications,EventSessions,LinkedServerLogins,LinkedServers,Logins,Routes,ServerAuditSpecifications,ServerRoleMembership,ServerRoles,ServerTriggers" /p:ExcludeObjectType="Aggregates,ApplicationRoles,Assemblies,AsymmetricKeys,BrokerPriorities,Certificates,ColumnEncryptionKeys,ColumnMasterKeys,Contracts,DatabaseRoles,DatabaseTriggers,Defaults,ExtendedProperties,ExternalDataSources,ExternalFileFormats,ExternalTables,Filegroups,FileTables,FullTextCatalogs,FullTextStoplists,MessageTypes,PartitionFunctions,PartitionSchemes,Permissions,Queues,RemoteServiceBindings,RoleMembership,Rules,SearchPropertyLists,SecurityPolicies,Sequences,Services,Signatures,SymmetricKeys,Synonyms,UserDefinedDataTypes,UserDefinedTableTypes,ClrUserDefinedTypes,Users,XmlSchemaCollections,Audits,Credentials,CryptographicProviders,DatabaseAuditSpecifications,DatabaseScopedCredentials,Endpoints,ErrorMessages,EventNotifications,EventSessions,LinkedServerLogins,LinkedServers,Logins,Routes,ServerAuditSpecifications,ServerRoleMembership,ServerRoles,ServerTriggers"
So, answering my own question: there were two things wrong in my initial attempt:
When you want to use multiple values in a parameter, you must separate them with semicolons (;) instead of commas.
If you want to exclude more than one object type, the parameter name must be plural: /p:ExcludeObjectTypes
So, with something like the following I achieved what I wanted:
/p:ExcludeObjectTypes=Users;Logins;RoleMembership;Permissions;Credentials;DatabaseScopedCredentials

SQL Server database audit selects, failed logins and executed code for entire database, all objects

I want to track all failed logins to our production environment, as well as all SELECTs against all objects.
Based on:
https://www.simple-talk.com/sql/database-administration/sql-server-audit-magic-without-a-wizard/
and
https://www.simple-talk.com/sql/database-administration/sql-server-security-audit-basics/
and in particular:
https://blogs.msdn.microsoft.com/sreekarm/2009/01/05/auditing-select-statements-in-sql-server-2008/
These suggest I need to name each object in the schema to be able to capture all the SELECT statements, which I don't want to do. There are 1,500 tables and 2,300 views.
Is it not possible for the audit to target the database as a whole, so that any SELECT executed on any object is saved in the audit file, including user, statement, time, etc.?
The failed logins I get from the failed-login audit action group, but so far I've not been able to capture the SELECT statements unless I specifically name the objects to audit.
Naming them also means I have to update the audit every time a new view or table is added.
You can use Extended Events.
For your specific scenario, you might want to select the batch starting and batch completed events.
You can also add more info in the next screens, like username, host info, and so on.
Finally, you can add filters to restrict this to a single database, all databases, or procs with a specific name, and much more.
This info can be logged to a file for later analysis; a T-SQL sketch follows below.
https://www.simple-talk.com/sql/database-administration/getting-started-with-extended-events-in-sql-server-2012/
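A sketch of such a session created in T-SQL rather than through the wizard (session name, database name, and file path are hypothetical):
CREATE EVENT SESSION [audit_selects] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.username, sqlserver.client_hostname, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'MyProductionDb'  -- hypothetical database name
)
ADD TARGET package0.event_file (SET filename = N'C:\xe\audit_selects.xel')  -- hypothetical path
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [audit_selects] ON SERVER STATE = START;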
For failed logins, you can right-click the server and open its Security properties page to configure login auditing; auditing of failed logins is enabled by default and they are logged to the error log.

How to get a unique ID from SQL Server

Is there a way for a normal user (client-side), without elevated privileges (no special database permissions, local administrator, or anything of the sort) on the server, to get any kind of unique ID from a server (MAC address, database installation ID, server hardware ID, or anything of the kind)?
Basically, I am looking for an ID to verify the installation. I know I can do it by writing some sort of ID into the registry and the database during a server-side install, but is there a way to do it without installing anything? The minimum requirement is that I can get it from both MySQL and SQL Server, on Linux and Windows.
My current research suggests that there is no such thing. As seen in the comment below:
I think any answer is going to require xp_cmdshell since unique
hardware information is not exposed directly to SQL Server
I don't think you can get hardware details directly from SQL Server. It may be possible through other programs that can talk to both SQL Server and your system hardware.
There is a predefined function in SQL Server that will give you a unique, random ID:
create table TableName
(
    -- newid() generates a random uniqueidentifier (GUID) for each row
    id uniqueidentifier default newid()
);
The best I can find is that you can use file_guid from sys.database_files. You only need to be in the public role to do that, and it should be unique per DB. If you create or remove database files you'll run into trouble, and it doesn't do anything to verify that you're on the same server.
Note that if your DB was created prior to SQL Server 2005, this value will be NULL, since the column didn't exist back then.
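A minimal sketch of that check, assuming the primary data file (file_id = 1) is used as the identifier:
-- readable by members of public for the current database
SELECT file_guid
FROM sys.database_files
WHERE file_id = 1;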
