What is the difference between user postgres and a superuser?

I created a new superuser just so that this user can run the COPY command.
Note that a non-superuser cannot run a server-side COPY command.
I need this user for a backup application, and that application requires the COPY command.
But none of the restrictions that I specified take effect (see below).
What is the difference between user postgres and a superuser?
And is there a better way to achieve what I want? I looked into a function with security definer as postgres ... that seems like a lot of work for multiple tables.
DROP ROLE IF EXISTS mynewuser;
CREATE ROLE mynewuser PASSWORD 'somepassword' SUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT LOGIN;
-- ISSUE: the user can still CREATEDB, CREATEROLE
REVOKE UPDATE,DELETE,TRUNCATE ON ALL TABLES IN SCHEMA public, schema1, schema2, schema3 FROM mynewuser;
-- ISSUE: the user can still UPDATE, DELETE, TRUNCATE
REVOKE CREATE ON DATABASE ip2_sync_master FROM mynewuser;
-- ISSUE: the user can still create table;

You are describing a situation where a user can write files to the server where the database runs but is not a superuser. While not impossible, it's definitely abnormal, and I would be very selective about who I allow to access my DB server.
That said, if this is the situation, I'd create a function to load the table (using COPY), owned by the postgres user, and grant the user rights to execute the function. You can pass the filename as a parameter.
If you want to get fancy, you can create a table of users and tables to define what users can upload to what tables and have the table name as a parameter also.
It's pretty outside of the norm, but it's an idea.
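The function below consults a table called upload_permissions; a minimal definition consistent with how the function queries it might look like this (a sketch, not part of the original answer):
-- Hypothetical mapping of which users may load which tables
CREATE TABLE upload_permissions (
    user_name  text NOT NULL,  -- compared against current_user in the function
    table_name text NOT NULL,
    PRIMARY KEY (user_name, table_name)
);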
Here's a basic example:
CREATE OR REPLACE FUNCTION load_table(tablename text, filename text)
RETURNS character varying AS
$BODY$
DECLARE
    can_upload integer;
BEGIN
    -- Check the permission-mapping table before loading anything
    SELECT count(*)
      INTO can_upload
      FROM upload_permissions p
     WHERE p.user_name = current_user
       AND p.table_name = tablename;

    IF can_upload = 0 THEN
        RETURN 'Permission denied';
    END IF;

    -- format() quotes the identifier (%I) and the file path (%L),
    -- which avoids SQL injection through the parameters
    EXECUTE format('COPY %I FROM %L CSV', tablename, filename);

    RETURN '';
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100;
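For the restriction to hold, execute rights on the function have to be granted explicitly rather than left open to everyone (a sketch, assuming the role name mynewuser from the question):
-- Run as postgres, the function owner
REVOKE EXECUTE ON FUNCTION load_table(text, text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION load_table(text, text) TO mynewuser;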

COPY with any option other than writing to STDOUT and reading from STDIN is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
\copy is a psql client command which provides the same functionality as COPY but is not server-side, so only local files can be processed: it invokes COPY ... FROM STDIN / ... TO STDOUT, so files on the server are never touched.
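For example, a non-superuser can load a local file from inside psql (a sketch, assuming a table mytable and a local file data.csv):
-- The file is read by the psql client, not by the server
\copy mytable FROM 'data.csv' CSV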
You cannot revoke specific rights from a superuser. I'm quoting the docs on this one:
Docs: Access DB
Being a superuser means that you are not subject to access controls.
Docs: CREATE ROLE
"superuser", who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed.

Related

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
LIST @~;
LIST @%mytable;
Then in my active Snowsql when I run:
PUT file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
PUT file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @my_stage PATTERN = '.*.csv.gz';
REMOVE @my_stage PATTERN = '.*.json.gz';
//yay, you are done!
The PUT command copies the file from your local drive to the stage. You should do the PUT to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
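From there, something like this loads the staged file into the table (a sketch using the my_stage, my_table, and mycsvformat names from the question):
COPY INTO my_table
  FROM @my_stage
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');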
But in the documentation it's mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
In this case, without creating a stage, the file should load into the default Snowflake stage allocated to the table.
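For completeness, a PUT to the table stage uses the @% prefix, and the COPY then needs no stage name at all (a sketch with the names from the question):
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
COPY INTO my_table FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');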

How to access IBM DB2 warehouse on cloud as administrator

I'm currently using a free DB2 warehouse on cloud provided by IBM. What I'm trying to do is to create a new table in the database. However, an error message pops up saying that
To resolve this, I open the web console and run the following command: create tablespace mytablespace pagesize 4096. Then, another error message pops up:
Based on what I have googled, it looks like I need to grant an administrator role to the user "DASH******". So I do this by adding an optional parameter to the credentials:
But it doesn't work. Is there any way to work around this?
EDIT1: I create the table using the following command:
Users are not allowed to create their own tablespaces in free DB2WoC systems, since they don't have the SYSCTRL or SYSADM authorities there. You have to use existing tablespaces where you are allowed to create your tables.
Run the following statement from your DASH*** user.
This statement returns all the tablespaces where your user is allowed to create tables.
If it doesn't return any rows, it means you should open a ticket with IBM support. Support should create a tablespace for you and grant your user the USE privilege on it.
SELECT
    T.DATATYPE
--, P.PRIVILEGE
--, P.OBJECTTYPE
--, P.OBJECTSCHEMA
  , P.OBJECTNAME
  , U.AUTHID, U.AUTHIDTYPE
FROM SYSIBMADM.PRIVILEGES P
CROSS JOIN TABLE(VALUES USER) A (AUTHID)
JOIN TABLE (
    SELECT GROUP, 'G' FROM TABLE(AUTH_LIST_GROUPS_FOR_AUTHID(A.AUTHID))
    UNION ALL
    SELECT ROLENAME, 'R' FROM TABLE(AUTH_LIST_ROLES_FOR_AUTHID(A.AUTHID, 'U'))
    UNION ALL
    SELECT * FROM TABLE(VALUES ('PUBLIC', 'G'), (A.AUTHID, 'U')) T (AUTHID, AUTHIDTYPE)
) U (AUTHID, AUTHIDTYPE) ON U.AUTHID = P.AUTHID AND U.AUTHIDTYPE = P.AUTHIDTYPE
JOIN SYSCAT.TABLESPACES T ON T.TBSPACE = P.OBJECTNAME
WHERE P.OBJECTTYPE = 'TABLESPACE' AND T.DATATYPE IN ('A', 'L')
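Once a usable tablespace shows up in the result, the table can be created in it explicitly (a sketch; the table and tablespace names here are just placeholders):
-- TS4USER stands in for whatever tablespace the query above returned
CREATE TABLE mytable (id INTEGER, name VARCHAR(100)) IN TS4USER;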

Optionally including scripts in SQL Server Projects 2012

I am building a SQL publish script that will be used to deploy a database to our internal servers, and then used externally by our client.
The problem I have is that our internal script automates quite a few things for us, whereas the actual production environment requires these steps to be completed manually.
For example, internally we would use the following script
-- Global variables
:setvar EnvironmentName 'Local'
-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql
IF ($(EnvironmentName) = 'Test')
BEGIN
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
:r .\GivePermissionsToDevWebsite.sql
END
This would generate a script like this:
-- (Ignore syntax correctness, it's just the process I'm after)
IF($(EnvironmentName) = 'Test')
BEGIN
CREATE LOGIN [Developer1] AS USER [MyDomain\Developer1] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer2] AS USER [MyDomain\Developer2] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer3] AS USER [MyDomain\Developer3] WITH DEFAULT SCHEMA=[dbo];
-- Populate entire database (10000's of rows over 100 tables)
INSERT INTO Products ( Name, Description, Price ) VALUES
( 'Cheese Balls', 'Cheesy Balls ... mm mm mmmm', 1.00),
( 'Cheese Balls +', 'Cheesy Balls with a caffeine kick', 2.00),
( 'Cheese Squares', 'Cheesy squares with a hint of ginger', 2.50);
EXEC spRunETL 'AUTO-DEPLOY';
END
ELSE IF($(EnvironmentName) = 'Client_Dev')
BEGIN
CREATE LOGIN [WebLogin] AS USER [FABRIKAM\AppPoolUser];
END
END IF
This works fine for us. But when this script is taken on site, it fails because it cannot authenticate the users of our internal environment.
One thought I had was to just give our internal team sysadmin privileges, but the test data still fills the script up. When going on site, all of this test data just bloats the published script and isn't used anyway.
Is there any way to exclude a section entirely from a published file, so that all of the test data and extraneous inserts are removed, without any manual intervention in the published file?
Unfortunately, there is currently no way to remove the contents of a referenced script from the generated file entirely.
The only way to achieve this is to post-process the generated script (Powershell/Ruby/scripting language of choice) to find and remove the parts you care about using some form of string and file manipulation.
Based on: my experience with doing this exact same thing to remove a development-environment-only script which was sizable and bloated the production deployment script with a lot of 'noise', making it harder for DBAs to review the script sensibly.

DB2 IBM every time I create a view in my database the permissions are limited to the user that created it, I want it to be for everyone

DB2 IBM: every time I create a view in my database, the permissions are limited to the user that created it, and I want it to be available to everyone:
create view stkqry.aaa as SELECT ... from ...
Now this "aaa" is protected. I would like it to be available to everyone by default. How can I do that? Thanks.
(This answer assumes you're using DB2 for Linux/Unix/Windows)
You have to use GRANT to assign the special AuthID "PUBLIC" (everyone) permissions to the view.
GRANT SELECT ON stkqry.aaa TO PUBLIC
I don't think that there's a way to automatically mark all views as readable by public, but if you need to go back and mark all of them, you could use something like this to generate the statement for you:
SELECT 'GRANT SELECT ON ' ||
TRIM(VIEWSCHEMA) || '.' ||
TRIM(VIEWNAME) || ' TO PUBLIC'
FROM SYSCAT.VIEWS
WHERE DEFINER <> 'SYSIBM'

In SQL Server 2005, is there an easy way to "copy" permissions on an object from one user/role to another?

I asked another question about roles and permissions, which mostly served to reveal my ignorance. One of the other outcomes was the advice that one should generally stay away from mucking with permissions for the "public" role.
OK, fine, but if I've already done so and want to re-assign the same permissions to a custom/"flexible" role, what's the best way to do that? What I've done so far is to run the Scripting wizard, and tell it to script object permissions without CREATE or DROP, then run a find-replace so that I wind up with a lot of "GRANT DELETE on [dbo.tablename] TO [newRole]". It gets the job done, but I feel like it could be prettier/easier. Any "best practice" suggestions?
Working from memory (no SQL on my gaming 'pooter), you can use sys.database_permissions
Run this and paste the results into a new query.
Edit, Jan 2012. Added OBJECT_SCHEMA_NAME.
You may need to pimp it to support schemas (dbo.) by joining onto sys.objects
SET NOCOUNT ON;
DECLARE @NewRole varchar(100), @SourceRole varchar(100);
-- Change as needed
SELECT @SourceRole = 'Giver', @NewRole = 'Taker';
SELECT
    state_desc + ' ' +
    permission_name + ' ON ' +
    OBJECT_SCHEMA_NAME(major_id) + '.' + OBJECT_NAME(major_id) +
    ' TO ' + @NewRole
FROM
    sys.database_permissions
WHERE
    grantee_principal_id = DATABASE_PRINCIPAL_ID(@SourceRole)
    AND
    -- 0 = DB, 1 = object/column, 3 = schema. 1 is normally enough
    class <= 3;
The idea of having a role is that you only need to setup the permissions once. You can then assign users, or groups of users to that role.
It's also possible to nest roles, so that a role can contain other roles.
Not sure if it's best practice, but it makes sense that if you have a complex set of permissions, with groups of users that need access to multiple applications, you go something like:
NT User -> NT Security Group -> SQL Server Role -> SQL Server Role A, Role B ...
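For instance, wiring up that chain in SQL Server 2005 might look like this (a sketch; the role and group names are hypothetical, and the domain group must already exist as a database user):
-- The domain group's database user becomes a member of the base role
EXEC sp_addrolemember 'SqlServerRole', 'MyDomain\DevTeamGroup';
-- The base role is nested as a member of each application role
EXEC sp_addrolemember 'AppRoleA', 'SqlServerRole';
EXEC sp_addrolemember 'AppRoleB', 'SqlServerRole';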
