I want to restore a particular database under another database name on another server. So far, so good.
I used this command:
pg_dump -U postgres -F c -O -b -f maindb.dump maindb
to dump the main database on the production server. Then I use this command:
pg_restore --verbose -O -l -d restoredb maindb.dump
to restore the database into another database on our test server. It restores mostly OK, but there are some errors, like:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3595; 1259 213452 INDEX idx_clientnomclient maindbuser
pg_restore: [archiver (db)] could not execute query: ERROR: function unaccent(text) does not exist
LINE 1: SELECT unaccent(lower($1));
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT unaccent(lower($1));
CONTEXT: SQL function "cyunaccent" during inlining
Command was: CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
cyunaccent is a function in the public schema, and it does get created by the restore.
After the restore, I am able to re-create those indexes perfectly with the same SQL, without any errors.
I've also tried to restore with the -1 option of pg_restore to do a single transaction, but it doesn't help.
What am I doing wrong?
I just found the problem, and I was able to narrow it down to a simple test case.
CREATE SCHEMA intranet;
CREATE EXTENSION IF NOT EXISTS unaccent WITH SCHEMA public;
SET search_path = public, pg_catalog;
CREATE FUNCTION cyunaccent(character varying) RETURNS character varying
LANGUAGE sql IMMUTABLE
AS $_$ SELECT unaccent(lower($1)); $_$;
SET search_path = intranet, pg_catalog;
CREATE TABLE intranet.client (
codeclient character varying(10) NOT NULL,
noclient character varying(7),
nomclient character varying(200) COLLATE pg_catalog."fr_CA"
);
ALTER TABLE ONLY client ADD CONSTRAINT client_pkey PRIMARY KEY (codeclient);
CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
This test case is from a pg_dump done in plain text.
As you can see, the cyunaccent function is created in the public schema, as it's later used by tables in other schemas.
psql/pg_restore won't re-create the index, as it cannot find the function, despite the fact that the schema name is specified to reference it. The problem lies in the
SET search_path = intranet, pg_catalog;
call. Changing it to
SET search_path = intranet, public, pg_catalog;
solves the problem. I've submitted a bug report to PostgreSQL about this; it's not yet in the queue.
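A complementary workaround is to make the function itself independent of whatever search_path the restore sets, by schema-qualifying every reference inside its body. A minimal sketch of the same function, fully qualified:
CREATE OR REPLACE FUNCTION public.cyunaccent(character varying) RETURNS character varying
    LANGUAGE sql IMMUTABLE
    AS $_$ SELECT public.unaccent(lower($1)); $_$;
With the body qualified, the index can be rebuilt no matter what search_path pg_restore happens to use.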
After starting the Postgres server process for a cluster:
bin/pg_ctl -D demo -l logfile start
and starting a psql session for the database 'demo':
bin/psql demo
when I try to load the AGE extension with
LOAD 'age';
it shows an error that access to 'age' is denied.
Do I need to change some security/credential information for the user?
I expected the extension to be loaded so that I can execute Cypher queries.
Run installcheck to see if PostgreSQL and Apache AGE have been successfully installed without any errors, using the following command in the age folder:
make PG_CONFIG=/home/path/to/age/bin/pg_config installcheck
If that is the case, then you have to create the age extension and load it as follows:
CREATE EXTENSION age;
LOAD 'age';
Now set the search path and run a simple Cypher query:
SET search_path = ag_catalog, "$user", public;
SELECT create_graph('demo_graph');
To load the Apache AGE extension, run the following commands after a successful installation (verify using installcheck):
CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
Create a graph using:
SELECT create_graph('graph_name');
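Once the graph exists, Cypher statements go through the cypher() function; a minimal sketch (the Person label and name property are just illustrative):
SELECT * FROM cypher('graph_name', $$
    CREATE (n:Person {name: 'Alice'})
    RETURN n
$$) AS (n agtype);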
To avoid running the load command each time, set the required parameters in the postgresql.conf file:
Locate the file in the cluster's data directory, i.e. the directory passed to pg_ctl -D (in your case, demo/postgresql.conf).
Add the following lines to the file:
shared_preload_libraries = 'age'
search_path = 'ag_catalog, "$user", public'
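Note that shared_preload_libraries is only read at server start, so restart the server after editing the file; with the cluster from the question that would be:
bin/pg_ctl -D demo -l logfile restart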
You might need superuser privileges, as described here, in order to execute the CREATE EXTENSION statement.
Here's a possibly relevant issue with a solution in GitHub issues.
I started to go through the first tutorial on how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
USE DATABASE mydb;
CREATE ROLE ANALYST;
GRANT USAGE ON DATABASE mydb TO ROLE sysadmin;
GRANT USAGE ON DATABASE mydb TO ROLE analyst;
GRANT USAGE, CREATE FILE FORMAT, CREATE STAGE, CREATE TABLE ON SCHEMA mydb.public TO ROLE analyst;
GRANT OPERATE, USAGE ON WAREHOUSE mywh TO ROLE analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
LIST @~;
LIST @%mytable;
Then in my active SnowSQL session, when I run:
PUT file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role, ACCOUNTADMIN, yet I get:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in SnowSQL, successfully:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
PUT file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @my_stage PATTERN = '.*.csv.gz';
REMOVE @my_stage PATTERN = '.*.json.gz';
//yay you are done!
The PUT command copies the file from your local drive to the stage. You should do the PUT to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
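For completeness, a minimal sketch of the full sequence with the objects from the question (note that PUT compresses the file to .gz on the stage by default):
PUT file:///Users/<user>/Documents/data/data.csv @my_stage;
COPY INTO my_table FROM @my_stage/data.csv.gz
FILE_FORMAT = (FORMAT_NAME = mycsvformat);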
But in the documentation it's mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
In this case, without creating a stage, shouldn't it load into the default Snowflake stage allocated to the table?
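Note that, per the quoted docs, a table stage is referenced with a % prefix, so a PUT to the table stage from the question would presumably be:
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
Without the %, @my_table is resolved as a named stage called my_table, which is exactly the 'does not exist or not authorized' error above.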
I am not sure if my issue is related to the Scala Play 2.5.x Framework or to PostgreSQL, so I am going to describe my setup.
I am using the Play 2.5.6 with Scala and PostgreSQL 9.5.4-2 from the BigSQL Sandboxes. I use the Play Framework default evolution package to manage the DB versions.
I created a new database in BigSQL Sandbox's PGSQL and PGSQL created a default schema called public. I use this schema for development.
I would like to create a table with the following script (1.sql in DB evolution config):
# Initialize the database
# --- !Ups
CREATE TABLE user (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
creation_date TIMESTAMP NOT NULL
);
# --- !Downs
DROP TABLE user;
Besides that, I would like to read the table with code like this:
val resultSet = statement.executeQuery("SELECT id, name, email FROM public.user WHERE id=" + id.toString)
I get an error when I execute any of the mentioned code, or even when I use the CREATE TABLE ... code in pgAdmin. The issue is with the user table name. If I prefix it with public (i.e. public.user) everything works fine.
My questions are:
Is it normal to prefix the table name with the schema name every time? It seems odd to me.
How can I make the public schema a default option so I do not have to qualify the table name? (e.g. CREATE TABLE user (...); will not throw an error)
I tried the following:
I set the search_path for my user: ALTER USER my_user SET search_path to public;
I set the search_path for my database: ALTER database "my_database" SET search_path TO my_schema;
search_path correctly shows this: "$user",public
I got the following errors:
In Play: p.a.d.e.DefaultEvolutionsApi - ERROR: syntax error at or near "user"
In pgadmin:
ERROR: syntax error at or near "user"
LINE 1: CREATE TABLE user (
********** Error **********
ERROR: syntax error at or near "user"
SQL state: 42601
Character: 14
This has nothing to do with the default schema. user is a reserved word.
You need to use double quotes to be able to create such a table:
CREATE TABLE "user" (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
creation_date TIMESTAMP NOT NULL
);
But I strongly recommend not doing that. Find a different name that does not require a quoted identifier.
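Note that once the table is created with a quoted name, every later reference to it must be quoted as well, for example:
SELECT id, name, email FROM "user" WHERE id = 1;
which is one more reason to pick a name that needs no quoting.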
I created a new superuser just so that this user can run the COPY command.
Note that a non-superuser cannot run a COPY command.
I need this user for a backup application, and that application requires running COPY.
But none of the restrictions that I specified take effect (see below).
What is the difference between the user postgres and a superuser?
And is there a better way to achieve what I want? I looked into a function with SECURITY DEFINER owned by postgres ... that seems like a lot of work for multiple tables.
DROP ROLE IF EXISTS mynewuser;
CREATE ROLE mynewuser PASSWORD 'somepassword' SUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT LOGIN;
-- ISSUE: the user can still CREATEDB, CREATEROLE
REVOKE UPDATE,DELETE,TRUNCATE ON ALL TABLES IN SCHEMA public, schema1, schema2, schema3 FROM mynewuser;
-- ISSUE: the user can still UPDATE, DELETE, TRUNCATE
REVOKE CREATE ON DATABASE ip2_sync_master FROM mynewuser;
-- ISSUE: the user can still create table;
You are describing a situation where a user can write files to the server where the database runs but is not a superuser. While not impossible, it's definitely abnormal. I would be very selective about who I allow to access my DB server.
That said, if this is the situation, I'd create a function to load the table (using copy), owned by the postgres user and grant the user rights to execute the function. You can pass the filename as a parameter.
If you want to get fancy, you can create a table of users and tables to define what users can upload to what tables and have the table name as a parameter also.
It's pretty outside of the norm, but it's an idea.
Here's a basic example:
CREATE OR REPLACE FUNCTION load_table(TABLENAME text, FILENAME text)
  RETURNS character varying AS
$BODY$
DECLARE
  can_upload integer;
BEGIN
  -- look up the calling user / target table pair in the permissions table
  select count(*)
    into can_upload
    from upload_permissions p
   where p.user_name = current_user and p.table_name = TABLENAME;
  if can_upload = 0 then
    return 'Permission denied';
  end if;
  -- server-side COPY; with SECURITY DEFINER below it runs with the
  -- function owner's (postgres) privileges, so a non-superuser can call it
  execute 'copy ' || TABLENAME ||
          ' from ''' || FILENAME || '''' ||
          ' csv';
  return '';
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100;
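A sketch of the supporting pieces and a call, assuming a minimal shape for the permissions table (schema1.mytable is just a hypothetical target):
-- permissions lookup table assumed by the function above
CREATE TABLE upload_permissions (user_name text, table_name text);
INSERT INTO upload_permissions VALUES ('mynewuser', 'schema1.mytable');
GRANT EXECUTE ON FUNCTION load_table(text, text) TO mynewuser;
-- then, connected as mynewuser:
SELECT load_table('schema1.mytable', '/path/on/server/data.csv');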
COPY with any option other than writing to STDOUT or reading from STDIN is only allowed for database superusers, since it allows reading or writing any file that the server has privileges to access.
\copy is a psql client command which provides the same functionality as COPY but is not server-side, so only local files can be processed - it invokes COPY ... FROM STDIN / COPY ... TO STDOUT under the hood, so that files on the server are not "touched".
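For example, the client-side variant from psql reads a file on the client machine (table name and path are just illustrative):
\copy mytable FROM '/local/path/data.csv' CSV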
You cannot revoke specific rights from a superuser. I'm quoting the docs on this one:
Docs: Access DB
Being a superuser means that you are not subject to access controls.
Docs: CREATE ROLE
"superuser", who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed.
CREATE TABLE LOG_FILES (
  LOG_DTM VARCHAR(18),
  LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY LOG_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (
      LOG_DTM POSITION(1:18),
      LOG_TXT POSITION(19:300)
    )
  )
  LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an Oracle directory that points to /u/logs/.
The problem, though, is that the contents of /u/logs/ look like this:
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? I.e., every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY.log).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log') but I would like not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
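A minimal sketch of what such a pre-processor might look like, assuming the DDMMYYYY naming shown above (on 11g the script's standard output becomes the external table's input; full command paths because the script runs without a normal environment):
#!/bin/sh
# ignore the dummy location file passed as $1 and emit today's log instead
/bin/cat /u/logs/logadm_$(/bin/date +%d%m%Y).log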
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.
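For the 10g route, a cron entry could keep the generic name in sync; a minimal sketch, again assuming the naming above (note the % signs must be escaped in a crontab):
5 0 * * * cp /u/logs/logadm_$(date +\%d\%m\%Y).log /u/logs/logadm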