Postgres extension AGE not getting loaded

After starting the Postgres server process for a cluster:
bin/pg_ctl -D demo -l logfile start
Then starting a psql session for the database 'demo':
bin/psql demo
When I try to load the AGE extension with
LOAD 'age';
it shows an error saying that access to 'age' is denied.
Do I need to change some security/credential information for the user?
I expected the extension to be loaded so that I can execute cypher queries.

Run installcheck to verify that PostgreSQL and Apache AGE have been successfully installed without any errors, using this command in the age folder:
make PG_CONFIG=/home/path/to/age/bin/pg_config installcheck
If that passes, you then have to create the age extension and load it as follows:
CREATE EXTENSION age;
LOAD 'age';
Now set the search path, create a graph, and run a simple cypher query:
SET search_path = ag_catalog, "$user", public;
SELECT create_graph('demo_graph');
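For example, a minimal cypher query against that graph (the node label and property here are just illustrative):
SELECT * FROM cypher('demo_graph', $$
    CREATE (n:Person {name: 'Alice'})
    RETURN n
$$) AS (n agtype);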

To load the APACHE AGE extension, run the following commands after successful installation (verify using installcheck):
CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
Create a graph using:
SELECT create_graph('graph_name');
To avoid running the load command each time, set the required parameters in the postgresql.conf file:
Locate the file in the cluster's data directory (in your case, demo/postgresql.conf) and add the following lines:
shared_preload_libraries = 'age'
search_path = 'ag_catalog, "$user", public'
Restart the server afterwards, since shared_preload_libraries is only read at server start.

You might need superuser privileges in order to execute the CREATE EXTENSION statement.
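If your current role is not a superuser, one option is to connect as the bootstrap superuser (typically postgres) and create the extension once, for example:
bin/psql -U postgres demo
CREATE EXTENSION IF NOT EXISTS age;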
There is also a possibly relevant issue, with a solution, in the project's GitHub issues.

Related

My snowsql connection locked when trying to connect and run config

I am trying to set up a Snowpipe. I have created my warehouse, database and table, and am trying to stage the files with snowsql.
USE WAREHOUSE IoT;
USE DATABASE SNOWPIPE_TEST;
CREATE OR REPLACE STAGE my_stage;
CREATE OR REPLACE FILE_FORMAT r_json;
CREATE OR REPLACE PIPE snowpipe_pipe
AUTO_INGEST = TRUE,
COMMENT = 'add items IoT',
VALIDATION_MODE = RETURN_ALL_ERRORS
AS (COPY INTO snowpipe_test.public.mytable
from @snowpipe_db.public.my_stage
FILE_FORMAT = (type = 'JSON');
CREATE PIPE mypipe AS COPY INTO mytable FROM @my_stage;
I think something is locked but I am not sure.
I tried saving the config file as config1 and made a copy. It hung; then I removed the copy and tried to connect, and there was no error, it just hung.
Am I missing something?
To specify the auto-ingest parameter it's AUTO_INGEST rather than AUTO-INGEST, but note that this option is not available for an internal stage. So when you try to run this command using an internal stage it should fail with a message pointing this out.
https://docs.snowflake.net/manuals/sql-reference/sql/create-pipe.html#optional-parameters
Also, you don't need the opening parenthesis between the "AS" and the "COPY" on line 5.
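Putting those fixes together, a minimal corrected sketch could look like this (it assumes the stage lives in SNOWPIPE_TEST, drops AUTO_INGEST since the stage is internal, and leaves out the other copy options for brevity):
CREATE OR REPLACE PIPE snowpipe_pipe
  COMMENT = 'add items IoT'
AS
COPY INTO snowpipe_test.public.mytable
FROM @snowpipe_test.public.my_stage
FILE_FORMAT = (TYPE = 'JSON');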

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active Snowsql when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to a my_table and a my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
//yay, you are done!
The put command copies the file from your local drive to the stage. You should do the put to the stage, not to the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
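For example, a minimal sketch of the whole sequence (paths and format name taken from the question; note that PUT compresses the file to data.csv.gz by default):
PUT file:///Users/<user>/Documents/data/data.csv @my_stage;
COPY INTO my_table
  FROM @my_stage/data.csv.gz
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
  ON_ERROR = 'skip_file';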
But the documentation mentions that a stage gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
In this case, without creating a stage, shouldn't the file load into the default Snowflake stage allocated to the table?
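If you do want to use that default table stage instead of a named stage, a rough sketch would be:
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
COPY INTO my_table
  FROM @%my_table/data.csv.gz
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');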

Unaccent issue when restoring a Postgres database

I want to restore a particular database under another database name on another server. So far, so good.
I used this command :
pg_dump -U postgres -F c -O -b -f maindb.dump maindb
to dump the main database on the production server. Then I use this command:
pg_restore --verbose -O -l -d restoredb maindb.dump
to restore it into another database on our test server. It restores mostly OK, but there are some errors, like:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3595; 1259 213452 INDEX idx_clientnomclient maindbuser
pg_restore: [archiver (db)] could not execute query: ERROR: function unaccent(text) does not exist
LINE 1: SELECT unaccent(lower($1));
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT unaccent(lower($1));
CONTEXT: SQL function "cyunaccent" during inlining
Command was: CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
cyunaccent is a function in the public schema and it does get created by the restore.
After the restore, I am able to re-create those indexes perfectly with the same SQL, without any errors.
I've also tried to restore with the -1 option of pg_restore to do a single transaction, but it doesn't help.
What am I doing wrong?
I just found the problem, and I was able to narrow it down to a simple test-case.
CREATE SCHEMA intranet;
CREATE EXTENSION IF NOT EXISTS unaccent WITH SCHEMA public;
SET search_path = public, pg_catalog;
CREATE FUNCTION cyunaccent(character varying) RETURNS character varying
LANGUAGE sql IMMUTABLE
AS $_$ SELECT unaccent(lower($1)); $_$;
SET search_path = intranet, pg_catalog;
CREATE TABLE intranet.client (
codeclient character varying(10) NOT NULL,
noclient character varying(7),
nomclient character varying(200) COLLATE pg_catalog."fr_CA"
);
ALTER TABLE ONLY client ADD CONSTRAINT client_pkey PRIMARY KEY (codeclient);
CREATE INDEX idx_clientnomclient ON client USING btree (public.cyunaccent((lower((nomclient)::text))::character varying));
This test case is from a pg_dump done in plain text.
As you can see, the cyunaccent function is created in the public schema, as it's later used by tables in other schemas.
psql/pg_restore won't re-create the index because it cannot find the unaccent() function: the index expression does schema-qualify cyunaccent, but the body of cyunaccent calls unaccent() unqualified, and that call is resolved through the search_path when the function is inlined. The problem lies in the
SET search_path = intranet, pg_catalog;
call. Changing it to
SET search_path = intranet, public, pg_catalog;
solves the problem. I've submitted a bug report to postgres about this, not yet in the queue.
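As an alternative that doesn't depend on the session search_path at all, the function body itself can schema-qualify the call; a sketch based on the test case above:
CREATE OR REPLACE FUNCTION public.cyunaccent(character varying) RETURNS character varying
    LANGUAGE sql IMMUTABLE
    AS $_$ SELECT public.unaccent(lower($1)); $_$;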

Oracle 11g External Table error

I'm trying to run a simple external table program using Oracle 11g on a Linux VM. The problem is that I can't query any data from .txt files.
Here's my code:
CONN / as sysdba;
CREATE OR REPLACE DIRECTORY DIR1 AS 'home/oracle/TEMP/X/';
GRANT READ, WRITE ON DIRECTORY DIR1 TO user;
CONN user/password;
CREATE TABLE gerada
(
field1 INT,
field2 Varchar2(20)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY DIR1
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ';'
MISSING FIELD VALUES ARE NULL
)
LOCATION ('registros.txt')
)
REJECT LIMIT UNLIMITED;
--Error starts here.
SELECT * FROM gerada;
DROP TABLE gerada;
DROP DIRECTORY DIR1;
Here's the error message:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file home/oracle/TEMP/X/GERADA_3375.log
And this is what registros.txt looks like:
1234;hello world;
I've checked my permissions on DIR1 and I do have read/write permissions.
Any ideas?
ORA-29913 and ORA-29400 mean that you're unable to access the directory and/or the file.
Looking carefully at the CREATE DIRECTORY command it looks like the path you're using may be mis-formatted. Try putting a forward slash at the start of the path and removing the one at the end of the path when creating the directory - e.g. CREATE OR REPLACE DIRECTORY DIR1 AS '/home/oracle/TEMP/X';.
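If you want to double-check which path the database actually has on record for the directory, a quick query like this can help:
SELECT directory_name, directory_path
FROM all_directories
WHERE directory_name = 'DIR1';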
Share and enjoy.

Oracle external tables - Specifying dynamic filename

CREATE TABLE LOG_FILES (
LOG_DTM VARCHAR(18),
LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY LOG_DIR
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
FIELDS(
LOG_DTM position(1:18),
LOG_TXT position(19:300)
)
)
LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an Oracle directory that points to /u/logs/
The problem though is that the contents of /u/logs/ looks like this
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? I.e. every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY.log).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log') but I would like not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.
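For reference, on 11gR2 the preprocessor approach sketched above looks roughly like this (directory and script names are illustrative, and the table's owner needs READ/EXECUTE on the script directory; the script simply writes the chosen day's file to standard output):
CREATE OR REPLACE DIRECTORY exec_dir AS '/u/scripts';
-- /u/scripts/latest_log.sh might contain something like:
--   #!/bin/sh
--   /bin/cat /u/logs/logadm_`/bin/date +%d%m%Y`.log
ALTER TABLE LOG_FILES
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'latest_log.sh'
    FIELDS (
      LOG_DTM position(1:18),
      LOG_TXT position(19:300)
    )
  );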
