I'm trying to run a simple external table example using Oracle 11g on a Linux VM. The problem is that I can't query any data from the .txt files.
Here's my code:
CONN / as sysdba;
CREATE OR REPLACE DIRECTORY DIR1 AS 'home/oracle/TEMP/X/';
GRANT READ, WRITE ON DIRECTORY DIR1 TO user;
CONN user/password;
CREATE TABLE gerada
(
field1 INT,
field2 Varchar2(20)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY DIR1
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ';'
MISSING FIELD VALUES ARE NULL
)
LOCATION ('registros.txt')
)
REJECT LIMIT UNLIMITED;
--Error starts here.
SELECT * FROM gerada;
DROP TABLE gerada;
DROP DIRECTORY DIR1;
Here's the error message:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file home/oracle/TEMP/X/GERADA_3375.log
And this is what registros.txt looks like:
1234;hello world;
I've checked my permissions on DIR1 and I do have read/write permissions.
Any ideas?
ORA-29913 and ORA-29400 mean that you're unable to access the directory and/or the file.
Looking carefully at the CREATE DIRECTORY command it looks like the path you're using may be mis-formatted. Try putting a forward slash at the start of the path and removing the one at the end of the path when creating the directory - e.g. CREATE OR REPLACE DIRECTORY DIR1 AS '/home/oracle/TEMP/X';.
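In other words, with that fix applied, the setup from the question would look something like this (same directory and grantee names as above):
CONN / AS SYSDBA
CREATE OR REPLACE DIRECTORY DIR1 AS '/home/oracle/TEMP/X';
GRANT READ, WRITE ON DIRECTORY DIR1 TO user;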
Share and enjoy.
Related
I created an external table, and when I select from it this error shows.
I am working with Oracle 19c.
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file customer.csv in EXTERNAL not found
------------code----------------
CREATE TABLE customers
(Email VARCHAR2(255) NOT NULL,
Name VARCHAR2(255) NOT NULL,
Phone VARCHAR2(255) NOT NULL,
Address VARCHAR2(255) NOT NULL)
ORGANIZATION EXTERNAL(
type oracle_loader
DEFAULT DIRECTORY external
ACCESS PARAMETERS
(
records delimited by newline
fields terminated by ','
missing field values are null
REJECT ROWS WITH ALL NULL FIELDS)
LOCATION ('customer.csv'))
REJECT LIMIT UNLIMITED;
customer.csv data
salma.55@gmm.com,salma,0152275522,44al,
mariam.66@hotmail.com,mariam,011145528,552www,
ahmed.85@gmail.com,ahmed,0111552774,44eee,
"DEFAULT DIRECTORY external" means you are looking in a named directory that you have called "external".
For example, if I had done:
create directory XYZ as '/tmp';
then
default directory XYZ
means I'll be searching in /tmp for my files. So look at DBA_DIRECTORIES to see where your "EXTERNAL" directory is pointing.
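For example, a quick check along these lines (standard DBA_DIRECTORIES columns) would show the operating-system path behind it:
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'EXTERNAL';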
I am trying to set up a Snowpipe, and I have created my warehouse, database and table, and am trying to stage the files with SnowSQL.
USE WAREHOUSE IoT;
USE DATABASE SNOWPIPE_TEST;
CREATE OR REPLACE STAGE my_stage;
CREATE OR REPLACE FILE_FORMAT r_json;
CREATE OR REPLACE PIPE snowpipe_pipe
AUTO_INGEST = TRUE,
COMMENT = 'add items IoT',
VALIDATION_MODE = RETURN_ALL_ERRORS
AS (COPY INTO snowpipe_test.public.mytable
from @snowpipe_db.public.my_stage
FILE_FORMAT = (type = 'JSON');
CREATE PIPE mypipe AS COPY INTO mytable FROM @my_stage;
I think something is locked but I am not sure.
I tried to save the config file as config1 and made a copy. It hung; then I removed the copy and tried to connect, and there was no error, it just hung.
Am I missing something?
To specify the auto ingest parameter it's AUTO_INGEST rather than AUTO-INGEST, but note that this option is not available for an internal stage. So when you try to run this command using an internal stage it should error with a message pointing this out.
https://docs.snowflake.net/manuals/sql-reference/sql/create-pipe.html#optional-parameters
Also you don't need the bracket between the "AS" and "copy" on line 5.
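Putting both fixes together, the statement might look roughly like this; note that I've also dropped the commas and the VALIDATION_MODE clause, which as far as I know CREATE PIPE does not accept (an assumption beyond what the answer above covers):
CREATE OR REPLACE PIPE snowpipe_pipe
  AUTO_INGEST = TRUE
  COMMENT = 'add items IoT'
AS
COPY INTO snowpipe_test.public.mytable
FROM @snowpipe_db.public.my_stage
FILE_FORMAT = (TYPE = 'JSON');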
I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active Snowsql when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to a my_table and a my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
//yay you are done!
The put command copies the file from your local drive to the stage. You should do the put to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
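For example, once the file has been put to the stage, the load would be something along these lines (the file format name comes from the tutorial steps in the question; adjust for your data):
COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');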
But in the documentation it's mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
So in this case, without creating a named stage, it should load into the default Snowflake table stage that is allocated.
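A minimal sketch of that, using the table stage directly (names from the question; the file format clause assumes the tutorial's CSV format):
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
COPY INTO my_table FROM @%my_table FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');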
I am trying to export all my Postgres tables to individual CSV files, and for that I am using the following function:
CREATE OR REPLACE FUNCTION db_to_csv(path text)
RETURNS void AS
$BODY$
declare
tables RECORD;
statement TEXT;
begin
FOR tables IN
SELECT (table_schema || '.' || table_name) AS schema_table
FROM information_schema.tables t INNER JOIN information_schema.schemata s
ON s.schema_name = t.table_schema
WHERE t.table_schema NOT IN ('pg_catalog', 'information_schema', 'configuration')
ORDER BY schema_table
LOOP
statement := 'COPY ' || tables.schema_table || ' TO ''' || path || '/' || tables.schema_table || '.csv' ||''' DELIMITER '';'' CSV HEADER';
EXECUTE statement;
END LOOP;
return;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION db_to_csv(text)
OWNER TO postgres;
but when I call this function I get: could not open file "/home/user/Documents/public.tablename.csv" for writing: Permission denied
I have tried copying an individual table using:
COPY activities TO '/home/user/Documents/foldername/conversions/tablename.csv' DELIMITER ',' CSV HEADER;
It gives me the following error
ERROR: could not open file "/home/user/Documents/foldername/conversions/tablename.csv" for writing: Permission denied
********** Error **********
ERROR: could not open file "/home/user/Documents/foldername/conversions/tablename.csv" for writing: Permission denied
SQL state: 42501
Any suggestions on how to fix this?
Make a folder that every user has access to, then run the COPY command on a file there. COPY works only on directories where the postgres user has access.
sudo mkdir /media/export
sudo chmod 777 /media/export
COPY activities TO '/media/export/activities.csv' DELIMITER ',' CSV HEADER;
I was facing the same issue and I followed the second answer
Make a folder that every user has access to, then run the COPY command on a file there. COPY works only on directories where the postgres user has access.
This didn't work for me.
So, I performed the copy to /tmp/somename.csv and then copied the file to the location where I actually needed it:
\copy query TO '/tmp/somename.csv' with csv;
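As a concrete sketch of that workaround (the table name and destination directory are taken from the question; the first line runs inside psql, the second in a shell):
\copy activities TO '/tmp/activities.csv' WITH CSV HEADER
mv /tmp/activities.csv /home/user/Documents/foldername/conversions/activities.csv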
This was not working for me even after giving permissions.
I then tried exporting to the same location where the Greenplum data is available, i.e. greenplum/data, and the permission denied problem was resolved:
COPY Table_Name TO '/home/greenplum/data/table_data_export.csv';
CREATE TABLE LOG_FILES (
LOG_DTM VARCHAR(18),
LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY LOG_DIR
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
FIELDS(
LOG_DTM position(1:18),
LOG_TXT position(19:300)
)
)
LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an Oracle directory that points to /u/logs/.
The problem, though, is that the contents of /u/logs/ look like this:
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? I.e. every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log') but I would like not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
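As a rough sketch of the 11g approach (EXEC_DIR and latest_log.sh are made-up names here; EXEC_DIR needs EXECUTE granted on it, and the table's ACCESS PARAMETERS would gain a line like PREPROCESSOR EXEC_DIR:'latest_log.sh'), the script itself can be tiny:
#!/bin/sh
# The access driver passes the LOCATION file name as $1; ignore it
# and stream the newest logadm_*.log to standard output instead.
cat "$(ls -t /u/logs/logadm_*.log | head -1)"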
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.