monetdb - error loading tbl - database

While loading the .tbl file I got this error:
[nicola@localhost ~]$ mclient -d dbmonet -s "COPY INTO monet.SUPPLIER FROM STDIN USING DELIMITERS ',','\\n','\"'" - < /home/nicola/Scrivania/tabellemonetdb/supplier.tbl
user(nicola):monetdb
password:
missing separator ',' line 0 expecting 6 got 1 fields
failed to import table
current transaction is aborted (please ROLLBACK)
syntax error, unexpected sqlINT in: "0201"
Why do I get this error?
I'm using an SSB schema.

Without knowing anything about the structure of the supplier.tbl file, my guess (from having used SSBM before) would be that it does not use "," as a field separator, but "|".
My SSBM loading command for the supplier table looks like this:
COPY INTO SUPPLIER FROM '/path/to/supplier.tbl' USING DELIMITERS '|', '|\n' LOCKED;
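Applied to the command from the question, the STDIN variant would look roughly like this (a sketch; note the record delimiter '|\n' because SSBM .tbl rows end with a trailing '|', and the quote delimiter is dropped since the files do not quote strings):
mclient -d dbmonet -s "COPY INTO monet.SUPPLIER FROM STDIN USING DELIMITERS '|','|\\n'" - < /home/nicola/Scrivania/tabellemonetdb/supplier.tbl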

Related

How to solve error "Field delimiter ',' found while expecting record delimiter '\n'" while loading json data to the stage

I am trying to use the COPY INTO command to load data from S3 into Snowflake.
Below are the steps I followed to create the stage and load the file from the stage into Snowflake.
JSON file
{
"Name":"Umesh",
"Desigantion":"Product Manager",
"Location":"United Kingdom"
}
create or replace stage emp_json_stage
url='s3://mybucket/emp.json'
credentials=(aws_key_id='my id' aws_secret_key='my key');
-- create the table with a VARIANT column
CREATE TABLE emp_json_raw (
json_data_raw VARIANT
);
-- load data from the stage into Snowflake
COPY INTO emp_json_raw from @emp_json_stage;
I am getting the error below:
Field delimiter ',' found while expecting record delimiter '\n' File
'emp.json', line 2, character 18 Row 2, column
"emp_json_raw"["JSON_DATA_RAW":1]
I am using a simple JSON file, and I don't understand this error.
What causes it and how can I solve it?
The file format is not specified, so it defaults to CSV, hence the error.
Try this:
COPY INTO emp_json_raw
from @emp_json_stage
file_format=(TYPE=JSON);
Other options besides TYPE can also be specified with file_format. Refer to the documentation here: https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#type-json
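If you load JSON regularly, the options can also be bundled into a named file format and referenced from the COPY (a sketch; my_json_format is a hypothetical name):
create or replace file format my_json_format type = 'json';
copy into emp_json_raw
from @emp_json_stage
file_format = (format_name = 'my_json_format');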
try:
file_format = (type = csv field_optionally_enclosed_by='"')
The default settings do not expect the " wrapping around your data.
So you could strip all the " characters, or just set field_optionally_enclosed_by to ". This does mean that if your data itself contains ", things get messy.
https://docs.snowflake.com/en/user-guide/getting-started-tutorial-copy-into.html
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#type-csv
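Put together with the stage and table from the question, the full command would look roughly like this (a sketch, not tested against the original data):
copy into emp_json_raw
from @emp_json_stage
file_format = (type = csv field_optionally_enclosed_by = '"');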
It is also good practice to always specify the file type, whether that is CSV, JSON, Avro, Parquet, etc.
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html

Snowflake-Internal Stage data load error: How to load "\" character

In a file, a few of the rows have \ in a column value. For example, I have rows in the format below:
101,Path1,Z:\VMC\PSPS,abc
102,Path5,C:\wintm\PSPS,abc
I was wondering how to load the \ character.
COPY INTO TEST_TABLE from @database.schema.stage_name FILE_FORMAT = ( TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"' SKIP_HEADER = 1 );
Is there anything I can add to the FILE_FORMAT line?
Are you still getting this error? I just tried to recreate it by creating a CSV based on your sample data and a test table, loading the CSV into an internal stage, and running your COPY command. It worked for me.
Could you provide more details on the error you are facing? Perhaps there was something off with your table definition.
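If the error does come back, one setting worth checking (my assumption, not something the answer above relies on) is ESCAPE_UNENCLOSED_FIELD, whose default is the backslash, so unenclosed fields treat \ as an escape character. Disabling it keeps the backslashes literal:
COPY INTO TEST_TABLE
FROM @database.schema.stage_name
FILE_FORMAT = ( TYPE = CSV
                FIELD_OPTIONALLY_ENCLOSED_BY = '\"'
                ESCAPE_UNENCLOSED_FIELD = NONE
                SKIP_HEADER = 1 );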

Laravel 5.5 - DB::statement error with \copy command (POSTGRES)

I'm trying to use the \copy command from Postgres in Laravel 5.5 to insert a large file into the DB, but I'm getting the error below.
I tried this way:
DB::statement( DB::raw("\\copy requisicoes FROM '".$file1."' WITH DELIMITER ','"));
Get this error:
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "\" LINE 1: \copy requisicoes FROM '/srv/www/bilhetagem_logs/bilhetagem_... ^ (SQL: \copy requisicoes FROM '/srv/www/bilhetagem_logs/bilhetagem_log1_2018-10-29' WITH DELIMITER ',')
Tried this way too:
DB::statement( DB::raw('\copy requisicoes FROM \''.$file1.'\' WITH DELIMITER \',\''));
Get this error:
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "\" LINE 1: \copy requisicoes FROM '/srv/www/bilhetagem_logs/bilhetagem_... ^ (SQL: \copy requisicoes FROM '/srv/www/bilhetagem_logs/bilhetagem_log1_2018-10-29' WITH DELIMITER ',')
If I execute the command returned in the error above with the psql command line, it works fine:
\copy requisicoes FROM '/srv/www/bilhetagem_logs/bilhetagem_log1_2018-10-29' WITH DELIMITER ','
Could somebody help me? :)
I have to use \copy instead of COPY because I don't have superuser privileges on the DB.
https://www.postgresql.org/docs/9.2/static/sql-copy.html
COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
See this article on PostgreSQL and note this line:
Do not confuse COPY with the psql instruction \copy. \copy invokes
COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in
a file accessible to the psql client. Thus, file accessibility and
access rights depend on the client rather than the server when \copy
is used.
\copy is a psql instruction, not an SQL statement, so through DB::statement you do not write \copy, just COPY.
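Since \copy just wraps a client-side COPY ... FROM STDIN, one alternative worth sketching (my addition, not part of the original answer) is to stream the file through the pdo_pgsql driver, which performs the same client-side copy and therefore does not need superuser rights:
// Minimal sketch: $file1 is the same client-side path as in the question.
$pdo = DB::connection('pgsql')->getPdo();
// pgsqlCopyFromFile(table, file, delimiter) issues COPY ... FROM STDIN under the hood.
$pdo->pgsqlCopyFromFile('requisicoes', $file1, ',');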
This is my code to import data from SQL into a pgsql database.
First export a CSV file with '^' as the separator,
then import the same file into pgsql using the COPY command.
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Storage;
// (plus your User model, e.g. App\User, depending on the app's namespace)

$users = User::select('*')->get()->toArray();

// Build the CSV content, using '^' as the field separator.
$pages = "id,warehouse_id,name,email,email_verified_at,password,remember_token,created_at,updated_at\n";
foreach ($users as $where) {
    $pages .= "{$where['id']}^{$where['warehouse_id']}^{$where['name']}^{$where['email']}^{$where['email_verified_at']}^{$where['password']}^{$where['remember_token']}^{$where['created_at']}^{$where['updated_at']}\n";
}

// Write the CSV to local storage, then load it with a server-side COPY.
$file = Storage::disk('local')->put('user.csv', $pages);
if ($file) {
    try {
        $file_path = storage_path('app/user.csv');
        DB::connection('pgsql')->statement("copy public.users (id, warehouse_id, name, email, email_verified_at, password, remember_token, created_at, updated_at) FROM '$file_path' DELIMITER '^' CSV HEADER ENCODING 'UTF8' ESCAPE '\"';");
    } catch (\Exception $e) {
        throw $e;
    }
}

How do I output the results of a HiveQL query to CSV?

We would like to put the results of a Hive query into a CSV file. I thought the command should look like this:
insert overwrite directory '/home/output.csv' select books from table;
When I run it, it says it completed successfully, but I can never find the file. How do I find this file, or should I be extracting the data in a different way?
Although it is possible to use INSERT OVERWRITE to get data out of Hive, it might not be the best method for your particular case. First let me explain what INSERT OVERWRITE does, then I'll describe the method I use to get tsv files from Hive tables.
According to the manual, your query will store the data in a directory in HDFS. The format will not be csv.
Data written to the filesystem is serialized as text with columns separated by ^A and rows separated by newlines. If any of the columns are not of primitive type, then those columns are serialized to JSON format.
A slight modification (adding the LOCAL keyword) will store the data in a local directory.
INSERT OVERWRITE LOCAL DIRECTORY '/home/lvermeer/temp' select books from table;
When I run a similar query, here's what the output looks like.
[lvermeer@hadoop temp]$ ll
total 4
-rwxr-xr-x 1 lvermeer users 811 Aug 9 09:21 000000_0
[lvermeer@hadoop temp]$ head 000000_0
"row1""col1"1234"col3"1234FALSE
"row2""col1"5678"col3"5678TRUE
Personally, I usually run my query directly through Hive on the command line for this kind of thing, and pipe it into the local file like so:
hive -e 'select books from table' > /home/lvermeer/temp.tsv
That gives me a tab-separated file that I can use. Hope that is useful for you as well.
Based on this patch-3682, I suspect a better solution is available when using Hive 0.11, but I am unable to test this myself. The new syntax should allow the following.
INSERT OVERWRITE LOCAL DIRECTORY '/home/lvermeer/temp'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
select books from table;
If you want a CSV file, you can modify Lukas' solution as follows (assuming you are on a Linux box):
hive -e 'select books from table' | sed 's/[[:space:]]\+/,/g' > /home/lvermeer/temp.csv
This is the most CSV-friendly way I found to output the results of HiveQL.
You don't need any grep or sed commands to format the data; Hive supports it directly, you just need to add the extra outputformat flag.
hive --outputformat=csv2 -e 'select * from <table_name> limit 20' > /path/toStore/data/results.csv
You should use CREATE TABLE AS SELECT (CTAS) statement to create a directory in HDFS with the files containing the results of the query. After that you will have to export those files from HDFS to your regular disk and merge them into a single file.
You also might have to do some trickery to convert the files from '\001'-delimited to CSV. You could use a custom CSV SerDe or postprocess the extracted file.
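A sketch of that approach (the table name books_export and the warehouse path are assumptions; adjust to your setup):
-- create the result files in HDFS, comma-delimited instead of '\001'-delimited
CREATE TABLE books_export
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
AS SELECT books FROM table;
Then merge the part files from HDFS into a single local file:
hadoop fs -getmerge /user/hive/warehouse/books_export /home/lvermeer/books_export.csv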
You can use INSERT … DIRECTORY …, as in this example:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/ca_employees'
SELECT name, salary, address
FROM employees
WHERE state = 'CA';
OVERWRITE and LOCAL have the same interpretations as before and paths are interpreted following the usual rules. One or more files will be written to /tmp/ca_employees, depending on the number of reducers invoked.
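If several reducers run, the directory will hold several part files; since LOCAL writes them to the local filesystem, stitching them into one CSV is a plain concatenation (a simple sketch):
cat /tmp/ca_employees/* > /tmp/ca_employees.csv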
If you are using HUE this is fairly simple as well. Simply go to the Hive editor in HUE, execute your hive query, then save the result file locally as XLS or CSV, or you can save the result file to HDFS.
I was looking for a similar solution, but the ones mentioned here would not work. My data had all variations of whitespace (space, newline, tab) chars and commas.
To make the column data TSV-safe, I replaced all \t characters in the column data with a space, and executed Python code on the command line to generate a CSV file, as shown below:
hive -e 'tab_replaced_hql_query' | python -c 'exec("import sys;import csv;reader = csv.reader(sys.stdin, dialect=csv.excel_tab);writer = csv.writer(sys.stdout, dialect=csv.excel)\nfor row in reader: writer.writerow(row)")'
This created a perfectly valid csv. Hope this helps those who come looking for this solution.
You can use the Hive string function CONCAT_WS(string delimiter, string str1, string str2, ..., strN).
For example:
hive -e 'select CONCAT_WS(",", cola, colb, colc, ..., coln) from Mytable' > /home/user/Mycsv.csv
I had a similar issue and this is how I was able to address it.
Step 1 - Loaded the data from Hive table into another table as follows
DROP TABLE IF EXISTS TestHiveTableCSV;
CREATE TABLE TestHiveTableCSV
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n' AS
SELECT Column List FROM TestHiveTable;
Step 2 - Copied the blob from Hive warehouse to the new location with appropriate extension
Start-AzureStorageBlobCopy `
    -DestContext $destContext `
    -SrcContainer "Source Container" `
    -SrcBlob "hive/warehouse/TestHiveTableCSV/000000_0" `
    -DestContainer "Destination Container" `
    -DestBlob "CSV/TestHiveTable.csv"
hive --outputformat=csv2 -e "select * from yourtable" > my_file.csv
or
hive --outputformat=csv2 -e "select * from yourtable" > [your_path]/file_name.csv
For TSV, just change csv to tsv in the queries above and run them.
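For example (same placeholder table as above):
hive --outputformat=tsv2 -e "select * from yourtable" > my_file.tsv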
The default separator is "^A" ("\x01" in Python).
When I want to change the delimiter, I use SQL like:
SELECT col1, delimiter, col2, delimiter, col3, ... FROM table
Then I treat delimiter + "^A" as the new delimiter.
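Concretely, with hypothetical columns col1..col3 and '|' selected as the extra literal column:
SELECT col1, '|', col2, '|', col3 FROM my_table;
In the output each literal is wrapped in the default ^A separators, so the string to split on becomes "^A|^A".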
I tried various options, but this is one of the simplest solutions for Python pandas:
hive -e 'select books from table' | grep "|" > temp.csv
df=pd.read_csv("temp.csv",sep='|')
You can also use tr "|" "," to convert "|" to ","
Similar to Ray's answer above, Hive View 2.0 in Hortonworks Data Platform also allows you to run a Hive query and then save the output as csv.
If you are doing it from Windows, you can use the Python script hivehoney to extract table data to a local CSV file.
It will:
Log in to the bastion host.
pbrun.
kinit.
beeline (with your query).
Save echo from beeline to a file on Windows.
Execute it like this:
set PROXY_HOST=your_bastion_host
set SERVICE_USER=you_func_user
set LINUX_USER=your_SOID
set LINUX_PWD=your_pwd
python hh.py --query_file=query.sql
Just to cover the follow-up steps after kicking off the query:
INSERT OVERWRITE LOCAL DIRECTORY '/home/lvermeer/temp'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
select books from table;
In my case, the data generated under the temp folder is in deflate format, and it looks like this:
$ ls
000000_0.deflate
000001_0.deflate
000002_0.deflate
000003_0.deflate
000004_0.deflate
000005_0.deflate
000006_0.deflate
000007_0.deflate
Here's the command to unzip the deflate files and put everything into one csv file:
hadoop fs -text "file:///home/lvermeer/temp/*" > /home/lvermeer/result.csv
I may be late to this one, but this might help with the answer:
echo "COL_NAME1|COL_NAME2|COL_NAME3|COL_NAME4" > SAMPLE_Data.csv
hive -e '
select distinct concat(COL_1, "|",
COL_2, "|",
COL_3, "|",
COL_4)
from table_Name where clause if required;' >> SAMPLE_Data.csv
This shell command prints the output in CSV format to output.txt without the column headers.
$ hive --outputformat=csv2 -f 'hivedatascript.hql' --hiveconf hive.cli.print.header=false > output.txt
Use the command:
hive -e "use [database_name]; select * from [table_name] LIMIT 10;" > /path/to/file/my_file_name.csv
I had a huge dataset that I was trying to organize in order to determine the types of attacks and the number of each type. An example that I used in practice (and that has a little more detail) goes something like this:
hive -e "use DataAnalysis;
select attack_cat,
case when attack_cat == 'Backdoor' then 'Backdoors'
when length(attack_cat) == 0 then 'Normal'
when attack_cat == 'Backdoors' then 'Backdoors'
when attack_cat == 'Fuzzers' then 'Fuzzers'
when attack_cat == 'Generic' then 'Generic'
when attack_cat == 'Reconnaissance' then 'Reconnaissance'
when attack_cat == 'Shellcode' then 'Shellcode'
when attack_cat == 'Worms' then 'Worms'
when attack_cat == 'Analysis' then 'Analysis'
when attack_cat == 'DoS' then 'DoS'
when attack_cat == 'Exploits' then 'Exploits'
when trim(attack_cat) == 'Fuzzers' then 'Fuzzers'
when trim(attack_cat) == 'Shellcode' then 'Shellcode'
when trim(attack_cat) == 'Reconnaissance' then 'Reconnaissance' end,
count(*) from actualattacks group by attack_cat;">/root/data/output/results2.csv

Sybase SQL Anywhere - Unable to export data to file

I am attempting to export a query in Sybase SQL Anywhere but am receiving an error when I get to the OUTPUT TO command. My query looks like this:
SELECT User_Name as 'Remote Database', nDaysBehind as 'Days Behind', Time_Received as 'Last Message Received'
FROM DailySynchRptView
WHERE Time_Received < today() -1 AND nDaysBehind > 0
ORDER BY Time_Received ASC
OUTPUT TO c:\daysbehind.txt format ascii
The information that shows up in ISQL when I leave off the "OUTPUT TO" is the following:
Remote Database,Days Behind,Last Message Received
'Rem00027',23,'2011-02-23 16:10:14.000'
'Rem00085',7,'2011-03-11 04:47:02.000'
'Rem00040',5,'2011-03-13 15:22:15.000'
'Rem00074',4,'2011-03-14 16:01:25.000'
'Rem00087',3,'2011-03-15 06:04:16.000'
However, when the OUTPUT TO command is placed in the query, I receive the following error:
Could not execute statement.
Syntax error near 'OUTPUT' on line 5
SQLCODE=-131, ODBC 3 State="42000"
Line 1, column 1
I am open to any suggestions that might help me export the data from the query. I have run a similar query that returns a single line of information, and it exports without errors.
After a while looking at the code, I found that I was missing a semicolon (;) to separate the two statements. Once I added the semicolon before the OUTPUT line, I was able to export the information.
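For reference, the working statement then looks roughly like this (the same query as above, with the semicolon added before OUTPUT):
SELECT User_Name as 'Remote Database', nDaysBehind as 'Days Behind', Time_Received as 'Last Message Received'
FROM DailySynchRptView
WHERE Time_Received < today() -1 AND nDaysBehind > 0
ORDER BY Time_Received ASC;
OUTPUT TO c:\daysbehind.txt format ascii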
