Using pg_read_file to read a file on the desktop in PostgreSQL

I want to know how to read a file on my desktop using pg_read_file in PostgreSQL:
pg_read_file(filename text [, offset bigint, length bigint])
My query:
select pg_read_file('/root/desktop/new.txt' , 0 , 1000000);
Error:
ERROR: absolute path not allowed
UPDATE

pg_read_file can read files only from the data directory path. If you would like to know your data directory path, use:
SHOW data_directory;
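For example, once new.txt has been copied into that directory, a relative path works. A minimal sketch, reusing the offset and length from the question:
-- assumes new.txt was first copied into the directory reported by SHOW data_directory
SELECT pg_read_file('new.txt', 0, 1000000);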
I think you can resolve your problem by looking at this post.

If you're using psql you can use \lo_import to create a large object from a local file.
The pg_read_file function only allows reading server-side files.
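For example, in a psql session (a sketch; lo_get assumes PostgreSQL 9.4 or later):
\lo_import '/root/desktop/new.txt'
-- psql answers "lo_import <oid>" and stores that oid in :LASTOID
SELECT convert_from(lo_get(:LASTOID), 'UTF8');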

To read the content of a file from PostgreSQL you can use this:
CREATE TABLE demo(t text);
COPY demo from '[FILENAME]';
SELECT * FROM demo;
Each text line ends up in one SQL row. Useful for temporary transfers.
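Note that COPY ... FROM reads the file on the PostgreSQL server, so a desktop file hits the same path restriction as pg_read_file. From psql, the client-side \copy variant reads a local file instead. A minimal sketch, reusing the demo table above and the path from the question:
\copy demo FROM '/root/desktop/new.txt'
SELECT * FROM demo;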

lo_import(file path) will generate an OID. This may solve your problem; you can import any type of file this way (even images).

Related

Using a pattern while loading CSV files from S3 to Snowflake

The below COPY command is not working; please correct me if something is wrong.
copy into mytable from @mystage pattern='20.*csv.gz'
Here I am trying to load the files whose names start with 20. There is a mix of files with names like 2021myfile.csv.gz and myfile202109.csv.gz, and the above command is not loading any files even though there are files that start with 20.
If I use pattern='.*20.*csv.gz', it picks up all the files, which is wrong; I need to load only the files whose names start with 20.
Thanks!
This is because the pattern clause is a regex expression, and it is matched against the full path of each file, not just the file name.
Try this, which anchors the 20 right after the last slash in the path:
copy into mytable from @mystage pattern='.*/20[^/]*[.]csv[.]gz'
Reference: Loading Using Pattern Matching
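You can also dry-run a pattern before loading, since LIST accepts the same PATTERN clause as COPY. A sketch against the stage from the question:
list @mystage pattern = '.*/20[^/]*[.]csv[.]gz';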

How to download more than 100MB of data into CSV from a Snowflake database table

Is there a way to download more than 100MB of data from Snowflake into Excel or CSV?
I'm able to download up to 100MB through the UI by clicking the 'download or view results' button.
You'll need to consider using what we call "unload", a.k.a. COPY INTO LOCATION, which is documented here:
https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-location.html
Other options might be to use a different type of client (python script or similar).
I hope this helps...Rich
.....EDITS AS FOLLOWS....
Using the unload (COPY INTO LOCATION) isn't quite as overwhelming as it may appear to be. If you can use the SnowSQL client (instead of the web UI), you can "grab" the files from what we call an "internal stage" fairly easily. Example as follows:
CREATE TEMPORARY STAGE my_temp_stage;

COPY INTO @my_temp_stage/output_filex
FROM (SELECT * FROM databaseNameHere.SchemaNameHere.tableNameHere)
FILE_FORMAT = (
  TYPE = 'CSV'
  COMPRESSION = GZIP
  FIELD_DELIMITER = ','
  ESCAPE = NONE
  ESCAPE_UNENCLOSED_FIELD = NONE
  DATE_FORMAT = 'AUTO'
  TIME_FORMAT = 'AUTO'
  TIMESTAMP_FORMAT = 'AUTO'
  BINARY_FORMAT = 'UTF-8'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  NULL_IF = ''
  EMPTY_FIELD_AS_NULL = FALSE
)
OVERWRITE = TRUE
SINGLE = FALSE
MAX_FILE_SIZE = 5368709120
HEADER = TRUE;

ls @my_temp_stage;

GET @my_temp_stage file:///tmp/;
This example:
Creates a temporary stage object in Snowflake, which will be discarded when you close your session.
Takes the results of your query and loads them into one (or more) CSV files in that internal temporary stage, depending on the size of your output. Notice that I didn't create a separate database object called a "FILE FORMAT"; doing so is considered a best practice, but you can run these one-off extracts without that separate object if you don't mind the command being this long.
Lists the files in the stage, so you can see what was created.
Pulls the files down using GET. In this case it was run on my Mac and the file(s) were placed in /tmp; if you are using Windows you will need to modify the target path a little.

Piwik database - piwik_archive_blob value column

I am using Piwik and after inspecting the database I see a table: piwik_archive_blob__
This table has a column called value with type: mediumblob
The values appear to be jumbled characters. I assume that there is an encode/decode process.
Can anyone help me decode this column? I think there is good data here, but I need to be able to read it.
Thanks
The value column stores serialized and gzcompressed DataTable objects, so there is no easy way to read it.
Just a quick example how you could uncompress and deserialize it using PHP:
Download the blob as a .bin file using a tool like phpMyAdmin.
Load the file into PHP, then uncompress and unserialize it using the following:
<?php
// load the dumped blob, gzuncompress it, then unserialize the DataTable
$sBlobFile = file_get_contents( 'piwik_archive_blob_2017_03-value.bin' );
$sBlobFile = unserialize( gzuncompress( $sBlobFile ) );
var_dump( $sBlobFile );
Of course, you can also just retrieve the blob using MySQL and access it directly in PHP, as opposed to downloading it as a file first.
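For example, MySQL itself can dump a single blob to a file that the snippet above can read. A sketch: the table suffix, WHERE clause, and output path are hypothetical, and SELECT ... INTO DUMPFILE requires the FILE privilege plus a path allowed by secure_file_priv:
SELECT value INTO DUMPFILE '/tmp/piwik_blob.bin'
FROM piwik_archive_blob_2017_03
WHERE idarchive = 1 AND name LIKE 'Actions%'
LIMIT 1;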

How to view the 'data' file included in an .odb database file?

I am trying to extract the data from an .odb database file. To do this, I first unzipped the .odb file and then tried to read the 'data' file that came out of it. But I guess there is an encoding problem during the reading process: I get some meaningless symbols. As far as I can tell from searching, this could be a binary file. By the way, I cannot see any extension on the 'data' file. How do I read this file to extract the data?
I'm Brazilian, and I saw this question without an answer.
I'm a Python user and I did this:
Try opening the *.odb file that contains the database:
# file.py
import zipfile

# an .odb file is just a zip archive; open it and look for database/data
myfile = zipfile.ZipFile("yourfile.odb")
listoffiles = myfile.infolist()
for s in listoffiles:
    if s.orig_filename == "database/data":
        # read the raw bytes and decode them, ignoring the binary noise
        bh = myfile.read(s)
        print(bh.decode("utf-8", "ignore"))
My table is very simple, but it may help.
I put this together by joining parts from several websites.
As you can see, an .odb file is simply a zipped file that contains an XML file, "content.xml", with the table information, but only the table information.
The content of the database is in database/data; the values are stored there, and you can decode them with decode() in Python.
Thanks to http://www.linuxjournal.com/ too, where I found some scripts.

Read a value from an .xls file using .bat files

I just want to know if there is any way to read a value from an .xls file using a .bat file.
For example: if I have an .xls named test.xls with two columns, 'EID' and 'MailID', then when we give an EID as input, it should extract the mail ID that corresponds to that EID and echo the result.
EID       MailID
E22222    MynameisA@company.com
E33333    MynameisB@company.com
...
...
So, going by the above table, when I give E22222 as the input through my .bat file, it should read the corresponding mail ID, MynameisA@company.com, and echo that value.
I hope I have been able to present my doubt. Please get back to me for more clarifications.
Thanks and regards
Maddy
There is no facility to do this directly with traditional .bat files. However, you might investigate PowerShell, which is designed to be able to do this sort of thing. PowerShell integrates well with existing Windows applications (such as Excel) and may provide the tools you need to do this easily.
A quick search turned up this example of reading Excel files from PowerShell.
You can't do this directly from a batch file. Furthermore, to manipulate Excel files in scripting you need Excel to be installed.
What you can do is wrap the Excel-specific stuff in a VBScript and call that from your batch.
You can do it with Alacon - command-line utility for Alasql database.
It works with Node.js, so you need to install Node.js and then the Alasql package.
To take data from Excel file you can use the following command:
> node alacon "SELECT VALUE [mail ID] FROM XLS('mydata.xls', {headers:true}) WHERE EID = ?" "E22222"
The first parameter is a SQL expression that reads data from the XLS file (with headers) and searches for the second parameter value, "E22222". The command returns the mail ID value.
This will be hard (very close to impossible) in a .bat file, especially when working with the original XLS file. Even after an export to CSV, it will be much easier to use a scripting/programming language (Perl, C, whatever) to do this.
