I have exported my data from an InfluxDB bucket with the following command:
> influxd inspect export-lp --bucket-id d5f80730ede82d67 --engine-path ~\.influxdbv2\engine --output-path ~\Desktop\my-data.lp.gz --start 2022-11-01T00:00:00Z --end 2022-12-31T00:00:00Z --compress
I am following the steps from this InfluxDB document.
The size of the exported file is ~8MB.
I use the following command to write the exported file back to my new bucket:
> influx write --bucket my-new-bucket --file ~\Desktop\my-data.lp.gz
I am following this InfluxDB document to write my data.
Now, when I try to write it back to the DB, I get an error:
Error: failed to write data: Post "/api/v2/write?bucket=my-new-bucket&org=00ef2f123c4706fd&precision=ns": unsupported protocol scheme ""
I have even tried exporting and importing without compression, using a .txt file for the line protocol. All my attempts produce the same error.
I even tried uploading the same exported file through Telegraf > Sources > Line Protocol, but that fails too, with an error:
Unable to Write Data
Failed to write data - invalid line protocol submitted.
I don't know why the file exported from InfluxDB's "export-lp" command fails when I try to write it back.
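For what it's worth, the empty scheme in the Post URL suggests the CLI has no host URL configured, so the request never gets a full http://... address. A minimal sketch of pointing the write at a host explicitly, assuming a local InfluxDB instance on the default port:
> influx write --host http://localhost:8086 --bucket my-new-bucket --file ~\Desktop\my-data.lp.gz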
I have downloaded many gz files from an FTP address:
http://ftp.ebi.ac.uk/pub/databases/spot/eQTL/sumstats/
How can I check whether the files have been truncated during the download (i.e., wget did not download the entire file because of a network problem)? Thanks.
As you can see, each directory contains a file md5sum.txt. You can use a command like:
md5sum -c md5sum.txt
This will calculate the hashes and compare them with the values in the file.
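For example, a small loop (a sketch, assuming you mirrored the directory layout locally, so that each downloaded directory contains its md5sum.txt):
# verify the checksums inside every downloaded directory
for d in */ ; do
  (cd "$d" && md5sum -c md5sum.txt)
done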
How can I check whether the files have been truncated during the download (i.e., wget did not download the entire file because of a network problem)?
You might use spider mode to fetch just the response headers, for example:
wget --spider http://ftp.ebi.ac.uk/pub/databases/spot/eQTL/sumstats/Alasoo_2018/exon/Alasoo_2018_exon_macrophage_naive.permuted.tsv.gz
gives output
Spider mode enabled. Check if remote file exists.
--2022-05-30 09:38:55-- http://ftp.ebi.ac.uk/pub/databases/spot/eQTL/sumstats/Alasoo_2018/exon/Alasoo_2018_exon_macrophage_naive.permuted.tsv.gz
Resolving ftp.ebi.ac.uk (ftp.ebi.ac.uk)... 193.62.193.138
Connecting to ftp.ebi.ac.uk (ftp.ebi.ac.uk)|193.62.193.138|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 645718 (631K) [application/octet-stream]
Remote file exists.
Length is the size of the file in bytes, so comparing it with the size of your local copy will tell you whether the download is complete.
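To automate that comparison, a minimal sketch (assuming GNU stat and that the downloaded copy sits in the current directory under its original name):
url=http://ftp.ebi.ac.uk/pub/databases/spot/eQTL/sumstats/Alasoo_2018/exon/Alasoo_2018_exon_macrophage_naive.permuted.tsv.gz
remote=$(wget --spider "$url" 2>&1 | awk '/^Length:/ {print $2}')  # size reported by the server
local_size=$(stat -c%s "$(basename "$url")")                       # size of the local file
[ "$remote" = "$local_size" ] && echo complete || echo truncated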
If you want to download any missing parts, rather than merely check for completeness, then take a look at the -c option; from the wget man page:
-c
--continue
Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of
Wget, or by another program. (...)
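For example, re-running the download with -c picks a truncated copy up where it left off instead of starting over:
wget -c http://ftp.ebi.ac.uk/pub/databases/spot/eQTL/sumstats/Alasoo_2018/exon/Alasoo_2018_exon_macrophage_naive.permuted.tsv.gz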
I'm trying to upload data to a Snowflake table using a zip file containing multiple CSV files, but I keep getting the following message:
Unable to copy files into table. Found character '\u0098' instead of
field delimiter ',' File 'tes.zip', line 118, character 42 Row 110,
column "TEST"["CLIENT_USERNAME":1] If you would like to continue
loading when an error is encountered, use other values such as
'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more
information on loading options, please run 'info loading_data' in a
SQL client.
If I skip the errors, some data loads, but it is as if Snowflake is not opening the zip file properly: I just get random characters, as if the zip file had been opened in Notepad.
I tried changing the File Format Compression Method to all the available ones: Auto, Gzip, Deflate, Raw Deflate, Bz2, Brotli, Zstd, and None, getting a different error message each time.
I know my zip file is compressed using the standard Deflate compression method, but when I select this type I get the following error:
Invalid data encountered during decompression for file: 'test.zip',compression type used: 'DEFLATE', cause: 'data error'
The "Auto" method sends the same error message as None
I also tried zip files containing only one file and got the same errors. The only files that loaded correctly were an uncompressed CSV and one compressed with GZ, but I need this to work with a zip file containing multiple CSVs.
A zip file is not a DEFLATE file, even though zip uses deflate internally. All the supported compression methods are single-file compression methods, whereas zip is a file archive, which is why it can hold many files; in that respect it is like a tar.gz, which is also not supported.
Thus you will either need to uncompress the files yourself, in your S3 bucket, or alter your data export tool to conform.
CREATE FILE FORMAT help
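A minimal sketch of the uncompress-it-yourself route, recompressing each CSV individually with gzip (which Snowflake does support); the stage name @my_stage and the local paths are assumptions, not from the question:
# unpack the archive, then gzip each CSV on its own
unzip test.zip -d /tmp/csvs
gzip /tmp/csvs/*.csv
# stage the resulting single-file .gz files (hypothetical stage @my_stage)
snowsql -q "PUT file:///tmp/csvs/*.csv.gz @my_stage"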
In Windows, I am trying to import a large SQL dump file (4.6 GB) into SQL Server 2008. Since it's a large file, I used
sqlcmd -S <SERVER-NAME\INSTANCE> -i C:\<path_to_SQL_file>.sql -o C:\<path_to_output_file_1>.txt
However, the command produces the following error:
Error: Syntax error at line 11302 near command '-' in file C:\.sql
On Mac OS X, sed -n '11302p' C:\<path_to_SQL_file>.sql produces
?????$R?}D?XL?)?_K??h?l????????p?^?'?F1璨?¸??ωN?Q???흞????????+/??I?*5jE?1????f?`?nL_?~E?????^ap??Ht?2???g
?2z7$(f???*??????C?????????A?K?хl?B?#??˞K?
q??z?
??I.?
^ ?ݢ?G??cu?Zc?t?'?&L?W??s???W\|x??^_??PǴb???F???m:RY?ES??-D??L?????n??'
3???+?ecKd?vysEkz???wh~;o7?y??\??i
I tried to inspect the file by splitting it with the split command, but the output is garbled.
Do you think the file is wrongly encoded? How do you think I should proceed in tracking down the error?
The dump file was earlier used by another developer who didn't have any problems with it. Unfortunately, he is on extended vacation for the time being.
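One hedged way to proceed, on the assumption that output like this means the dump is not plain text at all (e.g. compressed, or UTF-16-encoded, which SQL Server tooling often produces): let the Unix file command and a peek at the first bytes identify it:
file <path_to_SQL_file>.sql                  # reports e.g. "gzip compressed data" or "UTF-16 Unicode text"
head -c 16 <path_to_SQL_file>.sql | xxd      # gzip starts with 1f 8b; a UTF-16LE BOM is ff fe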
I am writing a bash script that, among other things, has to create a Hive table and load a CSV file (whose name is not known a priori) into that table.
I have exported the name of the file foo.csv into the environment variable myfile and have tried the command
hive --hiveconf mf=$myfile -e 'set mf; set hiveconf:mf; load data local inpath ${hiveconf:mf} into table mytable'
It returns the error
FAILED: ParseException line 1:23 mismatched input 'foo' expecting StringLiteral near 'inpath' in load statement
I have already tried using the absolute path to the file, and it doesn't work either: if the path is /mypath/foo.csv, the error is
FAILED: ParseException line 1:23 mismatched input '/' expecting StringLiteral near 'inpath' in load statement
Even putting the file name in directly, like this:
hive -e 'load data local inpath foo.csv into table mytable'
doesn't work at all; the error thrown is the same as before.
Does anybody have any idea what is wrong with these commands? I would really appreciate some help, thanks.
The filename should be placed inside single quotes:
load data local inpath 'foo.csv' into table mytable
In your script, you should escape these quote characters so you won't get another parse exception.
Also, look at the Language Manual on loading.
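Applied to the original one-liner, a minimal sketch; \$ keeps bash from expanding ${hiveconf:mf} at the shell level, so Hive performs the substitution itself inside the quoted literal:
hive --hiveconf mf="$myfile" -e "load data local inpath '\${hiveconf:mf}' into table mytable"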
I am brand new to Teradata. I wrote a FastExport script to take some data from a DB and export it to some Excel files.
The script is all good. I just can't run it!
I am running the command fexp < C:\Documents\ScriptName (that is not the actual path)
Where ScriptName is a .txt file containing the script.
I get the error:
"The system cannot find the file specified"
I have tried changing the location of the file and such but always get the same error.
What am I missing here?
I found the answer. The correct command is:
fexp < "C:\Path to file\ScriptName.txt"
The quotes matter because the real path contains spaces, and the .txt extension has to be spelled out; cmd will not guess it for you.