How to import a CSV file into a TDengine database

I have a CSV file and I want to import my data into the TDengine database. Are there any tutorials for importing data?

You can use the following SQL:
INSERT INTO table_name FILE '/tmp/csvfile.csv';
INSERT INTO table_name USING super_table_name TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile.csv';
INSERT INTO table_name_1 USING super_table_name TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile_21001.csv'
            table_name_2 USING super_table_name (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
You can find more details in the TAOS SQL documentation.
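For context, a minimal sketch of how the CSV lines up with such an insert; the super table schema below is made up for illustration, and the CSV columns must match the target table's columns in order, with the timestamp first (e.g. rows like '2021-07-13 14:07:34.630', 10.2):
CREATE STABLE super_table_name (ts TIMESTAMP, current FLOAT) TAGS (location BINARY(64), groupId INT);
INSERT INTO table_name USING super_table_name TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile.csv';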

Related

How to read the headers of a CSV file in a Snowflake stage

I am learning Snowflake. I was trying to read the headers of a CSV file stored in an AWS bucket. I used the metadata fields, which required me to input $1, $2 and so on as column names to obtain the headers (for creating the table to COPY INTO).
is there a better alternative to this?
Statement:
select
Top 100 metadata$filename,
metadata$file_row_number,
t.$1,
t.$2,
t.$3,
t.$4,
t.$5,
t.$6
from
@aws_stage t
where
metadata$filename = 'OrderDetails.csv'

How to store an image in an IBM DB2 database using Python?

I can connect to my IBM DB2 database from a Python file and run all sorts of commands through it, but how do I store an image in a column of my table in the DB2 database? I have created a BLOB-type column in my table.
Why not look at the test cases for ibm_db on GitHub?
They show a test case that uses the PARAM_FILE option of ibm_db.bind_param() to copy the contents of an image file into a BLOB column.
See the test case [here][1].
Although the test(s) may be out of date, the following snippet shows the parameters that work with ibm_db version 3.0.1 (it successfully inserts a jpg file into a BLOB column) via the PARAM_FILE method:
import sys
import ibm_db

# conn is assumed to be an already-open ibm_db connection, e.g.
# conn = ibm_db.connect("DATABASE=...;HOSTNAME=...;PORT=...;UID=...;PWD=...", "", "")
jpg_file = "/home/some_user/Pictures/houston.jpg"  # path to the image file
# The database already contains a table called MY_PICS in the current schema
# with a BLOB column named JPG_CONTENT of appropriate length.
insert_sql = "INSERT INTO my_pics(jpg_content) VALUES(?)"
try:
    stmt = ibm_db.prepare(conn, insert_sql)
    print("Successfully prepared the insert statement")
except Exception:
    print("Failed to prepare the insert statement")
    print(ibm_db.stmt_errormsg())
    ibm_db.close(conn)
    sys.exit(1)
# Link a file name to the parameter marker of the insert statement (target column is a BLOB)
try:
    rc = ibm_db.bind_param(stmt, 1, jpg_file, ibm_db.PARAM_FILE, ibm_db.SQL_BLOB)
    print("Bind returned: " + str(rc))
    print("Successfully bound the filename to the parameter marker")
except Exception:
    print("Failed to bind the input parameter file")
    print(ibm_db.stmt_errormsg(stmt))
    ibm_db.close(conn)
    sys.exit(1)
try:
    ibm_db.execute(stmt)
    print("Successfully inserted the jpg file into the BLOB column")
except Exception:
    print("Failed to execute the insert into the BLOB column")
    print(ibm_db.stmt_errormsg(stmt))
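For reference, a hypothetical DDL for the table that the snippet assumes already exists (the column name follows the snippet; the 2M length is illustrative):
CREATE TABLE my_pics (jpg_content BLOB(2M));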

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my_stage;
list @~;
list @%mytable;
Then in my active SnowSQL session, when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @my_stage pattern = '.*.csv.gz';
REMOVE @my_stage pattern = '.*.json.gz';
//yay you are done!
The PUT command copies the file from your local drive to the stage. You should do the PUT to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
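As a rough sketch of that step, assuming the my_stage stage, the mycsvformat file format, and the my_table table created earlier, and that PUT gzip-compressed the upload to data.csv.gz (its default behaviour):
COPY INTO my_table
  FROM @my_stage/data.csv.gz
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
  ON_ERROR = 'skip_file';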
But in the documentation it is mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
So in this case, without creating a stage, it should load into the default Snowflake stage that is allocated.
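For reference, the implicit table stage quoted above is addressed with the @% prefix rather than the bare table name, so a table-stage variant would look roughly like this (paths and names follow the question):
PUT file:///Users/<user>/Documents/data/data.csv @%my_table;
COPY INTO my_table
  FROM @%my_table
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');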

read file from Azure Blob Storage into Azure SQL Database

I have already tested this design using a local SQL Server Express set-up.
I uploaded several .json files to Azure Storage
In SQL Database, I created an External Data source:
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
(TYPE = BLOB_STORAGE,
LOCATION = 'https://mydatafilestest.blob.core.windows.net/my_dir'
);
Then I tried to query the file using my External Data Source:
select *
from OPENROWSET
(BULK 'my_test_doc.json', DATA_SOURCE = 'MyAzureStorage', SINGLE_CLOB) as data
However, this failed with the error message "Cannot bulk load. The file "prod_EnvBlow.json" does not exist or you don't have file access rights."
Do I need to configure a DATABASE SCOPED CREDENTIAL to access the file storage, as described here?
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql
What else can anyone see that has gone wrong and I need to correct?
OPENROWSET is currently not supported on Azure SQL Database as explained in this documentation page. You may use BULK INSERT to insert data into a temporary table and then query this table. See this page for documentation on BULK INSERT.
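A minimal sketch of that BULK INSERT workaround, assuming the MyAzureStorage data source from the question; the staging table name and the 0x0b terminators (a character assumed not to occur in the file, so the whole JSON document lands in a single row and column) are illustrative:
CREATE TABLE #json_staging (doc NVARCHAR(MAX));
BULK INSERT #json_staging
FROM 'my_test_doc.json'
WITH (DATA_SOURCE = 'MyAzureStorage',
      FIELDTERMINATOR = '0x0b',
      ROWTERMINATOR = '0x0b');
SELECT doc FROM #json_staging;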
Now that OPENROWSET is in public preview, the following works. N.B. the credential is only needed if your blob is not public; I tried it on a private blob with the scoped credential option and it worked. Also note that if you are using a SAS key, make sure you delete the leading ? so the string starts with sv as shown below.
Make sure the blobcontainer/my_test_doc.json section specifies the correct path, e.g. container/file.
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2017****************';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://yourstorage.blob.core.windows.net',
      CREDENTIAL = MyAzureBlobStorageCredential);

DECLARE @json varchar(max);
SELECT @json = BulkColumn
FROM OPENROWSET(BULK 'blobcontainer/my_test_doc.json',
                SINGLE_BLOB, DATA_SOURCE = 'MyAzureBlobStorage') AS j;
SELECT @json;
More detail is provided in these docs.
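As a usage sketch of what you might do with the loaded document, here is the same read followed by OPENJSON; the property names id and name are illustrative, not from the question:
DECLARE @json varchar(max);
SELECT @json = BulkColumn
FROM OPENROWSET(BULK 'blobcontainer/my_test_doc.json',
                SINGLE_BLOB, DATA_SOURCE = 'MyAzureBlobStorage') AS j;
SELECT *
FROM OPENJSON(@json)
     WITH (id int '$.id', name nvarchar(100) '$.name');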

Copying data from one SQLite database to another

I have 2 SQLite databases with common data but with different purposes and I wanted to avoid reinserting data, so I was wondering if it was possible to copy a whole table from one database to another?
You'll have to attach Database X with Database Y using the ATTACH command, then run the appropriate Insert Into commands for the tables you want to transfer.
INSERT INTO X.TABLE SELECT * FROM Y.TABLE;
// "INSERT or IGNORE" if you want to ignore duplicates with same unique constraint
Or, if the columns are not matched up in order:
INSERT INTO X.TABLE(fieldname1, fieldname2) SELECT fieldname1, fieldname2 FROM Y.TABLE;
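A rough end-to-end sketch of that flow, run from inside the database that should receive the data (the file name other.db and the table name my_table are placeholders):
ATTACH DATABASE 'other.db' AS Y;
INSERT INTO my_table SELECT * FROM Y.my_table;
DETACH DATABASE Y;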
Easiest and correct way on a single line:
sqlite3 old.db ".dump mytable" | sqlite3 new.db
The primary key and the column types will be kept.
Consider an example where I have two databases, allmsa.db and atlanta.db. Say allmsa.db has tables for all MSAs in the US and atlanta.db is empty.
Our target is to copy the table atlanta from allmsa.db to atlanta.db.
Steps
sqlite3 atlanta.db (to open the atlanta database)
Attach allmsa.db. This can be done using the command ATTACH '/mnt/fastaccessDS/core/csv/allmsa.db' AS AM;
Note that we give the entire path of the database to be attached.
Check the database list using sqlite> .databases
You should see output like:
seq name file
--- --------------- ----------------------------------------------------------
0 main /mnt/fastaccessDS/core/csv/atlanta.db
2 AM /mnt/fastaccessDS/core/csv/allmsa.db
Now you come to your actual target. Use the command
INSERT INTO atlanta SELECT * FROM AM.atlanta;
This should serve your purpose.
For a one-time action, you can use .dump and .read.
Dump the table my_table from old_db.sqlite
c:\sqlite>sqlite3.exe old_db.sqlite
sqlite> .output mytable_dump.sql
sqlite> .dump my_table
sqlite> .quit
Read the dump into new_db.sqlite, assuming the table does not exist there:
c:\sqlite>sqlite3.exe new_db.sqlite
sqlite> .read mytable_dump.sql
Now you have cloned your table.
To do this for whole database, simply leave out the table name in the .dump command.
Bonus: The databases can have different encodings.
Objective-C code to copy a table from one database to another:
-(void) createCopyDatabase{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDir = [paths objectAtIndex:0];
    NSString *maindbPath = [documentsDir stringByAppendingPathComponent:@"User.sqlite"];
    NSString *newdbPath = [documentsDir stringByAppendingPathComponent:@"User_copy.sqlite"];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    char *error = NULL;
    if ([fileManager fileExistsAtPath:newdbPath]) {
        [fileManager removeItemAtPath:newdbPath error:nil];
    }
    sqlite3 *database;
    //open database
    if (sqlite3_open([newdbPath UTF8String], &database) != SQLITE_OK) {
        NSLog(@"Error to open database");
    }
    NSString *attachQuery = [NSString stringWithFormat:@"ATTACH DATABASE \"%@\" AS aDB", maindbPath];
    sqlite3_exec(database, [attachQuery UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error to Attach = %s", error);
    }
    //Query for copy Table
    NSString *sqlString = @"CREATE TABLE Info AS SELECT * FROM aDB.Info";
    sqlite3_exec(database, [sqlString UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error to copy database = %s", error);
    }
    //Query for copy Table with Where Clause
    sqlString = @"CREATE TABLE comments AS SELECT * FROM aDB.comments WHERE user_name = 'XYZ'";
    sqlite3_exec(database, [sqlString UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error to copy database = %s", error);
    }
}
The easiest way to do this is through SQLiteStudio.
If you don't have it, download it from https://download.cnet.com/SQLiteStudio/3000-10254_4-75836135.html
Steps:
1. Add both the databases.
2. Click the View tab and then Databases.
3. Right-click the table you want to copy and copy it.
4. Right-click the database where you want to paste, and paste the table.
Now you're done.
First scenario: DB1.sqlite and DB2.sqlite have the same table (t1), but DB1 is more "up to date" than DB2. If it's small, drop the table from DB2 and recreate it with the data:
> DROP TABLE IF EXISTS db2.t1; CREATE TABLE db2.t1 AS SELECT * FROM db1.t1;
Second scenario: If it's a large table, you may be better off with an "insert if not exists" type of solution (see the sketch after the setup below). If you have a unique key column it's more straightforward; otherwise you'd need to use a combination of fields (maybe every field), and at some point it's still faster to just drop and re-create the table; it's always more straightforward (less thinking required).
THE SETUP: open SQLite without a DB, which creates a temporary in-memory main database, then attach DB1.sqlite and DB2.sqlite:
> sqlite3
sqlite> ATTACH "DB1.sqlite" AS db1
sqlite> ATTACH "DB2.sqlite" AS db2
and use .databases to see the attached databases and their files.
sqlite> .databases
main:
db1: /db/DB1.sqlite
db2: /db/DB2.sqlite
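Building on that setup, a sketch of the second-scenario approach; it assumes t1 in db2 has a UNIQUE or PRIMARY KEY constraint, so existing rows are skipped rather than duplicated:
INSERT OR IGNORE INTO db2.t1 SELECT * FROM db1.t1;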
I needed to move data from a SQL Server Compact database to SQLite, so using SQL Server 2008 you can right-click on the table and select 'Script Table To' and then 'Data to Inserts'. Copy the insert statements, remove the 'GO' statements, and the script executed successfully when applied to the SQLite database using the 'DB Browser for SQLite' app.
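For illustration, the generated script boils down to plain INSERT statements of roughly this shape (table and column names are hypothetical), which SQLite can execute once the GO batch separators are removed:
INSERT INTO my_table (id, name) VALUES (1, 'first row');
INSERT INTO my_table (id, name) VALUES (2, 'second row');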
If you use DB Browser for SQLite, you can copy the table from one db to another with the following steps:
Open two instances of the app and load the source db and target db side by side.
If the target db does not have the table, "Copy Create Statement" from the source db and then paste the sql statement in "Execute SQL" tab and run the sql to create the table.
In the source db, export the table as a CSV file.
In the target db, import the CSV file into a table with the same table name. The app will ask whether you want to import the data into the existing table; click Yes. Done.
