PostgreSQL: importing a CSV file in pgAdmin 4

Is it possible to use a stored procedure to import a CSV file into PostgreSQL, or is the simple COPY command enough on its own? I have about 10 CSV files to import, all with the same column names, and my instructor wants us to use a stored procedure.
CREATE TABLE Sales(
OrderID INTEGER,
Product VARCHAR(100),
Quantity INTEGER,
PriceEach NUMERIC(10,2),
OrderDate TIMESTAMP,
PurchaseAddress VARCHAR(100)
);
--COPY the records from CSV
COPY Sales(OrderID,Product,Quantity,PriceEach,OrderDate,PurchaseAddress)
FROM 'C:\sampledb\Sales.csv'
DELIMITER ','
CSV HEADER;
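One way to satisfy the stored-procedure requirement is to wrap the COPY in a PL/pgSQL procedure that loops over a list of file paths. This is only a sketch: it assumes PostgreSQL 11+ (for CREATE PROCEDURE), a role that is allowed to COPY FROM server-side files, and hypothetical file names.
--Sketch: run a server-side COPY for each file path in the array.
--Assumes PostgreSQL 11+ and superuser or pg_read_server_files rights.
CREATE OR REPLACE PROCEDURE import_sales_files(file_paths text[])
LANGUAGE plpgsql
AS $$
DECLARE
    f text;
BEGIN
    FOREACH f IN ARRAY file_paths
    LOOP
        EXECUTE format(
            'COPY Sales(OrderID,Product,Quantity,PriceEach,OrderDate,PurchaseAddress)
             FROM %L DELIMITER '','' CSV HEADER', f);
    END LOOP;
END;
$$;
--Example call (file names are placeholders):
CALL import_sales_files(ARRAY['C:\sampledb\Sales_Jan.csv', 'C:\sampledb\Sales_Feb.csv']);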

Related

Insert data from CSV file to Cassandra table with auto generated UUIDs and timestamp

I want to insert data from a CSV file into a Cassandra table with auto-generated UUIDs and timestamps.
Cassandra Schema:
CREATE TABLE IF NOT EXISTS mytable (
myid text,
mytype text,
mymodels set<text>,
myname text,
created bigint,
PRIMARY KEY (myid, mytype)
) WITH CLUSTERING ORDER BY (mytype ASC);
CSV File:
mytype|mymodels|myname
"type1"|[model1,model2,model3]|"name1"
"type2"|[model1,model4,model5]|"name2"
I want to generate the UUIDs and timestamps on the fly.
Tried COPY command from here with csv file as:
myid|mytype|mymodels|myname|created
uuid()|"type1"|[model1,model2,model3]|"name1"|blobAsBigint(timestampAsBlob(now()))
uuid()|"type2"|[model1,model4,model5]|"name2"|blobAsBigint(timestampAsBlob(now()))
That doesn't work. Is there any alternative?
If not, is it better to manually hard-code the UUIDs and timestamps in the CSV file (the ugly way), to write an INSERT statement for each record in the file and execute them, or to use Cassandra's BATCH feature?
P.S. Record count is around 150 to 200
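For the hand-written INSERT option, a single statement per CSV row can let Cassandra generate both values. This is only a sketch: it assumes myid is declared as uuid rather than text (uuid() returns a uuid value) and Cassandra 2.2+ for toUnixTimestamp().
--Sketch: one INSERT per CSV row; uuid() and toUnixTimestamp(now()) are generated server-side.
INSERT INTO mytable (myid, mytype, mymodels, myname, created)
VALUES (uuid(), 'type1', {'model1', 'model2', 'model3'}, 'name1', toUnixTimestamp(now()));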

Importing a CSV file which contains French Characters

I have a CSV file which has French Characters in some of the fields.
But when I import this data into the DB, I do not see the French characters; some other special characters show up instead.
Query I am using to import the .csv file is as follows:
--Create Table
Create Table A_test (A_1 VARCHAR(100))
--Bulk Import .csv file with ANSI encoding
BULK INSERT A_Test
FROM 'C:\A_Test.csv'
WITH
( DataFileType = 'widechar',
ROWTERMINATOR ='\n'
);
--Sample Data in C:\A_Test.csv file
Le vieux château
Une fête
Le dîner
L'hôtel
Could anyone help me on this?
You can alter the collation of the affected columns by running the following code (I just made up the column name and datatype):
ALTER TABLE dbo.a_test
ALTER COLUMN somecolumn varchar(100) COLLATE French_CI_AS NOT NULL;
Also you could create the original table with the relevant columns pre-collated:
CREATE TABLE dbo.a_test
(
[somecolumn] varchar(100) COLLATE French_CI_AS NOT NULL
)
BULK INSERT like this:
BULK INSERT a_test from 'C:\etc.txt' WITH (DATAFILETYPE = 'widechar')

Importing a txt file into SQL Server with a where clause

I have a .txt file which is 6.00 GB. It is a tab-delimited file so when I try to load it into SQL Server, the column delimiter is tab.
I need to load that .txt file into the database, but I don't need all the rows from the 6.00 GB file. I need to be able to use a condition like
select *
into <my table>
where column5 in ('ab', 'cd')
but this is a text file, and I am not able to load it into the DB with that condition.
Can anyone help me with this?
Have you tried the BULK INSERT command? Take a look at this solution:
--Create temporary table
CREATE TABLE #BulkTemporary
(
Id int,
Value varchar(10)
)
--BULK INSERT has no WHERE clause
BULK INSERT #BulkTemporary FROM 'D:\Temp\File.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
--Filter results
SELECT * INTO MyTable FROM #BulkTemporary WHERE Value IN ('Row2', 'Row3')
--Drop temporary table
DROP TABLE #BulkTemporary
Hope this helps.
Just do a BULK INSERT into a staging table and from there move the data you actually want into a production table. The WHERE clause is for doing something based on a specific condition inside SQL Server, not for loading data into SQL Server.

BULK INSERT import text file

When I import a CSV or text file and bulk insert it into my database, the process successfully adds all records to the table.
My problem is that the inserted strings are Arabic, which appear as symbols in my database table. How can I solve this problem?
Insert using query
You need to choose an Arabic collation for your varchar/char columns or use Unicode (nchar/nvarchar).
CREATE TABLE MyTable
(
MyArabicColumn VARCHAR(100) COLLATE Arabic_CI_AI_KS_WS,
MyNVarCharColumn NVARCHAR(100)
)
Both columns should work.
Bulk Insert from file
This article explains how to bulk insert unicode characters.
Test Table
USE AdventureWorks2012;
GO
CREATE TABLE myTestUniCharData (
Col1 smallint,
Col2 nvarchar(50),
Col3 nvarchar(50)
);
Bulk Insert
DATAFILETYPE='widechar' allows the use of Unicode character format when bulk importing data.
USE AdventureWorks2012;
GO
BULK INSERT myTestUniCharData
FROM 'C:\myTestUniCharData-w.Dat'
WITH (
DATAFILETYPE='widechar',
FIELDTERMINATOR=','
);
GO
SELECT Col1,Col2,Col3 FROM myTestUniCharData;
GO
