Import dump.sql file into PostgreSQL database

I am new to PostgreSQL. I have a file named dump.sql and I want to import it into my PostgreSQL database. I created the database with CREATE DATABASE database_name; and then ran psql database_name < /Downloads/web-task/dump.sql. After running this command it shows no output, so I assume it did not import anything from dump.sql. How can I import this file into my PostgreSQL DB?

We determined in chat that you were trying to run the import from within the psql prompt, which couldn't work: the psql database_name < dump.sql form has to be run from the operating-system shell. The data is now imported.
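For anyone hitting the same problem, a minimal sketch of both ways to load the dump, using the database name and path from the question. From the operating-system shell:
createdb database_name
psql database_name < /Downloads/web-task/dump.sql
Or, from inside an existing psql session connected to database_name:
\i /Downloads/web-task/dump.sql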

Related

How to import an Oracle DB .dmp file using DBeaver?

I'm currently trying to import an Oracle DB .dmp (dump) file into my Oracle DB using DBeaver but have trouble doing so.
The Oracle DB in question is running in a docker container. I successfully connected to this Oracle database with DBeaver, and can thus browse the database using DBeaver.
Currently however, the DB is empty. That's where the .dmp file comes in.
I want to import this .dmp file into my database, under a certain schema, but I cannot seem to do this. The dump file is named something like 'export.dmp' and is around 16 MB in size.
I'd like to import the data from the .dmp file to be able to browse the data to get familiar with it, as similar data will be stored in our own database.
I looked online but was unable to get an answer that works for me.
I tried using DBeaver but I don't seem to have the option to import or restore a DB via a .dmp file. At best, DBeaver proposes to import data using a .CSV file. I also downloaded the Oracle tool SQLDeveloper, but I can't manage to connect to my database in the docker container.
Online there is also talk of an import / export tool that supposedly can create these .dmp files and import them, but I'm unsure how to get this tool and whether that is the way to do it.
If so, I still don't understand how I can get to browse the data in DBeaver.
How can I import and browse the data from the .dmp file in my Oracle DB using DBeaver?
How to find Oracle datapump dir
presumably set to /u01/app/oracle/admin/<mydatabase>/dpdump on your system
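If you want to confirm the actual path instead of assuming it, you can query the directory object from any SQL client connected as a privileged user:
SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';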
How to copy files from host to docker container
docker cp export.dmp container_id:/u01/app/oracle/admin/<mydatabase>/dpdump/export.dmp
How do I get into a Docker container's shell
docker exec -it <mycontainer> bash
How to import an Oracle database from dmp file
If it was exported using expdp, then start the import with impdp:
impdp <username>/<password> dumpfile=export.dmp full=y
It will output the log file in the same default DATA_PUMP_DIR directory in the container.
Oracle has two utilities to import dumps, the legacy IMP (Import) utility and IMPDP (Data Pump Import). With IMP you can't use database directories and have to specify the file location yourself; IMPDP, on the other hand, requires a database directory.
Having said that, you can't import Oracle export dumps using DBeaver; you have to run IMP or IMPDP from the OS.
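As a hedged sketch of the remaining steps inside the container (the directory name my_dmp_dir and the <...> placeholders are illustrative, not values from the question): point a directory object at the folder the dump was copied to, grant access, then run impdp, optionally remapping the exported schema into the schema you want to browse in DBeaver.
sqlplus / as sysdba
CREATE OR REPLACE DIRECTORY my_dmp_dir AS '/u01/app/oracle/admin/<mydatabase>/dpdump';
GRANT READ, WRITE ON DIRECTORY my_dmp_dir TO <username>;
exit
impdp <username>/<password> directory=my_dmp_dir dumpfile=export.dmp remap_schema=<source_schema>:<target_schema>
After the import finishes, refresh the connection in DBeaver and the imported tables should appear under the target schema.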

SQL Server - Command Line utility for exporting and importing of entire database

I am looking for a command line utility to export and import an entire SQL Server database. I am looking to automate the process of moving data from source database to destination database given the credentials.
This would be similar to exp and imp command for Oracle.
I have already looked at bcp and the SQL Server Import and Export Wizard.
Could someone point me to such a utility?
I haven't found one, if it even exists. I typically script up a PowerShell function like this one to serve the purpose:
Export with PowerShell
You can then call the script from the command prompt and even add parameters to export by table, database, etc.
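As a rough illustration of the idea (this is not the script from the link; the server, database, and path names are placeholders, and it assumes the SqlServer PowerShell module is installed), a function can wrap a native BACKUP DATABASE call via Invoke-Sqlcmd:
function Export-Database {
    param($Server, $Database, $BackupFile)
    # Take a full backup of the given database to the given file path
    Invoke-Sqlcmd -ServerInstance $Server -Query "BACKUP DATABASE [$Database] TO DISK = N'$BackupFile' WITH INIT"
}
Export-Database -Server "SOURCESERVER" -Database "MyDb" -BackupFile "C:\backups\MyDb.bak"
A matching RESTORE DATABASE ... FROM DISK call against the destination server completes the move.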

Mean Stack - storing of data

I want to build a website about books using the MEAN stack, so I need to store the contents of the books. I don't know where to store all that content: in the database or somewhere else?
The easiest way is to import a CSV file. MongoDB can import CSV files directly:
In the following example, mongoimport imports the CSV-formatted data in the /opt/backups/contacts.csv file into the collection contacts in the users database on the MongoDB instance running on localhost, port 27017.
Specifying --headerline instructs mongoimport to determine the name of the fields using the first line in the CSV file.
mongoimport --db users --collection contacts --type csv --headerline --file /opt/backups/contacts.csv
Source:
https://docs.mongodb.com/manual/reference/program/mongoimport/
See also:
How to use mongoimport to import csv
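Once the import finishes, a quick check from the mongo shell (database and collection names as in the example above) confirms the documents are there:
mongo users --eval "db.contacts.findOne()"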

Postgres -- Simple batch file to export CSV

Hi and thanks in advance.
I am currently exporting from my Postgres database via the psql shell with:
\COPY "Accounts" TO 'C:\Users\admin\Desktop\Accounts.csv' CSV HEADER;
This works fine, but I want to be able to double-click a batch file (.cmd or .bat) saved on my desktop that 1) logs into the database and 2) exports the CSV.
That way I don't have to go into the psql shell every time. Please help; I did google, but Postgres resources are few.
Because the comments above are limited in their length and formatting, I am sharing some basic research that might get you started:
Using a .pgpass file
PowerShell connect to Postgres DB
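To make this concrete, here is a hedged sketch of the two files involved; the psql.exe install path, database name, and user are assumptions to adjust, and the password is best kept in %APPDATA%\postgresql\pgpass.conf rather than in the script.
export_accounts.sql (saved next to the batch file):
\COPY "Accounts" TO 'C:\Users\admin\Desktop\Accounts.csv' CSV HEADER;
export_accounts.cmd:
@echo off
rem Adjust the path to psql.exe for your installed PostgreSQL version
"C:\Program Files\PostgreSQL\13\bin\psql.exe" -h localhost -p 5432 -U postgres -d mydatabase -f "%~dp0export_accounts.sql"
pause
Double-clicking export_accounts.cmd then logs in (using the .pgpass entry) and writes the CSV to the desktop.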

How to import data into Hive warehouse from SQL Server 2014 (Unicode) for specific schema

I want to import data from SQL Server and query it from hive.
I created a VirtualBox using cloudera template and also started reading its tutorial.
I am successfully able to import data from SQL Server using sqoop as avro files and then create table in hive and import data from avro file. Then query it from hive.
But Sqoop's import-all-tables command only imports tables from the dbo schema.
What if I want to import tables from a dw schema as well? I tried to use the import command to import a specific table that exists in the dw schema, but that doesn't work either.
Any idea how to import data from SQL Server using Sqoop as Avro for tables outside the dbo schema? Or how to import data from SQL Server for schemas other than dbo and load it directly into Hive?
Download JDBC driver and copy it to sqoop directory
$ curl -L 'http://download.microsoft.com/download/0/2/A/02AAE597-3865-456C-AE7F-613F99F850A8/sqljdbc_4.0.2206.100_enu.tar.gz' | tar xz
$ sudo cp sqljdbc_4.0/enu/sqljdbc4.jar /var/lib/sqoop/
Import the table from SQL Server using Sqoop (note: there must be no space after --password=):
sqoop import --driver="com.microsoft.sqlserver.jdbc.SQLServerDriver" \
  --connect="jdbc:sqlserver://sqlserver;database=databasename;username=username;password=passwordofuserprovidedinusername" \
  --username=username --password=passwordofuserprovidedinusername \
  --table="schemaname.tablename" --split-by=primarykeyoftable \
  --compression-codec=snappy --as-avrodatafile \
  --warehouse-dir=/user/hive/warehouse/tablename
Verify that the table imported properly
hadoop fs -ls /user/hive/warehouse
ls -l *.avsc
Create a new directory and give it appropriate permissions
sudo -u hdfs hadoop fs -mkdir /user/examples
sudo -u hdfs hadoop fs -chmod +rw /user/examples
hadoop fs -copyFromLocal ~/*.avsc /user/examples
Start Hive
hive
Create an external table in Hive over the Sqoop-imported data and Avro schema
CREATE EXTERNAL TABLE tablename
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs:///user/hive/warehouse/tablename'
TBLPROPERTIES ('avro.schema.url'='hdfs://quickstart.cloudera/user/examples/sqoop_import_schemaname_tablename.avsc');
Note: when copying the command, the single quotes may be converted to curly quotes, so make sure they remain straight quotes. There should not be any spaces in paths or filenames.
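Once the external table exists, a simple query at the Hive prompt (using the same placeholder tablename) confirms the Avro data is readable:
SELECT * FROM tablename LIMIT 10;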
