Import and export schema in Cassandra

How do I import and export a schema from Cassandra or from the cqlsh prompt?

To export a keyspace schema:
cqlsh -e "DESC KEYSPACE user" > user_schema.cql
To export the entire database schema:
cqlsh -e "DESC SCHEMA" > db_schema.cql
To import a schema, open a terminal in the directory containing 'user_schema.cql' (or 'db_schema.cql'), or specify the full path, and start the cqlsh shell. Then use the following command to import the keyspace schema:
source 'user_schema.cql'
To import full database schema:
source 'db_schema.cql'
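For example, a typical import session might look like this (the directory is a placeholder):
$ cd /path/to/schemas
$ cqlsh
cqlsh> SOURCE 'user_schema.cql'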

Everything straight from the command line. No need to go into cqlsh.
Import schema (.cql file):
$ cqlsh -e "SOURCE '/path/to/schema.cql'"
Export keyspace:
$ cqlsh -e "DESCRIBE KEYSPACE somekeyspace" > /path/to/somekeyspace.cql
Export database schema:
$ cqlsh -e "DESCRIBE SCHEMA" > /path/to/schema.cql

If using cassandra-cli, you can use the 'show schema;' command to dump the whole schema. You can restrict to a specific keyspace by running 'use keyspace;' first.
You can store the output in a file, then import with 'cassandra-cli -f filename'.
If using cqlsh, you can use the 'describe schema' command. You can restrict to a keyspace with 'describe keyspace keyspace'.
You can save this to a file then import with 'cqlsh -f filename'.
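As a sketch, the full cqlsh round trip between two nodes might look like this (host and keyspace names are placeholders):
$ cqlsh source_host -e "DESCRIBE KEYSPACE mykeyspace" > mykeyspace.cql
$ cqlsh target_host -f mykeyspace.cql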

For anyone who comes here in the future: to get just the DDL for the keyspace "myschema" on the server "CassandraHost":
echo -e "use myschema;\nDESCRIBE KEYSPACE;\n" | cqlsh CassandraHost > mySchema.cdl
and you can use the following to import just the DDL (without data):
cqlsh CassandraNEWhost -f mySchema.cdl

With authentication:
cqlsh -u <user-name> -e "DESC KEYSPACE user" > user_schema.cql
You will be prompted for the password.
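If you need this non-interactive (e.g. in cron), cqlsh also accepts the password on the command line via -p; note this exposes it in the process list and shell history, so prefer the prompt where you can:
$ cqlsh -u <user-name> -p <password> -e "DESC KEYSPACE user" > user_schema.cql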

Related

Problem exporting from mongo and then importing to SQL Server

Question: how do I export from mongo in a form that I can import into SQL Server, given that I use $unwind?
I need $unwind, which means I can't use mongoexport.exe. mongo.exe gives different JSON output, as shown below, which I can't load into SQL Server. I would export as CSV, but my data includes commas. I would use $out to first copy my data to a new collection and then run mongoexport, but I'm querying a production server in the cloud where I only have read access.
To illustrate my problem, I created a collection with one record that has a date field "edited_on". You can see here that the mongoexport output starts with [{"_id":{"$oid"... while the mongo output starts with { "_id" : ObjectId(....
*** MONGOEXPORT
The command:
mongoexport --quiet --host localhost:27017 --db "zzz" -c "Test_Structures" --fields edited_on --type json --jsonArray --out C:\export_test.json
The output:
[{"_id":{"$oid":"5aaa1d85b8078250f1000c0e"},"edited_on":{"$date":"2018-03-15T07:15:17.583Z"}}]
I can import this data into SQL Server with OPENROWSET along with OPENJSON.
Described here: https://www.mssqltips.com/sqlservertip/5295/different-ways-to-import-json-files-into-sql-server/
*** MONGO
The command:
mongo localhost/UW --quiet -eval "db.Test_Structures.aggregate( { $project: { _id: 1 , edited_on: 1} } )" > C:\aggregate_test.json
The output:
{ "_id" : ObjectId("5aaa1d85b8078250f1000c0e"), "edited_on" :
ISODate("2018-03-15T07:15:17.583Z") }
My coworker answered my question. Use replace() to strip the text in the JSON file that causes problems, as follows:
DECLARE @JSON varchar(max)
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\aggregate_test.json', SINGLE_CLOB) as j
SET @JSON = replace(replace(replace(@JSON,'ObjectId(',''),'ISODate(',''),'")','"')
SELECT * FROM OPENJSON (@JSON) WITH (...)

Issue while importing from SQL Server into Hive using Apache Sqoop

When I run the following sqoop command from $SQOOP_HOME/bin, it works fine:
sqoop import --connect "jdbc:sqlserver://ip_address:port_number;database=database_name;username=sa;password=sa#Admin" --table $SQL_TABLE_NAME --hive-import --hive-home $HIVE_HOME --hive-table $HIVE_TABLE_NAME -m 1
But when I run the same command in a loop over different databases from a bash script, as follows:
while IFS='' read -r line || [[ -n $line ]]; do
  DATABASE_NAME=$line
  sqoop import --connect "jdbc:sqlserver://ip_address:port_number;database=$DATABASE_NAME;username=sa;password=sa#Admin" --table $SQL_TABLE_NAME --hive-import --hive-home $HIVE_HOME --hive-table $HIVE_TABLE_NAME -m 1
done < "$1"
I am passing the database names to my bash script in a text file as a parameter. The Hive table is always the same because I want to append the data from all the databases into one Hive table.
For the first two or three databases it works fine; after that it starts giving the following errors:
15/06/25 11:41:06 INFO mapreduce.Job: Job job_1435124207953_0033 failed with state FAILED due to:
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: The MapReduce job has already been retired. Performance
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: counters are unavailable. To get this information,
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: you will need to enable the completed job store on
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: the jobtracker with:
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.active = true
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.hours = 1
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: A jobtracker restart is required for these settings
15/06/25 11:41:06 INFO mapreduce.ImportJobBase: to take effect.
15/06/25 11:41:06 ERROR tool.ImportTool: Error during import: Import job failed!
I have already restarted my multi-node Hadoop cluster after adding the two parameters above to mapred-site.xml, i.e.
mapreduce.jobtracker.persist.jobstatus.active = true
mapreduce.jobtracker.persist.jobstatus.hours = 1
I am still facing the same problem. As I have just started learning Sqoop, any help will be appreciated.

I want to handle the exception when the database is not connected and exit the script (UNIX)

I have written a script which connects to a database and runs a query.
I want to handle the exception if the database connection fails.
Below is my sample code:
#!/bin/sh
export ORACLE_HOME=/opt/cia/oracle-client/product/11.2.0/client_1
export NLS_LANG=AMERICAN_AMERICA.UTF8
export PATH=$ORACLE_HOME/bin:$PATH
export HOME_DIR=/home/aytripat
export OBJECT_TYPE=$HOME_DIR/object_type.txt
export OBJECT_LIST=$HOME_DIR/object_list.txt
$ORACLE_HOME/bin/sqlplus -s username/password@dwebpre1 <<EOF
set feedback off heading off pages 4000
spool ${OBJECT_TYPE}
select distinct decode(object_type, 'PACKAGE', 'PACKAGE_SPEC', 'DATABASE LINK', 'DB_LINK', 'JOB', 'PROCOBJ', replace(object_type, ' ', '_')) from user_objects where object_type not in ('TABLE PARTITION', 'LOB') order by 1;
spool off
exit
EOF
Now, if sqlplus -s username/password@dwebpre1 <<EOF
does not connect to the database, I want the script to raise the exception and stop right there.
Please help.
Check the $? return value of your connection command.
Based on its value, handle the "exception" cases.
@Yushi/@deimus
What he meant is to add an "if" to check the return code, for example:
if [ $? -eq 0 ]; then
  echo "success return"
else
  echo "error"
fi
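Putting the two together, a minimal sketch for this script might look like the following. It assumes the connect string from the question; WHENEVER SQLERROR / WHENEVER OSERROR are standard SQL*Plus directives that make sqlplus exit with a non-zero code on failure, and -L stops it from re-prompting after a failed logon:
$ORACLE_HOME/bin/sqlplus -s -L username/password@dwebpre1 <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
select 1 from dual;
exit
EOF
if [ $? -ne 0 ]; then
  echo "could not connect to database, aborting" >&2
  exit 1
fi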

MySQLdump no data export

I want to run mysqldump with the table definitions and table data every day. For this I configured a cron job with this command:
mysqldump -u user -pxxxxx site_DB | gzip > backup/site/site_t_$(date | awk {'print $1""$2""$3"_"$4'}).sql.gz
but this only exports the table definitions. What is the correct command to export the data as well? Thanks
By default mysqldump exports the data too - you have to pass the --no-data flag to make it export only the structure. Since yours IS skipping the data by default, that means "no-data" is set in one of your MySQL option files; see the check below.
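A quick way to verify: --print-defaults makes any MySQL client program print the options it would pick up from the option files, without actually running, so you can see whether no-data sneaks in:
$ mysqldump --print-defaults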
I was having the same issue. Re-run the dump command with '--databases' as follows:
mysqldump -u user -pxxxxx --databases site_DB . . . .

Clone MySQL database

I have a database on a server with 120 tables.
I want to clone the whole database under a new database name, with the data copied.
Is there an efficient way to do this?
$ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
$ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql
mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>
Like the accepted answer but without .sql files:
mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>
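Note that with the piped variants the destination database has to exist before the import; assuming the same credentials, you can create it first with mysqladmin:
$ mysqladmin -u <USERNAME> -p<PASS> create destdb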
In case you use phpMyAdmin
Select the database you wish to copy (by clicking on the database from the phpMyAdmin home screen).
Once inside the database, select the Operations tab.
Scroll down to the section where it says "Copy database to:"
Type in the name of the new database.
Select "structure and data" to copy everything. Alternately, you can select "Structure only" if you want the columns but not the data.
Check the box "CREATE DATABASE before copying" to create a new database.
Check the box "Add AUTO_INCREMENT value."
Click on the Go button to proceed.
There is a mysqldbcopy tool in the MySQL Utilities package.
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html
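For illustration, the invocation looks roughly like this (per the linked docs; user, password and database names are placeholders):
$ mysqldbcopy --source=user:pass@localhost --destination=user:pass@localhost sourcedb:destdb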
If you want to make sure it is an exact clone, the receiving database needs to be entirely cleared or dropped first. That way the new db contains only the tables in your import file and nothing else. Otherwise, the receiving database could retain tables that weren't specified in your import file.
An example based on the prior answers:
DB1 == tableA, tableB
DB2 == tableB, tableC
DB1 imported to -> DB2
DB2 == tableA, tableB, tableC // a true clone should not contain tableC
The change is easy with --databases and --add-drop-database (see the MySQL docs). This adds a DROP DATABASE statement to the dump, so your new database will be an exact replica:
$ mysqldump -h $ip -u $user -p$pass --databases $dbname --add-drop-database > $file.sql
$ mysql -h $ip $dbname -u $user -p$pass < $file.sql
Of course, replace the $ variables, and as always there is no space between -p and the password. For extra security, strip $pass from the command (leaving just -p) so mysql prompts for the password instead.
// assumes an existing mysqli connection, e.g.: $mysqli = new mysqli('localhost', 'user', 'pass');
$newdb = (date('Y')-1); // the clone is named after last year, e.g. "2024"
$mysqli->query("DROP DATABASE `".$newdb."`;");
$mysqli->query("CREATE DATABASE `".$newdb."`;");
// list every table in the source schema "rds"
$query = "
SELECT
TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA LIKE 'rds'
";
$result = $mysqli->query($query)->fetch_all(MYSQLI_ASSOC);
// recreate each table's structure, then copy its rows
foreach ($result as $val) {
    echo $val['TABLE_NAME'].PHP_EOL;
    $mysqli->query("CREATE TABLE `".$newdb."`.`".$val['TABLE_NAME']."` LIKE rds.`".$val['TABLE_NAME']."`");
    $mysqli->query("INSERT `".$newdb."`.`".$val['TABLE_NAME']."` SELECT * FROM rds.`".$val['TABLE_NAME']."`");
}
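One caveat with this approach: CREATE TABLE ... LIKE copies column definitions and indexes, but not foreign key definitions or triggers, so it is only a faithful clone for schemas that don't rely on those.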
