DB backup in cron is creating a zero-byte file

I have set up a cron job on my GoDaddy server to take a DB backup. For testing purposes, I run the cron job every minute. The command is:
mysqldump tuniv_results > /home/username/public_html/DB-VVS/tuniv_results.sql
In my DB-VVS folder one file, tuniv_results.sql, is created, but it is zero bytes. Could you please let me know why it is not being created properly?
Thanks in advance.
------------UPDATE-------------------
$user="****";
$password="****";
$database="*****";
$dumpCommand='/usr/bin/mysqldump';
$dumpCommand.=" -e -f -h <ipaddress> -u$user -p$password";
$dumpCommand.=" $database";
$dumpCommand.=" > bekap.sql";
$results=$dumpCommand;
exec($dumpCommand);
echo "result: ".$results;
I created a file in the root folder and put the absolute path of that file in the Command text field, as /home/username/cronfile.php. But in the root folder there is no file like bekap.sql. Please let me know what might be the issue.

Try this one:
$user="*********";
$password="*****";
$database="*********";
$dumpCommand='/usr/bin/mysqldump';
$dumpCommand.=" -e -f -h host.name.com -u$user -p$password";
$dumpCommand.=" $database";
$dumpCommand.=" > bekap.sql";
$results = array();
// exec() fills $results with the command's stdout and $returnVar with its exit code
exec($dumpCommand, $results, $returnVar);
echo "result: ".$returnVar;
Another solution:
I think this will help you.
Open terminal and type:
sudo tcsh
pico /etc/crontab
or
nano /etc/crontab
Then add one of the following lines, depending on your situation. This schedules the backup at 1 am every day.
Remote Host Backup, with mysqldump on the PATH:
0 1 * * * mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Remote Host Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Local Host mysql Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between -p and the password or between -u and the username; replace root with a correct database username.)
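A zero-byte dump file usually means the shell created the file through the > redirection but mysqldump exited with an error before writing anything, typically because of missing credentials, a wrong host, or an unknown database (the original command, for instance, passes no -u/-p options at all). Redirecting stderr to a log file makes the actual error visible; a sketch using the same placeholders as above, where dump-error.log is just an example name:
0 1 * * * /usr/local/mysql/bin/mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql 2> /path/to/directory/dump-error.log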

Related

Export a postgresql database

I want to export a PostgreSQL database named "kd" with all roles, tablespaces, etc. I run the command pg_dumpall -U sce -h localhost -p 5450 -d kd > /tmp/db.sql and get the error:
pg_dumpall: missing "=" after "kd5" in connection info string
When I run the command pg_dumpall -U sce -h localhost -p 5450 > /tmp/db.sql instead, I get the error:
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid ORDER BY 2
How can I fix this problem?
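Two separate problems seem to be at work here, so treat the following only as a sketch. pg_dumpall's -d option expects a full connection string (e.g. "dbname=kd"), not a bare database name, which is what triggers the connection info string complaint, and in any case pg_dumpall always dumps the whole cluster rather than a single database. The second failure is typically a permissions issue: pg_dumpall has to read pg_authid, which requires a superuser, so the sce role is probably not allowed to. One common split, assuming a superuser role named postgres exists on the cluster:
# dump just the kd database with pg_dump
pg_dump -U sce -h localhost -p 5450 kd > /tmp/kd.sql
# dump the cluster-wide objects (roles, tablespaces) as a superuser
pg_dumpall -U postgres -h localhost -p 5450 --globals-only > /tmp/globals.sql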

Crontab -e task not running but works without cron

I am trying to make a backup of my database on Debian and put it in a specific folder once a day.
I use this:
mysqldump -u mysqlacc -pMYPASSWORD mytable --single-transaction
--quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+%b_%d_%Y_%H_%M_%S").sql
and it works well; the file is put in the respective folder. But once I put this line in crontab -e, it won't run:
0 17 * * * mysqldump -u mysqlacc -pMYPASSWORD mytable
--single-transaction --quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+%b_%d_%Y_%H_%M_%S").sql
Any idea what seems to be wrong?
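One detail that commonly breaks exactly this kind of entry: in a crontab, an unescaped % is special (cron cuts the command at the first % and feeds the rest to it as standard input), so the $(date "+%b_%d_%Y_%H_%M_%S") part never reaches the shell intact. Escaping every % with a backslash is usually enough; a sketch of the same line with that applied:
0 17 * * * mysqldump -u mysqlacc -pMYPASSWORD mytable --single-transaction --quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+\%b_\%d_\%Y_\%H_\%M_\%S").sql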

Error while using NZload

I am trying to use nzload to load a file residing in a Unix directory into an NZ database, but I keep getting the following error:
nzsql "nzload -u $NZ_USER -pw $NZ_PASSWORD -host $NZ_HOST -db $NZ_DATABASE -df -lf log.txt -bf err.txt"
nzsql: database name exceeds limit
My database name is only 18 bytes.
Where am I going wrong? Is there a workaround for this?
You are mixing two commands in one; nzsql and nzload are two different commands. Please use the following command:
[user@domain dir]# nzload -u $NZ_USER -pw $NZ_PASSWORD -host $NZ_HOST -db $NZ_DATABASE
-df your_file_name -lf log.txt -bf err.txt
Hope this will help.

Automate truncate/copy of table data

Every week, I have to run a script that truncates a bunch of tables. Then I use the export data task to move the data to another server (same database name).
The servers aren't linked, I can't save the export job, and my permissions/settings are limited by the DBA (I am an admin on the databases). I have Windows authentication only on both servers. The servers are different versions (2005/2008).
My question is: is there a way to automate this with my limited ability to modify the servers? Perhaps using PowerShell?
Selecting all these tables and stuff in the export wizard week after week is a pain.
If you have access to the SQL Server command-line tools (bcp and sqlcmd), try something like this from a different system.
C:\> bcp prod.dbo.[Table] out ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -T -c > C:\Temp\ExportImport.log
C:\> sqlcmd -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -Q "Use Prod;TRUNCATE TABLE [Table];" >> C:\Temp\ExportImport.log
C:\> bcp prod.dbo.[Table] in ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -T -c >> C:\Temp\ExportImport.log
You can use dbatools:
$splat = @{
    SqlInstance         = '{source instance}'
    Database            = 'tempdb'
    Destination         = '{dest instance}'
    DestinationDatabase = 'tempdb'
    Table               = 'table1' # you can provide a list of tables
    AutoCreateTable     = $true
    Truncate            = $true
}
Copy-DbaDbTableData @splat
If you don't have dbatools: https://dbatools.io/getting-started/

mysqldump puts CREATE on the first line

Here's my full bash script:
#!/bin/bash
logs="$HOME/sitedb_backups/log"
mysql_user="user"
mysql_password="pass"
mysql=/usr/bin/mysql
mysqldump=/usr/bin/mysqldump
tbackups="$HOME/sitedb_backups/today"
ybackups="$HOME/sitedb_backups/yesterday"
echo "`date`" > $logs/backups.log
rm $ybackups/* >> $logs/backups.log
mv $tbackups/* $ybackups/ >> $logs/backups.log
databases=`$mysql --user=$mysql_user -p$mysql_password -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases ; do
$mysqldump --force --opt --user=$mysql_user -p$mysql_password --databases $db | gzip > "$tbackups/$db.gz"
echo -e "\r\nBackup of $db successfull" >> $logs/backups.log
done
mail -s "Your DB backups is ready!" yourmail#gmail.com <<< "Today: "`date`"
DB backups of every site is ready."
exit 0
The problem is that when I try to import it with mysql, I get error 1044 (error connecting to oldname_db). When I opened the sql file I noticed a CREATE command on the first line, so it tries to create the database with the old name. How can I solve that problem?
SOLVED.
Using the --databases parameter in my case is not necessary, and because of --databases it was generating CREATE and USE statements at the beginning of the sql file. Hope it helps somebody else.
Use the --no-create-db option of mysqldump.
From man mysqldump:
--no-create-db, -n
This option suppresses the CREATE DATABASE statements that are
otherwise included in the output if the --databases or --all-databases
option is given.
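Either option works; what matters on the restore side is that nothing in the dump file forces the old database name, so you can name the target database yourself when importing. A short sketch with placeholder credentials, assuming newname_db has already been created:
# dump a single database without --databases, so the file contains no CREATE DATABASE or USE statement
mysqldump --force --opt --user=backup_user -pbackup_pass oldname_db | gzip > oldname_db.gz
# name the target database explicitly on import
gunzip < oldname_db.gz | mysql --user=backup_user -pbackup_pass newname_db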
