Cron job (crontab -e) task not running but works without cron - database

I am trying to make a backup of my database in Debian and put it in a specific folder once a day.
I use this:
mysqldump -u mysqlacc -pMYPASSWORD mytable --single-transaction
--quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+%b_%d_%Y_%H_%M_%S").sql
and it works well; the file is put in the respective folder. But once I put this line in crontab -e, it won't run:
0 17 * * * mysqldump -u mysqlacc -pMYPASSWORD mytable
--single-transaction --quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+%b_%d_%Y_%H_%M_%S").sql
Any idea what seems to be wrong?
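One likely culprit, worth checking: cron treats % specially. In a crontab line, every unescaped % is converted to a newline and everything after the first % is passed to the command on stdin, so the date format string above never reaches date intact. Escaping each % with a backslash should fix it:
0 17 * * * mysqldump -u mysqlacc -pMYPASSWORD mytable --single-transaction --quick --lock-tables=false > /var/www/html/backup/mytable-backup-$(date "+\%b_\%d_\%Y_\%H_\%M_\%S").sql
If it still does not run, use the full path to mysqldump (find it with which mysqldump), since cron jobs run with a minimal PATH.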

Related

DB backup in cron is creating a file with zero byte

I have set up a cron job on my GoDaddy server for taking a DB backup. For testing purposes, I run the cron job every minute. The command is:
mysqldump tuniv_results > /home/username/public_html/DB-VVS/tuniv_results.sql
In my DB-VVS folder one file, tuniv_results.sql, is created, but it is zero bytes. Could you please let me know why it is not being created properly?
Thanks in advance.
------------UPDATE-------------------
$user="****";
$password="****";
$database="*****";
$dumpCommand='/usr/bin/mysqldump';
$dumpCommand.=" -e -f -h <ipaddress> -u$user -p$password";
$dumpCommand.=" $database";
$dumpCommand.=" > bekap.sql";
$results=$dumpCommand;
exec($dumpCommand);
echo "result: ".$results;
I created a file in the root folder and put the absolute path of that file into the Command text-field as /home/username/cronfile.php. But in the root folder there is no file like bekap.sql. Please let me know what might be the issue.
Try this one:
$user="*********";
$password="*****";
$database="*********";
$dumpCommand='/usr/bin/mysqldump';
$dumpCommand.=" -e -f -h host.name.com -u$user -p$password";
$dumpCommand.=" $database";
// use an absolute output path: cron does not run from your web root,
// which is why a relative bekap.sql never shows up where you look
$dumpCommand.=" > /home/username/public_html/DB-VVS/bekap.sql";
// capture the exit code; echoing the command string tells you nothing
exec($dumpCommand, $output, $result);
echo "result: ".$result;
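If the file still does not appear, it can be simpler to skip PHP and call mysqldump straight from cron, with an absolute output path and stderr redirected so that a zero-byte dump at least leaves an error message behind. A sketch, with username, password, and paths assumed from the question:
* * * * * /usr/bin/mysqldump -u username -p'password' tuniv_results > /home/username/public_html/DB-VVS/tuniv_results.sql 2>> /home/username/public_html/DB-VVS/dump.err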
Another solution:
I think this will help you.
Open terminal and type:
sudo tcsh
pico /etc/crontab
or
nano /etc/crontab
And add one of the following lines, depending on your situation. This schedules the backup at 1am every day.
Remote Host Backup with linked PATH to mysqldump:
0 1 * * * mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Remote Host Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Local Host mysql Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between -p and the password or between -u and the username; replace root with a correct database username.)

Backup of database using shell script through crontab fails?

My shell script for taking a backup of the database works fine normally.
But when I try to run it through crontab, there is no backup.
This is my crontab:
* * * * * /home/mohan/sohan/backuptest.sh
The contents of backuptest.sh are:
#!/bin/bash
name=`date +%Y%m%d`.sql
#echo $name
mysqldump -u abc --password=abc my_db > $name
backuptest.sh works fine when run normally, but fails to generate a backup when run through crontab.
A couple of possibilities... first that your programs/commands cannot be found when run from cron, and second that your database cannot be found when run from cron.
So, first the programs. You are using date and mysqldump, so at your Terminal prompt you need to find where they are located, like this:
which date
which mysqldump
Then you can either put the full paths that you get as output above into your script, or add a PATH= statement at the second line that incorporates both paths.
Secondly, your database. Where is it located? If it is in /home/mohan/sohan/ for example, you will need to change your script like this:
#!/bin/bash
name=`/bin/date +%Y%m%d`.sql
cd /home/mohan/sohan
/usr/local/bin/mysqldump -u abc --password=abc my_db > $name
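Alternatively, a sketch of the PATH approach mentioned above; the directories are assumptions, so substitute the output of which date and which mysqldump:
#!/bin/bash
# let cron's minimal environment find date and mysqldump
PATH=/bin:/usr/bin:/usr/local/bin
# write the dump next to the script rather than in cron's default working directory
cd /home/mohan/sohan || exit 1
name=$(date +%Y%m%d).sql
mysqldump -u abc --password=abc my_db > "$name"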

Setting up a database, schemas, tables and stored procedures all in one click

I have all the scripts to do:
Set up a database.
Create schema/s.
Create tables.
Create stored procedures.
I would like to write a batch file that will have SQL Server run those scripts, so that my database is created more easily and quickly. For the sake of this example, let's assume that I have a folder with the address C:\folder and inside this folder I have the files SetDatabase.sql, SetSchema.sql, SetTable.sql, and SetSP.sql. How would I set all that up on localhost\TSQL2012?
You can do this in PowerShell using sqlcmd:
sqlcmd -S serverName\instanceName -i scripts.sql
The above statement will execute a script.
You can use the :r command in another file (scripts.sql) to store all your scripts.
:r C:\..\script1.sql
:r C:\..\script2.sql
....
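For the files and instance named in the question, a minimal sketch (assuming Windows authentication via -E; -b makes sqlcmd stop on the first error; this also assumes SetDatabase.sql ends by switching to the new database, for the reason the next answer explains):
:r C:\folder\SetDatabase.sql
:r C:\folder\SetSchema.sql
:r C:\folder\SetTable.sql
:r C:\folder\SetSP.sql
Save that as C:\folder\scripts.sql, then one line runs everything:
sqlcmd -S localhost\TSQL2012 -E -b -i C:\folder\scripts.sql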
set _connectionCredentialsMaster=-S MyServer\MyInstance -d Master -U sa -P mypassword
set _connectionCredentialsMyDatabase=-S MyServer\MyInstance -d MyDatabase -U sa -P mypassword
set _sqlcmd="%ProgramFiles%\Microsoft SQL Server\110\Tools\Binn\SQLCMD.EXE"
%_sqlcmd% -i MyFileCreateDatabase001.sql -b -o MyFileCreateDatabase001.Sql.log %_connectionCredentialsMaster%
%_sqlcmd% -i MyFile001.sql -b -o MyFile001.Sql.log %_connectionCredentialsMyDatabase%
%_sqlcmd% -i MyFile002.sql -b -o MyFile002.Sql.log %_connectionCredentialsMyDatabase%
set _connectionCredentialsMaster=
set _connectionCredentialsMyDatabase=
set _sqlcmd=
Just remember, when you run the 'Create Database' statement, you are actually USING the "Master" database. Then, after MyDatabase is created, you can use it. That is why the first line in the example above connects to Master.
The above lets you set the credentials once, at the top, and keep one line per script file.
Use SQL Server Data Tools to implement this; it is worth studying before you start:
http://msdn.microsoft.com/en-in/data/tools.aspx

mysqldump puts CREATE on the first line of the dump

Here's my full bash script:
#!/bin/bash
logs="$HOME/sitedb_backups/log"
mysql_user="user"
mysql_password="pass"
mysql=/usr/bin/mysql
mysqldump=/usr/bin/mysqldump
tbackups="$HOME/sitedb_backups/today"
ybackups="$HOME/sitedb_backups/yesterday"
echo "`date`" > $logs/backups.log
rm $ybackups/* >> $logs/backups.log
mv $tbackups/* $ybackups/ >> $logs/backups.log
databases=`$mysql --user=$mysql_user -p$mysql_password -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases ; do
$mysqldump --force --opt --user=$mysql_user -p$mysql_password --databases $db | gzip > "$tbackups/$db.gz"
echo -e "\r\nBackup of $db successful" >> $logs/backups.log
done
mail -s "Your DB backups is ready!" yourmail@gmail.com <<< "Today: "`date`"
DB backups of every site is ready."
exit 0
The problem is that when I try to import it with mysql, I get error 1044 connecting to oldname_db. When I opened the sql file I noticed a CREATE command on the first line, so it tries to create that database with the old name. How can I solve that problem?
SOLVED.
The --databases parameter is not necessary in my case, and because of --databases mysqldump was generating CREATE and USE statements at the beginning of the sql file. Hope it helps somebody else.
Use the --no-create-db option of mysqldump.
From man mysqldump:
--no-create-db, -n
This option suppresses the CREATE DATABASE statements that are
otherwise included in the output if the --databases or --all-databases
option is given.
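Either way, a minimal sketch of a rename-friendly dump and restore (user and database names are assumptions):
# without --databases the dump contains no CREATE DATABASE or USE lines,
# so it can be imported under any database name
mysqldump --force --opt -u user -p old_db | gzip > old_db.gz
# restore into a differently named database (create it first)
gunzip -c old_db.gz | mysql -u user -p new_db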

Using variables in SQLCMD for Linux

I'm running the Microsoft SQLCMD tool for Linux (CTP 11.0.1720.0) on a Linux box (Red Hat Enterprise Linux Server 5.3, Tikanga) with the Korn shell. The tool is properly configured and works in all cases except when using scripting variables.
I have an SQL script that looks like this.
SELECT COLUMN1 FROM TABLE WHERE COLUMN2 = '$(param1)';
And I'm running the sqlcmd command like this.
sqlcmd -S server -d database -U user -P pass -i input.sql -v param1="DUMMYVALUE"
When I execute the above command, I get the following error.
Sqlcmd: 'param1=DUMMYVALUE': Invalid argument. Enter '-?' for help.
The help lists the syntax below.
[-v var = "value"...]
Am I missing something here?
You don't need to pass variables to sqlcmd. It picks them up automatically from your shell environment variables:
e.g.
export param1=DUMMYVALUE
sqlcmd -S $host -U $user -P $pwd -d $db -i input.sql
In the RTM version (11.0.1790.0), the -v switch does not appear in the list of parameters when executing sqlcmd -?. Apparently this option isn't supported under the Linux version of the tool.
As far as I can tell, importing parameter values from environment variables doesn't work either.
If you need a workaround, one way would be to concatenate one or more :setvar statements with the text file containing the commands you want to run into a new file, then execute the new file. Based on your example:
echo :setvar param1 DUMMYVALUE > param_input.sql
cat input.sql >> param_input.sql
sqlcmd -S server -d database -U user -P pass -i param_input.sql
You can export the variable in Linux; after that you won't need to pass the variable to sqlcmd. However, I did notice you will need to change your SQL script and remove the :setvar command if it doesn't have a default value.
export dbName=xyz
sqlcmd -Uusername -Sservername -Ppassword -i script.sql
:setvar dbName --remove this line
USE [$(dbName)]
GO
I think you're just not quoting the input variables correctly. I created this bash script...
#!/bin/bash
# Create a sql file with a parameterized test script
echo "
set nocount on
select k = '-db', v = '\$(db)' union all
select k = '-schema', v = '\$(schema)' union all
select '-', 'static'
go" > ./test.sql
# capture input variables
DB=$1
SCHEMA="${2:-dbo}"
# Exec sqlcmd
sqlcmd -S 'localhost\lemur' -E -i ./test.sql -v "db=${DB}" -v "schema=${SCHEMA}"
... and tested it like so:
$ ./test.sh master
k       v
------- ------
-db     master
-schema dbo
-       static
