Greenplum: Purging database logs

Is there any direct utility available to purge older logs from a GP database? If I do it manually it takes a lot of time, as there are 100+ segments and I have to go to each server and delete the log files by hand.
Other details: GP version - 4.3.X.X (Software Only Solution)
Cluster config - 2+10
Thanks

I suggest you create a cron job and use gpssh to do this. For example:
gpssh -f ~/host_list -e 'for i in $(find /data/primary/gpseg*/pg_log/ -name "*.csv" -ctime +60); do rm $i; done'
This will remove files in pg_log on all segments that are over 2 months old. Of course, you should test this and make sure the path to pg_log is correct.
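If you want this to run unattended, a crontab entry on the master for the gpadmin user along these lines could work. This is only a sketch: the greenplum_path.sh location, the host_list file and the data directory layout are assumptions to adapt to your install.
# run nightly at 02:00; source the GP environment so gpssh is on the PATH
0 2 * * * . /usr/local/greenplum-db/greenplum_path.sh && gpssh -f ~/host_list -e 'for i in $(find /data/primary/gpseg*/pg_log/ -name "*.csv" -ctime +60); do rm $i; done'
Keeping the same find/rm loop as above means there is only one place to adjust the retention period.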

Related

How do I create a crontab job in unix that will move all files in my home directory to another directory at a specific time/date?

Hello, I'm new to Unix and I am trying to create a crontab job that moves all the files in my home directory whose name contains the letter f followed by a digit 1, 3 or 7 to a directory called backups, on the 12th of April and November at 9:30 PM.
This is my home directory:
arsenal.by flhome list1 stmnpgs
arsenal.pass flhome2 list2 test.c
assignment foreachScript1 list2.c testdir
availisting.csv funxdir local.cshrc testfile
backups funxdir2 local.login tmp.test
backups1 homlnk local.profile train
biglist lab4 myfile treat
biglist.c lab5 myfile2 trick
biglist2 lab6 Myhome.list tricking
CFiles.tar.Z lab7 myinfo.fl troll
clssnotes.txt lab8 myList typescript
delfh lec3 names.txt workdir
If anyone could help me out with this it'd be much appreciated!
Firstly, home-rolling a backup solution for work, whether professional or academic, is usually a bad idea: the stakes of an error are potentially very high, and local backups can obviously be lost by whatever makes the original files inaccessible in the first place.
However, it is a worthwhile exercise to show how you would do it in cron, as it is a frequent type of task and it would provide you some cover while you look for a better solution.
Your date specification can safely be done as one cron entry, since only the day of the year varies; if both the minute of the day and the day of the year (or the day of week) changed, you would need two entries.
# M H DoM MoY DoW
30 21 12 4,11 * BACKUPDIR=~/backups; ds=$(date +\%Y\%m\%d\%H\%M\%S); mkdir -p $BACKUPDIR; cd ~ && find * -type d -prune -o -type f -name f\*\[137\] -exec mv {} $BACKUPDIR/{}.$ds \;
The find command is told to look at all entries in your home directory that do not start with a . ("visible" files); if they are directories, it ignores them (it does not descend into them), and if they are files whose name starts with an f (and ends with 1, 3 or 7), it moves them (not copies them) to $BACKUPDIR. If you wanted any file containing an f instead, the find pattern would be \*f\*\[137\]
Above we define two variables, one for the backup dir and one for a datestamp (the \ before each % is needed because % is a special character to cron).
The file globbing patterns * and [] are similarly escaped because they are shell special and we want to pass them to the find command.
The reason to use a timestamp is that moving or copying files frequently causes unintentional overwriting, so if the backup directory path does not contain a date stamp, the target file name should.
Lastly, it might be better to use a tar command to create a compressed, date-stamped archive that you can easily copy elsewhere; a local backup directory is asking for trouble, particularly if it is nested underneath the directory you are working in.
e.g. something like:
#!/bin/bash
backup_file=~/backups/backup.$(date +%Y%m%d%H%M%S).tar.gz
tar czf $backup_file $(find ~/* -type d -prune -o -type f -name f\*\[137\] -print)
# <Commands to copy the file elsewhere here>
# You should then copy this file elsewhere (another system) or email it to yourself (after possibly encrypting it)
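To tie this back to the crontab requirement, you could save the script above as something like ~/bin/backup_f_files.sh (a name chosen here just for illustration), make it executable, and call it from cron with the same date specification:
# M H DoM MoY DoW
30 21 12 4,11 * ~/bin/backup_f_files.sh
This keeps the crontab entry short and lets you test the script by hand before relying on it.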

Delete old database backups for Linux SQL Server 2017 after backups created with Ola Hallengren?

I have a problem when I want to delete older backups that I created with Ola Hallengren's scripts.
USE Maintenance
EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = '/mssql/backup/',
@DirectoryStructure = '${InstanceName}{DirectorySeparator}{DatabaseName}',
@Filename = '{DatabaseName}_{BackupType}_{Year}{Month}{Day}_{Hour}{Minute}{Second}_{FileNumber}.{FileExtension}',
@BackupType = 'FULL',
@Compress = 'Y',
@CleanupTime = 3
I got the following error message:
The value for the parameter @CleanupTime is not supported. Cleanup is not supported on Linux.
So far so good, but what's the best way now to delete old backups without destroying the backup chain?
My first thought was to use a script to delete them:
#!/bin/sh
find /mssql/backup -name "*.bak" -type f -mtime +3 -exec rm -f {} \;
find /mssql/backup -name "*.trn" -type f -mtime +4 -exec rm -f {} \;
The internal side of the SQL Server instance will not recognize the cleanup after the script has run: the backup history in the internal backup management is not updated, and in the case of a recovery the system asks for backups that no longer exist.
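One way to keep the two in sync is to prune the history in msdb from the same script that removes the files. This is only a sketch, not a tested solution: the sqlcmd path, the SA_PASSWORD environment variable and the retention windows are assumptions you would need to adapt.
#!/bin/sh
# remove backup files older than the retention window
find /mssql/backup -name "*.bak" -type f -mtime +3 -exec rm -f {} \;
find /mssql/backup -name "*.trn" -type f -mtime +4 -exec rm -f {} \;
# then trim the backup history in msdb so it matches what is still on disk
# (sp_delete_backuphistory removes history rows older than the given date;
#  the sqlcmd location and credentials below are assumptions)
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q \
  "DECLARE @cutoff datetime = DATEADD(DAY, -4, GETDATE()); EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;"
Note that this only keeps the history consistent with the files on disk; it does not by itself guarantee that a complete restore chain remains, so the retention windows still have to be chosen with the backup chain in mind.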

Import old apache access logs to webalizer - ignoring records

I installed webalizer on my Apache 2 webserver yesterday and ran into the problem that none of the old access logs are used. The directory listing looks like this:
/var/log/apache2/
access.log
access.log1
access.log.10.gz
access.log.11.gz
...
How can I import all my files at once?
I tried several things, but it kept telling me that the records were ignored.
Hope someone can help. Thanks!
I ran into the same problem. I had just installed webalizer, and changed it to incremental mode (here are the relevant entries from my /etc/webalizer/webalizer.conf):
LogFile /var/log/apache2/access.log.1
OutputDir /var/www/htdocs/w
Incremental yes
IncrementalName webalizer.current
And then I ran webalizer by hand, which initialized the non-gz files in my logs directory. After that, any attempt to manually import an older gz logfile (by running webalizer /var/log/apache2/access.log.2.gz for instance) resulted in all of the entries being ignored.
I suspect this is because the entries found in the gz logs were older than the last import; I had to delete my webalizer.current file (really I cleared the whole dir, either way should work). Finally, in reverse order (oldest first), I could import the old gz files one at a time:
bhs128@home:~$ cd /var/log/apache2
bhs128@home:/var/log/apache2$ sudo rm -rf /var/www/htdocs/w/*
bhs128@home:/var/log/apache2$ ls -1t /var/log/apache2/access.log*gz | grep -o [0-9]* | tail -n1
52
bhs128@home:/var/log/apache2$ for i in {52..2}; do webalizer /var/log/apache2/access.log.$i.gz; done
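If you would rather not hard-code the highest rotation number (52 above), a small variation on the same idea, assuming the access.log.N.gz naming shown, is to sort the rotated logs numerically and feed them to webalizer oldest-first:
# the third dot-separated field of each path is the rotation number; higher number = older log
for f in $(ls -1 /var/log/apache2/access.log.*.gz | sort -t. -k3,3 -nr); do webalizer "$f"; done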
I just had the same problem, and I took a look into the webalizer.current file:
$ head -n 2 webalizer.current
# Webalizer V2.21-02 Incremental Data - 11/05/2019 22:29:02
2019 11 5 22 29 2
The second line seems to contain the timestamp of the last run, so I just changed the year to 2018. After that, I was able to import log files older than the last imported ones, without having to delete all the data first.
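If you prefer not to edit the file by hand, the same tweak can be scripted. This is just a sketch of the idea above and assumes webalizer.current looks exactly like the excerpt (the year is the first field on line 2):
# rewind the "last run" year on line 2 of webalizer.current from 2019 to 2018
sed -i '2s/^2019/2018/' webalizer.current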

copy SVN modified files including directory to a another directory

I have a list of files in my current working copy that have been modified locally. There are about 50 files that have been changed.
I am using the following command to copy files that have been modified in Subversion to a folder called /backup. Is there a way to do this but maintain the directories they are in, so it would do something similar to exporting an SVN diff of files? For example, if I changed a file called /usr/lib/SPL/RFC.php, then it would copy the usr/lib/SPL directory to backup as well.
cp `svn st | ack '^M' | cut -b 8-` backup
It looks strange, but it is really easy to copy files with tar. E.g.
tar -cf - $( svn st | ack '^M' | cut -b 8- ) |
tar -C /backup -xf -
Why not create a patch of your changes? That way you have one file containing all of your changes which you can timestamp in the name - something like 2012-05-28-17-30-00-UnitTestChanges.patch, one per day.
Then you can roll up your changes to a fresh checkout once you're ready, and then commit them.
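A minimal sketch of that patch workflow (the file name is only an example, and svn patch needs Subversion 1.7 or later):
# capture all local modifications into a single timestamped patch file
svn diff > 2012-05-28-17-30-00-UnitTestChanges.patch
# later, apply it to a fresh checkout before committing
svn patch 2012-05-28-17-30-00-UnitTestChanges.patch /path/to/fresh-checkout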
FYI: Subversion 1.8 should have checkpointing / shelving (which is what you seem to want to do), but that's a long way off, and might only be added in Subversion 1.9.

Mercurial, stop versioning cache directory but keep directory

I have a CakePHP project under Mercurial version control. Right now all the files in the app/tmp directory, which are always changing, are being versioned.
I do not want to version control these files.
I know I can stop by running hg forget app/tmp/*
But this will also forget the file structure, which I want to keep.
Now I know that Mercurial doesn't version directories, just files, but the CakePHP folks were also smart enough to put an empty file called empty in every empty directory (I am guessing for this reason).
So what I want to do is tell Mercurial to forget every file under app/tmp except files whose name is exactly empty.
What would the command be for this?
Well, if nothing else works, you can always just ask Mercurial to forget everything, and then revert empty before committing:
Here's how I reproduced it; first, create the initial repo:
hg init
md app
md app\tmp
echo a>app\empty
echo a>app\tmp\empty
hg commit -m "initial" -A
Then add some files we later want to get rid of:
echo a >app\tmp\test1.txt
echo a >app\tmp\test2.txt
hg commit -m "adding" -A
Then forget the files we don't want:
hg forget app\tmp\*
hg status <-- will show all 3 files
hg revert app\tmp\empty
hg status <-- now empty is gone
echo glob:app/tmp>.hgignore
hg commit -m "ignored" -A
Note that all .hgignore does is prevent Mercurial from discovering new files during addremove or commit -A; if you have explicitly tracked files that match your ignore filter, Mercurial will still track changes to those files.
In other words, even though I asked Mercurial to ignore app/tmp above, the file empty inside it will not be ignored or removed, since I have explicitly asked Mercurial to track it.
At least theoretically (I don't have time to try it right now), pattern matching should work with the hg forget command. So, you could do something like hg forget -X empty while in the directory (-X means "exclude").
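A concrete form of that hg forget suggestion (untested here, and the glob patterns are assumptions about the layout) might be:
# forget everything tracked under app/tmp except the placeholder files named "empty"
hg forget "glob:app/tmp/**" -X "glob:**/empty"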
You may want to consider using .hgignore, of course.
Since you only need to do it once I'd just do this:
find app/tmp -type f | grep -v empty | xargs hg forget
hg commit
from then on, just put this in your .hgignore:
^app/tmp
Mercurial has built-in support for globbing and regexes, as explained in the relevant chapter of the Mercurial book. The Python regex implementation is used.
This should work for you:
hg forget "re:app/tmp/.*(?<!/empty)$"
