rpm(1) provides a -V option to verify installed files against the installation database, which can be used to detect modified or missing files.
This might be used as a form of intrusion detection (or at least as part of an audit). However, it is of course possible that the installed rpm database has been modified by a hacker to hide their tracks (see http://www.sans.org/security-resources/idfaq/rpm.php, last sentence).
It looks like it should be possible to back up the rpm database /var/lib/rpm to some external medium after every install, and to use that copy during an audit via --dbpath. Such a backup would of course have to be refreshed after every install or upgrade.
Is this feasible? Are there any resources that detail methods, pitfalls, suggestions etc for this?
Yes, it's feasible. Use "rpm -Va --dbpath /some/where/else" to point rpm at a saved database directory.
Copy /var/lib/rpm/Packages to the saved /some/where/else directory,
and run "rpm --rebuilddb --dbpath /some/where/else" to regenerate
the indices.
Note that you can also verify files against the original packages, as in "rpm -Vp some*.rpm", which is often less hassle (and more secure, with the packages stored on read-only offline media) than saving copies of the installed /var/lib/rpm/Packages rpmdb.
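A minimal sketch of the backup step, assuming an external mount at /mnt/external (the path and the helper name are illustrative):

```shell
# Hypothetical helper: save the rpmdb master file to a dated directory
# and rebuild the indices there, so "rpm -Va --dbpath" can use it later.
backup_rpmdb() {
    dest="$1/rpmdb-$(date +%Y%m%d)"                 # e.g. /mnt/external/rpmdb-20240101
    mkdir -p "$dest" || return 1
    cp /var/lib/rpm/Packages "$dest/" || return 1   # the master database file
    rpm --rebuilddb --dbpath "$dest"                # regenerate the indices
}

# After each install/upgrade:   backup_rpmdb /mnt/external
# During an audit:              rpm -Va --dbpath /mnt/external/rpmdb-YYYYMMDD
```

For an audit you would run the verify from trusted media, so the saved database (and ideally the rpm binary itself) cannot have been tampered with.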
Currently I am using the redis.conf file to set the fixed directory and filename where my instance saves the redis dump.rdb snapshot.
My intention is to compare two redis snapshots taken at different times.
But Redis overwrites the old dump file after creating the new one.
I checked the redis repo on GitHub and found the rdb.c file, which contains the code that executes the SAVE command and overwrites old snapshots.
Before messing with the code (since I'm not an experienced developer), I wanted to ask if there is a better way to save snapshots taken at different times, or if I could just keep the last two snapshots at a time.
You can use incron to watch the dump directory and execute a script:
sudo apt-get install incron
echo "redis" >> /etc/incron.allow
export EDITOR=vi
incrontab -e
/path/where/you/dump/files IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /bin/copy_snapshot
Then create the /bin/copy_snapshot script: have it copy the dump to a name containing the date (or similar) and make sure only the last X copies are kept.
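A sketch of what /bin/copy_snapshot could look like (the paths and the retention count are examples):

```shell
#!/bin/sh
# Copy the fresh dump.rdb to a timestamped file, then keep only the
# newest $KEEP snapshots. DUMP_DIR must match the incrontab entry.
DUMP_DIR=/path/where/you/dump/files
ARCHIVE=/var/backups/redis        # example archive location
KEEP=2

mkdir -p "$ARCHIVE"
if [ -f "$DUMP_DIR/dump.rdb" ]; then
    cp "$DUMP_DIR/dump.rdb" "$ARCHIVE/dump-$(date +%Y%m%d-%H%M%S).rdb"
fi

# delete all but the $KEEP most recent copies
ls -1t "$ARCHIVE"/dump-*.rdb 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm -f
```

Note that incron fires on every matching event, so a cheap script like this is fine; if dumps are large you may want to copy to the same filesystem and use hard links instead.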
I have a file "db-connection.php" that has to be different for each version of my server (Localhost, Dev and Production). At first I thought .gitignore was the answer, but after much pain and research I realized that .gitignore only works on untracked files, i.e. files NOT already in the repo.
For obvious reasons, the localhost version I'm using with xampp requires that the db file be within the repo. Of course, this means that every time I push it to Dev, it ruins the Dev db connection.
Is there a way to tell .git "Yes, I realize this file exists, but leave it alone anyway"?
This is a common problem, and there are two solutions, depending on your needs.
First, if you are always going to have the same configuration files and they will change depending only on the environment (but not the developer machine), then simply create the three versions of the file in your repository (e.g., in a config directory) and copy the appropriate one into place, either with a script or manually. You then remove the db-connection.php file from tracking and ignore it.
If this file actually needs to depend on the user's system (say, it contains personal developer credentials or system-specific paths), then you should ship a template file and copy it into place with a script (which may fill out the relevant details for the user). In this case, too, the db-connection.php would be ignored and removed from the repository.
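A minimal sketch of the copy-into-place approach, assuming the per-environment copies live in a config/ directory (all names here are examples):

```shell
#!/bin/sh
# Copy the config for the chosen environment into place.
# The live db-connection.php is untracked and listed in .gitignore.
ENV="${1:-localhost}"            # localhost, dev, or production

if [ -f "config/db-connection.$ENV.php" ]; then
    cp "config/db-connection.$ENV.php" db-connection.php
else
    echo "no config for environment '$ENV'" >&2
fi

# One-time setup to stop tracking the live file:
#   git rm --cached db-connection.php
#   echo db-connection.php >> .gitignore
```

The git rm --cached step removes the file from the index but leaves your working copy on disk, so the local site keeps working.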
There are two things people try to do that don't work. One of them is to try to keep multiple branches each with their own copy of the file. This doesn't work because Git doesn't really provide a way to not merge certain files between branches.
The other thing people try is to just ignore changes to a tracked file using some invocation of git update-index. That breaks in various cases because doing so isn't supported; the Git FAQ entry and the git update-index manual page explain why.
You can use the skip-worktree bit with git update-index when you don't want Git to manage changes to that file.
git update-index --skip-worktree db-connection.php
Reference: Skip worktree bit
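A self-contained demo of how the flag behaves, using a throwaway repo (run it in an empty scratch directory):

```shell
# Set up a throwaway repo with a tracked config file.
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
echo "db config v1" > db-connection.php
git add db-connection.php && git commit -qm "track config"

# Flip the skip-worktree bit, then change the file locally.
git update-index --skip-worktree db-connection.php
echo "local-only change" >> db-connection.php

git status --porcelain              # prints nothing: the change is ignored
git ls-files -v db-connection.php   # prints "S db-connection.php"

# Undo it when you really do want to commit a change:
git update-index --no-skip-worktree db-connection.php
```

Keep in mind the caveat from the other answer: this bit is per-clone state, not shared through the repository, and some operations (e.g. checkouts that touch the file) can still run into it.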
I am working on a PostgreSQL database, and we recently had a server upgrade during which we changed our drive from a 2 TB RAID hard disk to an SSD. I have now mounted the RAID drive on a partition and can even access it.
Next what I would like to do is to get the database out of the mounted drive and restore it on the currently running PostgreSQL. How can I achieve this?
root@check03:/mnt/var/lib/postgresql/9.1/main/global# ls
11672 11674 11805 11809 11811 11813_fsm 11816 11820 11822 11824_fsm 11828 11916 11920 pg_internal.init
11672_fsm 11675 11807 11809_fsm 11812 11813_vm 11818 11820_fsm 11823 11824_vm 11829 11918 pg_control pgstat.stat
11672_vm 11803 11808 11809_vm 11813 11815 11819 11820_vm 11824 11826 11914 11919 pg_filenode.map
root@check03:/mnt/var/lib/postgresql/9.1/main/global# cd ..
As you can see I am able to access the drives and the folders, but I don't know what to do next. Kindly let me know. Thanks a lot.
You need the same version of PostgreSQL (9.1), at the same or a later minor version. Copy main/ and everything below it to the new location. Copy the configuration of the old instance and adapt the paths to fit the new location (main/ is the ''data directory'', also sometimes called PGDATA). Start the new instance and look carefully at the logs. You should probably rebuild any indexes.
Also read about the file layout in the fine documentation.
EDIT: If you have any chance to run the old instance, read about backup and restore; that is a much safer way to transfer the data.
the Postgres binaries must be the same version
make sure that postgres is not running
copy using cp -rfp, tar | tar, cpio, or whatever you like. Make sure you preserve the file owners and modes (the top-level directory must be 0700, owned by postgres)
make sure that the postgres startup script (in /etc/init.d/postxxx) refers to the new directory; sometimes there is an environment variable $PGDATA containing the name of the postgres data directory; maybe you need to make changes to new_directory/postgresql.conf, too (pg_log et al)
for safety, rename the old data directory
restart Postgres
try to connect to it; check the logs.
Extra:
Seasoned unix-administrators (like the BOFH ;-) might want to juggle with mountpoints and/or symlinks (instead of copying). Be my guest. YMMV
Seasoned DBAs might want to create a tablespace, point it at the new location and (selectively) move databases, schemas or tables to the new location.
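The checklist above, sketched as a script (the paths, service name, and log location are examples for a Debian-style 9.1 layout):

```shell
#!/bin/sh
# Move the data directory from the mounted RAID into place on the new drive.
OLD=/mnt/var/lib/postgresql/9.1/main   # data directory on the mounted RAID
NEW=/var/lib/postgresql/9.1/main       # data directory of the new instance

if [ -d "$OLD" ]; then
    service postgresql stop                 # postgres must not be running
    mv "$NEW" "$NEW.old"                    # for safety, keep the old directory
    cp -rfp "$OLD" "$NEW"                   # -p preserves owners and modes
    chown -R postgres:postgres "$NEW"
    chmod 0700 "$NEW"                       # top-level directory must be 0700
    service postgresql start
    tail -n 50 /var/log/postgresql/postgresql-9.1-main.log   # check the logs
else
    echo "old data directory not found at $OLD" >&2
fi
```

If the startup fails, the log tail is usually enough to show whether the problem is permissions, a version mismatch, or a path still pointing at the old location.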
I am hosting a website on Heroku, and using an SQLite database with it.
The problem is that I want to be able to pull the database from the repository (mostly for backups), but whenever I commit & push changes to the repository, the database should never be altered. This is because the database on my local computer will probably have completely different (and irrelevant) data in it; it's a test database.
What's the best way to go about this? I have tried adding my database to the .gitignore file, but that results in the database being completely unversioned, preventing me from pulling it when I need to.
While git (just like most other version control systems) supports tracking binary files like databases, it really only works well for text files. In other words, you should never use a version control system to track constantly changing binary database files (unless they are created once and almost never change).
One popular method to still track databases in git is to track text database dumps. For example, an SQLite database can be dumped into a *.sql file using the sqlite3 utility (the .dump subcommand). However, even when using dumps, it is only appropriate to track template databases that do not change very often, and to create the binary database from such dumps with scripts as part of standard deployment.
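A short example of the dump-and-rebuild round trip with the sqlite3 utility (file names are illustrative):

```shell
# Dump the binary database to a text file that diffs and versions well,
# then rebuild a binary database from that dump.
sqlite3 app.db .dump > app.sql
sqlite3 restored.db < app.sql
```

The .sql dump is plain CREATE/INSERT statements, so git can diff and merge it like any other text file.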
You could add a pre-commit hook to your local repository that will unstage any files you don't want to commit.
e.g. add the following to .git/hooks/pre-commit
git reset ./file/to/database.db
When working on your code (potentially modifying your database), you will at some point end up with:
$ git status --porcelain
M file/to/database.db
M src/foo.cc
$ git add .
$ git commit -m "fixing BUG in foo.cc"
M file/to/database.db
.
[master 12345] fixing BUG in foo.cc
1 file changed, 1 deletion(-)
$ git status --porcelain
M file/to/database.db
This way you can never accidentally commit changes made to your database.db.
Is it the schema of your database that you're interested in versioning, while making sure you don't version the data within it?
I'd exclude your database from git (using the .gitignore file).
If you're using an ORM and migrations (e.g. Active Record) then your schema is already tracked in your code and can be recreated.
However if you're not then you may want to take a copy of your database, then save out the create statements and version them.
Heroku doesn't recommend using SQLite in production, suggesting their Postgres system instead. That lets you run many tasks against the remote DB.
If you want to pull the live database from Heroku the instructions for Postgres backups might be helpful.
https://devcenter.heroku.com/articles/pgbackups
https://devcenter.heroku.com/articles/heroku-postgres-import-export
I want to cut Postgres down to its minimal size so that I can ship just the database functionality with my application. I'm using a Portable Postgres build found on the internet.
Any suggestions what I can delete from Postgres installation which is not needed for normal database use?
You can delete all the standalone tools in /bin - it can all be done with psql. Keep anything that starts with pg_, postgres, and initdb.
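A sketch of that cleanup (the install path is an example; psql is kept too, since the answer relies on it):

```shell
#!/bin/sh
# Remove standalone tools from the portable bin/, keeping the essentials.
if cd /opt/portable-postgres/bin 2>/dev/null; then   # example install path
    for f in *; do
        case "$f" in
            pg_*|postgres*|initdb*|psql*) ;;         # keep server binaries and psql
            *) rm -f "$f" ;;                         # everything else goes
        esac
    done
fi
```

Do a trial run (echo instead of rm) first, and test that initdb and a client connection still work afterwards.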
You can probably delete a bunch of conversions in lib/ (the some_and_some.so files), but probably not until after you've run initdb. And be careful not to delete one you'll be using at some point - they are dynamically loaded, so you won't notice until, for example, a client connects with a different encoding.
But note that this probably won't get you much - on my system with debug enabled etc., the binaries take 17 MB. A clean data directory with no data at all in it takes 33 MB, about twice as much - which you will need if you're going to be able to use your database.