How to take a recursive snapshot of a btrfs subvol?

Assume that a btrfs subvol named "child-subvol" lives inside another subvol, say "root-subvol". If we take a snapshot of "root-subvol", then "child-subvol" should be snapshotted as well.
Since recursive snapshot support is not yet there in the btrfs file system, how can this be achieved some other way?

Step 1:
Get all the btrfs sub-volumes residing under the top-level subvolume, preferably sorted as produced by the command below (-path gives descending path order, so child subvolumes are listed before their parents, which is the order you want when deleting).
$ btrfs subvolume list --sort=-path <top_subvol>
Step 2:
In the order obtained, perform the delete/snapshot operation on each subvolume.
$ btrfs subvolume delete <subvol-name>

I've been wondering this too and haven't been able to find any recommended best practices online. It should be possible to write a script to create a snapshot that handles the recursion.

As Peter R suggests, you can write a script. However, if you want to send the subvolume it must be marked read-only, and you can't snapshot recursively into read-only volumes.
To solve that, you can use btrfs-property (found through this answer) in the script that handles the recursion: after all the snapshots have been taken, mark them read-only so you can send them.
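A minimal sketch of such a script (Python driving the btrfs CLI; the paths and the layout are my assumptions, and error handling is omitted):

#!/usr/bin/env python3
import os
import subprocess

SRC = "/mnt/pool/root-subvol"              # subvolume to snapshot (assumed path)
DST = "/mnt/pool/snapshots/root-subvol"    # destination; must not live inside SRC

def run(*cmd):
    subprocess.run(cmd, check=True)

def nested_subvols(root):
    # Paths (relative to root) of subvolumes nested under root, shallowest
    # first. On btrfs, subvolume roots always have inode number 256.
    found = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in dirnames:
            p = os.path.join(dirpath, d)
            if os.stat(p).st_ino == 256:
                found.append(os.path.relpath(p, root))
    return sorted(found, key=lambda rel: rel.count(os.sep))

nested = nested_subvols(SRC)

# Snapshot the top subvolume; nested subvolumes show up inside the snapshot as
# empty directories, which we replace with snapshots of the corresponding children.
run("btrfs", "subvolume", "snapshot", SRC, DST)
for rel in nested:
    placeholder = os.path.join(DST, rel)
    os.rmdir(placeholder)
    run("btrfs", "subvolume", "snapshot", os.path.join(SRC, rel), placeholder)

# Mark everything read-only (children first, then the top snapshot) so that
# btrfs send will accept the snapshots.
for rel in reversed(nested):
    run("btrfs", "property", "set", "-ts", os.path.join(DST, rel), "ro", "true")
run("btrfs", "property", "set", "-ts", DST, "ro", "true")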
Alternatively, you can do
cp -a --reflink=always /path/to/root_subvol/ /path/to/child_subvol/
(--reflink=auto never worked for me; insisting on --reflink=always also helps you catch errors, because the copy fails instead of silently falling back to a full copy)
It should be fast and, as far as I know, it has the same advantages as a snapshot, although you don't keep the old subvolume structure.

Related

How to make an operation atomic across both the file system and the database in Postgres?

I think the following should be a pretty common pattern:
A database is used to store file paths.
The files themselves are stored in the file system.
Issues may occur when, say, we want to modify a file path: we need to both modify the database file path and move the file in the filesystem. It is important that this is done "atomically". Indeed, while we are doing the modification, another process may read the file path from the database and then try to access the file in the file system. We should make sure that the tuple
("file path", "actual file location")
remains consistent at all times.
Is there a canonical/simple way to achieve this with Postgres/Linux?
One of the major features of the database is that every process sees it in a consistent state. That also means that, at any given moment, different clients may see different states of the database.
This means that when you correct a file path in the database and commit the change, any transaction that started before the commit can still see the old path for some time after the commit.
So, to make sure nobody tries to read the old file path, you actually have to wait until all transactions that started before the commit have ended. That can take milliseconds or, in extreme situations (for example with a long-running transaction), days.
I'd try to implement the following scheme (pseudocode, where sql() is a hypothetical helper that runs a statement with parameters):
sql("begin")
os.link(old_path, new_path)    # hard-link first, so the old name keeps working
sql("update files set path=? where path=?", new_path, old_path)
sql("insert into files_to_clean values (?, txid_current())", old_path)
sql("commit")
if random() < CLEANUP_PROBABILITY:
    sql("begin")
    for delete_path in sql("""
        delete from files_to_clean
        where path in (
            select path from files_to_clean
            where txid < txid_snapshot_xmin(txid_current_snapshot())
            for update skip locked)
        returning path"""):
        os.unlink(delete_path)
    sql("commit")

Apache Spark: batch processing of files

I have directories and subdirectories set up on HDFS and I'd like to pre-process all the files before loading them all at once into memory. I basically have big files (1 MB) that, once processed, will be more like 1 KB; I then want to do sc.wholeTextFiles to get started with my analysis.
How do I loop over each file (*.xml) in my directories/subdirectories, do an operation (let's say, for the example's sake, keep the first line), and then dump the result back to HDFS (as a new file, say .xmlr)?
I'd recommend you just use sc.wholeTextFiles and preprocess the files using transformations, and after that save them all back as a single compressed sequence file (you can refer to my guide on doing so: http://0x0fff.com/spark-hdfs-integration/).
Another option might be to write a MapReduce job that processes a whole file at a time and saves the output to a sequence file, as I proposed before: https://github.com/tomwhite/hadoop-book/blob/master/ch07/src/main/java/SmallFilesToSequenceFileConverter.java. It is the example described in the 'Hadoop: The Definitive Guide' book; take a look at it.
In both cases you would do almost the same thing: both Spark and Hadoop bring up a single process (a Spark task or a Hadoop mapper) to process each whole file, so in general both approaches work using the same logic. I'd recommend starting with the Spark one, as it is simpler to implement given that you already have a cluster with Spark.
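For what it's worth, a minimal PySpark sketch of the first approach (the HDFS paths and the codec choice are assumptions; the "keep the first line" step is taken from the question):

from pyspark import SparkContext

sc = SparkContext(appName="xml-preprocess")

# Each small file becomes one (path, content) record.
files = sc.wholeTextFiles("hdfs:///data/input/*/*.xml")

# Example preprocessing from the question: keep only the first line.
processed = files.mapValues(lambda content: content.split("\n", 1)[0])

# Write everything back as one compressed SequenceFile directory.
processed.saveAsSequenceFile(
    "hdfs:///data/preprocessed",
    compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec")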

Tokyo Cabinet: .tcb.wal file created along with .tcb file; DB size does not decrease when deleting records

I am using Tokyo Cabinet's B+ tree API to create a lookup database. In a Linux environment I see a .tcb.wal file created along with the actual .tcb database file. The size of this file is 0. I wonder whether it's a lock file that is created to help synchronization. Also, when I delete records from the database, the size of the file does not decrease. Any reason why it behaves like that?
The extension .wal stands for Write Ahead Logging file. This file is only relevant if you use any transaction functions; most applications do not use these. (For details, search for "ahead" in the documentation.)
The file size does not change for every deletion for efficiency reasons. Similarly, if you create an empty database, it will reserve space for faster insertions.

Following multiple log files efficiently

I'm intending to create a programme that can permanently follow a large, dynamic set of log files to copy their entries over to a database for easier near-realtime statistics. The log files are written by diverse daemons and applications, but their format is known so they can be parsed. Some of the daemons write logs into one file per day, like Apache's cronolog that creates files like access.20100928. Those files appear with each new day and may disappear when they're gzipped away the next day.
The target platform is an Ubuntu Server, 64 bit.
What would be the best approach to efficiently reading those log files?
I could think of scripting languages like PHP that either open the files themselves and read new data, or use system tools like tail -f to follow the logs, or other runtimes like Mono. Bash shell scripts probably aren't well suited for parsing the log lines and inserting them into a database server (MySQL), not to mention easy configuration of my app.
If my programme reads the log files itself, I think it should stat() each file once a second or so to get its size and open the file when it has grown. After reading the file (which should hopefully only return complete lines) it could call tell() to get the current position and next time seek() directly to the saved position to continue reading. (These are C function names, but actually I wouldn't want to do that in C, and Mono/.NET or PHP offer similar functions as well.)
Is that constant stat()ing of the files and subsequent opening and closing a problem? How would tail -f do that? Can I keep the files open and be notified about new data with something like select()? Or does it always return at the end of the file?
In case I'm blocked in some kind of select() or external tail, I'd need to interrupt that every one or two minutes to scan for new or deleted files that should (no longer) be followed. Resuming with tail -f then is probably not very reliable; that should work better with my own saved file positions.
Could I use some kind of inotify (file system notification) for that?
If you want to know how tail -f works, why not look at the source? In a nutshell, you don't need to periodically interrupt or constantly stat() to scan for changes to files or directories. That's what inotify does.
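For illustration, a rough sketch (using the third-party pyinotify module; the watched directory and the handle_line stub are assumptions) that keeps per-file read positions as described in the question, and lets inotify wake it up instead of polling:

import os
import pyinotify   # third-party binding for Linux inotify

WATCH_DIR = "/var/log/myapp"      # assumed directory holding the logs
positions = {}                    # path -> offset of the next unread byte

def handle_line(path, line):
    print(path, line, end="")     # stub: parse the line and insert it into MySQL

def read_new_data(path):
    last = positions.get(path, 0)
    try:
        with open(path, "r") as f:
            if os.fstat(f.fileno()).st_size < last:
                last = 0                   # file was truncated or replaced
            f.seek(last)
            while True:
                line = f.readline()
                if not line:
                    break
                if not line.endswith("\n"):
                    break                  # half-written last line, retry later
                handle_line(path, line)
                positions[path] = f.tell()
    except FileNotFoundError:
        positions.pop(path, None)          # file was rotated away

class LogHandler(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        read_new_data(event.pathname)
    def process_IN_CREATE(self, event):
        read_new_data(event.pathname)      # a new daily file appeared
    def process_IN_DELETE(self, event):
        positions.pop(event.pathname, None)

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, pyinotify.IN_MODIFY | pyinotify.IN_CREATE | pyinotify.IN_DELETE)
pyinotify.Notifier(wm, LogHandler()).loop()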

A simple log file format

I'm not sure if it was asked, but I couldn't find anything like this.
My program uses a simple .txt file for log purposes. It just creates/opens the file and appends lines.
After some time I started to log quite a lot of activity, so the file became too large and hardly readable. I know that it's not the right way to do this, but I simply need to have a readable file.
So I thought maybe there's a simple file format for log files and some software to view it, or do you have any other suggestions on this question?
Thanks in advance for your help.
UPDATE:
It's an Access 97 application. I'm logging some activities like form loading and SELECT/INSERT/UPDATE statements against MS SQL Server ...
The log file isn't really big; I just write the duration of operations, so I need a simple way to do this.
The Log file is created on a user's machine. It's used for monitoring purposes logging some activities' durations.
Is there a way of viewing that kind of simple log file, with highlighting, using an existing tool?
Simply, I'd like to:
1) Write something like "'CurrentTime' 'ActivityName' 'Duration in milliseconds'" (maybe with some additional information like the query string) into a file.
2) Open it with a tool and view it with highlighting, or in some more readable form.
ANSWER: I've found a nice tool that does everything I wanted. Check my answer.
LogExpert
The 3 W's: when, what and where.
For viewing, try something like multitail ("tail on steroids"): http://www.vanheusden.com/multitail/
Or, for pure MS Windows, try mtail: http://ophilipp.free.fr/op_tail.htm
And to keep your files readable, you might want to start a new file when the size of the current log file exceeds a certain limit. Example:
activity0.log (1 mb)
activity1.log (1 mb)
activity2.log (1 mb)
activity3.log (1 mb)
activity4.log (205 bytes)
A fairly standard way to deal with logging from an application into a plain text file is to:
split the logs into different program functional areas.
rotate the logs on a daily/weekly basis (i.e. split the log on a size or date basis)
so the current log is "mylog.log" or whatever, and yesterday's was "mylog.log.1" or "mylog.ddmmyyyy.log"
This keeps the size of the active log manageable. And then you can just have expiry rules so that old logs get thrown away on a regular basis.
In addition it would be a good idea to have different log levels for your application (info/warning/error/fatal) so that you're not logging more than is necessary.
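For illustration only (the original application is Access 97/VBA, so this is just a sketch in Python of the rotation-plus-log-levels pattern; file name and limits are examples):

import logging
from logging.handlers import RotatingFileHandler

# Rotate at ~1 MB and keep 4 old files: mylog.log, mylog.log.1, ..., mylog.log.4
handler = RotatingFileHandler("mylog.log", maxBytes=1_000_000, backupCount=4)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)-7s %(name)s: %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)          # don't log more than is necessary
log.addHandler(handler)

log.info("Form_Load took %d ms", 42)
log.warning("Slow query: %s", "SELECT ... FROM Orders")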
First, check you're only logging things that are useful.
If it's all useful, make sure it is easily parsable by tools such as grep; that way you can find the info you want. Make sure the type of log entry and the date/time all conform to a consistent layout.
Build yourself a few scripts to extract the information for you.
Alternatively, use separate log files for different types of entries.
Basically, you'd better just split logs according to severity. You'll rarely need to read all logs for the whole system. For example, Apache lets you configure an error log and an access log, and it's pretty obvious what information each of them holds.
If you're on a Linux system, grep is your best tool for searching the logs for specific entries.
Look at popular logfiles like /var/log/syslog on Unix to get ideas:
MMM DD HH:MM:SS hostname process[pid]: message
Example out of my syslog:
May 11 12:58:39 raphaelm anacron[1086]: Normal exit (1 job run)
But to give you the perfect answer we'd need more information about what you are logging, how much and how you want to read the logs.
If only the size of the log file is the problem, I recommend using logrotate or something similar. logrotate watches log files and, depending on how you configured it, after a given time or when the log file exceeds a given size, it moves the log file to an archive directory and optionally compresses it. Then the original log file is truncated. For example, you could configure it to archive the log file every 24 hours or whenever the file's size exceeds 500 kB.
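A minimal sketch of such a logrotate configuration (the path and the limits are just examples):

# Rotate the application log once it exceeds 500 kB and keep 7 gzipped copies.
/var/log/myapp/activity.log {
    size 500k
    rotate 7
    compress
    missingok
    copytruncate
}

The copytruncate directive lets the application keep appending to the same open file; without it, logrotate renames the file and expects the application to reopen its log.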
If this is a program, you might investigate the Apache logging libraries (http://logging.apache.org/). Out of the box, they'll give you a decent logging format. They're also customizable, so you can simplify your parsing job.
If this is a script, see some of the other answers.
LogExpert
I've found it here. The filter is better than in mtail. There's an option for highlighting by just adding a string, and the app is nice and readable. You can customize the columns as you like.
