Locking file in distributed system - database

I have a distributed application; that is, I have a homogeneous process running on multiple computers talking to a central database and accessing a network file share.
This process picks up a collection of files from a network file share (via CIFS), runs a transformation algorithm on those files and copies the output back onto the network file share.
I need to lock the input files so that other servers -- running the same process -- will not work on the same files. For the sake of argument, assume that my description is oversimplified and that the locks are an absolute must.
Here are my proposed solutions, and some thoughts.
1) Use opportunistic locks (oplocks). This solution uses only the file system to lock files. The problem here is that we have to try to acquire the lock to find out whether a lock already exists. That seems like it could be expensive as the network redirectors negotiate the locks. The nice thing about this is that oplocks can be created in such a way that they self-delete when there is an error.
2) Use database app locks (via sp_getapplock). This seems like it would be much faster, but now we are using a database to lock a file system. Also, database app locks are scoped to a transaction or session, which means that I must hold onto the connection if I want to hold onto -- and later release -- the app lock. Currently we are using connection pooling, which would have to change, and that may be a bigger conversation unto itself. The nice thing about this approach is that the locks get cleaned up if we lose our connection to the server. Of course, this also means that if we lose the connection to the database but not to the network file share, the lock goes away while we are still processing the input files.
3) Create a database table and stored procedures to represent the items which I would like to lock. This approach is straightforward. The downside is, of course, potential network errors: if for some reason the database becomes unreachable, the lock will remain in effect, and we would then need to derive some algorithm to clean it up at a later date.
What is the best solution and why? Answers are not limited to those mentioned above.

For your situation you should use share-mode locks. This is exactly what they were made for.
Oplocks won't do what you want - an oplock is not a lock, and doesn't prevent anyone from doing anything. It's a notification mechanism that lets the client machine know when anyone accesses the file. This is communicated to the machine by "breaking" your oplock, but the break does not make its way up to the application layer (i.e. to your code) - it just tells the client operating system to invalidate its cached copy and fetch the file again from the server.
See MSDN here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365433(v=vs.85).aspx
The explanation of what happens when another process opens a file on which you hold an oplock is here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa363786(v=vs.85).aspx
However, the important point is that oplocks do not prevent other processes from opening the file; they just allow coordination between the client computers. Therefore, oplocks do not lock the file at the application level - they are a feature of the network protocol used by the network file system stack to implement caching. They are not really for applications to use.
Since you are programming on Windows, the appropriate solution seems to be share-mode locks, i.e. opening the file with SHARE_DENY_READ|SHARE_DENY_WRITE|SHARE_DENY_DELETE.
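As a minimal sketch (assuming the Win32 CreateFile API, where denying read, write and delete sharing corresponds to passing 0 for dwShareMode; the UNC path is just a placeholder), the exclusive open could look like this:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical UNC path to one of the input files on the CIFS share. */
        HANDLE h = CreateFileW(L"\\\\fileserver\\share\\input\\xyz.xml",
                               GENERIC_READ,
                               0,                       /* dwShareMode = 0: deny read, write and delete to others */
                               NULL,
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            if (GetLastError() == ERROR_SHARING_VIOLATION) {
                /* Another server already holds the file: skip it and move on. */
                printf("file is locked by another process\n");
            }
            return 1;
        }

        /* ... run the transformation while the handle (and the share-mode lock) stays open ... */

        CloseHandle(h);             /* the share-mode lock is released here */
        return 0;
    }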
If share-mode locks are not supported on the CIFS server, you might consider flock() type locks. (Named after a traditional Unix technique).
If you are processing xyz.xml, create a file called xyz.xml.lock (with the CREATE_NEW mode so you don't clobber an existing one). Once you are done, delete it. If you fail to create the file because it already exists, that means another process is working on it. It can also help debugging to write information such as the server name and PID into the lock file. You will also have to have some way of cleaning up abandoned lock files, since that won't occur automatically.
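A minimal sketch of that lock-file convention, assuming Win32 and CREATE_NEW as described; the helper name and the debug information written into the lock file are illustrative:

    #include <windows.h>
    #include <stdio.h>

    /* Try to claim xyz.xml by creating xyz.xml.lock; returns the open lock-file
       handle, or INVALID_HANDLE_VALUE if another server got there first. */
    static HANDLE claim_input_file(const wchar_t *lockPath)
    {
        HANDLE h = CreateFileW(lockPath,
                               GENERIC_WRITE,
                               0,
                               NULL,
                               CREATE_NEW,               /* fails if the lock file already exists */
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return INVALID_HANDLE_VALUE;                 /* someone else is working on this file */

        /* Record who owns the lock; this is only there to help debug abandoned locks. */
        char host[MAX_COMPUTERNAME_LENGTH + 1] = "unknown";
        DWORD hostLen = sizeof(host);
        GetComputerNameA(host, &hostLen);

        char info[128];
        DWORD written;
        int len = snprintf(info, sizeof(info), "host=%s pid=%lu\r\n",
                           host, (unsigned long)GetCurrentProcessId());
        WriteFile(h, info, (DWORD)len, &written, NULL);
        return h;
    }

The caller would keep the returned handle open while processing, then CloseHandle it and DeleteFileW the lock file when done; abandoned lock files still need the separate cleanup mentioned above.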
Database locks might be appropriate if the CIFS share is, for example, a replicated system, so that the lock-file creation would not occur atomically across the system. Otherwise I would stick with the filesystem, as then there is only one thing that can go wrong.

Related

Camel file reading: race condition with 2 active servers

In our ESB project, we have a lot of routes reading files with the file2 or ftp protocol for further processing. It is important to note that the files we read locally (file2 protocol) live on network shares mounted via different protocols (NFS, SMB).
Now, we are facing issues with race conditions. Both servers read the file and process it. We have reduced the likelihood of that by using the preMove option, but from time to time the duplicate reading still occurs when both servers poll at the same millisecond. According to the documentation, an idempotentRepository together with readLock=idempotent could help, for example backed by Hazelcast.
However, I'm wondering if this is a suitable solution for my issue, as I don't really know if it will work in all cases. It is within milliseconds that both servers read the file, so the information that one server has already processed the file needs to be available in the Hazelcast grid at the point in time when the second server tries to read. Is that possible? What happens if there are minimal latencies (e.g. network related)?
In addition to that, the setting readLock=idempotent is only available for file2 but not for ftp. How to solve that issue there?
Again: The issue is not preventing duplicate files in general, it is solely about preventing the race condition.
AFAIK the idempotent repository should, in your case, prevent both consumers from reading the same file.
The latency between detecting the file and the entry appearing in Hazelcast is not relevant, because the file consumers do not simply record what they have read. Instead, they both ask the repository for an exclusive read lock. The first one wins, the second one is denied, so it continues to the next file.
If you want to minimize the potential of conflicts between the consumers you can turn on shuffle=true to randomize the ordering of files to consume.
For the problem with the missing readLock=idempotent on the ftp consumer: you could perhaps build a separate transfer route with only 1 consumer that downloads the files. Then your file-consumer route can process them idempotently.

How can I serialize access to a directory in Linux?

Let's say 4 simultaneous processes are running on a processor, and data needs to be copied from an HDFS (used with Spark) file system to a local directory. Now I want only one process to copy that data, while the other processes just wait for that data to be copied by the first process.
So, basically, I want some kind of a semaphore mechanism, where every process tries to obtain the semaphore to try copying the data, but only one process gets the semaphore. All processes that failed to acquire the semaphore would then just wait for the semaphore to be cleared (the process that was able to acquire the semaphore would clear it after it's done with copying), and when it's cleared they know the data has already been copied. How can I do that in Linux?
There are a lot of different ways to implement semaphores. The classical System V way is described in man semop; POSIX semaphores are covered more broadly in man sem_overview.
You might still want to do something more easily scalable and modern. Many IPC frameworks (Apache has one or two of those, too!) have atomic IPC operations. These can be used to implement semaphores, but I'd be very, very careful.
I generally encourage people who write multi-process or multi-threaded applications to use C++ instead of C. It's often simpler to see where a shared state must be protected if your state is nicely encapsulated in an object which might do its own locking. Hence, I urge you to have a look at Boost's IPC synchronization mechanisms.
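For illustration, here is a minimal sketch using a POSIX named semaphore (one of the families covered by sem_overview) rather than raw semop; the semaphore name, the marker file and the copy command are assumptions made up for the example, not anything prescribed by the question:

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical names: the semaphore and a "copy finished" marker file. */
    #define SEM_NAME    "/hdfs_copy_sem"
    #define DONE_MARKER "/data/local_copy/.copy_done"

    int main(void)
    {
        /* All four processes open (or create) the same named semaphore, initial value 1.
           Compile with -pthread. */
        sem_t *sem = sem_open(SEM_NAME, O_CREAT, 0644, 1);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);                       /* only one process proceeds at a time */

        if (access(DONE_MARKER, F_OK) != 0) {
            /* First one in: perform the copy, then leave a marker for the others. */
            system("hdfs dfs -copyToLocal /data/input /data/local_copy");
            FILE *m = fopen(DONE_MARKER, "w");
            if (m) fclose(m);
        }
        /* Anyone arriving later finds the marker and skips the copy. */

        sem_post(sem);
        sem_close(sem);

        /* ... all processes can now use /data/local_copy ... */
        return 0;
    }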
In addition to Marcus Müller's answer, you could use some file locking mechanism to synchronize.
File locking might not work very well on networked or remote file systems. You should use it on a locally mounted file system (e.g. Ext4, BTRFS, ...), not on a remote one (e.g. NFS).
For example, you might adopt the convention that your directory contains (or that you will create in it) some .lock file, and take an advisory lock with flock(2) (or POSIX lockf(3)) on that .lock file before accessing the directory.
If using flock, you could even lock the directory directly....
The advantage of such a file-lock approach is that you could also code shell scripts using flock(1).
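A minimal C sketch of that convention, assuming a hypothetical .lock file inside the shared directory:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical lock file inside the shared directory. */
        int fd = open("/data/local_copy/.lock", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Blocks until we hold the exclusive advisory lock. */
        if (flock(fd, LOCK_EX) != 0) { perror("flock"); return 1; }

        /* ... copy the data, or notice that a previous holder already copied it ... */

        flock(fd, LOCK_UN);   /* also released automatically when fd is closed or the process dies */
        close(fd);
        return 0;
    }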
And on Linux, you might also use inotify(7) (e.g. to be notified when some file is created in that directory)
Notice that most solutions are advisory, so they presuppose that every process accessing that directory follows the convention (in other words, without more precautions such as using flock(1), a careless user could still access that directory - e.g. with a plain cp command - or files under it, while your locking process is accessing the directory). If you cannot accept that, you might look into mandatory file locking (which is a feature of some Linux kernels and filesystems; AFAIK it is more or less deprecated).
BTW, you might read more about ACID properties and consider using some database, etc...

Embedded File System and power-off

I am working on an embedded application without any OS that needs the use of a file system. I've been over this many times with the people on the project, and some agree with me that the system must perform a proper shutdown whenever there is a power failure, or else the file system might go crazy.
Some people say that it doesn't matter if you simply power off the system and let nature run its course, but I think that's one of the worst things to do, especially if you know this will bring you a problem and probably shorten your product's life span.
In the last paragraph I just assumed that it is a problem, but my question remains:
Does a power down have any effect on the file system?
Here is a list of various techniques to help an embedded system tolerate a power failure. These may not be practical for your particular application.
1. Use a Journaling File System - Can tolerate incomplete writes due to power failure, OS crash, etc. Most modern filesystems are journaled, but do your homework to confirm.
2. Unless your application needs the write performance, disable all write caching. Check your disk drivers for caching options. Under Linux/Unix, consider mounting the filesystem in sync mode.
3. Unless it must be writable, make it read-only. Try to keep your application executables and operating system files on their own partition(s), with write protections in place (e.g. mount read-only in Linux). Your read/write data should be on its own partition. Even if your application data gets corrupted, your system should still be able to boot (albeit with a fail-safe default configuration).
3a. For data that is only written once (e.g. Configuration Settings), try to keep it mounted as read-only most of the time. If there is a settings change, remount it as R/W temporarily, update the data, and then remount it as read-only (a remount sequence is sketched after this list).
3b. Use a technique similar to 3a to handle application/OS updates in the field.
3c. If it is impractical for you to mount the FS as read-only, at least consider opening individual files as read-only (e.g. fp=fopen("configuration.ini", "r")).
4. If possible, use separate devices for your storage. Keeping things in separate partitions provides some protection, but there are still edge cases where a partition table may become corrupt and render the entire drive unreadable. Using physically separate devices further isolates against one corrupt device bringing down the whole system. In a perfect world, you would have at least 4 separate devices:
4a. Boot Loader
4b. Operating System & Application Code
4c. Configuration Settings
4d. Application Data
5. Know the characteristics of your storage devices, and control the brand/model/revision of devices used. Some hard disks ignore cache flush commands from the OS. We had cases where some models of CompactFlash cards would corrupt themselves during a power failure, but the "industrial" models did not have this problem. Of course, this information was not published in any datasheet, and had to be gathered by experimental testing. We developed a list of approved CF cards, and kept inventory of those cards. We periodically had to update this list as older cards became obsolete, or the manufacturer would make a revision.
6. Put your temporary files in a RAM Disk. If you keep those writes off-disk, you eliminate them as a potential source of corruption. You also reduce flash wear and tear.
7. Develop automated corruption detection and recovery methods. All of the above techniques will not help you if the application simply hangs because of a missing config file. You need to be able to recover as gracefully as possible:
7a. Your system should maintain at least two copies of its configuration settings, a "primary" and a "backup". If the primary fails for some reason, switch to the backup. You should also consider mechanisms for making backups whenever the configuration is changed, or after a configuration has been declared "good" by the user (testing vs production mode).
7b. Did your Application Data partition fail to mount? Automatically run chkdsk/fsck.
7c. Did chkdsk/fsck fail to fix the problem? Automatically re-format the partition and get it back to a known state.
7d. Do you have a Boot Loader or other method to restore the OS and application after a failure?
7e. Make sure your system will beep, flash an LED, or something to indicate to the user what happened.
8. Power Failures should be part of your system qualification testing. The only way you will be sure you have a robust system is to test it. Yank the power cord from the system and document what happens. Try yanking the power at multiple points in the system operation (during runtime, while booting, mid configuration, etc). Repeat each test multiple times.
9. If you cannot mitigate all power failure problems, incorporate a battery or supercapacitor into the system. Keep in mind that you will need a background process in your OS to initiate a graceful shutdown when power gets low. Also, batteries will require periodic testing and replacement with age.
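As a small illustration of item 3a above, here is a hedged sketch for a Linux-based system; the device node, mount point and file contents are made up for the example:

    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    /* Flip a normally read-only settings partition to read-write just long enough
       to update a file, then put it back. Paths and device are hypothetical. */
    int update_setting(void)
    {
        if (mount("/dev/mmcblk0p3", "/config", "ext4", MS_REMOUNT, NULL) != 0) {
            perror("remount rw");
            return -1;
        }

        FILE *fp = fopen("/config/configuration.ini", "w");
        if (fp) {
            fprintf(fp, "brightness=80\n");
            fflush(fp);
            fsync(fileno(fp));      /* push the data to the device before remounting */
            fclose(fp);
        }

        sync();                      /* flush any remaining dirty buffers */
        if (mount("/dev/mmcblk0p3", "/config", "ext4", MS_REMOUNT | MS_RDONLY, NULL) != 0) {
            perror("remount ro");
            return -1;
        }
        return 0;
    }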
An addition to msemack's response; unfortunately my reputation is too low to post this as a comment on his answer rather than a separate answer.
Does a power down have any effect on the file system?
Yes, if proper measures aren't put in place to prevent corruption. See the previous answers for file system options that help mitigate it. However, if ATA flush/sleep isn't properly implemented on your device, you may run into the scenario we did: the device was corrupted beyond the file system, and fdisk/format would not recover it.
Instead, an ATA security-erase was required to recover the device once corruption occurred. To avoid this, we implemented an ATA sleep command prior to power loss. This required a hold-up time of 400 ms to cover the 160 ms the ATA sleep took, and to leave some headroom for degradation of the capacitors over the life of the product.
Notes from our scenario:
fdisk/format failed to repair/recover the drive.
Our power-safe file system's check disk utility returned that the device had bad blocks, but there really weren't any.
flush/sync returned success quickly, and most likely weren't actually implemented.
Once corrupt, dd could not read the device beyond the 1st partition boundary and returned I/O errors afterwards.
hdparm was used to issue the ATA security-erase, as the only method of recovery for some corruption scenarios.
For a non-journalling filesystem, an unexpected power-off can mean corruption of certain data, including the directory structure. This happens if there is unsaved data in the cache, or if the FS is in the middle of a multi-block update and the interruption comes when only some blocks have been written.
Journalling mostly addresses this problem - if there is an interruption in the middle, a recovery routine or a check-and-repair operation done by the FS (usually implicitly) brings the filesystem to a consistent state. However, this state is not always the latest - i.e. if there was some data in the memory cache, it can be lost even with journalling. Journalling saves you from corruption of the filesystem, but it doesn't do magic.
Write-through mode (no write caching) reduces the possibility of data loss but doesn't solve the problem completely, as the journal itself will work as a cache (for a very short time).
So, unfortunately, backup or data duplication are the main ways to prevent data loss.
It totally depends on the file system you are using and on whether, given your project requirements, it is acceptable to lose some data at power-off.
One could imagine using a file system that is secured against unexpected power-off and is able to recover from a partial write sequence. In that case, on the application side, if you don't have critical data that absolutely needs to be written before shutting down, there is no need for a specific power-off detection procedure.
Now if you want a more specific answer for your project you will have to give more information on the file system you are using and your project requirements.
Edit: As you have critical application data to save before power-off, I think you have answered the question yourself. The only way to secure against an unexpected power-off is to have brown-out detection that alerts your embedded device, coupled with some hardware circuitry that keeps delivering enough power to the device to perform the shutdown procedure.
The FAT file-system is particularly prone to corruption if a write is in progress or a file is open on shutdown - specifically if there is a buffered operation that has not been flushed. On one project I worked on, the solution was to run a file system integrity check and repair (essentially chkdsk/scandisk) on start-up. This strategy did not prevent data loss, but it did prevent the file system from becoming unusable.
A number of vendors provide journalling add-on components for FAT to counter exactly this problem. These include Segger, Quadros and Micrium for example.
Either way, your system should generally adopt an open-write-close approach to file access, or open-write-flush if you feel the need to keep the file open.
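A minimal sketch of that open-write-flush-close discipline in C (the file name and record are placeholders; fsync() assumes a POSIX-like environment):

    #include <stdio.h>
    #include <unistd.h>

    /* Write a record and make sure it reaches the medium before we consider it saved. */
    int save_record(const char *line)
    {
        FILE *fp = fopen("/data/log.txt", "a");
        if (!fp)
            return -1;

        int ok = (fputs(line, fp) >= 0);
        if (ok) ok = (fflush(fp) == 0);          /* flush stdio's buffer to the OS */
        if (ok) ok = (fsync(fileno(fp)) == 0);   /* ask the OS to push its cache to the device */
        if (fclose(fp) != 0) ok = 0;             /* close right away: no long-lived buffered handle */
        return ok ? 0 : -1;
    }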

Ensure that file state on the client is in sync with NFS server

I'm trying to find a proper way to handle stale data on an NFS client. Consider the following scenario:
Two servers mount the same NFS shared storage containing a number of files
The client application on server 1 deletes some files
The client application on server 2 tries to access the deleted files and fails with: Stale NFS file handle (nothing strange, the error is expected)
(It may also be useful to know that the caching mount options are set pretty high on both servers for performance reasons.)
What I'm trying to understand is:
Is there a reliable method to check that a file is present? In the scenario given above, lstat on the file returns success and the application fails only after trying to move the file.
How can I manually sync the contents of a directory on the client with the server?
Any general advice on how to write reliable file management code on top of NFS?
Thanks.
Is there a reliable method to check that a file is present? In the scenario given above, lstat on the file returns success and the application fails only after trying to move the file.
That's just normal NFS behavior.
How can I manually sync the contents of a directory on the client with the server?
That is impossible to do manually, since NFS pretends to be a normal POSIX-compliant file system.
I once tried coding a close()/open() sequence in an attempt to somehow mitigate the effects of the NFS client-side caching. In my case I needed to read information written to the file on the other server. But even the reopen trick had close to zero effect. And I couldn't add fdatasync() to the writing side, since that slows the whole application down.
My experience with NFS to date is that there is nothing you can do. In critical code paths I simply retried the file operations which returned ESTALE.
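As an illustration of that retry approach (not the author's actual code), here is a hedged sketch around rename(), since the question fails when moving a file; the retry count and delay are arbitrary:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Retry rename() a few times when the NFS client hands back ESTALE. */
    int rename_with_retry(const char *from, const char *to)
    {
        for (int attempt = 0; attempt < 5; attempt++) {
            if (rename(from, to) == 0)
                return 0;
            if (errno != ESTALE)
                return -1;          /* a different error: give up immediately */
            usleep(100 * 1000);     /* give the client a chance to refresh its handle/cache */
        }
        return -1;
    }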
Any general advice on how to write reliable file management code on top of NFS?
Mod me down all you want, but if your customers want reliability then they shouldn't use NFS.
My company, for example, advertises the use of a proper distributed file system (I intentionally omit the brand) if the customer wants reliability. Our core software is not guaranteed to run on NFS and we do not support such configurations. But in our case we really need the guarantee that as soon as data is written to the FS it becomes accessible on all other nodes.
Coherency in NFS can be achieved, but at the cost of performance, making NFS barely usable. (Check its mount options.) NFS caches like crazy to hide the fact that it is a server file system. To make all operations coherent, the NFS client would have to go to the NFS server synchronously for every little operation, bypassing the local cache. And that would never be fast.
But since we are talking Linux here, one can advise customers of the software to evaluate the available cluster file systems. E.g. Red Hat now officially supports GFS. I have heard of people using CodaFS, but have no hard information on it.
I have had success with doing ls -l on the directory which contains the file.
You could try the noac mount option.
from man nfs:
In addition to preventing the client from caching file attributes, the noac option forces application writes to become synchronous so that local changes to a file become visible on the server immediately. That way, other clients can quickly detect recent writes when they check the file's attributes.
Using the noac option provides greater cache coherence among NFS clients accessing the same files, but it extracts a significant performance penalty. As such, judicious use of file locking is encouraged instead.
You could have two mounts, one for critical fast changing data that you need synchronized and another mount for other data.
Also, look into NFS locking and its limitations.
As for general advice:
One way to truncate a file that is concurrently read from multiple hosts is to write the content into a temporary file and then rename that file to the final location.
On the same filesystem this operation should be atomic.
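A minimal sketch of that write-to-temporary-then-rename pattern in C; the paths and content are placeholders:

    #include <stdio.h>
    #include <unistd.h>

    /* Publish new content by writing a temporary file and renaming it over the target.
       Readers on other hosts see either the old file or the new one, never a half-written
       file. Both paths must be on the same filesystem for rename() to be atomic. */
    int publish(const char *target, const char *content)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s.tmp.%ld", target, (long)getpid());

        FILE *fp = fopen(tmp, "w");
        if (!fp)
            return -1;
        fputs(content, fp);
        fflush(fp);
        fsync(fileno(fp));          /* make sure the data is on disk before it becomes visible */
        fclose(fp);

        return rename(tmp, target); /* atomic replace on the same filesystem */
    }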

How do filesystems handle concurrent read/write?

User A asks the system to read file foo and at the same time user B wants to save his or her data onto the same file. How is this situation handled on the filesystem level?
Most filesystems (but not all) use locking to guard concurrent access to the same file. The lock can be exclusive, so the first user to get the lock gets access - subsequent users get an "access denied" error. In your example scenario, user A gets the file lock and will be able to read the file, but user B will not be able to write while user A is reading.
Some filesystems (e.g. NTFS) allow the level of locking to be specified, to allow for example concurrent readers, but no writers. Byte-range locks are also possible.
Unlike databases, filesystems typically are not transactional, not atomic and changes from different users are not isolated (if changes can even be seen - locking may prohibit this.)
Using whole-file locks is a coarse-grained approach, but it will guard against inconsistent updates. Not all filesystems support whole-file locks, and so it is common practice to use a lock file - a typically empty file whose presence indicates that its associated file is in use. (Creating a file is an atomic operation on most file systems.)
Wikipedia - File Locking
For Linux, the short answer is that you could get some strange information back from a file if there is a concurrent writer. The kernel does use locking internally to run each read() and write() operation serially. (Although I forget whether the whole file is locked or whether it's at per-page granularity.) But if the application uses multiple write() calls to write information to the file, a read() could happen between any of those calls, so it could see inconsistent data. This is an atomicity violation in the operating system.
As mdma has mentioned, you could use file locking to make sure there is only one reader and one writer at a time. It sounds like NTFS uses mandatory locking, where if one program locks the file, all other programs get error messages when they try to access it.
Unix programs generally don't use locking at all, and when they do, the lock is usually advisory. An advisory lock only prevents other processes from getting an advisory lock on the same file; it doesn't actually prevent the read or write. (That is, it only locks the file for those who check the lock.)
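For illustration, a minimal sketch of such an advisory lock using POSIX fcntl() record locks (which also covers the byte-range locking mentioned above); the file name comes from the question, everything else is an assumption:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("foo", O_RDWR);           /* the contested file from the question */
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {0};
        fl.l_type   = F_WRLCK;                  /* exclusive (write) lock ... */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 0;                        /* ... over the whole file */

        if (fcntl(fd, F_SETLKW, &fl) != 0) {    /* F_SETLKW blocks until the lock is granted */
            perror("fcntl");
            return 1;
        }

        /* ... write to the file; only cooperating processes that also take the lock are kept out ... */

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }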
