I am building a web service that needs to query 1 to n file servers and receive multiple files as a result. Does anyone have a good approach for this? Would threads help?
What if the connection to some servers takes longer than to others? How do I know when I really have ALL the queried files?
Thanks
Your question is quite generic, so my answer will be too; I hope it is useful anyway.
I would say you have two options here:
Use an asynchronous model. You open the connections to the N file servers and set up a callback (or an event) that fires whenever data from one server is received (usually these callbacks are invoked on a new thread, but check the documentation of your framework). You get the connection identifier from the data passed to the callback/event and update the appropriate file.
Use a synchronous polling model. You open the connections to the N file servers and enter a loop in which you poll each connection for new data; when new data is available, you update the appropriate file. You exit the loop when all files are completely downloaded.
As for how you know when all files are complete: there is no automatic way. You need to establish a convention, known to both you and the server, for signaling that a file has been completely sent. Options include: the server closes the connection when the file is complete (not very safe, as the connection can be closed accidentally); the server sends the file size before the file contents; or the end of file is signaled by a special character sequence (this works better for text files, where the bytes of the end-of-file sequence will not normally occur).
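To make the second convention concrete, here is a minimal C sketch of the polling model combined with a size-prefix protocol. It assumes POSIX sockets that are already connected and a hypothetical 4-byte big-endian length header agreed on with the servers; error handling is abbreviated.

    /* Sketch: poll N connections with select(); each server announces the
     * file size in a 4-byte big-endian header before the contents. */
    #include <stdint.h>
    #include <sys/select.h>
    #include <unistd.h>
    #include <arpa/inet.h>   /* ntohl */

    struct download {
        int      fd;        /* connected socket */
        uint32_t expected;  /* size announced by the server, 0 = not seen yet */
        uint32_t received;  /* bytes received so far */
        int      done;
    };

    void download_all(struct download *dl, int n)
    {
        int remaining = n;
        while (remaining > 0) {
            fd_set rfds;
            int maxfd = -1;
            FD_ZERO(&rfds);
            for (int i = 0; i < n; i++) {
                if (dl[i].done) continue;
                FD_SET(dl[i].fd, &rfds);
                if (dl[i].fd > maxfd) maxfd = dl[i].fd;
            }
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
                break;  /* real code: handle EINTR and errors */

            for (int i = 0; i < n; i++) {
                if (dl[i].done || !FD_ISSET(dl[i].fd, &rfds)) continue;
                if (dl[i].expected == 0) {
                    /* Read the 4-byte size header first (sketch: assumes it
                     * arrives in one piece; real code must accumulate). */
                    uint32_t netlen;
                    if (read(dl[i].fd, &netlen, 4) == 4)
                        dl[i].expected = ntohl(netlen);
                } else {
                    char buf[4096];
                    ssize_t got = read(dl[i].fd, buf, sizeof buf);
                    if (got > 0) {
                        /* ...append buf to the output file for connection i... */
                        dl[i].received += (uint32_t)got;
                        if (dl[i].received >= dl[i].expected) {
                            dl[i].done = 1;
                            remaining--;  /* when this hits 0, you have ALL files */
                        }
                    }
                }
            }
        }
    }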
In our ESB project, we have many routes reading files via the file2 or ftp protocol for further processing. It is important to note that the files we read locally (file2 protocol) are network shares mounted via different protocols (NFS, SMB).
Now, we are facing issues with race conditions. Both servers read the file and process it. We have reduced the possibility of that by using the preMove option, but from time to time the duplicate reading still occurs when both servers poll at the same millisecond. According to the documentation, an idempotentRepository together with readLock=idempotent could help, for example with HazelCast.
However, I'm wondering whether this is a suitable solution for my issue, as I don't really know if it will work in all cases. Both servers read the file within milliseconds of each other, so the information that one server has already picked up the file needs to be available in the Hazelcast grid by the time the second server tries to read it. Is that possible? What happens if there are small latencies (e.g. network related)?
In addition, the readLock=idempotent setting is only available for file2, not for ftp. How can that issue be solved there?
Again: the issue is not preventing duplicate files in general, it is solely about preventing the race condition.
AFAIK, the idempotent repository should prevent both consumers from reading the same file in your case.
The latency between detecting the file and the entry appearing in Hazelcast is not relevant, because the file consumers do not record what they have read after the fact. Instead, each one asks the repository for an exclusive read lock before reading. The first one wins; the second one is denied, so it moves on to the next file.
If you want to minimize the potential for conflicts between the consumers, you can turn on shuffle=true to randomize the order in which files are consumed.
For the problem of the missing readLock=idempotent on the ftp consumer: you could perhaps build a separate transfer route with only one consumer that downloads the files. Your file-consumer route can then process them idempotently.
I am writing a small server/client program. I am not sure how to use select() to choose between handling a client that is already connected to the server and accepting a new client.
i.e.: The server program will start and listen for clients. How can I use select() to know whether the server is receiving from an existing client or getting a new connection?
Does the server always have to listen() and accept() every new client?
Thank you.
Before getting your hands dirty with a selector, you should read up on non-blocking I/O and asynchronous networking. Basically, what your selector does is loop through the file descriptors you have registered and check whether any of them is ready to perform one of the following actions:
Read
Write
Accept
Connect
I could go further into how it does this, but if you really want to know, look into the reactor pattern and how event-driven programming works.
Anyway, detecting whether a connection is new or already established may be trivial or not so trivial, depending on how much control you want over the actions performed.
First, you register your server socket with the selector. This socket stays listening forever; when a client connects, the accept event is triggered and one selector cycle occurs. Accepting creates another file descriptor, which you also have to register with your selector.
From this point forward you have to track each connection's intent. Does it want to read? Write? Not only that: since this is asynchronous programming and you cannot (or should not) block, the data will be transferred in chunks. It is up to you to receive all the data chunks and coordinate all the file descriptors. This is the non-trivial part.
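As an illustration, here is a minimal C sketch (assuming POSIX sockets; the function name and buffer sizes are illustrative) showing how the listening socket and the client sockets live in the same select() set. A readable listening socket means accept() will not block, i.e. a new connection; a readable client socket means data (or a disconnect). This also answers the listen()/accept() question: the server keeps one listening socket and calls accept() once per new client.

    /* Sketch: one select() loop distinguishing "new connection" from
     * "data on an existing client". Error handling is abbreviated. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        fd_set master;
        FD_ZERO(&master);
        FD_SET(listen_fd, &master);
        int maxfd = listen_fd;

        for (;;) {
            fd_set rfds = master;  /* select() modifies its argument */
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                break;

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &rfds)) continue;
                if (fd == listen_fd) {
                    /* Readable listening socket == pending connection. */
                    int client = accept(listen_fd, NULL, NULL);
                    if (client >= 0) {
                        FD_SET(client, &master);
                        if (client > maxfd) maxfd = client;
                    }
                } else {
                    /* Readable client socket == data or disconnect. */
                    char buf[1024];
                    ssize_t got = recv(fd, buf, sizeof buf, 0);
                    if (got <= 0) {        /* client closed or error */
                        close(fd);
                        FD_CLR(fd, &master);
                    } else {
                        /* ...process buf[0..got) chunk by chunk... */
                    }
                }
            }
        }
    }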
If you want to know anything else, please say so and I will edit this answer.
I have a distributed application; that is, I have a homogeneous process running on multiple computers talking to a central database and accessing a network file share.
This process picks up a collection of files from a network file share (via CIFS), runs a transformation algorithm on those files, and copies the output back onto the network file share.
I need to lock the input files so that other servers -- running the same process -- will not work on the same files. For the sake of argument, assume that my description is oversimplified and that the locks are an absolute must.
Here are my proposed solutions, and some thoughts.
1) Use opportunistic locks (oplocks). This solution uses only the file system to lock files. The problem here is that we have to try to acquire the lock just to find out whether one already exists, which can be expensive as the network redirectors negotiate the locks. The nice thing is that oplocks can be created in such a way that they delete themselves when there is an error.
2) Use database application locks (via sp_getapplock). This seems like it would be much faster, but now we are using a database to lock a file system. Also, database app locks are scoped to a transaction or session, which means I must hold onto the connection if I want to hold onto (and later release) the app lock. Currently we use connection pooling, which would have to change, and that may be a bigger conversation in itself. The nice thing about this approach is that the locks get cleaned up if we lose our connection to the server. Of course, this means that if we lose the connection to the database but not to the network file share, the lock goes away while we are still processing the input files.
3) Create a database table and stored procedures to represent the items I would like to lock. This process is straightforward. The downside is, of course, potential network errors: if for some reason the database becomes unreachable, the lock remains in effect, and we would then need some algorithm to clean it up later.
What is the best solution and why? Answers are not limited to those mentioned above.
For your situation you should use share-mode locks. This is exactly what they were made for.
Oplocks won't do what you want: an oplock is not a lock and doesn't prevent anyone from doing anything. It is a notification mechanism that lets the client machine know when anyone accesses the file. This is communicated to the machine by "breaking" your oplock, but the break does not make its way to the application layer (i.e. to your code); it just generates a message to the client operating system telling it to invalidate its cached copy and fetch the file again from the server.
See MSDN here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365433(v=vs.85).aspx
The explanation of what happens when another process opens a file on which you hold an oplock is here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa363786(v=vs.85).aspx
However, the important point is that oplocks do not prevent other processes from opening the file; they just allow coordination between the client computers. Therefore oplocks do not lock the file at the application level: they are a feature of the network protocol, used by the network file system stack to implement caching. They are not really for applications to use.
Since you are programming on Windows, the appropriate solution seems to be share-mode locks, i.e. opening the file with SHARE_DENY_READ|SHARE_DENY_WRITE|SHARE_DENY_DELETE.
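For illustration, here is a minimal Win32 sketch of this approach. In the CreateFile API, denying read, write, and delete to everyone else corresponds to passing 0 as the share mode; path handling and error handling are abbreviated.

    /* Sketch: exclusive share-mode lock via CreateFile with dwShareMode = 0.
     * If another process already has the file open, CreateFile fails with
     * ERROR_SHARING_VIOLATION and we skip the file. */
    #include <windows.h>
    #include <stdio.h>

    HANDLE try_lock_input(const wchar_t *path)
    {
        HANDLE h = CreateFileW(path,
                               GENERIC_READ,
                               0,                      /* share mode 0 = exclusive */
                               NULL,
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            if (GetLastError() == ERROR_SHARING_VIOLATION)
                wprintf(L"%ls is being processed by another server\n", path);
            return INVALID_HANDLE_VALUE;
        }
        /* Hold the handle open for the whole transformation; the lock is
         * released when the handle is closed, including if the process dies. */
        return h;
    }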
If share-mode locks are not supported on the CIFS server, you might consider flock() type locks. (Named after a traditional Unix technique).
If you are processing xyz.xml, create a file called xyz.xml.lock (with the CREATE_NEW disposition so you don't clobber an existing one). Once you are done, delete it. If you fail to create the file because it already exists, that means another process is working on the input. It can be useful to write debugging information to the lock file, such as the server name and PID. You will also need some way of cleaning up abandoned lock files, since that won't happen automatically.
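A minimal Win32 sketch of that lock-file convention, assuming the same Windows environment as above; the .lock suffix and the debug payload are illustrative choices:

    /* Sketch: atomically claim a file by creating its .lock sibling with
     * CREATE_NEW, which fails if the lock file already exists. */
    #include <windows.h>
    #include <stdio.h>

    BOOL try_claim(const wchar_t *lockpath)
    {
        HANDLE h = CreateFileW(lockpath,
                               GENERIC_WRITE,
                               0,
                               NULL,
                               CREATE_NEW,             /* fails if it exists */
                               FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;                              /* someone else owns it */

        char  info[128];
        DWORD written, len;
        len = (DWORD)snprintf(info, sizeof info, "pid=%lu\n",
                              (unsigned long)GetCurrentProcessId());
        WriteFile(h, info, len, &written, NULL);       /* debugging aid */
        CloseHandle(h);
        return TRUE;                                   /* delete it when done */
    }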
Database locks might be appropriate if the CIFS share is, for example, a replicated system, so that the flock()-style lock would not be acquired atomically across the system. Otherwise I would stick with the file system, since then there is only one thing that can go wrong.
This means, for example, a module can start compressing the response from a backend server and stream it to the client before the module has received the entire response from the backend.
Nice!
I know it's some kind of asynchronous IO, but an explanation that simple isn't enough for me.
Does anyone know how this works?
Without looking at the source code of an actual implementation, I'm speculating here:
It's most likely some kind of stream (abstract buffered IO) that is passed from one module to the other ("chaining"). One module (maybe a servlet container) writes to a stream that is read by another module (the compression module in your example), which then writes its output to another stream. The contents of that stream may then be processed further or transmitted to the client.
The backend may need to wait on IO before it can fully produce the page. Modules can begin compressing the start of the message before the backend has entirely finished writing it.
To understand why this is useful, you need to understand how nginx is structured. nginx is a server that relies on non-blocking input and output. Normally, a server uses blocking input and output: it listens on a connection, and when a connection is found, it processes the page. To increase throughput, multiple threads are spawned, called 'workers'.
Contrast this with nginx: it continually asks the kernel, "Are any of my IO requests ready?" This allows it to handle the same number of pages with 1) less overhead from all the different processes, and 2) lower memory usage. It has some downsides, however. For extremely low-volume applications, nginx may use more CPU than a blocking server. Second, it's much less portable: Windows uses an entirely different model for non-blocking IO.
Getting back to your original question: compressing the beginning of a page is useful because that compressed output can already be sent to the client while the backend is still accessing a database or reading from a disk or what-have-you.
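To illustrate the streaming idea, here is a minimal C sketch using zlib (send_to_client() is a hypothetical stand-in for the module's output side). Each chunk from the backend is compressed and flushed immediately, so the first compressed bytes can go out before the backend has produced the last ones:

    /* Sketch: chunk-at-a-time compression with zlib. The z_stream must
     * have been set up with deflateInit(zs, Z_DEFAULT_COMPRESSION). */
    #include <zlib.h>
    #include <string.h>

    void send_to_client(const unsigned char *buf, size_t len); /* assumed */

    void compress_chunk(z_stream *zs, const unsigned char *chunk,
                        size_t len, int last)
    {
        unsigned char out[4096];
        zs->next_in  = (unsigned char *)chunk;
        zs->avail_in = (uInt)len;
        do {
            zs->next_out  = out;
            zs->avail_out = sizeof out;
            /* Z_SYNC_FLUSH pushes everything compressed so far to the
             * output so it can be streamed; Z_FINISH ends the stream. */
            deflate(zs, last ? Z_FINISH : Z_SYNC_FLUSH);
            send_to_client(out, sizeof out - zs->avail_out);
        } while (zs->avail_out == 0);
    }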
I wonder what the most efficient file logging strategy would be for a server written in C?
I can see the following options:
fopen() in append mode, then fwrite() the data for a time frame of, say, one hour, then fclose()?
Caching the data and then occasionally open() for append, write() and close()?
Using a separate thread is usually a good solution; we adopted it with interesting results.
The main thread that needs to log prepares the log string and passes it to a second thread. To feed the second thread, we use a lockless queue plus a circular memory buffer, in order to minimize the amount of alloc/free and wait time.
The second thread waits on the lockless queue. When it finds there is some work to do, it consumes a slot of the lockless queue and logs the data.
Using a separate thread you can save a great amount of time.
After we decided to use a second thread, we had to face another problem: many instances of the same program (a full-text search engine) must all log to the same file, so the resource has to be safely shared among every instance of the server.
We could have used a semaphore or another synchronization method, but we found another solution: the second thread sends a UDP packet to a local log server that listens on a known port. This server reads each message and logs it to the file (the server is effectively the only one that owns the file while it is being written). The UDP socket itself serializes the logs.
I've been using this solution for more than 10 years and have never lost a single line of my log files; using the second thread also saved a great percentage of time for every operation (we tend to log a lot of information for every single command the server receives).
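Here is a compressed C sketch of that scheme, assuming C11 atomics and POSIX sockets; the port number, slot sizes, and polling sleep are illustrative, and overflow and shutdown handling are left out:

    /* Sketch: the main thread pushes lines into a single-producer/
     * single-consumer ring buffer; a dedicated thread drains it and
     * forwards each line as a UDP packet to a local log server. */
    #include <stdatomic.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    #define SLOTS 1024
    #define LINE  512

    static char        ring[SLOTS][LINE];   /* circular memory */
    static atomic_uint head, tail;          /* producer / consumer index */

    /* Called by the main thread: copy the line into the next free slot. */
    int log_push(const char *line)
    {
        unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
        if (h - atomic_load_explicit(&tail, memory_order_acquire) == SLOTS)
            return -1;                       /* queue full: drop or retry */
        strncpy(ring[h % SLOTS], line, LINE - 1);
        ring[h % SLOTS][LINE - 1] = '\0';
        atomic_store_explicit(&head, h + 1, memory_order_release);
        return 0;
    }

    /* Body of the second thread: drain the queue into a UDP socket. */
    void *log_thread(void *arg)
    {
        (void)arg;
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5140);        /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

        for (;;) {
            unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
            if (t == atomic_load_explicit(&head, memory_order_acquire)) {
                usleep(1000);                /* nothing to log yet */
                continue;
            }
            const char *line = ring[t % SLOTS];
            sendto(s, line, strlen(line), 0,
                   (struct sockaddr *)&dst, sizeof dst);
            atomic_store_explicit(&tail, t + 1, memory_order_release);
        }
        return NULL;
    }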
HTH
Why don't you directly log your data when the events occur?
If your server crashes, you want to retrieve the data from the time it crashed. If you only flush your buffered logs once an hour, you'll miss interesting logs.
File streams are usually buffered by the OS.
If you believe it makes your server slow, due to hard-drive writes, you might consider logging in a separate thread. But I wonder whether that is really the problem. Premature optimization?
Unless you've benchmarked and found that it's a bottleneck, use fopen and fprintf. There's no reason to put your own complex buffering layer on top unless stdio is too slow for you (and if it is too slow, you might rethink the OS/C library your server is running on).
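A minimal sketch of that straightforward approach (the log path and the line-buffering choice are illustrative); stdio does the buffering for you:

    /* Sketch: plain stdio logging; the OS and the C library buffer writes. */
    #include <stdio.h>
    #include <time.h>

    static FILE *logf;

    void log_open(void)
    {
        logf = fopen("/var/log/myserver.log", "a");   /* append mode */
        if (logf)
            setvbuf(logf, NULL, _IOLBF, 0);           /* flush on each newline */
    }

    void log_line(const char *msg)
    {
        if (!logf) return;
        time_t now = time(NULL);
        char stamp[32];
        strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));
        fprintf(logf, "%s %s\n", stamp, msg);
    }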
The slowest part of writing a system log is the output operation to the physical disks.
Buffering and checksumming the log records are necessary to ensure that you don't lose any log data and that the log data can't be tampered with after the fact, respectively.