I have a SCADA system (Siemens WinCC Classic 7.5) that exchanges data with a storage facility via text files.
Sometimes (very rarely, maybe 1 in 2000 times) it breaks the connection to the network drives where the files are exchanged.
The only way to fully recover is to restart the server (Windows 2019).
I suspect that what happens is that a file is reopened by the SCADA program before it has actually been closed again, because the file is processed cyclically every second.
Closing the file is implemented (with error handling as well) and works during normal operation.
However, if the file is opened and not closed by the same function, I lack a way to "forcefully" close it.
Does anyone know of a golden solution for finding and closing/terminating open files without restarting the entire server?
I use fopen() to open the file, and it's normally closed with fclose().
This works fine normally.
But if the file is opened with fopen() and not closed in the same function, the file remains open and cannot be renamed/deleted without restarting.
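Roughly, the cyclic job looks like this (a simplified sketch in plain C; the UNC path and the per-line handling are placeholders, the real script of course runs inside WinCC and writes to tags):

#include <stdio.h>

void ProcessExchangeFile(void)
{
    FILE *fp;
    char line[256];

    fp = fopen("\\\\storage\\exchange\\data.txt", "r");
    if (fp == NULL)
    {
        /* error handling: log it and try again on the next cycle */
        return;
    }

    while (fgets(line, sizeof(line), fp) != NULL)
    {
        /* process one line, write values to tags, etc. */
    }

    if (fclose(fp) != 0)
    {
        /* error handling: if this fails, the handle stays open and the
           file can no longer be renamed or deleted */
    }
}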
I hope the above makes sense; it's a pretty complex system, so it's difficult to summarize it this briefly.
I've searched far and wide and not been able to find a suitable solution.
This is made even more difficult by being limited to the Siemens-enabled C functions.
I am using Visual Studio Code Server to work on my server. Since I have a low connection speed (approximately 50 Mbps, about 4-5 MB/s), I am struggling to open large files. In VS Code Server, when you click to open a file, it reads the whole file and tries to send those bytes to the client side. The problem is that, since the file reading speed depends on my network speed, it takes minutes to read a large file. Even if I click the close button of that file, VS Code keeps downloading the entire file, which makes my VS Code unusable.
[Screenshots: idle situation / while trying to open a file / after the file is closed]
VS Code should stop downloading the file when the user closes it.
I want to open a file using Erlang, but I want this file to stay locked as long as it is open. Every time I use the code below to open the file, I am able to delete it with no problem, even though it is still open in the process.
file:open("myfile.txt", [append]),
Please note that I can't use the exclusive option, because it will fail if the file already exists, and I want to keep opening this same file and appending to it. But while it is open it should be marked as locked, preventing it from being edited or deleted while in use. I did this with no problem before in Java, Python and VB, but I tried with Erlang and failed.
I'm using readTextFile(/path/to/dir) to read batches of files, do some manipulation on the lines, and save them to Cassandra.
Everything seemed to work just fine, until I reached more than 170 files in the directory (files are deleted after a successful run).
Now I'm receiving "IOException: Too many open files", and a quick look at lsof shows thousands of file descriptors open once I run my code.
Almost all of the file descriptors are "Sockets".
Testing on a smaller scale with only 10 files resulted in more than 4000 file descriptors opening; once the job has finished, all the file descriptors are closed and things return to normal.
Is this normal behavior for Flink? Should I increase the ulimit?
Some notes:
The environment is Tomcat 7 with Java 8 and Flink 1.1.2, using the DataSet API.
The Flink job is scheduled with Quartz.
All those 170+ files sum to about 5 MB.
Problem solved.
After narrowing down the code, I found out that calling "Unirest.setTimeouts" inside a highly parallel "map()" step caused too many thread allocations, which in turn consumed all my file descriptors.
I'm currently working on a rather large web project which is written using C servlets (utilizing the GWAN web server). In the past I've used a couple of IDEs for my LAMP/PHP jobs, like Eclipse.
My problems with Eclipse are that you can either mirror the project locally, which isn't possible in this case as I'm working on a Mac (server does not run on OSX), or use the "remote" view, which would re-upload files when you save them.
In the latter case, the file is only partly written while it is being uploaded, which makes this a no-go for a running web server, and the file could become corrupted if the connection is lost during the upload. Also, uploading the whole file just to change a few characters seems rather inefficient to me.
So I was thinking:
Wouldn't it be possible to have the IDE open Vim via SSH and mirror my changes there, and then just :w (save)? Or use some kind of diff files for the changes?
The first one would be preferred, as it has the added advantage of Vim's .swp files, which let others know when someone is already editing the file.
My current solution is using ssh+vim, but then I lose all the cool features I have with Eclipse and other more advanced IDEs.
Also, regarding X-Forwarding: The reason I don't like it is speed. It feels way slower than just editing locally, and takes up unneeded bandwidth, when all I want to do is basically "text editing".
P.S.: I couldn't find any more appropriate tags for the question, especially no "remote" tag, but if you know any, feel free to add them. Also, if there is another similar question, feel free to point it out - I couldn't find any.
Thank you very much.
If you're concerned about having to transmit the entire file for minor changes, the only solution that comes to my mind is running an rsync job (either continuously or on demand) that mirrors the remote site to your local system (and back). The rsync protocol only transmits the delta information. According to "Are rsync operations atomic at file level?", the change is atomic.
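As far as I know, rsync gets that atomicity from the classic write-then-rename trick: the new content goes into a temporary file next to the target, which is then renamed over it, so a reader only ever sees the old version or the complete new one. A rough illustration of the same pattern in C (file names are just examples; this assumes a POSIX filesystem, where rename() atomically replaces an existing target):

#include <stdio.h>

int replace_file_atomically(const char *target, const char *content)
{
    const char *tmp = "index.html.tmp";   /* temp file in the same directory */
    FILE *fp = fopen(tmp, "w");

    if (fp == NULL)
        return -1;

    if (fputs(content, fp) == EOF)
    {
        fclose(fp);
        remove(tmp);                      /* clean up the partial file */
        return -1;
    }
    if (fclose(fp) != 0)
    {
        remove(tmp);
        return -1;
    }

    return rename(tmp, target);           /* readers never see a half-written file */
}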
Another possibility: run everything in a virtual machine on your Mac. The server and the IDE/text editor are both on the same virtual machine so you don't have to fear network issues.
Because the source code on the virtual machine is under some kind of VCS, the classic code → test → commit cycle is trivial (at least theoretically).
I was recently downvoted (which only bugged me a little :) ) for an answer I gave to this question. The person offered no explanation for the downvote, which got me thinking: "Why would you avoid producing intermediate files?" Especially in a language like Python, where file IO is laughably easy.
There seemed to be consensus that it was a bad idea, but I know for a fact that intermediate files are used regularly in practice. I worked for a very well respected research firm (let's just say S.O. wouldn't exist without this firm) where it was assumed that your programs would produce files as output. We did this because if your program indeed deserved to be a standalone program then it would need debuggable output and some way of passing its output between processes that could later be examined in case we discovered an error in our output further downstream.
Is it considered bad practice (in cases like the question linked above) to use intermediate files? Why?
One problem with intermediate files shows up when multi-threading.
If Clients C1 and C2 are handled simultaneously by server process S (which may or may not have forked into separate processes, used threads, or whatever concurrency system), you may get weird issues when both try to create the same intermediate file.
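A common way around that is to let the operating system pick a unique name per request instead of hard-coding one, e.g. with mkstemp() on POSIX systems (sketch in C; the template path is just an example):

#include <stdio.h>
#include <stdlib.h>

int open_unique_intermediate(char *path_out, size_t path_len)
{
    /* mkstemp() replaces the XXXXXX part with a unique suffix and opens the
       file atomically, so two concurrent requests can never collide on the
       same path. */
    char template_path[] = "/tmp/intermediate.XXXXXX";
    int fd = mkstemp(template_path);

    if (fd == -1)
        return -1;

    snprintf(path_out, path_len, "%s", template_path);  /* report the chosen name */
    return fd;  /* caller writes to fd, closes it, and unlinks the file when done */
}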
I believe one of the Unix philosophies is that all programs should act as filters; however, this doesn't necessarily mean creating files on disk, and in my opinion using intermediate files leads to unwieldy behaviour. One should also treat the disk as a last resort and only use it for storing/retrieving data that should be available after powering off the computer, and maybe even take care to allow programs to run on read-only media.
Well, there are some issues when you use files; in particular, there can be many unexpected failures while accessing or creating them. The following are all issues that I have personally experienced.
1) The file location is on a remote machine and the network is down (NFS mount).
2) There is not enough free space when creating the file.
3) The user presses Ctrl-C to cancel the process partway through, and the file is not deleted.
4) The file is on an NFS mount and the network is slow.
5) The folder in which the file was created was a soft link, and the original target was deleted.
But we still have to use files, because there are hardly any other options when working in bash. In C/C++, though, I think disk access should be considered a last resort. A program producing files as output is OK if that is the only way to communicate with the user, but at least for intermediate storage the use of disk files should be minimized.
If you create temporary files properly (setting the platform-specific 'temporary' flag, which means the cache is not flushed to disk unless there is an urgent need), they are perfectly fine if the task requires them.
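On Windows, for example, that looks roughly like this (a minimal Win32 sketch; the path is just a placeholder):

#include <windows.h>

HANDLE CreateScratchFile(void)
{
    /* FILE_ATTRIBUTE_TEMPORARY hints the cache manager to keep the data in
       memory and avoid flushing it to disk unless it has to;
       FILE_FLAG_DELETE_ON_CLOSE removes the file automatically when the
       last handle is closed. */
    return CreateFileA("C:\\Temp\\scratch.tmp",
                       GENERIC_READ | GENERIC_WRITE,
                       0,                    /* no sharing */
                       NULL,
                       CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       NULL);
}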
There are almost no things in IT that you can't use as long as you have a good reason to. :-)