A partner who cannot support a real-time web service interface must SFTP CSV files to my Linux environment.
The file is zipped and encrypted. The SFTP server is a different virtual server from the one that will process the CSV data into my application's database.
I don't need help with the technical steps (bash script, etc) but I'm looking for file management conventions that assist with the following requirements:
Good auditability
Non-destructive
Recoverable
Basically I'm trying to figure out when it makes sense to make copies of the file, when to rename a file to indicate that some processing step has been completed, etc. (e.g. do I keep the zip files or delete them once unzipped?)
There is going to be personal preference in the responses, but that's what I'm looking for: to learn from someone who has more experience working with this type of interface. That seems better than inventing something myself.
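To make the question concrete, here is roughly the lifecycle I have in mind - just a sketch with made-up paths and stage names, since what I'm really asking about is whether these conventions are sensible rather than how to script them:

import shutil
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical layout on the processing server:
#   incoming/   files exactly as delivered by the partner (encrypted zip, never modified)
#   archive/    a dated copy of every original, kept for audit and recovery
#   work/       unzipped CSVs that the loader actually reads
#   processed/  CSVs that loaded successfully, renamed to record the step
#   failed/     CSVs that errored, kept around for reprocessing
base = Path("/data/partner_feed")   # placeholder root
for sub in ("incoming", "archive", "work", "processed", "failed"):
    (base / sub).mkdir(parents=True, exist_ok=True)

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
for zip_file in (base / "incoming").glob("*.zip"):
    # Copy - never move - the original into the archive before anything touches it.
    shutil.copy2(zip_file, base / "archive" / f"{stamp}_{zip_file.name}")
    # ...then decrypt/unzip into work/, load into the database, and rename to mark the step,
    # e.g. work/foo.csv -> processed/foo.csv.loaded_20240101T120000Z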
If the files are encrypted both on the network and in the file settings themselves, they cannot be transmitted successfully unless the file is wrapped inside another file. You could try to make the SFTP server forward the file on to a separate machine, but this would only cause more issues because of the encryption applied to the files.
I have a .NET project as well as a setup project. I also have it so that during installation it asks the users to enter a file location to store their database. The plan is to have an empty .mdf file, with all the tables set up, copied into that folder, and I store the folder path in a config file.
This is mainly because I am planning on having multiple separate applications that all need the ability to access the same database. I have it storing the folder path in my config file. The only things I'm having trouble with are:
storing the template files (I don't know if I should do this in the setup project or the main project)
how to copy said template files into a new folder
So far I have been unable to find a solution, so any help is appreciated.
Well, here is what I do in a few of my projects - something that has proven reliable enough for me over the years (which you may or may not want to do as well):
I have the program itself create the database files in an initialization routine. First, however, it creates the subfolders in which the database files will be stored, if they don't already exist.
To do this, the program just checks if the folder exists and if the database file exists and if they do not, it creates them on the spot:
' Create the database folder if it does not already exist.
If Not Directory.Exists(gSQLDatabasePathName) Then
    Directory.CreateDirectory(gSQLDatabasePathName)
End If

' Create the database file itself if it is missing.
If Not File.Exists(gSQLiteFullDatabaseName) Then
    ...
I also have the program do some other stuff in the initialization routine, like creating an encryption key to be used when storing / retrieving the data - but that may be more than you need (also, for full disclosure, this has some rare issues that I haven't been able to pin down).
Here too are some additional considerations:
I appreciate you have said that you want to give the user the choice of where to store their database files. However, I would suggest storing them in the standard locations
Where is the correct place to store my application specific data?
and only allowing the users to move them if they really need to (for example if the database needs to be shared over the network), as it will make the support of your app harder if every user has their data stored in a different place.
I have found that letting the user see in their options/settings window where their database is stored is a good idea.
It also helps to encourage them to back those files/directories up.
Also consider creating automatic backups of several generations for the user.
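For what it's worth, "several generations" can be as simple as a dated copy plus a retention count. A rough sketch (in Python with placeholder paths, rather than VB, just to show the idea):

import shutil
from datetime import datetime
from pathlib import Path

def backup_database(db_file, backup_dir, keep=5):
    # Make a dated copy of the database file and keep only the newest `keep` copies.
    db_file, backup_dir = Path(db_file), Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    shutil.copy2(db_file, backup_dir / f"{db_file.name}.{stamp}.bak")
    for old in sorted(backup_dir.glob(f"{db_file.name}.*.bak"))[:-keep]:
        old.unlink()   # drop the oldest generations beyond the retention count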
Hope this helps.
Our current database system is a Clipper DOS application. The database inside its folder is fragmented/divided into many parts. I want to decrypt the database so that I will have only one database in all and avoid reshuffling of data. I'll attach a screenshot of the file folder. The database is in .DBF format.
[Screenshot of files]
Often you can decompile the Clipper EXE file to source code and work from the .prg files; I've done it many times. The program to use is called Valkyrie.
In Clipper and FoxPro for DOS, a .dbf file is a simple table file. If you want to use them as a database with many tables in one unit, you can import these tables into an MS SQL Server database and/or an MS Access database.
I see that you got several answers. Most are partially right. Let's address these one at a time:
All those files essentially comprise the "database" for the application you're using. They could be used by other applications as well. Besides having a lot of files, what is the problem you're trying to solve?
People mentioned indexes. You can generally ignore these. They are there primarily to make access to the data files faster. Any properly written Clipper application will recreate these if they're missing or corrupted. You could test this by renaming one, running the app, and seeing what happens. If it doesn't recreate it you can rename it back. Not replacing missing index files would be unusual behavior.
The DBF file format is binary, but barely. Most of what's in a DBF is text and is readable with an editor. But there's no reason to do so - I'm sure there are several free DBF utilities out there to read DBF files. Getting the structure of the files could be very helpful.
Getting the data out of the files would also be fairly simple with a utility. If you look up the DBF format you could even write one fairly easily in Clipper, any other language that uses DBF files, or in something like Python. Any language that can open and write files, really. It's not hard - any competent developer could do this in a matter of hours. Much less if you're using Clipper or another language that natively reads DBF files.
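As a rough illustration (assuming the standard dBASE III / Clipper header layout; other xBase variants can differ), a structure reader in Python is only a few lines:

import struct

def read_dbf_structure(path):
    # Read the table structure (not the data) from a dBASE III / Clipper .DBF header.
    with open(path, "rb") as f:
        header = f.read(32)
        # Bytes 4-7: record count; 8-9: header length; 10-11: record length (little-endian).
        record_count, header_len, record_len = struct.unpack("<IHH", header[4:12])
        fields = []
        for _ in range((header_len - 33) // 32):   # 32-byte field descriptors follow the header
            d = f.read(32)
            name = d[:11].split(b"\x00")[0].decode("ascii", "replace")
            field_type = chr(d[11])                # C=character, N=numeric, D=date, L=logical, M=memo
            fields.append((name, field_type, d[16], d[17]))   # length and decimal count
    return record_count, record_len, fields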
Most people create dBase/Clipper programs with relational data, much like SQL Server. Where SQL Server has tables that relate to each other, dBase/Clipper has a file for each "table." This isn't a requirement, but it was almost certainly done this way.
Given that, if you get the table structures through a utility or by reading the headers in an editor (don't save them from an editor!) you could quite likely recreate the database schema (i.e. the map of the data). Once you have that it's fairly trivial to get the data into another type of database (SQL Server, Access, or whatever you like to use). If none of the files are too large it's conceivable to put all the files into Excel sheets. It really depends on what you want to do with it.
As others have said, you may be able to get the code back with Valkyrie. Some people have used it very successfully. I don't know where you get it and I've never used it. Why do you not have the code? If this is a commercial application you likely should not have it. If it's a custom app, whoever wrote it or paid to have it written should have the code.
Again, it's not clear to me what problem you're trying to solve. But there are many options for doing something with those DBF files. Fortunately they are one of the easier to read data formats you could be working with.
Let me know if you have any questions. Apologies for the typos that are no doubt scattered throughout this reply.
You can sort of get an idea of how they relate to each other by opening the index files they use (.NTX files). If you have the DBU utility (executable) around, you can open the DBF and load the index (NTX). LibreOffice Calc is also able to open DBFs (I haven't tested .NTX).
If you open the .NTX in a text editor you will see the index key expressions at the beginning.
I open with Access, but I can save the data using a PrintFill Program.
Sorry if I didn't express myself precisely in the title; I'll try to explain what I meant to say here.
My application uses a lot of small files like DB files, XML files, fonts, etc. There is a folder and file presence check when the application starts, but I would like to make sure that the user cannot accidentally change or delete an important file from disk.
The only thing that comes to mind is archiving the files into a few archives by usage frequency, changing the archive extension to something unfamiliar, and hiding those archives.
But compressing and decompressing those files all the time through the application doesn't seem like an efficient solution.
Is there some standard procedure for protecting these important files from tampering?
The only thing that comes to mind is archiving the files into a few archives by usage frequency, changing the archive extension to something unfamiliar, and hiding those archives
That is security through obscurity, which is not a recommended practice.
Instead, use the file security mechanisms built-in to your operating system. Allow appropriate file access only to a specific group/role or user, and ensure your application runs in that group/role or as that user.
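A minimal sketch, assuming a POSIX system and placeholder file names (on Windows you would set NTFS ACLs instead, e.g. via icacls or the Windows security APIs):

import os
import stat

# Restrict the data files to the account the application runs as:
# owner read/write only, no access for group or others (mode 0600).
for name in ("app.db", "settings.xml"):   # placeholder file names
    os.chmod(name, stat.S_IRUSR | stat.S_IWUSR)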
I'm looking for a solution to transfer files from one computer to another without any human interaction. I have a build server that needs to get the nightly build to another testing computer that evaluates the software and saves the results in a text file. When that is done, I need to get the text file back to the build server for the emailing and posting of the results. I've looked around a bit and found that it's not really possible to do this with forms for security reasons but this really isn't my area of expertise. I can't use a network share location because the network drives are not always in sync.
Some other ideas I had were running an FTP upload from the command line, having some kind of listening socket on both machines, and putting the file in a "to download" folder visible on a web server and notifying the other machine of the filename.
Are any of these better than the others or even possible?
Using FTP & PHP for your situation is very possible. A while back I created a page that had to scan through a network and regularly download some images using FTP (I used VSFTPD).
So perhaps all you need to do is set up your FTP server, then create the PHP script to download your file, and run the script regularly using a cron job scheduled for when your file is created plus 5 or 10 minutes.
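For example, the download side can be very small. A rough sketch (in Python rather than PHP, with a placeholder host, credentials and paths):

from ftplib import FTP

# Schedule this with cron a few minutes after the file is expected to exist.
ftp = FTP("build-server.example.com")            # placeholder host
ftp.login("builduser", "secret")                 # use real credentials or netrc in practice
with open("results.txt", "wb") as local_file:
    ftp.retrbinary("RETR /outgoing/results.txt", local_file.write)
ftp.quit()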
Before I begin, I would like to express my appreciation for all of the insight I've gained on stackoverflow and everyone who contributes. I have a general question about managing large numbers of files. I'm trying to determine my options, if any. Here it goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long term storage) and another that contains the copies by category (for processing and calculations). Of course this doubles my data each time. Now I'm having to create more than one set of categories; 3 copies to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel. Then all organization and filtering was performed in Excel. This was time consuming and not easily maintainable over the long term. Later the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
Overlapping Folders
Is there a way to create a folder that only holds the addresses of files, rather than copying the files? This way I could have two folders reference the same file.
To my understanding, a folder is a file listing the memory addresses of the files inside of it, but on Windows a file can only be contained in one folder.
Microsoft SQL Server
Not sure what could be done here.
Symbolic Links
I'm not an administrator, so I cannot execute the mklink command.
Also, I'm uncertain about any performance issues with this.
A Junction
Apparently not allowed for individual files, only folders, in Windows.
Search folders (*.search-ms)
Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
Hashing the files
Creating hashes for all the files would allow each file to be stored only once. But then I'm not sure how I would manage the hashes (a rough sketch of what I mean follows this list).
XML
Maybe I could use XML files to attach metadata to the files and somehow search using them.
Database File System
I recently came across this concept in my search. Not sure how it would apply to Windows.
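For the hashing option above, the rough shape I have in mind (a sketch only, with made-up paths) would be to store each unique file once under its digest and keep a small index from category to file hashes:

import hashlib
import json
from pathlib import Path

store = Path(r"D:\data\store")     # one physical copy per unique file, named by its hash
store.mkdir(parents=True, exist_ok=True)
index = {}                         # category -> list of (original name, hash)

def add_to_category(src, category):
    data = Path(src).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    target = store / digest
    if not target.exists():        # each unique file is stored only once
        target.write_bytes(data)
    index.setdefault(category, []).append((Path(src).name, digest))

# The index itself could live in a small JSON file next to the store:
# Path(r"D:\data\index.json").write_text(json.dumps(index, indent=2))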
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard-links, which are part of the NTFS file system. However, due to the large number of files, this is unmanageable using the following command from an elevated command prompt:
mklink /h <source\file> <target\file>
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer, along with a very insightful explanation of how junctions, symbolic links, and hard links work. The only reason this is currently a partial solution is a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you Hermann.
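In case it is useful to anyone else, a small loop makes this manageable for large numbers of files. A rough sketch with made-up paths (os.link creates NTFS hard links, the scriptable equivalent of mklink /H):

import os
from pathlib import Path

# 'by_date' holds the single real copy of each file; the category folder
# gets hard links to those files instead of duplicate copies.
source_dir = Path(r"D:\data\by_date")                    # placeholder paths
category_dir = Path(r"D:\data\by_category\category_a")
category_dir.mkdir(parents=True, exist_ok=True)

for src in source_dir.glob("*.csv"):
    link = category_dir / src.name
    if not link.exists():
        os.link(src, link)   # both names now point at the same file on disk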