I have an old Progress database and I want to read the old Progress .db file.
How can I do it?
The Progress version is from 1994, 16-bit.
It was running on Novell NetWare, but that system is now gone.
Please help.
You need the Progress executables that go with whatever release of Progress you have, running on a compatible platform (DOS or Novell would work). I believe that the last Novell release was Progress version 6. (The current release of Progress is OpenEdge 11.)
If you still have the executables on a PC associated with that system, then you should be able to access the db. However, if the database is associated with an application that was distributed without source code, you probably only have a runtime license, which probably means that you cannot easily extract the data.
If you have access to more recent copies of Progress, you could also try converting and updating the db to something more up to date. But that will also be non-trivial, as there have been significant architectural changes in the last 25 years.
The files are binary and the format is proprietary. You cannot just read them in an editor. That software also pre-dates things like ODBC by a decade or so -- so you won't find any relief that way either.
Your best bet is probably to find a consultant who has a fully capable copy of v6 who is willing to do the work.
I'm having an intermittent problem that I'm trying to track down. Every now and then a significant portion of my src directory is being erased (like 90%+ of all files). I'll be working on my project and all of a sudden I'll get an error, look at git status and it will show nearly all of the files in my repo have been deleted. Then I have to run a bunch of git checkout -- commands and I'm lucky if I don't lose a bunch of work.
Can I use inotify or another program to watch my src directory and report which program is deleting the files? I have a feeling it's gulp but I have no evidence beyond the anecdotal, and I don't want to bother a specific project until I've nailed down the source of the problem.
OS X, by the way.
The first thing that comes to mind is to use lsof to monitor your directory and capture its output to a file (or keep a terminal up).
I tested lsof +D ~/Downloads/ -r 2 on my OS X machine, and it seems to work fine.
https://unix.stackexchange.com/questions/157064/monitoring-files-continuously-with-lsof
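If you'd rather not keep a terminal open, a small wrapper can capture that lsof output to a log for later inspection. A hedged sketch in Python: the watched directory, log path, and two-second interval are assumptions, and keep in mind lsof only reports processes that currently hold files open, so a very fast delete can slip past it.

# Hedged sketch: periodically capture the same lsof output as above to a log file.
import subprocess, datetime, time

WATCH_DIR = "/Users/me/project/src"   # hypothetical path -- point it at your src directory
LOG_FILE = "/tmp/src-watch.log"

while True:
    # lsof +D lists every process that currently holds a file open under WATCH_DIR
    result = subprocess.run(["lsof", "+D", WATCH_DIR], capture_output=True, text=True)
    with open(LOG_FILE, "a") as log:
        log.write(f"--- {datetime.datetime.now().isoformat()} ---\n")
        log.write(result.stdout)
    time.sleep(2)   # roughly the same cadence as `lsof +D <dir> -r 2`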
Auditing. This is one thing that auditing is designed for.
Don't roll your own. Don't use tools designed for other purposes. Use the auditing facilities your operating system provides.
Basic tutorial for OS X is here:
OpenBSM auditing on Mac OS X
Way back in 10.3.x, Apple submitted Mac OS X and Mac OS X Server to
the National Information Assurance Partnership for Common Criteria
certification. Common Criteria certification means that the
covered hardware and software has been tested and evaluated to make
sure that it meets an established set of requirements for security and
data protection. 10.3.6 and 10.3.6 Server were tested and were found
to meet Evaluation Assurance Level 3 (EAL3) for Common Criteria
certification.
As part of that certification effort, a new piece of software appeared
from Apple: the Common Criteria Tools audit software. This software
was OpenBSM, which is an open source implementation of Sun’s Basic
Security Module (BSM) security audit API and file format. ...
Yes, it's a pain to do properly. But it will work, and the results will be definitive.
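Once file-delete auditing is enabled, you can also watch the audit trail live rather than post-processing it. A hedged Python sketch: it assumes the "fd" (file delete) class has been turned on in /etc/security/audit_control, the watched path is a placeholder, and it must run as root; the exact record text can vary by OS release, so treat the substring match as a starting point, not a parser.

# Hedged sketch: stream live audit records and flag anything touching the src directory.
import subprocess

WATCH_DIR = "/Users/me/project/src"   # hypothetical path

# praudit -l prints each audit record on a single line as it arrives on the audit pipe
proc = subprocess.Popen(["praudit", "-l", "/dev/auditpipe"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    # deletions show up as unlink/rename records; the record includes the process name and pid
    if WATCH_DIR in line:
        print(line, end="")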
I have a legacy system which used to run on DOS. It is an ERP system for retail (fashion) stores. I think it stores its data in flat files.
I have files ending with *.KEY and other files ending with *.D00 (counting up).
I think the .KEY files hold the key information and the D files hold the actual data... there are a lot of them, going up to .D77...
As far as my investigation goes, this is not dBase (.dbf) or FoxPro; it could be proprietary...
The company that wrote it is out of business, of course, so there is no chance of support or any hints.
When I open these files in vim or other editors I see some binary characters and some text... I tried hex mode too, but still nothing usable...
Is there any chance I can dump out the data... as CSV, ASCII, or XML?
I am pretty sure that this is not a standard format. Can someone point me in a direction as to how data like this was stored back in the day, and how I could make it readable?
Any tools, tips or tricks?
// EDIT
After some time I have made some progress and can now post some details which I did not know back then, and which made a good answer impossible.
I assume that the DOS system was written in Visual COBOL and that the files could be B-tree files stored in an ISAM format. The closest thing I can provide is that there is a possibility the format is C-ISAM.
How can I access, view, or modify these files? C#, Java, Ruby... any modern language would be cool... I am not sure I can handle COBOL... It would be great to have a converter or a viewer tool, preferably open source...
Hope this clarifies my question =)
OpenCOBOL has a very active user group. The compiler itself is free and runs on Linux and Windows, and perhaps Mac OS X. Have a chat with the user group there; they may be able to help.
Peachtree Accounting Software used those file extensions back in 1992.
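Whichever product wrote them, a quick scan for runs of printable text will tell you whether readable field data survives in the .D files before you commit to a COBOL or C-ISAM route. A hedged, exploratory sketch in Python (the default file name is a placeholder):

# Hedged exploration aid: pull runs of printable ASCII out of one .D file, similar to the
# Unix `strings` tool. If customer names, article numbers, etc. show up, the records are
# likely fixed-length fields that can be carved out with struct.unpack once you work out
# the record length.
import re, sys

path = sys.argv[1] if len(sys.argv) > 1 else "DATA.D77"   # hypothetical file name
with open(path, "rb") as f:
    data = f.read()

# print the offset and content of every run of 6+ printable ASCII characters
for m in re.finditer(rb"[ -~]{6,}", data):
    print(m.start(), m.group().decode("ascii"))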
I have all my COBOL source code located on a z/OS mainframe. What is a way to migrate all this code to ClearCase?
Rational Developer for System z (RDz) is the tool you should be using for this. It's basically Eclipse with a large number of IBM proprietary plug-ins which give you access to your mainframe data sets, including those under the control of SCLM (the default z/OS source code management system).
You can use RDz to connect to the mainframe and check out your code directly into an Eclipse project. That code can then be added to any other source code management system that has an Eclipse plug-in.
There's more to it of course, such as the ability to kick off mainframe builds from the Eclipse environment, something that will be important since, no matter how hard you look, you probably won't find many distributed platform compilers that can compile mainframe source.
If you just need a one-time move, use a file packing tool -- like PKZip/MVS or unXMIT -- to bundle the source up. You can then transmit it using IND$FILE, ISPF File Transfer, or FTP to your ClearCase server and check it in.
If you need ongoing updates of your mainframe resources on a server based source control system, you might be better off setting up some shared DASD using samba, NFS or the like between your mainframe and your server.
Unless you plan on doing your development on PCs, I don't think Rational Developer for System z is going to be a good fit. It will do what you need, but the mainframe setup is kind of headache-y and the cost of the product is excessive if all you need is to move resources to/from your ClearCase server.
IIRC, RDz costs about $6k per seat. You might spend a few days writing some procs to FTP to/from your ClearCase server and check in/check out, and save yourself some hefty expense. Actually, IBM ought to have those tools built already. ClearCase supports remote machines doing check-in/check-out; maybe all you need is USS and a TCP/IP connection.
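For the one-time move, the FTP leg can be scripted. A hedged sketch using Python's ftplib, assuming the mainframe FTP server is enabled and the COBOL source sits in a PDS; the host, credentials, and dataset name are placeholders.

# Hedged sketch: pull every member of a COBOL source PDS down as a .cbl file,
# ready to be added to ClearCase (or any other SCM).
from ftplib import FTP

HOST, USER, PASSWORD = "mvs.example.com", "USERID", "secret"   # placeholders
PDS = "'PROD.COBOL.SOURCE'"   # fully qualified dataset name, quoted for z/OS FTP

ftp = FTP(HOST)
ftp.login(USER, PASSWORD)
ftp.cwd(PDS)                          # make the PDS the working "directory"
for member in ftp.nlst():             # each member becomes one local file
    with open(member + ".cbl", "w") as out:
        ftp.retrlines("RETR " + member, lambda line: out.write(line + "\n"))
ftp.quit()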
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGAs programmed with VHDL and C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object files and executables), so I don't keep them in the version control repository; instead I keep a zipfile for each release/tag that I create, containing all the important (and irritatingly slow to reproduce) ones.
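A hedged sketch in Python of that release-zipping step; the build directory and the file patterns are assumptions, not anything prescribed by the tools.

# Hedged sketch: collect the slow-to-reproduce build products into an archive
# named after the current tag (or commit hash if no tag exists).
import glob, subprocess, zipfile

tag = subprocess.run(["git", "describe", "--tags", "--always"],
                     capture_output=True, text=True).stdout.strip()

with zipfile.ZipFile(f"release-{tag}.zip", "w", zipfile.ZIP_DEFLATED) as z:
    # bitstreams, PROM files, and reports: big, binary, and slow to regenerate
    for pattern in ("build/*.bit", "build/*.mcs", "build/*.log"):
        for path in glob.glob(pattern):
            z.write(path)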
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: the Xilinx tools don't generally respect a clean division between "input" and "output", or between (human-edited) "source" and (opaque) "binary". Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files, meaning that you'll get lots of false changes. Coregen does this to its .xco files, and Project Navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most software CM applications are fine with ASCII text files; they may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and it's great. You can have your code that lives in Linux-land checked in side by side with your specs and docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from ClearCase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git, and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to also capture the other project and constraint files, and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
I used Subversion previously, but I switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy that and configuration manage it explicitly.
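If you keep only the coregen.cgp project file and the *.xco scripts under version control, the regeneration step can be scripted as well. A hedged sketch in Python; the coregen batch-mode flags used here (-b for the .xco script, -p for the directory holding coregen.cgp) are an assumption and may differ between ISE releases, so check coregen -help first.

# Hedged sketch: regenerate every core from its version-controlled .xco script.
# ASSUMPTION: batch mode is `coregen -b <xco script> -p <dir with coregen.cgp>`;
# verify against `coregen -help` for your ISE release before relying on this.
import glob, subprocess

for xco in sorted(glob.glob("cores/*.xco")):
    print("Regenerating", xco)
    subprocess.run(["coregen", "-b", xco, "-p", "cores"], check=True)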
Is it possible to read damaged media (CD, HDD, DVD, ...) even if Windows Explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if it is only partial, especially if you can still see from Explorer that the file is there, but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
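A hedged sketch of that same resume-on-error idea in Python (not the actual utility, which calls ReadFile/WriteFile directly): it pads unreadable chunks with zeros, remembers their offsets, and can be re-run after an eject/reload to retry only what is still missing. The file names in the usage comment are placeholders.

# Hedged sketch of a resumable copier: read the source in fixed-size chunks, pad chunks
# that fail with zeros, and record the bad offsets so a later pass only retries those.
import os

CHUNK = 64 * 1024

def rescue_pass(src, dst, bad_offsets=None):
    """Copy src to dst chunk by chunk; return the offsets that could not be read."""
    size = os.path.getsize(src)
    offsets = bad_offsets if bad_offsets is not None else range(0, size, CHUNK)
    still_bad = []
    mode = "r+b" if os.path.exists(dst) else "wb"   # don't truncate earlier passes
    with open(src, "rb") as fin, open(dst, mode) as fout:
        for off in offsets:
            fin.seek(off)
            fout.seek(off)
            try:
                fout.write(fin.read(CHUNK))
            except OSError:                          # read error: note it, pad with zeros
                still_bad.append(off)
                fout.write(b"\x00" * min(CHUNK, size - off))
    return still_bad

# First pass, then eject/reload the disc and retry only the failed chunks:
# bad = rescue_pass("D:/holiday.avi", "rescued.avi")          # hypothetical file names
# bad = rescue_pass("D:/holiday.avi", "rescued.avi", bad)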
One step lower would be opening the raw partition (i.e. the disk image) by passing a string such as "\\.\F:" to CreateFile. It would allow you to read raw sectors from a drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.
Note: in C/C++ source you need to escape the backslashes, of course; the string literal you pass to CreateFile is written as "\\\\.\\F:".
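For the raw-device route, anything that ultimately calls CreateFile can use the same device name. A hedged Python sketch: the drive letter and the 512-byte sector size are assumptions, it must run as Administrator, and reads on a raw volume have to be sector-aligned and whole-sector multiples.

# Hedged sketch: open the F: volume raw and dump a few sectors. Python's open() goes
# through CreateFile on Windows, so the \\.\ device name works here too.
SECTOR = 512                                 # assumption; some media use 2048 or 4096

with open(r"\\.\F:", "rb", buffering=0) as raw:      # hypothetical drive letter
    boot = raw.read(SECTOR)                  # sector 0 of the volume
    print(boot[:16].hex())

    raw.seek(100 * SECTOR)                   # jump to an arbitrary sector
    print(raw.read(4 * SECTOR)[:16].hex())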
If you want to do it, do it from the Linux side; see http://sourceforge.net/projects/monkeycity/ (open source),
or there is a ready-made app, freeware too: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean the file is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors I think they had internal flaking of their emulsion. WinXP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned, other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It is originally meant for creating error correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.