I would like to create (or use, if one already exists) a command-line based app that creates, modifies and searches a database.
This database would ideally be a simple text file where each line is an entry.
As in this simple example:
Apple Fruit Malus Green/Red 55
Banana Fruit Musa acuminata Yellow 68
Carrot Veget. D. carota Orange 35
Let's say this text is stored in ~/database.txt
I'd like to be able to search, on the command line, for all entries that are of type fruit (returning Apple and Banana) or all entries with kilocalories less than 60 (returning Apple and Carrot).
The results should be printed to standard terminal output and should look like this:
$ mydatabasesearch cal '<60'
Apple Fruit Malus Green/Red 55
Carrot Veget. D. carota Orange 35
Also, being able to add to the database through the command line would be awesome!
Is there anything around that does this? If not, how would you recommend I write such an app? I know a bit of C++ but that's it...
Take a look at sqlite. It is a bit more complex than plain text files, but a lot more powerful.
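For a quick feel of what the sqlite3 command-line shell alone gives you (the database name, table and column names below are my own invention, just mirroring the example data):

sqlite3 ~/food.db "CREATE TABLE food (name TEXT, type TEXT, species TEXT, colour TEXT, kcal INTEGER);"
sqlite3 ~/food.db "INSERT INTO food VALUES ('Apple', 'Fruit', 'Malus', 'Green/Red', 55);"
sqlite3 ~/food.db "SELECT * FROM food WHERE kcal < 60;"
sqlite3 ~/food.db "SELECT * FROM food WHERE type = 'Fruit';"

Adding an entry from the command line is then just another INSERT.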
Plain text files are not really considered databases.
If you want to stick to text files and the command line, have a look at the usual Unix utilities like grep, awk, and the coreutils package (cat, cut, uniq, ...), which work on plain text files. Add those commands to a shell script and you're done.
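For instance, a minimal sketch of such a script (it assumes the whitespace-separated layout above, with the type in column 2 and the kilocalories in the last field, since the species name can be more than one word):

#!/bin/sh
# mydatabasesearch -- rough sketch, not battle-tested
DB="$HOME/database.txt"
case "$1" in
  type)
    awk -v t="$2" 'tolower($2) == tolower(t)' "$DB"
    ;;
  cal)
    op=$(printf '%s' "$2" | cut -c1)   # the first character: < > or =
    val=${2#?}                         # the rest: the number
    awk -v op="$op" -v val="$val" '
      (op == "<" && $NF+0 <  val+0) ||
      (op == ">" && $NF+0 >  val+0) ||
      (op == "=" && $NF+0 == val+0)' "$DB"
    ;;
  add)
    shift
    printf '%s\n' "$*" >> "$DB"        # e.g. mydatabasesearch add Date Fruit "P. dactylifera" Brown 282
    ;;
  *)
    echo "usage: $0 type NAME | cal '<N' | add ENTRY..." >&2
    exit 1
    ;;
esac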
As soon as you move to some sort of database system, you won't have text files as storage anymore.
Aside from the already-mentioned sqlite, the Berkeley DB library might also be worth a look if you want to write your own program. Both libraries should be good in your case since they don't require an external database server (like mysql).
You can do this in Unix anyway with a delimited file, sed, and maybe some simple command-line Perl, which you can wrap in a bash script. There is a good tutorial for the data manipulation at Wikibooks, and the sed you could use is probably all here and in the part two tutorial.
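For example, the '<60' query from the question as a Perl one-liner (a sketch: -a autosplits each line into @F, and the kilocalorie value is the last field):

perl -lane 'print if $F[-1] < 60' ~/database.txt
# and the type query
perl -lane 'print if $F[1] eq "Fruit"' ~/database.txt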
I am making a C version of Pac-Man and am keeping my high scores in a separate .txt file called highscores.txt. In the program, after the completion of a game, it checks to see if the highscores file should be updated and writes over it if it should. The high-scores view within the program reads the scores and names from the file. The issue is that it is very easy for someone to simply edit the .txt file and claim they got any score they wanted. Is there a way to make it so that the file can only be written to by the program? This is in a Red Hat Linux environment.
I'd say just encrypt the file, then rename it to something obscure.
An easy way would be to gzip it and name it something innocuous like "data", so people can't guess it's gzip.
This method is easy to break once you know gzip is used, so a more secure way is to encrypt it using an encryption key embedded in your code.
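For illustration, roughly what the two approaches look like from the shell (the key here is obviously a placeholder; in the game it would be buried in the C code, which only slows a determined cheater down):

# obfuscation only: compress it and give it an innocuous name
gzip -c highscores.txt > data
# real encryption with a key the program would embed (requires the openssl tool)
openssl enc -aes-256-cbc -pass pass:not-a-real-key -in highscores.txt -out data
# ...and to read it back
openssl enc -d -aes-256-cbc -pass pass:not-a-real-key -in data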
Your best option is probably encryption. A good, quick study on the difference: Encoding vs. Encrypting vs. Hashing.
You could also look at other ways of storing the data. For example, something like HSQLDB or SQLite, where you can create databases that have usernames and passwords available.
We have a system based on AcuCobol and Vision files.
We need to dump the entire data files into text-based files to be imported into a SQL server.
We are using vutil to dump the data right now, but it creates a fixed-width file and we want this to be a delimited file of some sort. The command we are using right now is this:
vutil -unload -t sourcefile destinationfile
Now, does anyone have any experience with this, and if so, what would be the best utility for this?
What I'd do to get the job done is this:
Unload your table with a JCL into a file.
Use this file in a batch, with a copybook matching the description of the table.
For example, if your table has 3 columns [ID], [NAME] and [ADDRESS], then create a copy CC-1 with:
01 CC-1.
05 ID PIC X(16).
05 NAME PIC X(35).
05 ADDRESS PIC X(35).
For each line, move the line into the copy CC-1.
Create a second copy, CC-2:
01 CC-2.
05 ID PIC X(16).
05 FILLER PIC X VALUE ';'.
05 NAME PIC X(35).
05 FILLER PIC X VALUE ';'.
05 ADDRESS PIC X(35).
05 FILLER PIC X VALUE ';'.
Do a MOVE CORRESPONDING of CC-1 TO CC-2.
Write it to the output.
Do this for each line, and in the output you get a delimited (CSV-style) file of all your data.
You can reuse the batch and JCL; you just have to change the size of the records and the descriptions of the copy to match the table you are unloading.
Free and easy. :)
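Not the COBOL route above, but if the vutil unload really is plain fixed-width text, the same slicing can be sketched in shell with awk (the 16/35/35 layout is just the example from the copybook; adjust the offsets to your real record layout):

awk '{
    id   = substr($0,  1, 16)
    name = substr($0, 17, 35)
    addr = substr($0, 52, 35)
    # trim the trailing blanks that fixed-width fields carry
    gsub(/ +$/, "", id); gsub(/ +$/, "", name); gsub(/ +$/, "", addr)
    print id ";" name ";" addr
}' destinationfile > destinationfile.csv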
Google is your friend... I did a quick search and came up with NextForm, which claims to convert Vision to CSV (and other formats).
From the "fine print" it looks like you can download a trial copy but will have to fork out about $400.00 to get a licensed version (one that won't drop random records).
I have absolutely no experience with this product so cannot tell you if it really does the job or not.
Edit
Sometimes it is more productive to work with rather than fight against a legacy system.
Have you looked at AcuODBC? This lets Windows programs read/import Vision files. You could potentially use this to access Vision files from any Windows ODBC-enabled application. I believe you may need to compile a data dictionary to make this work, but it could provide a bridge between where you are and where you want to be.
Another approach might be to write your own dump programs in AcuCobol: read a record, format it, and write it. Writing a Cobol program of this complexity should not be too challenging a task, provided the files do not have a complex structure (e.g. multiple record types/layouts in the same file, or repeating fields within a single record).
Your task will be significantly more complex if a single Vision file needs to be post-processed into multiple SQL Server tables. Unfortunately, complex conversions are fairly common with Cobol legacy systems because of their tendency to use rich/complex/denormalized file structures (unlike SQL, which works best with normalized tabular data).
If the Vision files are more complex than a single SQL Server table, then maybe you could consider using the AcuXML Interface to dump the files into XML and then use an XSLT processor to create CSV or whatever other format you need from there. This would allow you to work with fairly complex Cobol record structures. Again, this approach means writing some fairly basic AcuCOBOL front-end programs.
Based on your comments to my original post, you are not a die-hard Cobol programmer. Maybe it would be a good idea to team up with someone in your shop who has a working knowledge of the language and environment before pushing this any further. I have a feeling that this task is going to be a bit more complicated than "dump and load".
If you have Acubench, under the Tools menu there is an option for the Vision File Utility; from there you can unload your Vision data to a tab-delimited text file.
From there you can import it into Excel as a tab-delimited file and then re-save it as a CSV file.
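If you would rather skip the Excel round trip, the tab-to-comma conversion itself is a one-liner (naive: it assumes no field contains a comma or an embedded tab; the file names are made up):

tr '\t' ',' < unload.txt > unload.csv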
Alright, so, I haven't programmed anything useful in ages; the last time I did was a year ago, and as you can imagine my knowledge of programming is seriously rusty. (The last thing I 'programmed' was a Ren'Py game over the weekend. One can imagine the limited uses of this. The most advanced C program I wrote was a tic-tac-toe game a year ago. So yeah.)
Anyways, I've been given a job to write a program that takes two Excel files, both of which have a list of items, each associated with an ID. I need to write a program to search both files for IDs and if the IDs match, the program will need to create a new file with the matched IDs and items. This is insanely beyond my limited C capabilities.
If anyone could help, I would seriously appreciate it.
(also, if this is not possible with C, I'll do my best to work with any other languages)
Export the two files to .csv format and write a script to process the two files. For example, PHP has built-in CSV read/write capabilities.
You can do this with VBA: create a macro in one of the files that iterates over the cells in your column in file 1, compares them to the cells in file 2, and writes them to a new .xls file if they match.
Dana points out that the VLOOKUP function will do this quite easily.
Install GnuWin32
Output the Excel files as text (csv, for example)
sort each file with the -u option to remove duplicates if needed
mix and sort the 2 files
count unique IDs with uniq -c
filter out lines with a count of 1 using grep
remove the count, leaving the ID and whatever else you need, with cut (see the sketch below)
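A sketch of that pipeline, assuming both exports are CSVs with the ID in the first column (the file names are made up):

# just the IDs, de-duplicated within each file
cut -d, -f1 list1.csv | sort -u > ids1.txt
cut -d, -f1 list2.csv | sort -u > ids2.txt
# mix the two, count each ID, drop the ones seen only once, strip the count
# (sed instead of cut here, because uniq -c pads the count with spaces)
sort ids1.txt ids2.txt | uniq -c | grep -v '^ *1 ' | sed 's/^ *[0-9]* //' > matched_ids.txt
# naive last step: pull the full rows for those IDs back out of the first file
grep -F -f matched_ids.txt list1.csv > matched.csv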
If you know Java then you can use Apache POI for your project. You can use the examples given on the Apache POI website to accomplish your task.
Apache POI Excel Documentation: http://poi.apache.org/spreadsheet/quick-guide.html
If you absolutely have to do this on xls/xlsx files from a process, you probably need a copy of Excel controlled by COM automation. You can do this in VB/VBA/C#/C++, whatever, some easier than others. Google for 'Excel automation'.
Not C, but you may be able to cobble something together very quickly using xlsperl.
It has come in handy for me in the past.
Basically I want to write a program that is almost like a keylogger. The thing is that, as a network admin, I sometimes don't remember what I did to a machine in a certain case, or sometimes I make howtos and tutorials for Linux. I want to record what I have done.
So basically the idea of this program is:
You type the name of the program (I call it rat for the moment):
$ rat
Welcome everything from now on will be recorded
recording $ ls
file1 file2 file3
recording $ quit
Bye bye
Everything you do will go out to an XML file, something like this:
<?xml version='1.0' encoding='UTF-8' ?>
<rat>
<command>
<input>ls</input>
<output>file1 file2 file3</output>
<err></err>
</command>
</rat>
I am doing some tests with fp_in = popen(input, "w"); and with system(), but with popen I can't change directories, and with system I can't properly manage the input and output.
I was also checking whether there is something I can add to bash, like a plugin, but haven't found any information.
At some points it feels like I should create another shell (which is way beyond my current abilities) or fork bash/sh. But it shouldn't be that complicated, right?
I am open to suggestions on where to start.
I am rusty with C, so I am reading a lot of basic stuff again.
With the XML file, I was later thinking of making a program to store and/or edit this data so I can create tutorials and howtos.
I can think of many ways of expanding this, up to using print screen so all the captured images go to a file you can upload to a server (for the moment I am glad just to store the data). It could be a useful tool.
PS: I do know this can be used for evil things too.
There already exists the script command, which will record all input and output into the terminal, writing it into a transcript. I would recommend just using that, unless you have particular needs that it doesn't meet. Actually, the nicest version of script that I've seen has been the NetBSD version, so you may want to look into that if the Linux version doesn't meet your needs.
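Typical usage, for reference:

script session.log      # start recording (the default file name is 'typescript')
# ... run whatever you want to capture ...
exit                    # or Ctrl-D: stops the recording and closes session.log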
If you would like to write it yourself, instead of using system, I would recommend that you use fork/exec to create a single shell process, which you copy all input and output into. To get an idea of how this works, I'd recommend looking at the source code for an existing version of script.
The script command does almost what you want: it simply records the text in and out from the command line. (To my surprise, script is not a shell built-in, which also means its source is a good model for building what you want.)
If you make your prompt distinctive (so that you can reliably tell the difference between shell commands and everything else), you can post-process the output of script to achieve your goals. Alternatively, you can hack script to get it to emit the XML you're looking for.
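A rough sketch of the distinctive-prompt idea (the prompt string and file names are arbitrary; real typescript files also contain carriage returns and any line-editing keystrokes, so expect some extra clean-up):

script session.log      # start recording
PS1='REC> '             # inside the recorded shell: a prompt that is easy to match
# ... do your work ...
exit                    # stop recording
# afterwards, list just the commands that were typed
grep '^REC> ' session.log | sed 's/^REC> //' | tr -d '\r'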
You can also try approaching this from a different angle. Instead of using a regular shell, connect to the machine using ssh or telnet and run your commands that way. Many ssh/telnet clients (PuTTY, for instance) have an option to log all console input and output during the session. You should be able to post-process this log to generate whatever type of logfile that you need.
Depending on your setup, you might not even have to use a second machine (you should be able to ssh into yourself).
I am building a basic POS app for my cousin's pharmacy store so that he can dump the software he is currently using and save on the license cost. All the medicine names which he has painfully entered into the software have been stored in a file with a .d01 extension.
What I want is a way to read the contents of the .d01 file programmatically so that I can import the names of the medicines into my app.
The software my cousin uses is built in FoxPro (because I see a lot of .cdx, .idx and .dbf files), and the file I want to import has a .d01 extension. When I open the file in Notepad it looks something like this:
http://img192.imageshack.us/img192/5528/foxpro.jpg
So I assume it's some kind of database table or something. Can anyone please help me read this file, as I am not at all familiar with FoxPro?
Thanks a lot in advance to all those who take the time to reply.
Update: hey guys, thank you very much for replying so promptly. I tried the solution suggested by Otávio and it worked; I will now write a small utility to read the DBF.
It has a good chance of being just a regular .dbf file. Copy it somewhere safe, change the extension to .dbf, and see if you can open it from FoxPro.
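A quick way to check from the shell before firing up FoxPro (the .d01 file name here is made up):

cp medicines.d01 /tmp/test.dbf
file /tmp/test.dbf              # often identifies dBase/FoxBASE/FoxPro tables from the header
od -An -tu1 -N1 /tmp/test.dbf   # the first byte is the DBF version marker (e.g. 3, 48 or 245)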
Although it may have .cdx files, the actual paste of the file does not appear to be a visually recognizable header format of a VFP table, even if it is part of a database container. The characters around each column name don't look right. It may be from another language that also utilized "compound indexes"; I even saw an article about Sybase's iAnywhere too. In the worst-case scenario, if it turns out to be fixed-length per row with no dynamic column sizes, you might take the file, strip off what appears to be the header, leave just the data, and stream-read it based on however many characters the record length is determined to be. Yeah, brute force, but it's an option. Again, it doesn't LOOK like a VFP table.
BTW, what is the name of the software he is using? I'd look into that to see if there is any other indication of its source.
It looks sort of like a DBF file - maybe Clipper or something.