So, to be even more specific: I am making a program that contains a database of cars (models, years, horsepower, etc.). Of course, every time I recompile and run it, the contents are reset and the vector is empty. Is there any way to save the contents of the vector between runs without saving it to a text file? Sorry for the stupidity of this question... I am a noob.
I am assuming that you are working in C++, so you can use MySQL Connector/C++ to save your vector data as MySQL rows. For this, refer to the MySQL Connector/C++ 8.0 Developer Guide or mysql-connector-cpp, but this will not be simple to implement.
Otherwise, you can save it to a CSV or text file and then use Python scripting, which is very easy to implement; see MySQL Connector/Python or Python MySQL.
Also, if you are comfortable with shell scripts, you can automate all of it and let them take care of the storage.
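If the simple file route is acceptable, here is a minimal C++ sketch of the idea (the Car struct and the cars.csv filename are just assumptions for the example, not something from your program): the vector is reloaded at start-up and written back out before the program ends, so the contents survive recompiling.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical record type for the car database described in the question.
struct Car {
    std::string model;
    int year;
    int horsepower;
};

// Write every record to a semicolon-separated text file.
void saveCars(const std::vector<Car>& cars, const std::string& path) {
    std::ofstream out(path);
    for (const Car& c : cars)
        out << c.model << ';' << c.year << ';' << c.horsepower << '\n';
}

// Read the file back into a vector at program start.
std::vector<Car> loadCars(const std::string& path) {
    std::vector<Car> cars;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        Car c;
        std::string year, hp;
        if (std::getline(fields, c.model, ';') &&
            std::getline(fields, year, ';') &&
            std::getline(fields, hp)) {
            c.year = std::stoi(year);
            c.horsepower = std::stoi(hp);
            cars.push_back(c);
        }
    }
    return cars;
}

int main() {
    std::vector<Car> cars = loadCars("cars.csv");   // empty on the very first run
    cars.push_back({"Civic", 2004, 115});
    saveCars(cars, "cars.csv");                     // persists for the next run
}
```

If you later outgrow the text file, the same save/load pair is the natural place to swap in MySQL Connector/C++ or SQLite.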
Our current database system is a Clipper DOS application. The database inside its folder is fragmented/divided into many parts. I want to decrypt the database so that I have only one database in all and avoid reshuffling of data. I'll attach a screenshot of the file folder; the database is in .DBF format.
Screenshot of files
Often you can decompile the Clipper EXE file to source code and work from the .prg; I've done it many times. The program to use is called Valkyrie.
In Clipper and FoxPro for DOS, a .dbf file is a simple table file. If you want to use them as a database with many tables in one unit, you can import these tables into an MS SQL Server database and/or an MS Access database.
I see that you got several answers. Most are partially right. Let's address these one at a time:
All those files essentially comprise the "database" for the application you're using. They could be used by other applications as well. Besides having a lot of files, what is the problem you're trying to solve?
People mentioned indexes. You can generally ignore these. They are there primarily to make access to the data files faster. Any properly written Clipper application will recreate them if they're missing or corrupted. You could test this by renaming one, running the app, and seeing what happens; if it doesn't recreate it, you can rename it back. Not replacing missing index files would be unusual behavior.
The DBF file format is binary, but barely. Most of what's in a DBF is text and is readable with an editor. But there's no reason to do so - I'm sure there are several free DBF utilities out there to read DBF files. Getting the structure of the files could be very helpful.
Getting the data out of the files would also be fairly simple with a utility. If you look up the DBF format you could even write one fairly easily in Clipper, any other language that uses DBF files, or in something like Python. Any language that can open and write files, really. It's not hard - any competent developer could do this in a matter of hours. Much less if you're using Clipper or another language that natively reads DBF files.
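As a rough illustration of that point, here is a hedged C++ sketch that prints the field names, types, and lengths from a DBF header, following the published dBase III header layout (CARS.DBF is just a placeholder name for one of the application's files):

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Placeholder filename - point this at one of the application's .DBF files.
    std::ifstream dbf("CARS.DBF", std::ios::binary);
    if (!dbf) { std::cerr << "cannot open file\n"; return 1; }

    // dBase III header: bytes 8-9 hold the header length (little-endian);
    // 32-byte field descriptors start at offset 32, terminated by 0x0D.
    std::vector<char> header(32);
    dbf.read(header.data(), 32);
    uint16_t headerLen = static_cast<uint8_t>(header[8]) |
                         (static_cast<uint8_t>(header[9]) << 8);

    for (int offset = 32; offset + 32 <= headerLen; offset += 32) {
        char field[32];
        dbf.read(field, 32);
        if (static_cast<uint8_t>(field[0]) == 0x0D) break;  // end of descriptors

        std::string name(field, field + 11);
        name = name.c_str();                 // trim at the first NUL byte
        char type = field[11];               // C, N, D, L, M, ...
        int length = static_cast<uint8_t>(field[16]);

        std::cout << name << "  type=" << type << "  length=" << length << '\n';
    }
}
```

A real utility would go on to read the fixed-length records that follow the header, but the header alone already gives you the table structure discussed above.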
Most people create dBase/Clipper programs with relational data, like SQL Server. Where SQL Server has tables that relate to each other, dBase/Clipper has a file for each "table." This isn't a requirement, but it was almost certainly done this way.
Given that, if you get the table structures through a utility or by reading the headers in an editor (don't save them from an editor!), you could quite likely recreate the database schema (i.e. the map of the data). Once you have that it's fairly trivial to get the data into another type of database (SQL Server, Access, or whatever you like to use). If none of the files are too large it's conceivable to put all the files into Excel sheets. It really depends on what you want to do with it.
As others have said, you may be able to get the code back with Valkyrie. Some people have used it very successfully. I don't know where you get it and I've never used it. Why do you not have the code? If this is a commercial application you likely should not have it. If it's a custom app, whoever wrote it or paid to have it written should have the code.
Again, it's not clear to me what problem you're trying to solve. But there are many options for doing something with those DBF files. Fortunately they are one of the easier to read data formats you could be working with.
Let me know if you have any questions. Apologies for the typos that are no doubt scattered throughout this reply.
You can sort of get an idea of how they relate to each other by opening the index files they use (.NTX files). If you have the DBU utility (executable) around, you can open the DBF and load the index (NTX). LibreOffice Calc is also able to open DBFs (I haven't tested .NTX).
If you open the .NTX in a text editor you will see the index expressions at the beginning.
I open them with Access, but I can save the data using a PrintFill program.
I am helping out an organization that is planning on changing its membership system. Right now their system is developed in Plone and all their data is in a Data.fs file.
Their system is down for the moment and it would take some time and effort to get it up and running.
Is there a way to get the data out of the database into a standard format such as CSV files or SQL? Or do they need to get the system up and running beforehand and export the files from "within" Plone?
Thanks for your help and ideas!
Kind regards,
Samuel
The Data.fs file is an object-oriented database file, written by a framework called ZODB. The data within it represents Python instances, laid out in a tree structure.
You could open this database from a python script, but in order for you to make sense of the contained structures, you'll need access to the original class definitions that make up the stored instances. Without those class definitions all you'll get is placeholder objects (Broken objects) that are of no use at all.
As such, it's probably easier to just get the Plone instance back up and running, as it'll be easier to export the exact data you want out if you have things like the catalog (basically a specialized database index) to build your export.
It could be that this site is down because of something trivial, something we can help you with here on Stack Overflow, on the Plone users mailing lists, or in the #plone IRC channel. If you do get it up and running and have some details on what you are trying to export, we can certainly help.
You'll need to get the system up and running to export data. Data in the data.fs file is stored as Python pickles and is not intelligible to "outside" systems.
As the others have pointed out before, your best course would be to get Plone running again. After doing so, try csvreplicata to export the existing data to CSV format. For user accounts, try atreal.usersinout.
If you need professional help, you can search for available providers from http://plone.org/support/providers
For free support, post specific problems here.
Recently I managed to export a Plone 4 site to SQLite using SQLExporter: http://plone.org/products/proteon.sqlexporter. But you need to get your Plone instance working first to use it.
I want to export all my queries as individual files so I can put them into Mercurial source control, but I don't know how to export the individual queries as individual files without having to open each one, save it to the folder, and add it into the project, or some equally convoluted process.
I wouldn't mind having to add each one individually, but how do I get them out of the database as individual files without opening them all and doing a Save As on each one? Ideally I would like them named with the names they have in the database right now.
I could easily dump the whole lot into one long file using database tasks, but that's not really super helpful, is it?
I have SSMS 2k5 and 2k8 (and VS 2k5, 2k8, and 2010 to boot) to work with. Any thoughts?
Right-click on the database and select Tasks > Generate Scripts. On the last page, under Script to File, you can choose a single file or one file per object.
When you script a database in SSMS you have the option of one file per object.
SMO is useful with a small app to iterate through the objects.
Third-party tools like Red Gate SQL Compare (there are other free tools too) can also script objects.
I would write a small C# program which extracts your database objects via SMO and stores them in your filesystem the way you want.
It is rather easy to write stored procedures which fetch the definitions as text in a result set; sp_helptext could be used as a start.
Then you can use PowerShell to write the output to the file system.
It sounds as if this would fit rather well into the Really Simple Data Dictionary CodePlex project.
I'm working on a school project where the program is supposed to read from a text file that has a record about a song on every line, with fields separated by ";".
Anyway, I have no knowledge of databases, and I just want the quickest way to create a database from that text file. I will also need to change some of the fields of the records once in a while from the program... Also, the program needs to search through the database based on certain fields.
Anyway, so far none of our projects kept a database, so when we closed the program all the info was gone. Now I actually need to keep some info for the next time the program runs. What's the fastest way to accomplish this?
Also, I want to be able to keep some info about the software, like the path of the original text file for weekly updates. Where can I save info like that?
EDIT: it doesn't have to be an actual database, as long as I can search and edit it efficiently.
If you can use an SQL database, I'd suggest the simple file-based database SQLite.
With SQLite, you can query, insert and update records by executing regular SQL statements.
Here you will find an introduction to the C++ interface. It's easy to embed SQLite support in an application because SQLite comes as a library, meaning a couple of header files and one or two binary archives containing the library.
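For a sense of what that looks like in practice, here is a minimal, hedged sketch using the SQLite C API from C++ (the songs.db filename and the songs table columns are assumptions matching the question; compile and link with -lsqlite3):

```cpp
#include <iostream>
#include <sqlite3.h>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("songs.db", &db) != SQLITE_OK) {   // creates the file if it's missing
        std::cerr << sqlite3_errmsg(db) << '\n';
        return 1;
    }

    // Create the table once; the columns mirror the fields in the text file.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS songs(title TEXT, artist TEXT, year INTEGER);",
        nullptr, nullptr, nullptr);

    // Insert a record with a prepared statement.
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO songs VALUES(?, ?, ?);", -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "Bohemian Rhapsody", -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, "Queen", -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(stmt, 3, 1975);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);

    // Search by a field, exactly as you would in any SQL database.
    sqlite3_prepare_v2(db, "SELECT title, year FROM songs WHERE artist = ?;", -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "Queen", -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::cout << reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0))
                  << " (" << sqlite3_column_int(stmt, 1) << ")\n";
    sqlite3_finalize(stmt);

    sqlite3_close(db);
}
```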
Your delimited text file is already a database. You can add records, delete records, and modify records using the standard file routines provided by the C++ standard library.
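To make that concrete, here is a small, hedged C++ sketch that loads a ';'-separated file into memory, searches by a field, modifies a record, and writes the file back (songs.txt and the field positions are assumptions for the example):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

using Record = std::vector<std::string>;   // one song, one field per element

// Read every ';'-separated line into memory.
std::vector<Record> load(const std::string& path) {
    std::vector<Record> records;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        Record rec;
        std::istringstream fields(line);
        std::string field;
        while (std::getline(fields, field, ';'))
            rec.push_back(field);
        records.push_back(rec);
    }
    return records;
}

// Rewrite the whole file from memory after any change.
void save(const std::vector<Record>& records, const std::string& path) {
    std::ofstream out(path);
    for (const Record& rec : records) {
        for (size_t i = 0; i < rec.size(); ++i)
            out << rec[i] << (i + 1 < rec.size() ? ";" : "");
        out << '\n';
    }
}

int main() {
    // Assumed layout: field 0 = title, field 1 = artist, field 2 = year.
    std::vector<Record> songs = load("songs.txt");

    for (Record& song : songs)
        if (song.size() > 2 && song[1] == "Queen")   // search by a field
            song[2] = "1975";                        // modify a record

    save(songs, "songs.txt");
}
```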
Alternatively, you can import your text file into SQL Server using BULK INSERT.
Finally, you can access your CSV (comma-delimited text) file using SQL queries. You need to find the correct connection string. See http://www.connectionstrings.com/textfile.
I am developing software with VB 6.0 :(
I want to know what the best code is for saving a file in SQL Server and then reading it back.
I should say that I use ADODB.Stream when saving the file...
What is your suggestion?
Do you need to save to SQL? An alternative, and IMO better, solution is to save the docs to a folder on your regular file system and save the reference to them in SQL.
Unless you have a specific reason for wanting to save them in SQL?
Is there a specific reason you need to use VB6? Legacy?
Here is an example in .NET of achieving what you require; it should be pretty straightforward to convert to VB6:
http://www.jstawski.com/archive/2007/05/17/save-documents-with-sql-server-and-display-them.aspx
UPDATE: For VB6, here is an example using ADODB.Stream to store and retrieve images, which should be similar to what you need:
http://www.devx.com/tips/Tip/14246