How can I export data from a c-tree program?

I'm in the process of consolidating our old stuff into TFS. One of the old things is a bug report repository. I believe it's using c-tree to store data, because it has .idx and .dat files. I'd like to export all of this data to a txt/CSV file so someone else can sort through what is still relevant, and then import the good stuff into TFS.
The problem is I'm not sure how to go about exporting the data from the c-tree files. Any ideas?
Thanks,
Makolyte

If you have access to any FairCom tools or APIs, http://www.faircom.com/ace/support_doc_t.php would be a good place to start.
Otherwise, you can get an express edition of c-treeACE here: http://www.faircom.com/ace/download_t.php
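If neither of those pans out, a crude first pass is to pull the printable text out of the .dat file so someone can judge whether the records are worth a proper export. The sketch below is only that: it makes no assumptions about the (proprietary) c-tree record layout, and the file name in the usage comment is just an example.

    # Rough triage only, NOT a real c-tree reader. It knows nothing about
    # the proprietary record layout; it just prints runs of printable ASCII
    # from the .dat file so a human can judge what's in there.
    import re
    import sys

    MIN_LEN = 6  # ignore runs shorter than this; tune as needed

    def dump_strings(path):
        with open(path, "rb") as f:
            data = f.read()
        # Runs of printable ASCII (space through tilde), MIN_LEN or longer.
        for match in re.finditer(rb"[ -~]{%d,}" % MIN_LEN, data):
            print(match.group().decode("ascii"))

    if __name__ == "__main__":
        dump_strings(sys.argv[1])  # e.g. python dump_strings.py bugs.dat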

Related

Create and analyze a CortexDB database

Here is my problem: I have recently started using CortexDB, NoSQL software for database analysis. I have read the (poor) documentation at https://docs.cortex-ag.com/en/CortexDB/CortexDB/, and obtained a free license to evaluate the program. As the documentation is unclear, I have some questions:
1) How do I create a database?
2) How can I import a database contained in an Excel file (.csv)?
3) How do I create charts or analyses of the data entered?
Thanks
Because the question is very old, I hope I can still help you.
First of all: you should download the latest release of the free version (simple registration and download).
If you downloaded the free version, you got the server and two databases. A server process handles one database; for a second database you have to start a second server (on a different port, of course). If you start the free version you should have an empty database (or the filled and configured demo DB). If you want to create a completely new one without any predefined configuration, you have to start the server process from the command line with the parameter -n (ctxserver64 -n). After that, you have to configure everything by hand with the 'remote admin' tool.
The question is not clear to me. Do you mean how to import a CSV file into CortexDB, or how to export the database content into an Excel file?
If you want to import a CSV file into CortexDB, the easiest way is to use the CortexImplex tool. It's completely explained in the online docs (https://docs.cortex-ag.com/en/CortexImplex/CortexImplex-Basics/).
If you want to export datasets as a CSV file, all you have to do is configure a list in CortexUniplex as a view of your datasets and export them as CSV (you'll find the export function in the list menu).
I would do the charting with d3.js. For this you can use the so-called 'DataService' of the CortexUniplex. It's a kind of API for posting requests and getting JSON objects back. If you have a completely configured UniPlex, you can use all of your configuration as JSON objects in other apps (for example charts or an individual application).
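As a rough illustration only: the host, port, path, and payload below are placeholders (check the CortexDB docs for the real DataService request format), but the round trip would look something like this in Python.

    # A hypothetical round trip against the CortexUniplex DataService.
    # URL, port, path, and payload are PLACEHOLDERS; the real request
    # format is in the CortexDB online documentation.
    import json
    import urllib.request

    URL = "http://localhost:9978/dataservice"  # placeholder host/port/path

    payload = json.dumps({"query": "example"}).encode("utf-8")  # placeholder body
    req = urllib.request.Request(
        URL, data=payload, headers={"Content-Type": "application/json"}
    )

    with urllib.request.urlopen(req) as resp:
        datasets = json.load(resp)  # the DataService answers with JSON objects

    # Hand `datasets` to d3.js on the client, or to a charting library here.
    print(json.dumps(datasets, indent=2))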
The full version has a simple dashboard inside the CortexUniplex. Maybe the vendor offers it in the free version.
By the way: it's always good to write an email to the info address. Because this database is not widely known, the guys are very helpful. Or contact them via Twitter or other channels (see the bottom of the cortex-ag.com webpage).

Export data into an SPSS file (*.sav)

Is there any solution for exporting data into SPSS (*.sav) files?
I have a web service with surveys, and the results need to be exported to different formats.
I can't find any solution for SPSS.
(Any language, free or non-free products, but it needs to execute on the server!)
I found a good library: https://github.com/tiamo/spss.
It has been updated and works like a charm.
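That library is written in PHP; if Python happens to be available on your server instead, the pyreadstat package can also write .sav files. A minimal sketch with made-up survey columns:

    # Alternative if Python runs on your server: pyreadstat writes .sav files.
    # pip install pandas pyreadstat
    # The survey columns below are made-up example data.
    import pandas as pd
    import pyreadstat

    df = pd.DataFrame({
        "respondent_id": [1, 2, 3],
        "q1": [4, 5, 3],
        "q2": [2, 1, 5],
    })

    # Optional labels appear in the SPSS variable view.
    pyreadstat.write_sav(
        df,
        "results.sav",
        column_labels=["Respondent ID", "Question 1", "Question 2"],
    )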

Working with a database using Spock and Geb

I hope someone has already faced the issue of verifying that an application shows correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where and how I should do it. I'm just starting to use Gradle+Spock+Geb for testing an application. I have a few files where I describe a couple of pages from the application, a couple of modules, and a file with the Spock specification. Where and how do I need to connect to the Oracle DB, use SQL, and compare the resulting data with the application's?
P.S. I write everything in Notepad++ and launch from the command line with 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle+Spock+Geb?
Thanks in advance.
Because there are no other answers, I wanted to provide a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC; in our case it is JDBI.
The idea is to extend ClassLoader and then use it to access the data access object class directly on the JVM. That idea should work.
I have not tested it, because it doesn't completely fit our use case. I'll admit this does not fully apply to your situation either, but technically you could just run the JAR of an existing project that can access the database.

Convert Plone database to CSV or SQL

I am helping out an organization that is planning to change its membership system. Their current system is developed in Plone, and all their data is in a Data.fs file.
Their system is down at the moment, and it would take some time and effort to get it up and running.
Is there a way to get the data out of the database into a standard format such as CSV files or SQL? Or do they need to get the system up and running first and export the files from "within" Plone?
Thanks for your help and ideas!
Kind regards,
Samuel
The Data.fs file is an object-oriented database file, written by a framework called the ZODB. The data within it represents Python instances, laid out in a tree structure.
You could open this database from a Python script, but in order to make sense of the contained structures, you'll need access to the original class definitions that make up the stored instances. Without those class definitions, all you'll get is placeholder objects (Broken objects) that are of no use at all.
As such, it's probably easier to just get the Plone instance back up and running, as it'll be easier to get the exact data you want out if you have things like the catalog (basically a specialized database index) to build your export.
It could be that this site is down because of something trivial, something we can help you with here on Stack Overflow, on the Plone users mailing lists, or in the #plone IRC channel. If you do get it up and running and have some details on what you are trying to export, we certainly can help.
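For the curious, here is a minimal sketch of opening a Data.fs directly from Python with the ZODB package; it opens the file read-only, and, as described above, you should expect Broken placeholder objects without the original class definitions importable.

    # Minimal sketch: open a Data.fs read-only (pip install ZODB).
    # Without the original Plone/product class definitions importable,
    # most stored objects come back as "Broken" placeholders.
    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    storage = FileStorage("Data.fs", read_only=True)  # path to your instance's file
    db = DB(storage)
    conn = db.open()
    root = conn.root()

    print(list(root.keys()))  # for a Zope/Plone site, typically ['Application']

    conn.close()
    db.close()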
You'll need to get the system up and running to export data. Data in the data.fs file is stored as Python pickles and is not intelligible to "outside" systems.
As the others have pointed out before, your best course would be to get Plone running again. After doing so, try csvreplicata to export existing data to CSV format. And for user accounts, try atreal.usersinout.
If you need professional help, you can search for available providers from http://plone.org/support/providers
For free support, post specific problems here.
Recently I managed to export a Plone 4 site to SQLite using SQLExporter: http://plone.org/products/proteon.sqlexporter. But you need to get your Plone instance working first to use it.

How to keep Stored Procedures and other scripts in SVN or another repository?

Can anyone provide some real examples of how best to keep script files for views, stored procedures, and functions in an SVN (or other) repository?
Obviously one solution is to have the script files for all the different components in one or more directories somewhere and simply use TortoiseSVN or the like to keep them in SVN. Then whenever a change is to be made, I load the script up in Management Studio, etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures/views etc. that had changed in a given timeframe and then commit them to SVN.
Ideas?
It sounds to me like you're not wanting to use revision control properly.
"Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN"
This is what should be done. You would have your local copy you are working on (developing new code, tweaking old, etc.), and as individual components/procedures get finished, you would commit them one by one until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it (a sketch follows below):
1) delete the contents of your scripts directory
2) use something like ExportSQLScript to export all objects to script/scripts
3) svn commit
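Here is what such a scheduled job could look like in Python with pyodbc. The connection string, paths, and file layout are placeholders; it scripts procedures, views, and functions out of the system catalog, then commits the working copy.

    # Rough sketch of the nightly export job (connection string and paths
    # are placeholders). It scripts every procedure, view, and function to
    # one file per object, then commits the working copy to SVN. Brand-new
    # objects still need an 'svn add' before the commit picks them up.
    import pathlib
    import subprocess

    import pyodbc  # pip install pyodbc

    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
    )
    SCRIPTS_DIR = pathlib.Path("scripts")

    QUERY = """
    SELECT o.type_desc, s.name, o.name, m.definition
    FROM sys.sql_modules m
    JOIN sys.objects o ON o.object_id = m.object_id
    JOIN sys.schemas s ON s.schema_id = o.schema_id
    WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF')  -- procs, views, functions
    """

    SCRIPTS_DIR.mkdir(exist_ok=True)
    with pyodbc.connect(CONN_STR) as conn:
        for type_desc, schema, name, definition in conn.execute(QUERY):
            out = SCRIPTS_DIR / type_desc.lower() / f"{schema}.{name}.sql"
            out.parent.mkdir(exist_ok=True)
            out.write_text(definition)

    subprocess.run(
        ["svn", "commit", "-m", "Nightly object script export", str(SCRIPTS_DIR)],
        check=True,
    )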
Please note: although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or one new field and one deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: this approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creating upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
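In case that article link rots, here is a minimal sketch of the DDL-trigger idea, wrapped in Python/pyodbc for deployment. The table and trigger names are made up, and you'd extend the event list to cover whatever object types you track.

    # Minimal sketch of the DDL-changelog idea (table/trigger names made up).
    # The trigger logs EVENTDATA() XML for procedure changes; extend the
    # event list (CREATE_VIEW, ALTER_FUNCTION, ...) to suit.
    import pyodbc  # pip install pyodbc

    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
    )

    LOG_TABLE = """
    IF OBJECT_ID('dbo.DdlChangeLog') IS NULL
        CREATE TABLE dbo.DdlChangeLog (
            EventTime datetime NOT NULL DEFAULT GETDATE(),
            EventData xml NOT NULL
        );
    """

    TRIGGER = """
    CREATE TRIGGER trg_DdlChangeLog ON DATABASE
    FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
    AS
        INSERT INTO dbo.DdlChangeLog (EventData) VALUES (EVENTDATA());
    """

    conn = pyodbc.connect(CONN_STR, autocommit=True)
    conn.execute(LOG_TABLE)
    conn.execute(TRIGGER)  # CREATE TRIGGER must run in its own batch
    conn.close()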
Not sure of your price range; however, DB Ghost could be an option for you.
I don't work for this company (or own the product), but in my research of the same issue, this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we do a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, say in May, it is discovered that in a rare case customers get double-invoiced (or some other crazy corner case). I'd like to be able to see what has happened to this procedure over time. The way the development environment is currently set up here, I don't have that, which is what I'm trying to change.
I can recommend DBPro, which is part of Visual Studio Team Edition. I have been using it for a few months to store all parts of the database in Team Foundation Server, as well as for deployment, database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my DB into a directory structure that I keep under SVN. I never got around to incorporating it into the Manager, but if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link . To work with it, you've got to have Management Studio Developer Edition.
