Export from InterSystems Caché database

I have a product based on Intersystems Caché database, I can see no classes, no schemas, no tables, only globals. Is there any smart way to export data from these globals and get "human-readable structure"?

First question is... what version of Caché?
Second question is... what tools do you have access to? Terminal, Studio, Management Portal??
If the data is in tables/classes, you should be able to access it via ODBC, at least.
If there aren't any tables/classes, the data is probably in Globals.
If the data is in Globals (persistent sparse array storage), they can look a bit weird if you aren't used to the common patterns.
Even if it is in Globals, it may be possible to define classes with custom mapped storage to make them appear in a table-like way via SQL.
Caché is EXTREMELY flexible, but it can be a steep learning curve. :-(

Globals in InterSystems Caché are a schemaless type of storage, so the best "human-readable" format you can get is the one in the System Management Portal.
Other options are:
* zw command in terminal
* d ^%G command in terminal

Are you able to view the Cache SMP or connect to the database using Cache Studio? I would think you'd find code somewhere in there (at least a bunch of routines if they're not using classes). Using the SMP to browse the globals is a good way to get familiar with the datasets they contain. From a terminal session, you can use the zw command to take a look at global node contents:
USER> zw ^GlobalName
http://docs.intersystems.com/cache20082/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_czwrite
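For example, against a made-up global (the data below is purely illustrative), the output looks something like:
USER> zw ^GlobalName
^GlobalName(1)="Smith,John"
^GlobalName(1,"DOB")=47123
^GlobalName(2)="Jones,Mary"
Each line is one node: the subscripts in parentheses form the key, and the value follows the equals sign.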
Can you give a little more information about your situation?

Depending on the structure of your globals, you could create classes for them and edit the storage mapping to point at them. Based on that, you could then go ahead and create reports/(Zen/CSP) web pages to display the contents. However, depending on the complexity of your data, this could take you anywhere between hours and months :/

In my experience, the Navicat tool works well: use its import tool via ODBC to export the Caché database into a MySQL or Postgres schema, so you can understand the DB model.

ODBC works with Caché. You can use the ODBC connection to export the data to another structure such as a set of free tables or text files.

You can use a system utility called D ^%GO, which is Global Output. You specify the global(s) and the file you want them exported to. There's also a ^%GI for importing globals from that file.
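A rough outline of the round trip from a terminal session (the exact prompts vary between Caché versions, so treat this as a sketch rather than a transcript):
USER> D ^%GO
 ... answer the prompts: which global(s) to export and the output file ...
USER> D ^%GI
 ... answer the prompts: the file to read the globals back in from ...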

Related

Oracle Export and Import

Some Background:
Somewhere around Oracle 10, which was about a decade ago (give or take), Oracle added a new method of exporting and importing databases called the Oracle Data Pump. Aside from the silly name, the functionality works mostly the same as the Original Export and Import Utility.
The link to the Original Utility contains the following warning text, which appears to be at least somewhat self-contradictory:
Original export is desupported for general use as of Oracle Database
11g. The only supported use of Original Export in 11g is backward
migration of XMLType data to a database version 10g release 2 (10.2)
or earlier. Therefore, Oracle recommends that you use the new Data
Pump Export and Import utilities, except in the following situations
which require Original Export and Import:
* You want to import files that were created using the original Export utility (exp).
* You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release.
As far as I can tell, the only reason Exp and Imp would not operate correctly is if the database uses features introduced in 11g onward. Otherwise, it appears that the old Exp and Imp commands should work just fine, and from the above, they do appear to be officially supported.
One of the key differences with "Data Pump" vs. "Original" export - and this is important for my application - is that the data pump operates server-side only, meaning that a user will require at least some degree of permissions to the server to access the file produced by the export. At best, this is inconvenient, and at worst, this results in a file that cannot be accessed by anyone other than the dba.
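For reference, the invocation difference looks roughly like this (a sketch only; the connect strings, file names, and the DATA_PUMP_DIR directory object are placeholders, not details from my setup):
exp scott/tiger@orcl file=full_export.dmp log=full_export.log full=y
expdp system/manager@orcl directory=DATA_PUMP_DIR dumpfile=full_export.dmp logfile=full_export.log full=y
The exp file= lands on the machine where the client runs, while the expdp dumpfile= is written into a directory object on the database server, which is exactly the permissions problem described above.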
Issue:
When we upgraded to 12c from 11g, we had an issue using the original export utility. It would run successfully up to the point of exporting the triggers, then produce an error as follows:
EXP-00056: ORACLE error 4063 encountered
ORA-04063: package body XDB.DBMS_XDBUTIL_INT has errors
ORA-06508: PL/SQL: could not find program unit being called:
"XDB.DBMS_XDBUTIL_INT"
The Question:
This issue came up at least a dozen times in different contexts, and we are sort of playing whack-a-mole with it. The most recent attempt at solving it involves recompiling every package on the server, which takes about a half hour.
Why does this export issue keep coming up?
Are Exp and Imp actually, officially, deprecated, such that we are no longer able to use them reliably?
Are there any other straightforward ways to get a client-side export of the database?
Why does this export issue keep coming up?
Since the problem is intermittent I would guess it's caused by deferred segment creation. Since 11g, tables and partitions can be configured to not allocate any space until there is some data. (This can save significant space for tables with many empty partitions.) But Exp doesn't understand this and assumes every table must have a segment. Which means some tables and related features may appear to "randomly" cause problems depending on whether or not they've been populated or truncated lately.
You can find those tables with this query:
select * from dba_tables where segment_created = 'NO';
And then force them to have a segment with this statement:
alter table table_name allocate extent;
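If many tables are affected, you can also generate all of the statements in one pass (a sketch; add whatever owner filters make sense for your schemas):
select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
from dba_tables
where segment_created = 'NO';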
Are Exp and Imp actually, officially, deprecated, such that we are no longer able to use them reliably?
This is debatable but I'd say yes, the original Exp and Imp are truly "deprecated" now. It does feel like Oracle plays a lot of games with deprecating software. For example, deprecating the free change data capture for the super expensive Goldengate, or deprecating non-container architecture when almost nobody wants to use their expensive containers. But it's been a long time and Exp and Imp don't cut it anymore.
Are there any other straightforward ways to get a client-side export of the database?
Try OCP, Oracle CoPy. You still need to generate the export on the server, but OCP allows you to download files from the server filesystem to the client filesystem without any server filesystem permissions. It's still not as straightforward as it should be, but at least you don't have to give everyone privileges on the server filesystem.

Convert Plone database to CSV or SQL

I am helping out an organization that is planning to change its membership system. Right now their system is developed in Plone and all their data is in a Data.fs file.
Their system is down for the moment and it would take some time and effort to get it up and running.
Is there a way to get the data out from the database into a standard format such as csv files or SQL? Or do they need to get the system up and running beforehand and export the files from "within" plone?
Thanks for your help and ideas!
Kind regards,
Samuel
The Data.fs file is an object-oriented database file, written by a framework called the ZODB. The data within it represents Python instances, laid out in a tree structure.
You could open this database from a python script, but in order for you to make sense of the contained structures, you'll need access to the original class definitions that make up the stored instances. Without those class definitions all you'll get is placeholder objects (Broken objects) that are of no use at all.
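As a minimal sketch of what that looks like (assuming the ZODB package is installed and that Data.fs sits in the current directory; both are assumptions, not details from the question):

from ZODB import DB, FileStorage

# Open the Data.fs read-only and fetch the root of the object tree.
storage = FileStorage.FileStorage('Data.fs', read_only=True)
db = DB(storage)
connection = db.open()
root = connection.root()

# Without the original add-on code on the Python path, most objects
# below the root will load as Broken placeholders.
print(list(root.keys()))

connection.close()
db.close()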
As such, it's probably easier to just get the Plone instance back up and running, as it'll be easier to export the exact data you want out if you have things like the catalog (basically a specialized database index) to build your export.
It could be that this site is down because of something trivial, something we can help you with here on Stack Overflow, on the Plone users mailing lists, or in the #plone IRC channel. If you do get it up and running and have some details on what you are trying to export, we can certainly help.
You'll need to get the system up and running to export data. Data in the data.fs file is stored as Python pickles and is not intelligible to "outside" systems.
As the others have pointed out, your best course is to get Plone running again. After doing so, try csvreplicata to export existing data to CSV format. For user accounts, try atreal.usersinout.
If you need professional help, you can search for available providers from http://plone.org/support/providers
For free support, post specific problems here.
Recently I managed to export Plone 4 site to sqlite using SQLExporter: http://plone.org/products/proteon.sqlexporter. But you need to get your Plone instance working first to use it.

CakePHP use MS Access database (.mdb, not accdb): Do I need to write a driver? How to build it?

I'm in the unlucky situation where my client requires to use a read-only MS Access database to render some webpage contents on his website (built by me).
Because this MS Access file will be updated roughly once per month, and I don't want to handle it manually by converting it into SQL queries and importing it into MySQL, I would like to make some web pages read directly from it.
What I discovered is that there isn't a driver (at least in my CakePHP installation) for MS Access databases (older versions of Cake had one), or at least I didn't find any.
I need to know where I might find such a driver, or how to correctly implement one for read operations only (I think I should implement the DboSource class, but I don't know what I should override, because there is nothing like virtual or abstract methods or an interface to guide me through this).
I haven't even worked out whether I should create an Adodb driver or an OleDB driver. I also think the Sqlserver driver is quite similar to what I should build (except for the connection and some details); if I can use it in some way to shorten my job, that would be helpful.
Edit 1:
Because no one is answering, I should add that I'm also OK with using MS Access via "normal" queries (without requiring each table to be bound to a model), but I must be sure they are sanitized and will output arrays similar to what Cake currently outputs: $jobs['jobs']['name']
Any suggestion on how to achieve this?
Maybe the Adodb or Odbc datasources from https://github.com/cakephp/datasources/tree/2.0/Model/Datasource/Database work with MS Access?
This example is too big to write here; it's from the Cake Bakery. Please test it, and I'm here for other problems.

Are most LDAP administrators creating LDIFs by hand?

Are there tools that make the job easier? If command-line only tools exist, then can anyone speculate if there is a market for a GUI tool? For example, you can create a relational database by modeling visually. Should the same notion exist for LDAP?
Apache Directory Studio includes an LDIF editor. It is still a text editor, but with syntax highlighting, autocompletion, and group collapsing for LDIF files:
http://directory.apache.org/studio/
I don't know if there are any tools but it isn't that hard to create them by hand.
If you are using iPlanet LDAP, though, it had a nice interface for creating and modifying schemas. :)
I don't know if you would consider that to be "by hand"; otherwise, that is one tool to use.
I've done some LDIF handling using Perl and the Net::LDAP::LDIF module and it made scripting custom LDAP conversions very easy.
Have you looked at the command-line tool, LDIFDE.exe? Should be on your domain controller.
Business people give me Excel spreadsheets with inconsistent formatting of user and group data and want it loaded right away (then they come back with a new version and tell me they've only added some new users, but some are missing, some data is now invalid, there's a missing column etc.) They want unique passwords assigned, group memberships set up based on department id fields, and so forth.
Then they come back two weeks later and want to know about the differences between that spreadsheet and one from six months ago. Sigh.
I generally just do it all with a few hand-crafted Python scripts.
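For example, a stripped-down sketch of that kind of script using the ldif module from python-ldap (the DN, attributes, and values here are all made up for illustration):

import sys
import ldif

writer = ldif.LDIFWriter(sys.stdout)

# One unparse() call per row pulled from the spreadsheet; python-ldap 3.x
# expects attribute values as lists of bytes.
writer.unparse(
    "uid=jsmith,ou=People,dc=example,dc=com",
    {
        "objectClass": [b"inetOrgPerson"],
        "uid": [b"jsmith"],
        "cn": [b"John Smith"],
        "sn": [b"Smith"],
        "departmentNumber": [b"1234"],
    },
)

Wrapping that in a loop over the parsed spreadsheet gets you most of the way to a load-ready LDIF file.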
A lot of times you may be copying objects from one tree to another. Or backing them up. In that case, most LDAP tools have some way of exporting as LDIF. Then you can easily modify the files as needed.
Or copy examples to reuse.
I have seen a number of tools that will do tasks and output the results as LDIF, which can be handy, but they are basically point usage tools.

Can you export packaging information (ERD or other data model) from Cognos 8.3?

I was wondering if there's a way to export package information from Cognos 8 from a regular user level or from the framework level.
For instance, I want the field names that Cognos is pointing to in the database, the data type, the description Cognos shows when you right-click a data element, etc.
Any suggestions?
(Unfortunately I'm not at my work computer right now) but Cognos saves everything in .xml files. I have an xml pretty printer that I use on model.xml before and after edits, so that I can use windiff to see what exactly changes in the model. I have also used an xml editor on model.xml on several occasions for global search and replace.
Having said that, I'm not sure how much of the database schema you can infer directly from model.xml, but I suspect if you had a script that could read and walk model.xml, and connect to the database to describe the objects, you could get what you need.
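As a sketch of that idea (the element and attribute names in a real model.xml depend on your Framework Manager version, so treat this purely as a starting point):

import xml.etree.ElementTree as ET

tree = ET.parse("model.xml")
root = tree.getroot()

# Walk every element and print its tag plus any name attribute or text,
# which is usually enough to spot query subjects and column expressions.
for elem in root.iter():
    name = elem.get("name")
    text = (elem.text or "").strip()
    if name or text:
        print(elem.tag, name or "", text[:80])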
The answer appears to be yes, to anything that supports CWM (the Common Warehouse Model) but as for how...
One suggestion: ask IBM.
It appears that PowerDesigner 15 imports from XMI models.
