Create and analyze a CortexDB database

Here is my problem: I have recently started using CortexDB, NoSQL software for database analysis. I have read the (poor) documentation at https://docs.cortex-ag.com/en/CortexDB/CortexDB/ and obtained a free license to evaluate the program. As the documentation is unclear, I have some questions:
1) How do I create a database?
2) How can I import a database contained in an Excel file (.csv)?
3) How do I create charts or analyses from the data entered?
Thanks

Even though the question is quite old, I hope I can still help you.
First of all: you should download the latest release of the free version (simple registration and download).
The free version includes the server and two databases. One server process handles one database, so for a second database you have to start a second server (on a different port, of course). When you start the free version you get an empty database (or the pre-filled, pre-configured demo DB). If you want to create a completely new database without any predefined configuration, start the server process from the command line with the -n parameter (ctxserver64 -n). Once it is running, you configure everything by hand with the 'remote admin' tool.
That question is not clear to me. Do you mean how to import a CSV file into CortexDB, or how to export the database content to an Excel file?
If you want to import the CSV file into CortexDB, the easiest way is to use the CortexImplex tool. It is fully explained in the online docs (https://docs.cortex-ag.com/en/CortexImplex/CortexImplex-Basics/).
If you want to export datasets as a CSV file, all you have to do is configure a list in CortexUniplex as a view of your datasets and export it as CSV (the export function is in the list menu).
I would do the charting with d3.js. For that you can use the so-called 'DataService' of CortexUniplex. It is a kind of API for posting requests and receiving JSON objects. Once you have a fully configured UniPlex, you can reuse your entire configuration as JSON objects in other apps (for example, charts or a custom application).
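As a rough illustration of that idea in Python (a sketch only: the DataService endpoint URL and request body below are hypothetical placeholders, not the documented interface, so check the UniPlex docs for the real one):

import json
import requests

# Hypothetical endpoint; the real host/port/path come from your UniPlex setup.
DATASERVICE_URL = "http://localhost:9978/dataservice"

# Hypothetical request body: ask for the datasets behind a configured list.
payload = {"list": "myConfiguredList"}

response = requests.post(DATASERVICE_URL, json=payload, timeout=30)
response.raise_for_status()

# Persist the JSON so a d3.js page (or any other app) can load and chart it.
with open("chart-data.json", "w") as f:
    json.dump(response.json(), f, indent=2)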
The full version has a simple dashboard inside CortexUniplex. Maybe the vendor offers it in the free version as well.
By the way: it is always a good idea to write an email to the info address. Because this database is not widely known, the people behind it are very helpful. You can also contact them via Twitter or other channels (see the bottom of the cortex-ag.com webpage).

Related

Export Outlook Emails Into SQL (Via Access?)

I have an email folder in Outlook that contains hundreds of emails recording my discussions with the developer of some bespoke software. I want to import these into SQL to create a knowledge base that can be searched to extract all the decisions we have made during the course of this two-year project.
Having searched the net, I found that it is very easy to dump the contents of an email folder into Access using the import data functionality. In fact, I have linked the table, and so believe (never used Access before!!) that I now have an Access table connected in 'real time' to the Outlook folder. This is exactly what I want, BUT in SQL, as that is what I am very familiar with.
So I have tried to import the Access database into SQL (which also appears to be relatively easy) but keep getting the message 'The source database ... contains no visible tables or views'. Checking SQL permissions, I am the owner of this new database.
Two questions, please. First, I can't believe that going through Access is the simplest way to do this, and I presume that I will lose the 'real-time' link; am I right? Second, given that my Access database clearly has a visible table, why am I getting the error?
The easiest and quickest way is to create a VBA macro that populates your SQL database from Outlook emails. You can build the table structure according to your needs and extract the required information from Outlook using VBA. I'd suggest processing emails in chunks using the Find/FindNext or Restrict methods of the Items class, so you don't hit the reference counter limit. The MailItem properties are described on MSDN.
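If VBA is unfamiliar, the same Outlook-object-model approach can be sketched in Python with pywin32 and pyodbc (a sketch under assumptions: the subfolder name, table, and connection string below are made up):

import pyodbc
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)              # 6 = olFolderInbox
folder = inbox.Folders("Developer Discussions")  # hypothetical subfolder name

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=KnowledgeBase;Trusted_Connection=yes;"  # made up
)
cursor = conn.cursor()

# Restrict() narrows the item set before iterating, the same chunking idea
# as the Find/FindNext and Restrict methods suggested above for VBA.
items = folder.Items.Restrict("[ReceivedTime] >= '01/01/2020'")
for msg in items:
    cursor.execute(
        "INSERT INTO Emails (Subject, Sender, ReceivedAt, Body) "
        "VALUES (?, ?, ?, ?)",
        msg.Subject, msg.SenderName, str(msg.ReceivedTime), msg.Body,
    )  # ReceivedTime is passed as text here for simplicity

conn.commit()
conn.close()

The same Restrict filter string works from VBA, so the chunking advice carries over directly.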
BTW, the internal store in Outlook (if you use cached mode) acts like a database. So why do you need to introduce yet another database?

Working with a database using Spock and Geb

I hope someone has already faced the issue of verifying that an application shows correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where and how I should do that. I'm just starting to use Gradle + Spock + Geb for testing an application. I have a few files describing a couple of pages from the application, a couple of modules, and a file with a Spock specification. Where and how do I connect to the Oracle DB, run SQL, and compare the results with the application's data?
P.S. I write everything in Notepad++ and launch from the command line with 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle + Spock + Geb?
Thanks in advance.
Because there are no other answers, I wanted to share a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC; in our case it is JDBI.
The idea is to extend ClassLoader and then use that to load the data access object class directly in the JVM. That idea should work.
I have not tested it, because it doesn't completely fit our use case. I'll admit it doesn't completely apply to your use case either, but technically you could just run the jar of an existing project that can access the database.

Convert a Plone database to CSV or SQL

I am helping out an organization that is planning to change its membership system. Their current system is built in Plone, and all their data is in a Data.fs file.
Their system is down at the moment, and it would take some time and effort to get it up and running.
Is there a way to get the data out of the database into a standard format such as CSV files or SQL? Or do they need to get the system up and running first and export the files from "within" Plone?
Thanks for your help and ideas!
Kind regards,
Samuel
The Data.fs file is an object-oriented database file, written by a framework called ZODB. The data within it represents Python instances, laid out in a tree structure.
You could open this database from a Python script, but in order to make sense of the contained structures, you'll need access to the original class definitions that make up the stored instances. Without those class definitions, all you'll get is placeholder objects (Broken objects) that are of no use at all.
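To make that concrete, here is a minimal sketch of opening the file directly (assuming the ZODB package is installed; run it in an environment where the original Plone and add-on classes are importable, or most of what you traverse will be Broken placeholders):

from ZODB import DB
from ZODB.FileStorage import FileStorage

# Open the Data.fs read-only so nothing is accidentally modified.
storage = FileStorage("Data.fs", read_only=True)
db = DB(storage)
connection = db.open()
root = connection.root()

# A Zope/Plone site normally hangs off the 'Application' key of the root.
print(list(root.keys()))
app = root.get("Application")

connection.close()
db.close()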
As such, it's probably easier to just get the Plone instance back up and running, since it'll be easier to export exactly the data you want when you have things like the catalog (basically a specialized database index) to build your export on.
It could be that this site is down because of something trivial, something we can help you with here on Stack Overflow, on the Plone users mailing lists, or in the #plone IRC channel. If you do get it up and running and can share some details on what you are trying to export, we can certainly help.
You'll need to get the system up and running to export the data. The data in the Data.fs file is stored as Python pickles and is not intelligible to "outside" systems.
As others have pointed out, your best course is to get Plone running again. After doing so, try csvreplicata to export the existing data to CSV format. For user accounts, try atreal.usersinout.
If you need professional help, you can search for available providers at http://plone.org/support/providers
For free support, post specific problems here.
I recently managed to export a Plone 4 site to SQLite using SQLExporter: http://plone.org/products/proteon.sqlexporter. But you need to get your Plone instance working first to use it.

How do I elegantly import an Excel file into SQL Server via a ColdFusion HTML form?

Does anyone have an elegant suggestion for how to get the contents of an Excel spreadsheet into SQL Server via a web form? I need to allow our clients to upload modest amounts of structured data, and I need that data to ultimately reside in a SQL table. I really can't expect the clientele to produce anything but an Excel file, but I could require it to be an .xlsx.
The web app is written in ColdFusion; it doesn't need to handle huge numbers of simultaneous requests, but I don't want to resort to some sort of server-side batch job processing or shunt the user to an ASP.NET page (which is what we are doing now).
Any recommendations (or examples of how others are doing this successfully) would be appreciated. Due to the sensitivity of the data, we really can't do anything that would compromise the security of the web or SQL servers.
If you are using CF9, you could easily use the cfspreadsheet tag too. I mention this one specifically because Shawn's link did not (presumably because cfspreadsheet is relatively new on the CF scene). Here's the livedocs link: http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec17cba-7f87.html
To make full use of it, I would create a web form with a standard file-upload field. In the backend code handling the form submission, get a copy of the file with:
<cffile action="upload" destination="uploaded.xls".....>
Then use:
<cfspreadsheet action="read" query="myExcelData" src="uploaded.xls" ...>
At that point, your spreadsheet content will be available as a query object. You can then loop over that query, running an insert into your SQL Server on each iteration. That should do it.
Here are the most notable options to help point you in the right direction; choose what you are most comfortable with (Source: Charlie Arehart).
CFXL
JXLS
CFX_Excel
My personal recommendation is to go the CFX_Excel route. Although a commercial product, it will grant you the most functionality/flexibility of the options listed.

Best strategy to initially populate a Grails database backend

I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool suits you) that:
1) generates the BootStrap commands with the domain classes, runs them in the test or dev environment, and then uses native DB commands to export the data to prod, or
2) creates the DB insert script directly, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data, then putting that data into domain objects, is the easiest approach. Once the data is imported, you just use your database system's own commands to move it to the production database.
Update:
Apparently the updated entry referenced from the blog entry I linked to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Allowing for that when creating whatever scripts you need (in the language of your preference) should suffice. I understand that in the 1.3 release every table is planned to get its own sequence.
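To illustrate the second approach with that single-sequence behavior in mind, here is a hypothetical Python sketch (the CSV files, tables, and columns are made up) that draws every ID from one shared counter, mirroring GORM 1.2:

import csv
import itertools

next_id = itertools.count(1)  # one counter shared by all tables, like GORM 1.2

def inserts_for(csv_path, table, columns):
    # Yield INSERT statements with version = 0 and a globally unique id.
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            values = ", ".join("'%s'" % row[c].replace("'", "''") for c in columns)
            yield ("INSERT INTO %s (id, version, %s) VALUES (%d, 0, %s);"
                   % (table, ", ".join(columns), next(next_id), values))

with open("seed.sql", "w") as out:
    for stmt in inserts_for("authors.csv", "author", ["name"]):
        out.write(stmt + "\n")
    for stmt in inserts_for("publishers.csv", "publisher", ["name"]):
        out.write(stmt + "\n")

After loading the output you would still have to advance Hibernate's sequence past the highest generated ID, or GORM will later hand out IDs that collide with the seeded rows.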
