D2RQ default mapping scheme - database

What is the default mapping scheme for D2RQ?
Is it triple-based mapping, value-based mapping or object-based mapping?

The documentation says:
The generate-mapping script automatically generates a D2RQ mapping
from the table structure of a database. The tool generates a new RDF
vocabulary for each database, using table names as class names and
column names as property names. Semantic Web client applications will
understand more of your data if you customize the mapping and replace
the auto-generated terms with terms from well-known and publicly
accessible RDF vocabularies.
The mapping file can be edited with any text editor. Its syntax is
described in the D2RQ language specification.
The D2R Server tutorial gives a bit more detail, although it uses a non-default mapping file. You might begin by working through the tutorial, examining the mapping generated by the script, and then comparing it to the one provided for the tutorial.

Related

How do I export data with attachments from a Lotus Notes Database into an Excel Spreadsheet or into a Microsoft Access Database?

I'm not a Lotus Notes developer, but I have to get data from a Lotus Notes database into SharePoint. All of the LN entries have attachments. I tried exporting to a CSV file, but that doesn't include the attachments. I then created a new view with the Attachments field, but that only returns the number of attachments. How can I extract the attachments associated with each LN form? Thanks in advance.
Your question is pretty broad. Attachments are (sometimes) treated as embedded objects in a Rich Text Field. This URL has some sample code:
https://www.ibm.com/support/knowledgecenter/en/SSVRGU_9.0.1/basic/H_EXAMPLES_EMBEDDEDOBJECTS_PROPERTY_RTITEM.html
Copy/paste may not work for you because the attachments may not be in a field called "Body", or there may be multiple "Body" fields on the document (which requires other considerations beyond the scope of this question), or the attachments may be embedded objects in the document. Or all of the above. But that code will give you a sense of what you need to do.
Also, see this:
How to retrieve Lotus Notes attachments?
I have done this by writing LotusScript code to detach all the attachments from all docs into a single folder, using the document's UNID plus the attachment name for the filename in the folder. Adding the UNID covers cases where attachments with the same name exist in multiple documents and might actually have different content. I do not attempt to de-duplicate.
The agent adds a NotesItem to each document giving the filename(s) of the detached attachment(s).
I then create a view containing all the fields that I want to export, including the new field with the filenames. I export that view to CSV. I hand the CSV and a zip file containing the attachments over to the SharePoint team.
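For anyone doing this step in Java rather than LotusScript, a rough sketch of the same detach-and-record agent using the lotus.domino back-end classes might look like the following. The export folder and the ExportedAttachments item name are placeholders, and real code would need extra handling for encrypted documents and duplicate file names:

```java
import lotus.domino.*;
import java.util.Vector;

// Sketch of a Java agent: detach every attachment into one folder, prefixing the
// file name with the document's UNID, and record the exported names on the document.
public class DetachAllAttachments extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getAgentContext().getCurrentDatabase();
            DocumentCollection docs = db.getAllDocuments();
            String exportDir = "C:\\export\\";   // placeholder target folder

            Document doc = docs.getFirstDocument();
            while (doc != null) {
                Vector exported = new Vector();
                // @AttachmentNames lists the attachments on the document
                Vector names = session.evaluate("@AttachmentNames", doc);
                for (Object o : names) {
                    String name = (String) o;
                    if (name == null || name.isEmpty()) continue;
                    EmbeddedObject eo = doc.getAttachment(name);
                    if (eo != null) {
                        // UNID prefix keeps same-named attachments from different docs apart
                        String target = doc.getUniversalID() + "_" + name;
                        eo.extractFile(exportDir + target);
                        exported.addElement(target);
                        eo.recycle();
                    }
                }
                if (!exported.isEmpty()) {
                    // the new item can then show up as a column in the export view
                    doc.replaceItemValue("ExportedAttachments", exported);
                    doc.save(true, false);
                }
                Document next = docs.getNextDocument(doc);
                doc.recycle();
                doc = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}
```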
Maybe a bit late but... I do have extensive experience (approx. 15 years) with data extraction from IBM Notes applications/databases - independent of the type of application - and have supported migrations of quite a few large IBM Notes applications to various targets for companies around the world.
You can access IBM Notes databases using the native C API, LotusScript, COM, or Java, for example, or make a document available for further processing by exporting it to Domino XML (DXL) format.
The C-API is the foundation of IBM Notes, meaning that COM and Java APIs only offer a subset of the C-API's functionality. Any of the APIs should give you the ability to extract a document's metadata and attachments. However:
A document, including its attachments, can be encrypted using an IBM Notes ID. If you do not have access to the ID that was used to encrypt the document, you will be able to extract neither the document nor the attachment.
Attachments can be "real attachments" or so called "embedded objects". Depending on the type of attachment, the attachment needs to be handled differently if it comes to the API calls required to do the export.
Attachments can be compressed. In most cases, the API should handle the decompression transparently. However, there is at least one proprietary compression algorithm (based on Huffman coding) that is widely used. If you extract documents in DXL format, you will not be able to read those attachments, as they are embedded into the DXL in compressed form.
Objects embedded into a document using Object Linking and Embedding (OLE) cannot be extracted using the COM or Java API. That is, even if you gain access to the documents, you will not be able to transform them into a readable format.
If the information you are trying to transfer from IBM Notes to SharePoint is important to the company you work for, I would recommend relying on a proven solution for the export/migration rather than developing this on your own, as the details can really be tricky.
Should you have any further questions, don't hesitate to get in touch.

What is the formal model behind Sense/Net ECM?

First, I don't know if this is the right place to discuss ideas related to Sense/Net (SN) evolution and the process of learning about it!
Anyway, this is my story:
I have tried & tested some SN functionality especially content type definition CTD; It is really elegant!
Sense/Net wiki documentation gives us "Know How" and we may write 200 wiki pages about SN. All included information are true. However, we don't have the complete model in which we can see the whole system model and how all cases derived from it.
I searched SN codeplex.com pages but didn't find how SN evolved to be mature ECM platform.
I also searched Google using the following keywords:
"Document Management System Modeling"
"Role-based access control (RBAC) model"
.....
Please collaborate & help.
It's curious that no one from SenseNet has answered, but I'll give it a shot even though I don't know a lot of the history. I've been working with SenseNet for the last 4+ years, developed the pysensenet extension, communicate with the developers, and am familiar with the source code, so I know a bit about the framework.
The framework has evolved over the last 15+ years and is pretty remarkable. Here are a few facts and highlights:
The data model is at its core an XML tree where each tree node has an internal representation as a C# class and can hold any number of properties/fields (see the sketch after this list). This is referred to as Content, and the database as the Content Repository.
The XML Tree is persisted in a SQL Database and uses Lucene.NET for indexing.
Content / data queries are made in Lucene and not SQL.
At one time the back-end database was arbitrary (any SQL database), but stored procedures later locked it into MS SQL Server, although recent versions (SenseNet 7) support blob storage in MongoDB.
Fields can be one of 9 built-in field types, or a custom type that you define.
A node in the XML Tree, aka "Content", can hold a field that references another node somewhere else in the tree, like a linked list inside a tree! OK, a doubly linked list since both nodes can refer to each other. Very cool.
There is no "external model", or as SenseNet says, "Everything is Content".
The permission system is node based and is incredibly granular. For example, you can define permissions such that one role, group or person, can only see the Content at a particular node. And it integrates with Active Directory.
All Content can be versioned and tracked. For example, a Content Type of "Contact" (person) could have versioning on for the person's name. This way, if someone changed their name, the Content Repository would have a history of all the name changes.
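To make the data-model bullets above a bit more concrete, here is a tiny, illustrative sketch of a content tree whose nodes carry arbitrary fields, including fields that reference other nodes. It is written in Java only for consistency with the other examples on this page; SenseNet itself is C#, and its real Node/Content classes are of course far richer than this:

```java
import java.util.*;

// Illustration only: a tree of "content" nodes where every node can carry
// arbitrary named fields, and a field may point at another node in the tree.
class ContentNode {
    final String name;
    final Map<String, Object> fields = new HashMap<>(); // built-in or custom field values
    final List<ContentNode> children = new ArrayList<>();
    ContentNode parent;

    ContentNode(String name) { this.name = name; }

    ContentNode addChild(ContentNode child) {
        child.parent = this;
        children.add(child);
        return child;
    }
}

public class ContentTreeDemo {
    public static void main(String[] args) {
        ContentNode root = new ContentNode("Root");
        ContentNode docs = root.addChild(new ContentNode("Documents"));
        ContentNode people = root.addChild(new ContentNode("People"));

        ContentNode contract = docs.addChild(new ContentNode("Contract.docx"));
        ContentNode alice = people.addChild(new ContentNode("Alice"));

        // A reference field: the document points at a node elsewhere in the tree,
        // and that node can point back -- the "linked list inside a tree" mentioned above.
        contract.fields.put("Owner", alice);
        alice.fields.put("OwnedDocuments", List.of(contract));
    }
}
```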
Hopefully this doesn't come off as a SenseNet marketing piece -- I don't work for them and don't benefit if you purchase a license -- but it may help you compare it to other technologies such as SharePoint and Alfresco.

Database Driven Model Mapping in Java

So, I have a project where I get data from a dozen different sources: some are database objects, but most often the data comes in various JSON or XML formats. I need to take this disparate data and pull it into one single, clean, managed object that we control.
I have seen dozens of different posts on various tools for object-to-object mapping, Orika being one of them. But the problem is that Orika, like many of these, still needs solid classes defined to do the mapping. If there is a change to the mapping, then I have to change my class, re-commit it, then do a build and deploy new code ... BTW, testing would also have to be done like any code change. So, maybe some of these tools aren't a great solution for me.
Then I was looking to do some sort of database-driven mapping, where I have a source, a field, and then the new field or function I would like to map it to. So, with a database-driven tool, I could modify the mappings in the database and everything would keep working as it should. I could always create a front-end for maintaining these mappings.
So, with that ... I am asking whether there is any database-driven tool where I can do field-to-field or field-to-function mappings. Drools was my first choice, but I don't know if it is my best choice; maybe it is overkill for my needs. So, I was looking for advice on what might be the best tool to do my mapping.
Please let me know if you need any more information from me, and thanks for all the help!
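To make the idea concrete, here is a minimal sketch of what such a database-driven mapping could look like: the rules (source field, target field) live in an ordinary table, the incoming JSON is parsed into a generic Map with Jackson, and the rules are applied at runtime, so changing a mapping becomes a data change rather than a code change. The field_mapping table, its columns, and the class names are all invented for the example, and only flat JSON keys are handled:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.sql.*;
import java.util.*;

public class DbDrivenMapper {

    /** One mapping rule: copy sourceField from the raw payload into targetField. */
    record Rule(String sourceField, String targetField) {}

    /** Load the rules for a given source system from a (hypothetical) field_mapping table. */
    static List<Rule> loadRules(Connection conn, String sourceSystem) throws SQLException {
        String sql = "SELECT source_field, target_field FROM field_mapping WHERE source_system = ?";
        List<Rule> rules = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, sourceSystem);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rules.add(new Rule(rs.getString(1), rs.getString(2)));
                }
            }
        }
        return rules;
    }

    /** Apply the rules to a JSON payload and return the canonical fields as a Map. */
    static Map<String, Object> map(String json, List<Rule> rules) throws Exception {
        ObjectMapper om = new ObjectMapper();
        Map<String, Object> source =
                om.readValue(json, new TypeReference<Map<String, Object>>() {});
        Map<String, Object> canonical = new LinkedHashMap<>();
        for (Rule rule : rules) {
            if (source.containsKey(rule.sourceField())) {
                canonical.put(rule.targetField(), source.get(rule.sourceField()));
            }
        }
        return canonical;
    }
}
```

The resulting Map could then be bound to the managed class with Jackson's convertValue, or handed to a mapper such as Orika as the answer below suggests.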
Actually, Orika can handle dynamic data sources like that; there is even an example of how to convert from an XML Element (DOM API) or even a JsonObject.
You can use an XML parser to convert your data into an Element object, or Jackson to get a JsonObject.
Then define your class map between your "canonical" Java class and these dynamic "classes".
See "Customizing the PropertyResolverStrategy" at http://orika-mapper.github.io/orika-docs/advanced-mappings.html
Here is an example of Orika mapping a MongoDB DBObject to a Java bean:
https://gist.github.com/elaatifi/ade7321a1405c61ff8a9
However, converting JSON is more straightforward than XML (the semantics of attributes/children/custom tags do not map cleanly onto JavaBeans).
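For completeness, here is a minimal sketch of the basic Orika classMap API the answer refers to. The Person and CanonicalContact classes are invented for the example; mapping directly from a DOM Element or JsonObject additionally requires the PropertyResolverStrategy customization described at the link above:

```java
import ma.glasnost.orika.MapperFacade;
import ma.glasnost.orika.MapperFactory;
import ma.glasnost.orika.impl.DefaultMapperFactory;

public class OrikaExample {

    // Invented source/target beans for illustration.
    public static class Person {
        private String fullName;
        private String mail;
        public String getFullName() { return fullName; }
        public void setFullName(String fullName) { this.fullName = fullName; }
        public String getMail() { return mail; }
        public void setMail(String mail) { this.mail = mail; }
    }

    public static class CanonicalContact {
        private String name;
        private String email;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    public static void main(String[] args) {
        MapperFactory factory = new DefaultMapperFactory.Builder().build();

        // Declare how fields on the source type map onto fields of the canonical type.
        factory.classMap(Person.class, CanonicalContact.class)
               .field("fullName", "name")
               .field("mail", "email")
               .register();

        MapperFacade mapper = factory.getMapperFacade();

        Person p = new Person();
        p.setFullName("Ada Lovelace");
        p.setMail("ada@example.org");

        CanonicalContact c = mapper.map(p, CanonicalContact.class);
        System.out.println(c.getName() + " <" + c.getEmail() + ">");
    }
}
```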

Plomino multilingual site

I've got Zope 2.13.15, Plone 4.2.0.1, LinguaPlone 4.1.3 and CMFPlomino 1.17.2
I need to make a multilingual website on Plone and am using Plomino. I see that I can translate the Plomino database, forms, and views with LinguaPlone, but not documents. I have seen the procedure at http://www.plomino.net/how-to/multilingual-applications (Multilingual applications - How to build a multilingual Plomino application) and in more detail at https://github.com/plomino/Plomino/issues/296. I'm not sure whether I can translate the content of documents using this procedure, because the mentioned tutorial states "If the text does not match any msgid from the i18n domain, it remains unchanged".
Does this mean that all the translations of document content should be in the .po files? Can anybody clarify this mechanism for me, and is this tutorial the right way to do document content translation? At the moment I'm not sure whether there is a document content translation solution for Plomino.
What is the procedure to translate document content in Plomino? The tutorials are not clear to me.
At the moment there is no document content translation solution for Plomino.
The mentioned solution can be used to translate pre-defined contents (like labels in forms, computed values, etc.) but obviously it is not applicable to content freely entered by users in documents.
Nevertheless, Plomino is already used in multilingual contexts.
A basic solution is to:
create a field to store the document language,
provide a Document ID formula which will use this lang field value (so we can guess the id of any translated version from the current doc id),
implement the different actions you might need (like "Translate this doc", "Switch to language xx", etc.) as basic Plomino actions.

Default Values for Dates in ArcGIS

In Microsoft SQL I can use the GETDATE() function as the default value for a DATETIME field. I'd like to be able to do the same kind of thing for a date field in an ArcGIS geodatabase. Is this possible, or am I limited to literal values?
My geodatabase is using ArcSDE 9.1. The Feature Class with the defining attributes is versioned.
Thanks,
Camel
ArcGIS generally leverages an external database engine, so unless you are talking about an individual shapefile, your data is being stored in Access, SQL Server, or Oracle. Unless you have ArcSDE, it is probably Access. You can define data directly in the database and assign defaults there and then link to the tables from your map authoring tool.
EDIT: After your last comment I consulted one of my more GIS-savvy friends, and she had the following to offer:
They will have to define the table and its defaults in the database and then join the table to the feature class via a common field. It is important not to join the date field to the feature class; in that case, the feature class would hold onto the values set up in the feature class and ignore the table value.
Hope that is of some help.
I ended up talking to Esri support about this issue. They confirmed that versioned tables do not inherit the default values of the original table (well, in SQL Server anyway).
With regard to creating a join between a table and the feature class, my workflow is as follows:
The data is exported to a shape file and copied to a PocketPC device
Data entry is via an ArcPad application
The shape file is synchronised and re-imported into the SDE
So basically, the DATETIME default would have to survive the export/import process. I didn't test whether this is possible. In the end, I inserted the default value programmatically on the PocketPC.
