I'm new to Jackrabbit Oak and I'm trying to create nodes for storing files (PDF, Word, Excel, etc.). When I try to create the node by specifying a type, as in the example code below, exceptions like this are thrown:
javax.jcr.RepositoryException: Unable to auto-create value for /e0a02fd0-d328-48f7-8874-685c591afe9a/jcr:createdBy.
Example code:
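Roughly, this is what the code does (a simplified sketch; the parent path, file name and input stream are placeholders):

// Store a binary file as an nt:file node with an nt:resource child.
Node parent = session.getNode("/files");                      // placeholder parent node
Node file = parent.addNode("report.pdf", "nt:file");          // placeholder file name
Node content = file.addNode("jcr:content", "nt:resource");
Binary binary = session.getValueFactory().createBinary(inputStream); // placeholder stream
content.setProperty("jcr:data", binary);
content.setProperty("jcr:mimeType", "application/pdf");
session.save();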
I have this JSON in an S3 bucket that has been crawled with a classifier that splits arrays into records, using the JSON classifier $[*].
I noticed that the JSON is on a single line - nothing wrong with the syntax - but this results in the created table having a single column of type array, containing a struct which holds the actual fields I need.
In Athena I wasn't able to access the data, and Glue was not able to read the columns as in array.field, so I manually changed the structure of the table to a single struct type with the other fields inside. This I'm able to query in Athena, and the Glue wizard recognises the individual columns as part of the struct.
When I create the job and map the fields accordingly, I test the output on a table in an S3 bucket. This is the mapping that is automatically generated (note the array.field notation):

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("array.col1", "long", "col1", "long"), ("array.col2", "string", "col2", "string"), ("array.col3", "string", "col3", "string")], transformation_ctx = "applymapping1")

The job does not fail at all, but it creates files in the bucket that are empty!
Another thing I've tried is to modify the source JSON and add line breaks:
this is before:
[{"col1":322,"col2":299,"col3":1613552400000,"col4":"TEST","col5":"TEST"},{"col1":2,"col2":0,"col3":1613552400000,"col4":"TEST","col5":"TEST"}]
this is after:
[
{"col1":322,"col2":299,"col3":1613552400000,"col4":"TEST","col5":"TEST"},
{"col1":2,"col2":0,"col3":1613552400000,"col4":"TEST","col5":"TEST"}
]
Modifying the file as shown above lets me correctly read and write the data, which led me to believe that the problem is the single-line JSON at the source. Before asking for the source JSON to be changed, is there something I can implement in my Glue job (Spark 2.4, Python 3) to handle a JSON on a single line? I've searched everywhere but found nothing.
The end goal is to load the data into Redshift; we're testing S3 to S3 to figure out why the data isn't being read.
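One idea I'm considering (untested sketch; the S3 path and frame names below are placeholders) is to skip the crawler's table for this source and read the raw file with the plain Spark DataFrame reader, whose multiLine option can parse a file whose entire content is one JSON array, then convert the result back to a DynamicFrame:

# Untested sketch; the S3 path and frame names are placeholders.
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# multiLine lets Spark parse a file that is a single top-level JSON array,
# producing one row per array element instead of one array-typed column.
df = spark.read.option("multiLine", "true").json("s3://my-bucket/path/to/source.json")

datasource0 = DynamicFrame.fromDF(df, glue_context, "datasource0")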
Thanks in advance for your time and consideration.
I am using Gremlin to access data in AWS Neptune. I need to modify 2 edges going out from a single vertex so that they point to 2 vertices different from the ones they currently point to.
For instance if the current relation is as shown below:
(X)---(A)---(Y)

(B)         (C)

I want it modified to:

(X)   (A)   (Y)
     /   \
  (B)     (C)
To ensure the whole operation is done in a single transaction, I need this done in a single traversal (because manual transaction logic using tx.commit() and tx.rollback() is not supported in AWS Neptune).
I tried the following queries to get this done but failed:
1) Add the new edges and drop the previous ones by selecting them using an alias:
g.V(<id of B>).as('B').V(<id of C>).as('C').V(<id of A>).as('A').outE('LINK1','LINK2')
.as('oldEdges').addE('LINK1').from('A').to('B').addE('LINK2').from('A').to('C')
.select('oldEdges').drop();
Here, since outE('LINK1','LINK2') returns 2 edges, the addE steps after it execute twice, so I get double the expected number of edges from A to B and C.
2) Add the new edges and drop the existing edges whose ids are not equal to those of the newly added ones:
g.V(<id of B>).as('B').V(<id of C>).as('C').V(<id of A>).as('A')
.addE('LINK1').from('A').to('B').as('edge1').addE('LINK2').from('A').to('C').as('edge2')
.select('A').outE().where(__.hasId(neq(select('edge1').id()))
.and(hasId(neq(select('edge2').id())))).drop();
Here I get the following exception in my gremlin console:
could not be serialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.
java.lang.IllegalArgumentException: Class is not registered: org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal
Note: To register this class use: kryo.register(org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal.class);
at org.apache.tinkerpop.shaded.kryo.Kryo.getRegistration(Kryo.java:488)
at org.apache.tinkerpop.gremlin.structure.io.gryo.AbstractGryoClassResolver.writeClass(AbstractGryoClassResolver.java:110)
at org.apache.tinkerpop.shaded.kryo.Kryo.writeClass(Kryo.java:517)
at org.apache.tinkerpop.shaded.kryo.Kryo.writeClassAndObject(Kryo.java:622)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.writeClassAndObject(ShadedKryoAdapter.java:49)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.writeClassAndObject(ShadedKryoAdapter.java:24)
...
Please help.
You can try the following, which adds the two new edges and drops the old ones within a single traversal, so Neptune executes it as one transaction:
g.V(<id of A>).union(
    addE('LINK1').to(V(<id of B>)),
    addE('LINK2').to(V(<id of C>)),
    outE('LINK1', 'LINK2').where(inV().hasId(<id of X>, <id of Y>)).drop()
)
Is it possible via a Sling query to access a whole node by GUID?
I know that it is possible to do a search by GUID, but that means that after doing the search we must do another query to get the node.
I would like to get a node with only one query.
You can access the node by identifier programmatically using javax.jcr.Session.getNodeByIdentifier:
http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/Session.html#getNodeByIdentifier(java.lang.String)
If you want to be able to have access to it through an HTTP request, then create a servlet that exposes this functionality.
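For example, something along these lines (a rough sketch; the "uuid" request parameter name is made up and error handling is minimal):

// Sling servlet sketch: look up a node by its identifier and return its path.
@Override
protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
        throws ServletException, IOException {
    try {
        Session session = request.getResourceResolver().adaptTo(Session.class);
        Node node = session.getNodeByIdentifier(request.getParameter("uuid")); // made-up parameter name
        response.getWriter().write(node.getPath());
    } catch (RepositoryException e) {
        response.sendError(HttpServletResponse.SC_NOT_FOUND);
    }
}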
You can get the node by UUID using either an XPath query such as
/jcr:root//*[@jcr:uuid='b1e1d3c3-983c-33d6-811c-18d2a8824e03']
or
Node node = session.getNodeByIdentifier(id);
there's a good code sample here:
Jackrabbit Running Queries against UUID
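If you go the query route, a rough sketch of running that XPath query through the JCR query API (using the UUID from the example above) looks like this:

// Execute the XPath query and take the first matching node, if any.
QueryManager queryManager = session.getWorkspace().getQueryManager();
Query query = queryManager.createQuery(
    "/jcr:root//*[@jcr:uuid='b1e1d3c3-983c-33d6-811c-18d2a8824e03']", Query.XPATH);
NodeIterator nodes = query.execute().getNodes();
if (nodes.hasNext()) {
    Node node = nodes.nextNode();
}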
You can also try
propertyIterator = node.getReferences();
This appears to rely on mix:referenceable, which may not be the case for your nodes.
Javadoc: http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/Node.html#getReferences()
Related question: Jackrabbit - node.getReferences() not returning anything
I've created a content type in Drupal 7 with 5 or 6 fields. Now I want to use a function to query them in a hook_view callback. I thought I would query the node table, but all I get back are the nid and title. How do I get back the values for my created fields using the database abstraction API?
Drupal stores the fields in other tables and can automatically join them in. The storage varies depending on how the field is configured, so the easiest way to access them is by using an EntityFieldQuery. It'll handle the complexity of joining all your fields in. There are some good examples of how to use it here: http://drupal.org/node/1343708
But if you're working in hook_view, you should already be able to access the values; they're loaded into the $node object that's passed in as a parameter. Try running:
debug($node);
in your hook and you should see all the properties.
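Once you can see the field names on $node, you can read a value directly, for example with field_get_items() (a rough sketch; 'field_subtitle' is a placeholder field name):

<?php
// Inside hook_view(): read the first value of one of your custom fields.
$items = field_get_items('node', $node, 'field_subtitle'); // placeholder field name
if ($items) {
  $subtitle = $items[0]['value'];
}
?>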
If you already know the IDs of the nodes (nid) you want to load, you should use node_load_multiple() to load them. This will load the complete nodes with all field values. To search for node ids, EntityFieldQuery is the recommended way, but it has some limitations. You can also use the database API to query the node table for the nid (and revision ID, vid) of your nodes, then load them using node_load_multiple().
Loading a complete node can have a performance impact, since it will load much more data than you need. If this proves to be an issue, you can try to access the field storage tables directly (if your field values are stored in your SQL database). The schema of these tables is built dynamically depending on the field types, cardinality and other settings. You will have to dig into your database schema to figure it out, and it will probably change as soon as you change something on your fields.
Another solution is to build stub node entities and use field_attach_load() with a $options['field_id'] value to only load the values of a specific field. But this requires a good knowledge and understanding of the Field API.
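For the first approach (querying the node table, then using node_load_multiple()), a rough sketch looks like this; the content type name is a placeholder:

<?php
// Find the nids of published nodes of one content type, then load the full nodes.
$nids = db_query("SELECT nid FROM {node} WHERE type = :type AND status = 1",
  array(':type' => 'my_content_type'))->fetchCol(); // placeholder content type
$nodes = node_load_multiple($nids);
?>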
See How to use EntityFieldQuery article in Drupal Community Documentation.
Creating A Query
Here is a basic query looking for all articles with a photo that are
tagged as a particular faculty member and published this year. In the
last 5 lines of the code below, the $result variable is populated with
an associative array with the first key being the entity type and the
second key being the entity id (e.g., $result['node'][12322] = partial
node data). Note that $result won't have the 'node' key when it's
empty, hence the check using isset; this is explained here.
Example:
<?php
$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->fieldCondition('field_news_types', 'value', 'spotlight', '=')
  ->fieldCondition('field_photo', 'fid', 'NULL', '!=')
  ->fieldCondition('field_faculty_tag', 'tid', $value)
  ->fieldCondition('field_news_publishdate', 'value', $year . '%', 'like')
  ->fieldOrderBy('field_photo', 'fid', 'DESC')
  ->range(0, 10)
  ->addMetaData('account', user_load(1)); // Run the query as user 1.

$result = $query->execute();

if (isset($result['node'])) {
  $news_items_nids = array_keys($result['node']);
  $news_items = entity_load('node', $news_items_nids);
}
?>
Other resources
EntityFieldQuery on api.drupal.org
Building Energy.gov without Views
I am trying to connect to a SQLite database and have a method that specifies a specific row from the database (the first column in the database is “ID” and is a primary key), then extracts the information from a few other columns in that row and displays it in text fields.
This will be used for a simple Trivia game I am making; I will later make a random method that will choose the row at random.
I have been struggling with this problem for several weeks and I have been through loads of tutorials, but all of them deal with displaying the data in a table view; I want to display it simply in text fields in a view-based app. I am fairly confused at this point, so any help starting from loading the database to displaying the data in the text fields would be GREATLY APPRECIATED!
Thanks!
Link to libsqlite3.dylib (and #import <sqlite3.h>) to access the power of SQLite. There are a number of lightweight Objective-C front ends and I suggest you pick one. In this example, I use fmdb (https://github.com/ccgus/fmdb) to read the names of people out of a previously created database:
NSString* docsdir = [NSSearchPathForDirectoriesInDomains(
    NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString* dbpath = [docsdir stringByAppendingPathComponent:@"people.db"];
FMDatabase* db = [FMDatabase databaseWithPath:dbpath];
if (![db open]) {
    NSLog(@"Ooops");
    return;
}
FMResultSet *rs = [db executeQuery:@"select * from people"];
while ([rs next]) {
    NSLog(@"%@ %@",
        [rs stringForColumn:@"firstname"],
        [rs stringForColumn:@"lastname"]);
}
[db close];
/* output:
Snidely Whiplash
Dudley Doright
*/
That illustrates talking to the database; knowing SQL is up to you (and is a different topic). You can include a previously constructed SQLite file in your app bundle, but you can't write to it there; the solution is to copy it from your app bundle into another location, such as the Documents directory, before you start working with it.
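For that copy step, a minimal sketch (assuming the bundled file is named people.db and dbpath is the Documents path from the snippet above):

// Copy the read-only database from the app bundle into Documents on first run.
NSString *bundledDb = [[NSBundle mainBundle] pathForResource:@"people" ofType:@"db"];
NSFileManager *fm = [NSFileManager defaultManager];
if (![fm fileExistsAtPath:dbpath]) {
    [fm copyItemAtPath:bundledDb toPath:dbpath error:nil];
}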
Finally, to put strings into text fields (UITextField), set their text property. So for example instead of the while loop shown above, where I log the database results, I could use those results to set text field values:
myTextField.text = [rs stringForColumn:@"firstname"];
myOtherTextField.text = [rs stringForColumn:@"lastname"];