How do you update a single value in xtdb? - database

Given the following document
{:xt/id 1
 :line-item/quantity 23
 :line-item/item 20
 :line-item/description "Item line description"}
I want to update the quantity to 25
As far as I can tell, I need to first query the DB, get the full doc, merge in the change, then transact the new doc back.
Is there a way to merge in just a change in quantity without doing the above?
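Roughly, what I'm doing now looks like this (a simplified sketch, assuming the 1.x Clojure API, with xtdb.api required as xt):
(let [db     (xt/db node)
      entity (xt/entity db 1)]
  ;; read the doc, merge the change, write it back
  (xt/submit-tx node [[::xt/put (assoc entity :line-item/quantity 25)]]))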
Thank you

You should be able to use transaction functions for this. They let you specify those multiple steps and push them down into the transaction log, ensuring that they execute in sequence (i.e. you will always retrieve the latest doc to update against at the point in time the transaction function call itself is pushed into the transaction log).
For your specific example I think it would look something like this (untested):
(xt/submit-tx node [[::xt/put
                     {:xt/id :update-quantity
                      ;; note that the function body is quoted
                      ;; and function calls are fully qualified
                      :xt/fn '(fn [ctx eid new-quantity]
                                (let [db (xtdb.api/db ctx)
                                      entity (xtdb.api/entity db eid)]
                                  [[::xt/put (assoc entity :line-item/quantity new-quantity)]]))}]])
This creates the transaction function itself, then you just need to call it to make the change:
;; `[[::xt/fn <id-of-fn> <id-of-entity> <new-quantity>]]` -- the `ctx` is automatically injected
(xt/submit-tx node [[::xt/fn :update-quantity 1 25]])
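Once the node has indexed that transaction, you can read the doc back to confirm the change (a quick check, again assuming the 1.x Clojure API):
(xt/sync node)               ;; wait for indexing to catch up
(xt/entity (xt/db node) 1)
;; => {:xt/id 1, :line-item/quantity 25, ...}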

Related

Laravel skip and delete records from Database

I'm developing an app which needs to record a list of a user's recent video uploads. Importantly, it only needs to remember the last two videos associated with the user, so I'm trying to find a way to keep just the last two records in the database.
What I've got so far is below. It creates the new record correctly, but I then want to delete all records older than the most recent two.
The problem is that this seems to delete ALL records, even though, by my understanding, the skip should miss out the two most recent records.
private function saveVideoToUserProfile($userId, $thumb ...)
{
    RecentVideos::create([
        'user_id' => $userId,
        'thumbnail' => $thumb,
        ...
    ]);

    RecentVideos::select('id')->where('user_id', $userId)->orderBy('created_at')->skip(2)->delete();
}
Can anyone see what I'm doing wrong?
Limit and offset do not work with delete, so you can do something like this:
$ids = RecentVideos::select('id')->where('user_id', $userId)->orderByDesc('created_at')->skip(2)->take(10000)->pluck('id');
RecentVideos::whereIn('id', $ids)->delete();
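Dropped into the original method, that would look something like this (a sketch, assuming the same RecentVideos model and that the two newest rows per user are the ones to keep):
private function saveVideoToUserProfile($userId, $thumb /* ... */)
{
    RecentVideos::create([
        'user_id' => $userId,
        'thumbnail' => $thumb,
        // ...
    ]);

    // Collect the ids of everything except the two newest rows, then delete them.
    $staleIds = RecentVideos::where('user_id', $userId)
        ->orderByDesc('created_at')
        ->skip(2)
        ->take(10000)
        ->pluck('id');

    RecentVideos::whereIn('id', $staleIds)->delete();
}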
First off, skip() does not skip the x number of recent records, but rather the x number of records from the beginning of the result set. So in order to get your desired result, you need to sort the data in the correct order. orderBy() defaults to ordering ascending, but it accepts a second direction argument. Try orderBy('created_at', 'DESC'). (See the docs on orderBy().)
This is how I would recommend writing the query.
RecentVideos::where('user_id', $userId)->orderBy('created_at', 'DESC')->skip(2)->delete();

Delete duplicate nodes between two nodes in Neo4j

Due to unwanted script execution, my database has some duplicate nodes, and it looks like this:
From the image, there are multiple nodes with 'see' and 'execute' from p1 to m1.
I tried to eliminate them using this:
MATCH (ur:Role)-[c:CAN]->(e:Entitlement{action:'see'})-[o:ON]->(s:Role {id:'msci'})
WITH collect(e) AS rels WHERE size(rels) > 1
FOREACH (n IN TAIL(rels) | DETACH DELETE n)
Resulting in this:
As you can see here, it deletes all the nodes with 'see' action.
I think I am missing something in the query which I am not sure of.
The good graph should be like this:
EDIT: Added one more scenario with extra relations
This works if there is more than one extra :) and cleanses your entire graph.
// get all the patterns where you have > 1 entitlement of the same "action" (see or execute)
MATCH (n:Role)-->(e:Entitlement)-->(m:Role)
WITH n, m, e.action AS EntitlementAction,
     COLLECT(e) AS Entitlements
WHERE size(Entitlements) > 1
// delete all entitlements, except the first one
FOREACH (e IN tail(Entitlements) |
  DETACH DELETE e
)
Your query is pretty explicitly matching the 'see' action:
MATCH (ur:Role)-[c:CAN]->(e:Entitlement{action:'see'})
You might try the query without specifying the action type.
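Untested, but something along these lines should handle both actions at once by grouping per source role and action value (assuming the same labels as above):
MATCH (ur:Role)-[:CAN]->(e:Entitlement)-[:ON]->(s:Role {id: 'msci'})
WITH ur, e.action AS action, collect(e) AS rels
WHERE size(rels) > 1
FOREACH (n IN tail(rels) | DETACH DELETE n)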
Edit:
I went back and played with this and this worked for me:
MATCH (ur:Role)-[c:CAN]->(e:Entitlement {action: "see"})-[o:ON]->(s:Role {id: 'msci'})
WITH collect(e) AS rels WHERE size(rels) > 1
WITH tail(rels) AS tail
MATCH (n:Entitlement {id: tail[0].id})
DETACH DELETE n
You'd need to run two queries, one for each action, but as long as there's only the one extra relationship it should work.

Talend avoid duplicate external ID with Salesforce Output

We are importing data into Salesforce through Talend and we have multiple items with the same internal id.
Such an import fails with the error "Duplicate external id specified" because of how upsert works in Salesforce. At the moment we have worked around that by setting the commit size of the tSalesforceOutput to 1, but that only works for small amounts of data; otherwise it would exhaust the Salesforce API limits.
Is there a known approach to this in Talend? For example, to ensure that items with the same external ID end up in different "commits" of tSalesforceOutput?
Here is the design for the solution I wish to propose:
tSetGlobalVar is here to initialize the variable "finish" to false.
tLoop starts a while loop with (Boolean)globalMap.get("finish") == false as an end condition.
tFileCopy is used to copy the initial file (A for example) to a new one (B).
tFileInputDelimited reads file B.
tUniqRow eliminates duplicates. Unique records go to tLogRow, which you have to replace with tSalesforceOutput. Duplicate records, if any, go to a tFileOutputDelimited called A (same name as the original file) with the option "Throw an error if the file already exists" unchecked.
OnComponent OK after tUniqRow activates the tJava, which sets the new value for the global "finish" variable with the following code:
if (((Integer) globalMap.get("tUniqRow_1_NB_DUPLICATES")) == 0) {
    globalMap.put("finish", true);
}
Explanation with the following sample data:
line 1
line 2
line 3
line 2
line 4
line 2
line 5
line 3
On the 1st iteration, 5 unique records are pushed into tLogRow, 3 duplicates are pushed into file A, and "finish" is not changed as there are still duplicates.
On the 2nd iteration, the operations are repeated for 2 unique records and 1 duplicate.
On the 3rd iteration, the operations are repeated for 1 unique record, and as there are no more duplicates, "finish" is set to true and the loop finishes automatically.
Here is the final result:
You can also decide to use another global variable to set the Salesforce commit level (using the syntax (Integer)globalMap.get("commitLevel")). This variable would be set to 200 by default and to 1 in the tJava if there are any duplicates. At the same time, set "finish" to true (without testing the number of duplicates), and you'll have a commit level of 200 for the 1st iteration and 1 for the 2nd (and no need for more than 2 iterations).
You'll decide which is the better choice depending on the number of potential duplicates, but note that you can do it without any change to the job design.
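The tJava body for that variant could look roughly like this (a sketch; "commitLevel" is just the illustrative global from above, initialized to 200 in tSetGlobalVar):
// if duplicates remain, push them one record per commit on the next pass
if (((Integer) globalMap.get("tUniqRow_1_NB_DUPLICATES")) > 0) {
    globalMap.put("commitLevel", 1);
}
// and stop the loop without testing the number of duplicates
globalMap.put("finish", true);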
I think it should solve your problem. Let me know.
Regards,
TRF
Do you mean you have the same record (the same account, for example) twice or more in the input? If so, can't you try to eliminate the duplicates and keep only the record you need to push to Salesforce? Otherwise, if each record carries specific information (so you need all the input records to build a complete one in Salesforce), consider merging the records before pushing the result into Salesforce.
And finally, if you can't do that, push the duplicates to a temporary space, push all the records except the duplicates into Salesforce, and iterate over this process until there are no more duplicates. Personally, if you can't just eliminate the duplicates, I prefer the 2nd approach as it's the solution with fewer Salesforce API calls.
Hope this helps.
TRF

Batch collecting in Solr PostFilter

What I want to do is exactly like this:
Solr: How to perform a batch request to an external system from a PostFilter?
and the approach I took is similar:
- don't call super.collect(docId) in the collect method of the PostFilter, but store all docIds in an internal map
- call the external system in finish(), then call super.collect(docId) for all the docs that pass the external filtering
The problem I have: docId exceeds maxDoc ("docID must be >= 0 and < maxDoc=100000 (got docID=123456)").
I suspect I am storing local docIds, and when the reader changes, docBase changes too, so the global docId, which I believe is constructed in super.collect(docId) from the docId parameter and docBase, becomes incorrect. I've tried storing super.delegate.getLeafCollector(context) along with the docId and calling super.delegate.getLeafCollector(context).collect() instead of super.collect(), but this doesn't work either (I got a NullPointerException).
Look at the code for the CollapsingQParserPlugin in the Solr codebase, particularly CollapsingScoreCollector.finish.
The docIds you receive in the collect call are not globally unique. The Collapsing collector makes them unique by adding the docBase from the context to the local docId to create a globalDoc during the collect() phase.
Then in the finish() phase, you must find the context containing the doc in question and set the reader/leafDelegate, depending on what version of Solr you're running. Specifying the right docId with the wrong context will throw exceptions. For the Collapsing collector, you iterate through the contexts until you find the first docBase smaller than the globalDoc.
Finally, if you added docBase in collect(), don't forget to subtract docBase in finish() when you call collect() on the appropriate DelegatingCollector object, as the author may or may not have done the first time.
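A rough sketch of that pattern, modelled on CollapsingScoreCollector.finish (untested; the external-system call is a placeholder, and the DelegatingCollector internals vary a little between Solr versions):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.LeafCollector;
import org.apache.solr.search.DelegatingCollector;

public class ExternalBatchFilterCollector extends DelegatingCollector {

  private final List<LeafReaderContext> contexts = new ArrayList<>();
  private final List<Integer> globalDocs = new ArrayList<>();

  @Override
  protected void doSetNextReader(LeafReaderContext context) throws IOException {
    contexts.add(context);           // remember every segment we visit
    super.doSetNextReader(context);  // keeps this.docBase in sync
  }

  @Override
  public void collect(int doc) throws IOException {
    globalDocs.add(docBase + doc);   // store the *global* id; don't delegate yet
  }

  @Override
  public void finish() throws IOException {
    // one batch call to the external system (placeholder method)
    List<Integer> allowed = callExternalSystem(globalDocs);

    int current = -1;
    int nextDocBase = 0;
    LeafCollector leaf = null;
    for (int globalDoc : allowed) {            // assumes increasing order, as collected
      while (globalDoc >= nextDocBase) {       // advance to the segment containing this doc
        current++;
        leaf = delegate.getLeafCollector(contexts.get(current));
        nextDocBase = current + 1 < contexts.size()
            ? contexts.get(current + 1).docBase
            : Integer.MAX_VALUE;
      }
      // translate back to a segment-local id before delegating
      leaf.collect(globalDoc - contexts.get(current).docBase);
    }
    if (delegate instanceof DelegatingCollector) {
      ((DelegatingCollector) delegate).finish();
    }
  }

  private List<Integer> callExternalSystem(List<Integer> docs) {
    return docs;                     // placeholder for the real batch lookup
  }
}
Depending on your delegate you may also need to set a (dummy) scorer on the leaf collector before calling collect(), as the Collapsing collectors do.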

Zend_Db_Profiler How to extend class to log rows count

As the title says: my team developed a social website based on Zend Framework (1.11).
The problem is that our client wants a debug view (an on-screen list of DB queries) with the execution time, how many rows were affected, and the statement itself.
Zend_Db_Profiler gives us the time and the statement with no hassle, but we also need the number of rows each query affected (fetched, updated, inserted or deleted).
Please help: how can we approach this task?
This is the current implementation, which prevents you from getting what you want:
$qp->start($this->_queryId);
$retval = $this->_execute($params);
$prof->queryEnd($this->_queryId);
A possible solution would be:
Create your own class for Statement, say one extended from Zend_Db_Statement_Mysqli (or whichever driver you use).
Redefine its execute() method so that $retval is carried on into $prof->queryEnd($this->_queryId).
Redefine $_defaultStmtClass in Zend_Db_Adapter_Mysqli with your new statement class name.
Create your own Zend_Db_Profiler with public function queryEnd($queryId) redefined, so that it accepts and handles $retval.
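A rough, untested sketch of the idea (My_Db_Profiler, My_Db_Statement_Mysqli and setRowCount() are names invented here; instead of changing queryEnd()'s signature, the statement reports the row count to the profiler right after the parent execute() returns):
class My_Db_Profiler extends Zend_Db_Profiler
{
    protected $_rowCounts = array();

    // remember the row count per query profile
    public function setRowCount($queryId, $count)
    {
        $this->_rowCounts[$queryId] = $count;
    }

    public function getRowCount($queryId)
    {
        return isset($this->_rowCounts[$queryId]) ? $this->_rowCounts[$queryId] : null;
    }
}

class My_Db_Statement_Mysqli extends Zend_Db_Statement_Mysqli
{
    public function execute(array $params = null)
    {
        $retval = parent::execute($params);

        $prof = $this->_adapter->getProfiler();
        if ($prof instanceof My_Db_Profiler && $this->_queryId !== null) {
            // rowCount() reports affected rows for INSERT/UPDATE/DELETE;
            // for SELECTs you may need to count the fetched rows instead
            $prof->setRowCount($this->_queryId, $this->rowCount());
        }
        return $retval;
    }
}

// wire it up: custom profiler plus custom statement class on the adapter
// (setStatementClass() sets $_defaultStmtClass, or subclass the adapter as in step 3)
$db = Zend_Db::factory('Mysqli', array(
    'host'     => 'localhost',
    'username' => 'user',
    'password' => 'secret',
    'dbname'   => 'app',
    'profiler' => new My_Db_Profiler(true),
));
$db->setStatementClass('My_Db_Statement_Mysqli');
When rendering the debug list, you can then pair each profile's getQuery() and getElapsedSecs() with the stored row count.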
