Why send a nonce as part of a Chainweb transaction?

The Pact documentation describes metadata that can be sent with your transaction to a Chainweb node. In that metadata is an optional nonce field:
https://pact-language.readthedocs.io/en/stable/pact-reference.html#yaml-exec-command-request
What benefit is there in specifying my own nonce?

It's not optional in the API, see https://api.chainweb.com/openapi/pact.html#tag/model-payload . The YAML file tool simply inserts the current time if you leave it out.
It's salt for the transaction hash that you can use as you see fit, i.e. to change the transaction hash without changing anything functional. Note that georgep's technique does not need the nonce, as changing the gas price would change your transaction hash all by itself.
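As a rough illustration of that "salt" role, here is a hypothetical Python sketch (not actual Chainweb client code; the payload shape and the use of Blake2b-256 are assumptions made purely for the example) showing that changing only the nonce changes the hash:

import hashlib
import json

def tx_hash(cmd: dict) -> str:
    # Hash the serialized command; Blake2b-256 is assumed here purely for
    # illustration -- use the official Pact/Chainweb tooling in practice.
    payload = json.dumps(cmd, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.blake2b(payload, digest_size=32).hexdigest()

base = {"code": "(+ 1 2)", "data": {}, "nonce": "2020-08-01 12:00:00"}
salted = {**base, "nonce": "2020-08-01 12:00:00 retry-1"}

# Same functional content, different nonce -> different transaction hash.
print(tx_hash(base) != tx_hash(salted))  # True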

Related

How to know if a Salesforce table field is auto-calculated?

Salesforce provides the CaseMilestone table. Each time I call the API to get the same object, I notice that the TimeRemainingInMins field has a different value. So I guessed this field is auto-calculated each time I call the API.
Is there a way to know which fields in a table are auto-calculated?
Note: I am using the Python simple-salesforce library.
CaseMilestone is special because it's used as a countdown to a service level agreement (SLA) violation and drives some escalation rules. Depending on how your admin configured the clock, you may notice it stops for weekends and bank holidays, or counts only Mon-Fri 9-17...
Out of the box, another place that may have similar functionality is the OpportunityHistory table. I don't remember exactly, but SF uses it for duration reporting - how long an opportunity spent in each stage.
That's standard behaviour. For custom fields that change every time you read them even though nothing actually changed the record (LastModifiedDate stays the same) - your admin could have created formula fields based on "NOW()" or "TODAY()", and these would also recalculate every time you read them. You'd need some "describe" calls to get the field types and the formula itself, as in the sketch below.
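A minimal sketch of such a describe call with simple-salesforce (the login parameters are placeholders, and the "calculated"/"calculatedFormula" attributes should be checked against your API version):

from simple_salesforce import Salesforce

# Placeholder credentials -- substitute your own org's login details.
sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Describe the object and list every field Salesforce marks as calculated,
# together with its formula (None for system-computed fields like TimeRemainingInMins).
desc = sf.CaseMilestone.describe()
for field in desc["fields"]:
    if field.get("calculated"):
        print(field["name"], field["type"], field.get("calculatedFormula"))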

Is there any way to retrieve deleted records from Salesforce using the REST API?

I have loaded my Salesforce object data into Azure SQL. Now I want that, when one or more records get deleted in Salesforce, I can retrieve those records using the REST API.
Is there any way to make a REST API call for those records for a particular object?
"Yes, but".
By default SF soft deletes the records; they can still be seen in the UI in the Recycle Bin and undeleted from there. (There's also a hard delete call that skips the Recycle Bin.)
Records stay in there for 15 days max, and the bin's capacity depends on your org's data storage; see https://help.salesforce.com/articleView?id=home_delete.htm&type=5. So if you mass deleted a lot of data, there's a chance the bin will overflow.
To retrieve these you need to call the /queryAll service instead of /query, and filter by the IsDeleted column, which doesn't show up in Setup but is on pretty much every object. See https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_queryall.htm
/services/data/v49.0/queryAll/?q=SELECT+Name+from+Account+WHERE+isDeleted+=+TRUE
If this is not good enough for you, if you risk the Bin overflowing or the operation was a hard delete - you could make your own soft delete (move records to some special owner outside of the role hierarchy so they become invisible to everybody except sysadmins?) or change strategy: push info from SF instead of pulling it. Send a platform event on delete, manually or with Change Data Capture. (I think CDC doesn't generate events on hard delete though; you'd have to read up.)
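A minimal sketch of that queryAll call from Python with requests (the instance URL and access token are placeholders; authentication and paging via nextRecordsUrl are left out):

import requests

# Placeholders -- substitute your own instance URL and OAuth access token.
INSTANCE = "https://yourInstance.my.salesforce.com"
TOKEN = "<access token>"

soql = "SELECT Id, Name, IsDeleted FROM Account WHERE IsDeleted = TRUE"
resp = requests.get(
    INSTANCE + "/services/data/v49.0/queryAll/",
    params={"q": soql},
    headers={"Authorization": "Bearer " + TOKEN},
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])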

Flink job production readiness - validate UUIDs assigned to all operators

The Flink production readiness guide (https://ci.apache.org/projects/flink/flink-docs-stable/ops/production_ready.html) suggests assigning UUIDs to all operators. I'm looking for a way to validate that all operators in a given job graph have been assigned UUIDs -- ideally to be used as a pre-deployment check in our CI flow.
We already have a process in place that uses the PackagedProgram class to get a JSON-formatted 'preview plan'. Unfortunately, that does not include any information about the assigned UUIDs (or lack thereof).
Digging into the code behind generating the JSON preview plan (PlanJSONDumpGenerator), I can trace how it visits each of the nodes as a DumpableNode<?>, but from there, I can't find anything that leads me to the definition of the operator with its UUID.
When defining the job (using the DataStream API), the UUID is assigned on a StreamTransformation<T>. Is there any way to connect the data in the PackagedProgram back to the original StreamTransformation<T>s to get the UUID?
Or is there a better approach to doing this type of validation?

Refresh index in Solr

For integration tests, before a test starts I put documents in Solr and wait (with a sleep...) for Solr to index them.
With Elasticsearch, I know it is possible to refresh an index.
Is it possible to do the same with Solr? And how should I proceed?
I suppose the reason you want to refresh the index is that you want near real-time search. Essentially you want the search to reflect the added document instantaneously.
In Solr this is usually controlled by soft commits, or hard commits with openSearcher=true.
Read more about this here:
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
The gist is this:
Hard commits are about durability, soft commits are about visibility
Now, if I understand correctly, you are doing this all for testing purposes, so you probably cannot change the soft commit time for your collection (as this will have other implications).
I think, however, that you can force Solr to commit the changes while indexing as follows:
http://localhost:8983/solr/my_collection/update?softCommit=true
So adding softCommit=true will cause an explicit commit to happen. You can use the above after you add a bunch of docs so that all of them appear in the index together, or alternatively you can add softCommit=true to each indexing request.
However, every time you do a soft commit it invalidates all top-level caches. (Read more about all this in the link above.)
Note: please be aware, however, that the usual recommendation is not to call commits externally.
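As an integration-test helper along these lines, here is a minimal Python sketch using requests (the host, collection name and document fields are placeholders):

import requests

# Placeholders -- adjust the host and collection to your test setup.
SOLR_UPDATE_URL = "http://localhost:8983/solr/my_collection/update"

docs = [
    {"id": "doc-1", "title": "first test doc"},
    {"id": "doc-2", "title": "second test doc"},
]

# softCommit=true opens a new searcher so the docs become visible right away,
# without forcing a (more expensive) hard commit to disk.
resp = requests.post(SOLR_UPDATE_URL, params={"softCommit": "true"}, json=docs)
resp.raise_for_status()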

Working with accumulated bucket values in Entity Framework

I'm attempting to find design patterns/strategies for working with accumulated bucket values in a database where concurrency can be a problem. I don't know the proper search terms to use to find information on the topic.
Here's my use case (I'm using code-first Entity Framework, so EF-specific advice is welcome):
I have a database table that contains a quantity value. This quantity value can be incremented or decremented by multiple clients at the same time (due to this, I call this value a "bucket" value, as it is a bucket for a bunch of accumulated activity; this is as opposed to the other strategy where you keep all activity and calculate the value based on that activity). I am looking for strategies for ensuring the accuracy of this "bucket" value (within the context of EF) that take into consideration that multiple clients may attempt to change it simultaneously (concurrency).
The answer "you must track activity and derive your value from that activity" is acceptable, but I want to consider all bucket-centric solutions as well.
I am looking for advice on search terms to use to find good information on this topic as well as specific links.
Edit: You may assume that all activity is relative to the "bucket" value (no clients will be making an absolute change to the value; they will only increment or decrement).
Without directly coding the SQL queries that update the buckets, you would have to use client-side optimistic concurrency. See Entity Framework Optimistic Concurrency Patterns. Clients whose update would overwrite a change will get an exception, after which you can reload with the current value and retry. This pattern requires a ROWVERSION column on the target table.
If you code the updates in T-SQL you can write an atomic update, something like:
-- Atomic read-modify-write: increment and return the updated row in one statement.
update foo with (updlock)
set bucket_a = bucket_a + 1
output inserted.*
where id = @id
(The updlock hint isn't strictly necessary in this query, but it is good form any time you want to ensure this kind of isolation.)
