A simple question: I am attempting to upsert User records into Salesforce using 'Username' as the external ID, but I am receiving the following 'DUPLICATE_USERNAME' error:
"Duplicate Username.The username already exists in this or another Salesforce organization. Usernames must be unique across all Salesforce organizations. To resolve, use a different username (it doesn't need to match the user's email address). DUPLICATE_USERNAME"
I know there is already a user in my org with this exact 'Username', so I would expect the record to be updated; since no new record is being created, the duplicate username shouldn't be an issue, should it? Is my understanding of the upsert operation incorrect?
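My expectation comes from the documented upsert-by-external-ID behaviour. For instance, on an object with an ordinary external ID field, a PATCH like the following should create the record when no match exists for the external ID value, and update the matching record in place when one does (the object and field names here are illustrative, not my real metadata):
# Illustrative upsert on a hypothetical Account external ID field
curl --location --request PATCH 'https://************.my.salesforce.com/services/data/v51.0/sobjects/Account/External_Key__c/ABC-001' \
--header 'Authorization: Bearer ************' \
--header 'Content-Type: application/json' \
--data-raw '{ "Name": "Acme Ltd" }'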
Postman Upsert Request:
curl --location --request PATCH 'https://************.my.salesforce.com/services/data/v51.0/sobjects/User/Username/srj@************.co.uk.Invalid' \
--header 'Authorization: Bearer ************' \
--header 'Content-Type: application/json' \
--data-raw '{
"FirstName": "S",
"LastName": "RJ",
"Alias": "srj",
"Email": "srj#example.com",
"ContactId": "0031w00000nviAVAAY",
"ProfileId": "00e1w000000I0GEAA0",
"CommunityNickname": "srj",
"EmailEncodingKey": "UTF-8",
"TimeZoneSidKey": "Europe/London",
"LocaleSidKey": "en_GB",
"LanguageLocaleKey": "en_US"
}'
Postman Upsert 400 Response:
[
{
"message": "Duplicate Username.<br>The username already exists in this or another Salesforce organization. Usernames must be unique across all Salesforce organizations. To resolve, use a different username (it doesn't need to match the user's email address). ",
"errorCode": "DUPLICATE_USERNAME",
"fields": [
"Username"
]
}
]
Many Thanks
If I have the transaction hash of an RSK transaction, how can I get its internal transactions - i.e. when the smart contracts invoke functions on other contracts or do RBTC transfers?
I'm able to get the main transaction using web3.js; however, once I've obtained it, I'm unable to parse it to extract the internal transactions that occur.
I've also tried using web3.js to query the block that the transaction occurred in, but I was unable to parse that either to obtain the internal transactions.
To reiterate my original comment:
The RSK virtual machine (like the EVM) does not define "internal transaction", and hence there's no RPC to query them. You will need to "debug" the transaction execution in order to reconstruct these internals - which is quite difficult to do. Block explorers typically do this for you.
Fortunately the RSK Block Explorer exposes an API, and thus is programmatically queryable. So while you won't be able to use web3.js for this, as you've asked for in your question, you will be able to get internal transactions nonetheless.
Let's use an example with the following transaction, 0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8, which happens to have a lot of internal transactions.
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8"
The above command retrieves the internal transactions of this particular transaction.
If you wish to do this for a different transaction, simply change the value of the hash query parameter in the request URL. This gives you a fairly large JSON response, which I will not copy in full here.
You can then parse this using your JS code (since you're already using web3.js).
On the command line, you can explore the data a bit more using the response filters available in the jq command line utility:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[].action.callType'
The above pipes the output of the curl command into jq, which then applies a filter that:
- looks at the data property, and returns all items in the array
- within each item, drills down into the action object and returns its callType value
This results in the following output:
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"call"
So this transaction contains 18 internal transactions, with a mix of delegatecall, staticcall, and call... a fairly complex transaction indeed!
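If you just want that count rather than tallying the list by eye, the same response can provide it directly; only the jq filter changes:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq '.data | length'
This prints 18.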
Now let's vary the jq command to use a different filter, such that we get the full details of only the final internal transaction, which happens to be the only call internal transaction:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[17].action'
Note that the only difference from the previous command is that the filter is now .data[17].action.
This results in the following output:
{
"callType": "call",
"from": "0x3f7ec3a190661db67c4907c839d8f1b0c18f2fc4",
"to": "0xa288319ecb63301e21963e21ef3ca8fb720d2672",
"gas": "0x20529",
"input": "0xcbf83a040000000000000000000000000000000000000000000000000000000000000003425443555344000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086f36650548d5c400000000000000000000000000003f7ec3a190661db67c4907c839d8f1b0c18f2fc4000000000000000000000000000000000000000000000000000000000036430c000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b0000000000000000000000000000000000000000000000000000000000000005d6328b4db96469d968348a852e6978d18b7dc9bda776727991b83f171abe4a4040ebab67dee8e9711683af91e05c3970bcb6a29502f9b35b14b7a9225d43f6e3e0cf4ae577be626ae350d8e103df88f55205167eaad7267fdbf247e4b35ec674457ac87e13451d2fa9985c854b2f84982e3b611c3b48f5045f2cdc3c6acff44d1735d2771581dc2cc7477fc846767ad088182fc317424d468477cf3a54724543000000000000000000000000000000000000000000000000000000000000000516a3d4cf7e73d17e2230c87f6ef48f38d82885c64d47fef646987f8d6fbb86405515760c786315cac84d7df048e2ba054868f2b9e2afeec0b63ebf2dcac59c8848f254382abf73cf6ce2d5134b5bc065c0706fb7a2f7886a15e79a8953ed11006c5a7d14b4fbf1bb6ff8d687a82a548dcdbd823ebec4b10e331bee332df1a7ae0e45fdac4f6648e093b90a6b56f33e31f36d4079526f871f51cafa710cdde4c3",
"value": "0x0"
}
I am trying to create a logic app that finds all the err$ tables in an Oracle database (err$_table_name is the default naming pattern for the rejected-row tables created by the LOG ERRORS option). The problem I am stuck on is that when I use the Oracle get rows action, the dollar sign in the table name causes a JSON error.
Error message - BadRequest. Http request failed: the content was not a valid JSON.
In the "Inputs" sections the table name is correct, in this case the table name is "CHEETAH.ERR$_ALL_D_MARKET_HIER"
Under the raw inputs though I see this and I can see the $ was switched to %2524
{
"method": "get",
"path": "/datasets/default/tables/CHEETAH.ERR%2524_ALL_D_MARKET_HIER/items",
"host": {
"connection": {
"name": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Web/connections/oracle-3"
}
}
}
Here is the code view of the get rows action:
"method": "get",
"path": "/datasets/default/tables/#{encodeURIComponent(encodeURIComponent(concat(variables('Owner'), '.', variables('Table') )))}/items"
I get this JSON error whether I enter the table name directly or pass it in via a variable.
Does anyone have any thoughts on how to get this to work? The only workaround I can think of is to use a stored procedure to create views without the $ in them.
I tried the suggestion of escaping with a backslash. It changed the error, at least.
Looking at the response below, it took the single backslash I added and replaced it with two backslashes.
{
"status": 400,
"message": "The specified item 'CHEETAH.ERR\\$_ALL_D_PROD_HIER' is not found.\r\n inner exception: The specified item 'CHEETAH.ERR\\$_ALL_D_PROD_HIER' is not found.\r\nclientRequestId: b9038635-4007-48f5-aebd-ce94e1faf90a",
"error": {
"message": "The specified item 'CHEETAH.ERR\\$_ALL_D_PROD_HIER' is not found.\r\n inner exception: The specified item 'CHEETAH.ERR\\$_ALL_D_PROD_HIER' is not found."
},
"source": "oracle-cc.azconn-cc.p.azurewebsites.net"
}
I figured this out, and it had nothing to do with the $ in the table name. It was the data being returned in one of the columns.
The problem column was of data type "UROWID".
How to disable casefolding using field-type string in vespa.ai?
search post {
document post {
field token type string {
indexing: index
match: word
rank: filter
rank-type: empty
stemming: none
normalizing: none
indexing-rewrite: none
}
}
}
Fill the database:
curl -X POST -H "Content-Type:application/json" \
--data-binary '{"fields":{"token":"TeSt"}}' \
http://localhost:8080/document/v1/post/post/docid/TeSt
The query matches even though the case is different (due to casefolding):
curl -s -H "Content-Type: application/json" \
--data '{"yql" : "select * from post where token contains \"test\";"}' \
http://localhost:8080/search/ | jq .
Vespa does not support case-sensitive search, even with match: word.
This is asked every few years, but nobody has really needed it yet. It is easy to add; feel free to create an issue for it on github.com/vespa-engine/vespa if you really need it.
When I send a query XML document like this
<query><text><![CDATA[
let $facts := fn:collection("factbook/factbook.xml")/mondial
let $c := ("Antarktika", "Atlantis")
for $name at $id in $c
return
insert node (<continent id="f0_aaa{$id}" name="{$name}" />) into $facts
]]></text></query>
to the REST API using
curl -i --data '...' 'http://localhost:8984/rest'
BaseX will report the following error:
[XPST0003] Incomplete FLWOR expression: expecting 'return'.
If I execute the same query on the web admin query page, the query is accepted and the nodes are inserted.
Why is the REST call rejected? Is there any further restriction that does not apply to the admin interface?
If I remove the lets and expand the corresponding variables, the query is accepted by the REST API:
<query><text><![CDATA[
for $name at $id in ("Antarktika", "Atlantis")
return
insert node (<continent id="f0_aaa{$id}" name="{$name}" />) into fn:collection("factbook/factbook.xml")/mondial
]]></text></query>
The REST user has write permission. I'm using BaseX 9.0.2.
It turned out that the problem was not the query, but the --data option of curl in combination with @ to send file content. This option strips line breaks (CR and LF) before sending, so adjacent lines of the query run together (for example, /mondial followed by the next line's let becomes /mondiallet), which explains the 'Incomplete FLWOR expression' error. With --data-binary '@...' the query works as expected.
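For reference, the working call then looks like this, assuming the query XML shown above is saved in a file named query.xml (the filename is illustrative):
curl -i --data-binary '@query.xml' 'http://localhost:8984/rest'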
I'm building an application using LoopBack as the backend and AngularJS as the frontend, with MySQL as the database.
The LoopBack version is 2.22.0 and the LoopBack AngularJS SDK version is 1.5.0.
There are two models, Person and Post. Both have "id" fields auto-generated by LoopBack (i.e. "idInjection": true).
They are related as Person hasMany Post and Post belongsTo Person, linked by a foreign key on the personId column of the Post model.
Suppose there are already some records in both the tables.
I generated lbServices.js file by using lb-ng command.
So now when I try to use the function
Person.posts.create({
content: "Some content",
id: $rootScope.currentUser.id
})
it gives me a duplicate entry error.
I investigated this and found out that it's because the REST API URL "/People/:id/posts" in the lbServices.js file has an id parameter, and the Post model also has an id column, which is its primary key.
So the id gets passed to both, and the call fails: an ambiguity is formed.
For this example, $rootScope.currentUser.id = 1, and there already exists a row in the Post table with id = 1.
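To make the ambiguity concrete, this is roughly the REST request that the generated lbServices.js call produces (LoopBack's default local host and port, purely illustrative):
curl -X POST 'http://localhost:3000/api/People/1/posts' \
-H 'Content-Type: application/json' \
-d '{ "content": "Some content", "id": 1 }'
The 1 in the URL path selects the parent Person, while the id in the body is treated as the new Post's primary key, so the insert collides with the existing Post row whose id is 1.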
Now when I change the Post model's property to "idInjection": false and create a custom auto-incremented primary key column called "uid",
I'm able to insert with
Person.posts.create({
content: "Some content",
id: $rootScope.currentUser.id
})
So I want to know: am I inserting into a related model in the correct way, or is this an issue with LoopBack? Or is there a better way to insert from the AngularJS frontend?
I really want to avoid changing the primary key column names of every model to something other than "id".
Please help.
I figured out what I was doing wrong.
The correct way to insert is as follows (the first argument supplies the URL parameters, i.e. the parent Person's id, and the second argument is the body, i.e. the new Post's data):
Person.posts.create(
{id: $rootScope.currentUser.id},
{
content: "Some content",
title: "Some title"
})
As the id field is an autogenerated number, your call should be:
Person.posts.create({
content: "Some content",
personId: $rootScope.currentUser.id
})
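For comparison, the underlying REST route for creating a related Post needs no id in the body at all: LoopBack fills in personId from the URL and auto-generates the Post's primary key (again with an illustrative host and parent id):
curl -X POST 'http://localhost:3000/api/People/1/posts' \
-H 'Content-Type: application/json' \
-d '{ "content": "Some content", "title": "Some title" }'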