How to get information about all custom labels used in an Apex page? - salesforce

I want to get all custom labels through the REST API. I use custom labels in an Apex page like
{!$Label.MyCustomLabel}
, so can I get information about MyCustomLabel through the REST API?
Any guidance would be appreciated. Thanks.

There's not really an easy way to get the custom labels out of an org. The CustomLabel/ExternalString object isn't queryable, and the services/data/v43.0/nouns part of the REST API doesn't include them either.
The only way to get custom labels from Salesforce right now is by reading metadata. The quickest way to do this would probably be to use the synchronous listMetadata and readMetadata calls. These use the SOAP API, so there's a bit of XML involved here.
1. listMetadata: replace org-id with your org ID and session-id with your session ID.
curl \
-H 'Content-Type: text/xml' \
-H 'SOAPAction: ""' \
https://ap4.salesforce.com/services/Soap/m/38.0/org-id \
-d '<?xml version="1.0" encoding="utf-8"?><env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header><n1:SessionHeader xmlns:n1="http://soap.sforce.com/2006/04/metadata"><n1:sessionId>session-id</n1:sessionId></n1:SessionHeader></env:Header><env:Body><n1:listMetadata xmlns:n1="http://soap.sforce.com/2006/04/metadata"><n1:queries><n1:type type="xsd:string">CustomLabel</n1:type></n1:queries></n1:listMetadata></env:Body></env:Envelope>'
2. Pull out all of the custom label names within the <fullName> tags.
3. readMetadata: replace org-id with your org ID, session-id with your session ID, and custom-label-name with the name of a custom label.
curl \
-H 'Content-Type: text/xml' \
-H 'SOAPAction: ""' \
https://ap4.salesforce.com/services/Soap/m/38.0/org-id \
-d '<?xml version="1.0" encoding="utf-8"?><env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header><n1:SessionHeader xmlns:n1="http://soap.sforce.com/2006/04/metadata"><n1:sessionId>session-id</n1:sessionId></n1:SessionHeader></env:Header><env:Body><n1:readMetadata xmlns:n1="http://soap.sforce.com/2006/04/metadata"><n1:type type="xsd:string">CustomLabel</n1:type><n1:fullNames type="xsd:string">custom-label-name</n1:fullNames><n1:fullNames type="xsd:string">custom-label-name</n1:fullNames></n1:readMetadata></env:Body></env:Envelope>'
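Step 2 (pulling the label names out of the listMetadata response) can be sketched in Python. The trimmed SOAP response below is a hypothetical example for illustration, not a captured server reply:

```python
# Sketch: extract custom label names from a listMetadata SOAP response.
import xml.etree.ElementTree as ET

MD_NS = "{http://soap.sforce.com/2006/04/metadata}"

def extract_label_names(soap_response_xml):
    """Return the text of every <fullName> element, at any depth."""
    root = ET.fromstring(soap_response_xml)
    return [el.text for el in root.iter(MD_NS + "fullName")]

# Hypothetical, trimmed response body for illustration:
sample = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <listMetadataResponse xmlns="http://soap.sforce.com/2006/04/metadata">
      <result><fullName>MyCustomLabel</fullName><type>CustomLabel</type></result>
      <result><fullName>AnotherLabel</fullName><type>CustomLabel</type></result>
    </listMetadataResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

print(extract_label_names(sample))  # ['MyCustomLabel', 'AnotherLabel']
```

The extracted names can then be fed into the readMetadata call from step 3.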

Try this:
1. From the Developer Console, enable the Tooling API.
2. Run the query:
SELECT Id, Name, MasterLabel, ManageableState, Value
FROM ExternalString
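Outside the console, the same query can be sent to the Tooling API's REST query endpoint. A minimal sketch, where the instance URL, API version and bearer token are placeholders you'd substitute with your own:

```python
# Sketch: run the ExternalString query through the Tooling API REST endpoint.
from urllib.parse import urlencode

instance_url = "https://yourInstance.salesforce.com"  # placeholder
soql = "SELECT Id, Name, MasterLabel, ManageableState, Value FROM ExternalString"
url = instance_url + "/services/data/v43.0/tooling/query/?" + urlencode({"q": soql})
headers = {"Authorization": "Bearer <session-id>"}    # placeholder token
print(url)
# To actually run it (requires the requests library and a live org):
# records = requests.get(url, headers=headers).json()["records"]
```

Note this only works with the Tooling API; the same SOQL fails against the regular data API because ExternalString isn't queryable there.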

Related

Http get method for Splunk saved search using access token

What would be the correct HTTP GET request syntax to fetch saved-search data from Splunk if we're accessing it through an access token?
My curl command is working but requests.get is not.
curl command:
os.system('curl -H "Authorization: Bearer <token>" <baseurl>:8089/services/search/jobs/export --data search="savedsearch abc_backup_status" -d output_mode=csv')
requests call:
BASE_URL = '<baseurl>:8089/services/search/jobs/export'
data = {"search":"savedsearch abc_backup_status"}
headers = {'Authorization': "Bearer <token>"}
auth_response = requests.get(BASE_URL, headers=headers, data = data, verify=False)
This gives a 400 error.
The curl options -d or --data imply a POST method by default.
From: https://man7.org/linux/man-pages/man1/curl.1.html
-d, --data <data>
(HTTP MQTT) Sends the specified data in a POST request to
the HTTP server, in the same way that a browser does when
a user has filled in an HTML form and presses the submit
button. This will cause curl to pass the data to the
server using the content-type application/x-www-form-
urlencoded. Compare to -F, --form.
It is interesting that Splunk Docs claim that search/jobs/export takes a GET, but you're creating a job to immediately export, which feels like a POST type of operation.
Also, I notice that your search starts with the savedsearch command. If that's a regularly scheduled saved search, you may want to GET saved/searches/{name}/history to get the last execution SID, then fetch the results or events endpoint of that already-executed job instead of running a new search... but that's a use-case question.
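Concretely, the requests equivalent of the working curl command is a POST, not a GET. A sketch below, which builds the request without sending it so you can see what goes on the wire; the host and token are placeholders standing in for `<baseurl>` and `<token>` from the question:

```python
# Sketch: since curl's -d/--data implies POST, mirror it with requests.post.
import requests

BASE_URL = "https://splunk.example.com:8089/services/search/jobs/export"  # placeholder host
data = {"search": "savedsearch abc_backup_status", "output_mode": "csv"}
headers = {"Authorization": "Bearer <token>"}  # placeholder token

# Prepare (don't send) the request to inspect method and body.
prepared = requests.Request("POST", BASE_URL, headers=headers, data=data).prepare()
print(prepared.method)   # POST
print(prepared.body)     # form-encoded, same as curl --data
# To actually send it: requests.post(BASE_URL, headers=headers, data=data, verify=False)
```

Switching `requests.get` to `requests.post` is usually enough to clear the 400.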

sagemaker calling curl with inference endpoint with csv

I have deployed a Random Cut Forest model endpoint on AWS SageMaker. I am trying to test the inference endpoint with Postman. I am successfully able to authenticate to the endpoint with the access and secret key.
Can someone confirm whether the way I am sending the CSV payload is correct? Something seems to be wrong, since whatever the third column value is, I get the same score from the endpoint.
'1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
Here is the curl command generated by Postman:
curl --request POST \
--url https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/randomcutforest-2018-06-05-01-08-02-956/invocations \
--header 'authorization: AWS4-HMAC-SHA256 Credential=/20180713/us-east-1/sagemaker/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=d51371b2549e132c21a3402824b57258a74e6fa9f078d91a44bf54b0d110ea57' \
--header 'cache-control: no-cache' \
--header 'content-type: text/csv' \
--header 'host: runtime.sagemaker.us-east-1.amazonaws.com' \
--header 'postman-token: cb7cdfa5-025b-e4f4-c033-a4fb685133c4' \
--header 'x-amz-date: 20180713T190238Z' \
--data '1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
{
"scores": [
{
"score": 7.6438561895// This value never changes
}
]
}
The fact that your second column is not numeric is suspicious. RandomCutForest is only supposed to work with numbers.
I'd recommend you use the AWS forum: https://forums.aws.amazon.com/forum.jspa?forumID=285
Would you be able to share the feature_dim you used to train the forest?
Thanks.
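To make that concrete: each CSV row sent to the endpoint should contain only numbers, one observation per line, with as many columns as the feature_dim used at training time. A small illustrative helper (the function name and the boto3 call in the comment are sketches, not the questioner's code):

```python
# Sketch: build a text/csv body of numeric feature vectors for the endpoint.
def make_csv_payload(rows):
    """Serialize a list of numeric feature vectors, one CSV row per line."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

# Two 1-dimensional observations (i.e. assuming feature_dim=1 at training):
payload = make_csv_payload([[433190.06640625], [433200.5]])
print(payload)
# With boto3, the invocation would look roughly like:
# boto3.client("sagemaker-runtime").invoke_endpoint(
#     EndpointName="randomcutforest-...", ContentType="text/csv", Body=payload)
```

If the non-numeric hash column is included, the model can't use it as a feature, which would explain the constant score.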

DB2 Warehouse on Cloud Load_jobs filename or path is not valid failure

I am trying to load data into Db2 Warehouse on Cloud (formerly dashDB) from IBM Cloud Object Storage (Softlayer) using the /load_jobs API call.
I always get the error response: SQL3025N, A parameter specifying a filename or path is not valid., 0, n/a
I have tried different formats for the path key, like the following:
us-south/woctestdata/data_example.csv
/woctestdata/data_example.csv
woctestdata/data_example.csv
woctestdata::data_example.csv
I also tried the following suggestions from comments:
us-south::woctestdata\data_example.csv
us-south::woctestdata::data_example.csv
So I'm out of ideas. How should the path be entered correctly?
Here is an example of my request:
curl -X POST \
https://dashdb-mpp.services.dal.bluemix.net/dashdb-api/v2/load_jobs \
-H 'Authorization: Bearer <api_key>' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{
"load_source": "SOFTLAYER",
"load_action": "INSERT",
"schema": "MKT_ATBTN",
"table": "TRANSMISSIN_TABLE1",
"max_row_count": 0,
"max_warning_count": 0,
"cloud_source": {
"endpoint": "https://tor01.objectstorage.softlayer.net/auth/v1.0",
"path": "woctestdata/data_example.csv",
"auth_id": "<auth_id>",
"auth_secret": "<auth_secret>"
},
"server_source": {
"file_path": "string"
},
"stream_source": {
"file_name": "string"
},
"file_options": {
"code_page": "1208",
"column_delimiter": ";",
"string_delimiter": "",
"date_format": "YYYY-MM-DD",
"time_format": "HH:MM:SS",
"timestamp_format": "YYYY-MM-DD HH:MM:SS",
"cde_analyze_frequency": 0
}
}'
I also tried to use the db2 load command to load data from IBM Cloud Object Storage, but no luck either:
db2 load from Softlayer::https://tor01.objectstorage.softlayer.net/auth/v1.0::IBM:<ibm_email_address>::<password>::woctestdata::data_example.csv of del insert into MKT_ATBTN.TRANSMISSIN_TABLE1;
Result:
Agent Type Node SQL Code Result
_______________________________________________________________________
PRE_PARTITION 000 -00003025 Error.
_______________________________________________________________________
RESULTS: 0 of 0 LOADs completed successfully.
_______________________________________________________________________
Summary of LOAD Agents:
Number of rows read = 0
Number of rows skipped = 0
Number of rows loaded = 0
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 0
SQL3025N A parameter specifying a filename or path is not valid.
To download or access the file you need to get an X-Auth-Token, or give its container a static URL through the web page.
X-Auth-Token
I recommend reviewing Managing the Object Storage and softlayer-object-storage-auth-endpoint.
When you run the command
curl -i -H "X-Auth-User: SLOS300001-10:rcuellar" -H "X-Auth-Key: 231222489e90646678364kjsdfhytwterd0259" https://tor01.objectstorage.softlayer.net/auth/v1.0
The response is something like this:
X-Auth-Token: AUTH_tkb26239d441d6401d9482b004d45f7259 – the token we need
X-Storage-Url: https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6
Now you should be able to access the file with a URL like the one below:
https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6/woctestdata/data_example.csv
Static URL through Web Page
In the portal page go to:
Storage >> Object Storage >> Select Object Storage >> Select Cluster
(e.g. Toronto) >> Select your container
and check the Enable Static Site checkbox.
You can also use the endpoint plus the object path; the path format is bucketname::filename.
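Applied to the original request, that would make the cloud_source block look like the sketch below (the credentials stay as placeholders, and the bucket/object names are the ones from the question):

```python
# Sketch: cloud_source with the path in bucketname::filename form.
import json

cloud_source = {
    "endpoint": "https://tor01.objectstorage.softlayer.net/auth/v1.0",
    "path": "woctestdata::data_example.csv",  # bucket::object, not a slash path
    "auth_id": "<auth_id>",                   # placeholder
    "auth_secret": "<auth_secret>",           # placeholder
}
print(json.dumps(cloud_source, indent=2))
```

The rest of the /load_jobs payload stays as in the question.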

Update solr document using http get method

Using curl this can be done with (updating the price field to the value 100):
curl 'http://localhost:8983/solr/mycore/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","price":{"set":100}}]'
How can I do the same using the HTTP GET method? I need to fill in the XXXX in the following:
http://localhost:8983/solr/mycore/update?stream.body=XXXX&commit=true
The following does not work:
http://localhost:8983/solr/mycore/update?stream.body=<add><doc><field name="id">1</field><field name="price" update="set">100</field></doc></add>&commit=true
The stream.body does not need to be XML, so this works:
http://localhost:8983/solr/mycore/update?stream.body=[{"id":"1","price":{"set":100}}]&commit=true
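When building that URL programmatically, the stream.body value should be URL-encoded so the brackets, quotes and spaces survive the trip. A minimal sketch:

```python
# Sketch: URL-encode the JSON update before putting it in stream.body.
from urllib.parse import urlencode

body = '[{"id":"1","price":{"set":100}}]'
query = urlencode({"stream.body": body, "commit": "true"})
url = "http://localhost:8983/solr/mycore/update?" + query
print(url)
# The resulting URL can be fetched with any HTTP GET client or a browser.
```

Browsers and curl will often encode a pasted URL for you, which is why the unencoded form sometimes appears to work.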

Error when adding user for Solr Basic Authentication

When I try to add the user for the Solr Basic Authentication using the following method in curl
curl --user user:password http://localhost:8983/solr/admin/authentication -H 'Content-type:application/json' -d '{
"set-user": {"tom" : "TomIsCool" ,
"harry":"HarrysSecret"}}'
I get the following error:
{
"responseHeader":{
"status":400,
"QTime":0},
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"No contentStream",
"code":400}}
curl: (3) [globbing] unmatched brace in column 1
curl: (3) [globbing] unmatched close brace/bracket in column 13
What does this error mean, and how should we resolve it?
I'm using SolrCloud on Solr 6.4.2.
Regards,
Edwin
If you're using curl under Windows, this is a known issue with cmd.exe's escaping of single quotes. Use double quotes around your JSON string (or use Cygwin, PowerShell, etc.):
curl --user user:password http://localhost:8983/solr/admin/authentication \
-H "Content-type:application/json" \
-d "{\"set-user\": {\"tom\" : \"TomIsCool\", \"harry\":\"HarrysSecret\"}}"
The "globbing" message from curl is the hint that curl is doing something else than what you intended, and that the actual body of the request isn't getting to Solr (which is complaining about no message body being present).
You could also get around this by using stream.body in the URL and making the request from your browser.
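Another way to sidestep cmd.exe quoting entirely is to send the JSON from Python, where no shell escaping is involved. A sketch using the requests library (the user/password and localhost URL are the placeholders from the question):

```python
# Sketch: POST the set-user JSON from Python to avoid shell quoting issues.
import json
import requests

payload = {"set-user": {"tom": "TomIsCool", "harry": "HarrysSecret"}}
req = requests.Request(
    "POST",
    "http://localhost:8983/solr/admin/authentication",
    auth=("user", "password"),                     # placeholder credentials
    headers={"Content-type": "application/json"},
    data=json.dumps(payload),
).prepare()
print(req.body)  # the exact JSON body Solr will receive
# To actually send it: requests.Session().send(req)
```

Because the body is built by json.dumps, the "No contentStream" error can't occur from mangled quotes.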
