I'm new to SFCC OCAPI. My goal is to export all orders from "development.demandware.net" created after a certain date, and this needs to happen quite frequently, roughly once every 2 days. I'm currently doing this in Python against the endpoint "s/{{SITEID}}/dw/shop/v18_1/order_search". The problem is that one call returns only 25 records, so I have to change the query dynamically to start from record 26 for the next call. If I have, say, 10,000 records, that means up to 400 calls every time the script runs. The alternative options I'm aware of are:
OCAPI Batch requests
OCAPI Export job (I tried this, but I don't know enough to set it up)
So I'd like to know whether my goal is achievable using batch requests. I tried to follow the documentation, but with the code below the response was 200 with no response body.
url = f"https://{DOMAIN}/s/-/dw/batch"
url_param = {'client_id': CLIENT_ID}
header = {
    'Authorization': 'Bearer ' + token,
    'Origin': f'https://{DOMAIN}',
    'Content-Type': 'multipart/mixed;boundary=23dh3f9f4',
    'x-dw-http-method': 'POST',
    'x-dw-resource-path': 's/{{SITEID}}/dw/shop/v18_8/order_search'
}
body = """
{
    "query": {
        "filtered_query": {
            "query": { "match_all_query": {} },
            "filter": {
                "range_filter": {
                    "field": "creation_date",
                    "from": "%s",
                    "from_inclusive": true
                }
            }
        }
    },
    "select": "(**)",
    "sorts": [{
        "field": "order_no",
        "sort_order": "asc"
    }],
    "start": %s
}""" % (RETRIVE_RECORDS_FROM, startRecordFrom)
response = requests.post(url, params=url_param, headers=header, data=body)
My code doesn't include an x-dw-content-id header because the above is the initial request. If my goal is achievable,
what should my sub-requests look like?
And afterwards, how do I retrieve the data for my request? Is there an endpoint I should use to get the batch results?
I may be asking for too much information, but I couldn't find much about this online, so I had to put every question I have into one post.
My question might look similar to "Salesforce Commerce Cloud/Demandware - OCAPI query orders by date range", but I'm specifically looking for information about batch requests, and also about reducing the number of API calls.
Thanks in advance.
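For context, the pagination loop described above can be sketched as follows. Note this is an illustration, not the batch-request answer: OCAPI caps the count parameter (commonly at 200 for search resources), so simply raising it from the default 25 already cuts a 10,000-record export from roughly 400 calls to about 50. The fetch parameter and the hits/total response fields follow the order_search response shape; fetch is injectable here only so the loop can be exercised without a sandbox.

```python
def fetch_all_orders(domain, site_id, token, query_body, count=200, fetch=None):
    """Page through OCAPI order_search results, count records at a time."""
    url = f"https://{domain}/s/{site_id}/dw/shop/v18_8/order_search"
    headers = {"Authorization": "Bearer " + token,
               "Content-Type": "application/json"}
    if fetch is None:
        import requests  # default live transport
        fetch = lambda body: requests.post(url, headers=headers, json=body).json()
    orders, start = [], 0
    while True:
        # Re-send the same query with a moving "start" offset.
        page = fetch(dict(query_body, start=start, count=count))
        orders.extend(page.get("hits", []))
        if start + count >= page.get("total", 0):
            return orders
        start += count
```

The query_body dict would hold the same filtered_query/select/sorts keys shown in the request body above.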
Related
I am currently using RTK Query (Redux Toolkit) to make an API call via Redux. The URL is static (e.g. http://dummy.yt/fetch/) and the body can be changed to get the desired result. The API is a POST request with a body that looks like the one below:
{
    "type": "Data_one",
    "include": "*",
    "limit": 4,
    "offset": 0
}
The body updates with every call; for example, offset will change to 1. Most of the examples in the documentation talk about endpoints that take their input through the URL, but I want this data fetched in one single call and later optimized with RTK Query by altering the body, the way they do with the endpoint.
Can we achieve this using RTK Query?
You still need a query endpoint; just one that returns an object instead of a string.
Instead of
query: arg => `some/url/${arg}`
do
query: arg => ({
    url: 'some/url',
    method: 'POST',
    body: {
        something: "foo",
        anotherThing: arg
    }
})
I'm trying to loop a request to extract the IDs from a previous request. I followed the steps in this video https://www.youtube.com/watch?v=4wuvgX-egdc but I can't get it to work. As I see it, the problem is that {} is not an array, but I would like to search within "campaigns", which does seem to be an array. (As you can probably tell, I'm new to this.)
Here's the response I received and would like to loop through to extract the IDs that I wish to use in the next request (there are several hundred IDs):
{
    "campaigns": [
        {
            "id": 373894,
            "name": "Benriach",
            "created_at": "2022-01-21 13:37:34",
            "sent_at": "2022-01-21 13:37:53",
            "status": "sent",
            "type": "text_message"
        },
Here's the test that I'm trying to run.
const response = pm.response.json();
const campaignids = response.map (campaignid => campaigns.id);
console.log(campaignids);
pm.variables.set('campaignids', campaignids);
Here's how it looks: (screenshot)
The end goal is to use Postman to extract campaign statistics from an e-mail marketing tool and then send it on into Google Data Studio where I want to create a dashboard for e-mail-campaigns using both data from the e-mail marketing tool as well as website data.
The problem is this line:
const campaignids = response.map (campaignid => campaigns.id);
response is an object ({ "campaigns": [...] }), not an array, so it has no .map method, and the arrow function's parameter (campaignid) doesn't match the name used in its body (campaigns). Map over the response.campaigns array instead:
const response = pm.response.json();
const campaignids = response.campaigns.map(campaign => campaign.id);
console.log(campaignids);
pm.variables.set('campaignids', campaignids);
I would like to parse Wikipedia data, but through time snapshots of the site, using the Wikipedia API.
While it seems possible to browse through different versions of an article, I cannot find a way to browse articles as of a specific date or timespan.
Is there a way to do something like this using this API?
For instance, with the following Python code I get the current first 500 categories:
import requests as rq

S = rq.Session()
url = "https://fr.wikipedia.org/w/api.php"
PARAMS = {
    "action": "query",
    "format": "json",
    "list": "allcategories",
    "acmin": 100,
    "aclimit": 500
}
R = S.get(url=url, params=PARAMS)
DATA = R.json()
However, if I wanted access to the first 500 categories that existed on Wikipedia in January 2015, how would I do that?
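As far as I know the live API does not expose point-in-time snapshots of the category list (only per-page revision history), but for completeness, going past the first 500 current categories works through the continuation tokens the API returns: each response may carry a "continue" object whose keys (e.g. accontinue) are merged back into the next request's parameters. A sketch, with the session injectable purely for illustration:

```python
def iter_all_categories(session=None, url="https://fr.wikipedia.org/w/api.php", acmin=100):
    """Yield category names, following list=allcategories continuation."""
    if session is None:
        import requests  # default live transport
        session = requests.Session()
    params = {"action": "query", "format": "json",
              "list": "allcategories", "acmin": acmin, "aclimit": 500}
    while True:
        data = session.get(url, params=params).json()
        for cat in data["query"]["allcategories"]:
            yield cat["*"]  # default format=json puts the name under "*"
        if "continue" not in data:
            return
        # Feed the continuation tokens into the next request.
        params.update(data["continue"])
```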
I've already asked in the GAS community, but I was advised to ask here instead.
So far I'm able to connect to Box and get a list of files, and I can download a file from Box as well.
The whole idea is to download a file using the Box API, edit it, and upload it back as a new file version using the Box API.
I'm unable to make the last part work; it gives me error code 400.
Here is the function.
function uploadNewFileVersion() {
  // 767694355309 testing
  var boxFileId = "767694355309";
  var newVerFile = DriveApp.getFileById("1sK-jcaJoD0WaAcixKtlHA85pf6t8M61v").getBlob();
  var confirmAuthorization = getBoxService_().getAccessToken();
  // var parent = { "id": "0" };
  // "name": "apiNewVersion.xlsx",
  // "parent": parent,
  var payload = {
    "file": newVerFile
  };
  var headers = {
    'Authorization': 'Bearer ' + confirmAuthorization
  };
  var options = {
    "method": "post",
    "muteHttpExceptions": true,
    "contentType": "multipart/form-data",
    "headers": headers,
    "payload": payload
  };
  var apiHtml = "https://upload.box.com/api/2.0/files/" + boxFileId + "/content/";
  var response = UrlFetchApp.fetch(apiHtml, options);
  Logger.log(response.getResponseCode());
  var a = 1;
}
The boxFileId is the file on Box.
The newVerFile is the one downloaded from Box and updated; I need to make it a new version of the Box file.
Could you please advise?
Thank you!
Petr
I think parent and name are optional, so I commented them out.
If I don't call getBlob, it returns 415 instead.
I believe your goal and situation are as follows:
You want to upload a file from Google Drive using the Box API with Google Apps Script.
From your question I cannot find the official documentation for the API method you want to use, but from the endpoint https://upload.box.com/api/2.0/files/"+boxFileId+"/content/ in your script, I guessed that you want to use "Upload file version".
The values of your access token and file ID are valid for using the API.
If my understanding of your question is correct, how about the following modification?
Modification points:
Looking at the official documentation for "Upload file version", I confirmed the following sample curl command. When this curl command is converted to Google Apps Script, the request should work.
$ curl -i -X POST "https://upload.box.com/api/2.0/files/12345/content" \
-H "Authorization: Bearer <ACCESS_TOKEN>" \
-H "Content-Type: multipart/form-data" \
-F attributes="{"name":"Contract.pdf", "parent":{"id":"11446498"}}" \
-F file=#<FILE_NAME>
From the curl command, attributes and file are sent as form data and files.
Also, attributes="{"name":"Contract.pdf", "parent":{"id":"11446498"}}" should probably be attributes="{\"name\":\"Contract.pdf\", \"parent\":{\"id\":\"11446498\"}}".
Your current script uses multipart/form-data for contentType. In that case, a boundary must be included in the request body. Fortunately, with UrlFetchApp, when contentType is omitted for a multipart/form-data request, the content type (including the boundary) is set in the request header automatically. I think this can be used in your case.
Your script does not include attributes, but I thought you might use it in a future script, so it is included in this answer as well.
When the points above are reflected and the sample curl command from the official documentation is converted to Google Apps Script, the script becomes as follows.
Sample script:
Copy and paste the following script into the script editor, set the variables, and run myFunction. This sends the same request as the sample curl command from Google Apps Script.
function myFunction() {
  const accessToken = "###"; // Please set your access token.
  const fileId = "###"; // Please set your file ID.
  const fileBlob = DriveApp.getFileById("1sK-jcaJoD0WaAcixKtlHA85pf6t8M61v").getBlob();
  const metadata = { name: "Contract.pdf", parent: { id: "11446498" } }; // Please set your file metadata.
  const params = {
    method: "post",
    headers: { Authorization: `Bearer ${accessToken}` },
    payload: {
      attributes: JSON.stringify(metadata),
      file: fileBlob,
    },
    muteHttpExceptions: true,
  };
  const url = `https://upload.box.com/api/2.0/files/${fileId}/content`;
  const res = UrlFetchApp.fetch(url, params);
  console.log(res.getContentText());
}
I confirmed that the sample script above sends the same request as the sample curl command.
If you don't want to send the file metadata, remove the attributes: JSON.stringify(metadata), line from payload.
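The same multipart request can also be reproduced outside Apps Script. As a hedged illustration in Python (the endpoint and the attributes/file form parts follow the sample curl above; the post parameter is injected here only so the sketch can be exercised without real credentials):

```python
import json

def upload_new_version(access_token, file_id, file_blob, metadata=None, post=None):
    """Upload a new version of an existing Box file.

    Sends "attributes" and "file" as multipart form parts, letting the
    HTTP library generate the boundary, just as UrlFetchApp does when
    contentType is omitted.
    """
    if post is None:
        import requests  # default live transport
        post = requests.post
    url = f"https://upload.box.com/api/2.0/files/{file_id}/content"
    data = {"attributes": json.dumps(metadata)} if metadata else {}
    return post(url,
                headers={"Authorization": f"Bearer {access_token}"},
                data=data,
                files={"file": ("upload.bin", file_blob)})
```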
Note:
In this case, the maximum POST size ("URL Fetch POST size") for UrlFetchApp is 50 MB. Please be careful of this. Ref
For the Box API's file-upload limits, please check https://developer.box.com/guides/uploads/.
If your access token or file ID is invalid, an error will occur, so please check those as well.
References:
Upload file version
Class UrlFetchApp
I have noticed that the data endpoint for getting the spot price returns the wrong currency information when using Python. I am using a currency_pair of BTC-USD but getting results for GBP.
Example:
price = client.get_spot_price(currency_pair = 'BTC-USD')
Response:
{
    "amount": "5578.85",
    "base": "BTC",
    "currency": "GBP"
}
Any ideas on what's causing this problem?
A workaround, though not using the official Coinbase client, would be as follows:
import requests
import json

# Set the API version header to avoid a warning
headers = {
    'CB-VERSION': '2017-12-08'
}

# Make the request
data = requests.get('https://api.coinbase.com/v2/prices/BTC-USD/sell/', headers=headers).text

# Parse the response and get the amount
price = json.loads(data)['data']['amount']
Obviously this is not very robust in terms of handling errors, exceptions, or the other assertions (which the official client should provide) that you would need for the confidence to make actual buys/sells/transfers.
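One hedged way to harden the workaround against the exact bug in the question is to check that the currency in the response matches the requested pair. The field names follow the sample response above; parse_spot and spot_price are illustrative helpers, not part of the official client:

```python
def parse_spot(payload, expected_pair):
    """Return the price as a float, or raise if the API echoed back
    a different currency pair than the one requested."""
    data = payload["data"]
    returned = f"{data['base']}-{data['currency']}"
    if returned != expected_pair:
        raise ValueError(f"asked for {expected_pair} but got {returned}")
    return float(data["amount"])

def spot_price(pair="BTC-USD", get=None):
    # "get" is injectable so the validation can be tested offline.
    if get is None:
        import requests  # default live transport
        get = lambda url: requests.get(
            url, headers={"CB-VERSION": "2017-12-08"}).json()
    return parse_spot(get(f"https://api.coinbase.com/v2/prices/{pair}/spot"), pair)
```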
EDIT: UPDATE
Apparently this is a known issue. Read here:
https://github.com/coinbase/coinbase-python/issues/32
Supposedly it is already fixed in the GitHub master branch, though obviously not yet reflected in the pip version.
Quoting user kflecki:
I fixed this by going into the client.py file and modifying the code to look like this. It works just fine now, though it would be nice for the files to ship like this. But it's a simple fix that you can do on your own.
def get_spot_price(self, **params):
    """https://developers.coinbase.com/api/v2#get-spot-price"""
    if 'currency_pair' in params:
        currency_pair = params['currency_pair']
    else:
        currency_pair = 'BTC-USD'
    response = self._get('v2', 'prices', currency_pair, 'spot', data=params)
    return self._make_api_object(response, APIObject)
And now the command works like so:
eth_price = client.get_spot_price(currency_pair = 'ETH-USD')