The Coinbase Pro docs for deleting all open orders say you can simply make a DELETE request to the /orders endpoint (taking into account, of course, the signatures necessary to access the private endpoints). However, when I do this it only deletes 20 open orders at a time. If I want to delete more than 20 orders, I need to make repeated calls to the endpoint, checking each time whether anything was deleted.
Is there a way to delete ALL open orders? And if order deletion is going to be paginated, why does it default to pages of 20 when the pagination section of the docs says the default is 100 entries per request?
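For reference, the repeated-call workaround looks roughly like this (a sketch in TypeScript; the sign() helper that produces the CB-ACCESS-* authentication headers is a hypothetical stand-in):

declare function sign(method: string, path: string): Record<string, string>; // hypothetical CB-ACCESS-* header builder

// Keep issuing DELETE /orders until a call reports nothing left to cancel.
async function cancelAllOrders(): Promise<void> {
  for (;;) {
    const res = await fetch("https://api.pro.coinbase.com/orders", {
      method: "DELETE",
      headers: sign("DELETE", "/orders"),
    });
    const cancelled: string[] = await res.json(); // the API returns the ids it cancelled
    if (cancelled.length === 0) break; // empty batch: all open orders are gone
  }
}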
I'm building a SPA site in React (using redux).
Any user can sign in to my site through Google or Facebook.
Each user who logs in to the site receives a personal user_id.
For each user, the system needs to keep a history of the documents that user has created (like the recent docs list in Word).
I need to build functionality so that whenever the user is logged in, he can see a history of the five documents he has most recently created or updated.
In addition, the latest documents should still load after disconnecting from and reconnecting to the system.
To load the history into the system, I am thinking of using a dedicated Elasticsearch index.
My question is which approach is most suitable when the user is already logged in and creates several documents one after the other:
Should I save everything to the ES index as it happens, or is there a smarter way to save and update the information locally without producing a lot of DB calls?
Ideally, only 2 DB calls would be made in total: one call to load the information on login and one call to update the information when the user logs out. Any other document creates and updates would be saved locally on the client side until the user leaves the site.
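For illustration, the local-first flow I have in mind would look something like this (a TypeScript sketch; the /history endpoint backed by the dedicated ES index is an assumption, not an existing API):

type DocRef = { id: string; title: string; updatedAt: number };

const KEY = "recentDocs";

// Called on every create/update: keep the five most recent docs locally, no DB call.
function touchDoc(doc: DocRef): void {
  const docs: DocRef[] = JSON.parse(localStorage.getItem(KEY) ?? "[]");
  const rest = docs.filter((d) => d.id !== doc.id);
  localStorage.setItem(KEY, JSON.stringify([doc, ...rest].slice(0, 5)));
}

// DB call #1: hydrate the local history from the ES-backed index on login.
async function loadHistory(userId: string): Promise<void> {
  const res = await fetch(`/history/${userId}`); // hypothetical endpoint
  localStorage.setItem(KEY, JSON.stringify(await res.json()));
}

// DB call #2: persist the local history once, on logout.
async function flushHistory(userId: string): Promise<void> {
  await fetch(`/history/${userId}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: localStorage.getItem(KEY) ?? "[]",
  });
}

One caveat with deferring the write to logout: if the tab is closed without logging out, that session's changes never reach ES, so a beforeunload handler using navigator.sendBeacon is a common safety net.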
Here is the problem:
I have a tenant with 50,000 users. Every day I need to pull that user list to see what has changed, for example: which users were added or removed, and what their mySite URLs are.
I can get some general information by calling /users, but I need each user's mySite. The only way I have found to retrieve that is to call /users/userId?$select=mySite.
This implies I must make 50k calls, and I then run into throttling issues.
Is there a way through Microsoft Graph (or some other mechanism) to pull the user data, including mySite efficiently?
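One way to cut the round-trips, sketched below in TypeScript as an assumption rather than something the docs prescribe for this case, is Graph's JSON batching endpoint ($batch), which accepts up to 20 sub-requests per call, turning 50k calls into 2,500. Note that batching does not necessarily reduce the throttling cost, so honoring Retry-After is still advisable.

// token: a valid Graph access token, obtained elsewhere.
async function fetchMySites(userIds: string[], token: string): Promise<Record<string, string>> {
  const result: Record<string, string> = {};
  for (let i = 0; i < userIds.length; i += 20) {
    const requests = userIds.slice(i, i + 20).map((id, n) => ({
      id: String(n), // sub-request id, used to correlate responses
      method: "GET",
      url: `/users/${id}?$select=mySite`,
    }));
    const res = await fetch("https://graph.microsoft.com/v1.0/$batch", {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({ requests }),
    });
    const { responses } = await res.json();
    for (const r of responses) {
      if (r.status === 200) result[userIds[i + Number(r.id)]] = r.body.mySite;
    }
  }
  return result;
}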
I am executing a people search using the Microsoft Graph endpoint - https://graph.microsoft.com/V1.0/users.
The question I have is: I am able to get all the textual data I need, but is there a way to get the photo for each returned user in a single call? If the previous search returns 10 users, executing 10 different operations to get the photos based on each user's id would be a challenge.
It isn't possible to fetch both a user's data and their photo in a single call, since they are different data types (application/json vs image/jpeg).
Marc is spot on here. However, you should also check out the new batching feature (note: this is still in /beta), which would allow you to get up to 5 photos in one request round-trip. See https://developer.microsoft.com/en-us/graph/docs/concepts/json_batching. We'd love to get your feedback on this.
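A minimal sketch of that batching call (TypeScript; the 5-photo-per-batch cap follows the limit mentioned above, and token is an assumed, already-acquired access token):

async function fetchPhotos(userIds: string[], token: string): Promise<Map<string, string>> {
  const photos = new Map<string, string>(); // user id -> base64-encoded jpeg
  for (let i = 0; i < userIds.length; i += 5) {
    const slice = userIds.slice(i, i + 5);
    const res = await fetch("https://graph.microsoft.com/beta/$batch", {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: slice.map((id, n) => ({
          id: String(n),
          method: "GET",
          url: `/users/${id}/photo/$value`,
        })),
      }),
    });
    const { responses } = await res.json();
    for (const r of responses) {
      if (r.status === 200) photos.set(slice[Number(r.id)], r.body); // binary bodies arrive base64-encoded
    }
  }
  return photos;
}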
I'd like to be able to tag affiliate links with some extra information so that I can map successes to information inside of my system.
Is there any way I can include a custom identifier or payload of data with the affiliate link that Amazon will allow me to inspect when I receive a report of successful sales?
The only thing I found is the tracking IDs from the
Manage Your Tracking IDs page.
However, these IDs are limited to 100 values by default (you need to contact Amazon for more). This is what they answered me:
I understand you'd like to view reporting within the Product Advertising API.

All reports are housed on your Associates account for you to view the activity of your links.

We do offer multiple tracking IDs so that Associates can track the activity of individual links easily and accurately.

You can create up to 100 tracking IDs in your account by visiting the Account Settings section of Associates Central. You'll find a link in the Account Information section labeled Manage your tracking IDs: https://affiliate-program.amazon.com/gp/associates/network/your-account/manage-t...

Once you've created your additional tracking IDs, to view these IDs, please log into Associates Central (http://associates.amazon.com). Once logged in, click on the drop-down box under Tracking ID to change which ID you are working with.

If you are interested in receiving more than 100 tracking IDs, please first create this amount via your associate account in Associates Central. If you have already created 100 tracking IDs in your account and need additional tracking IDs, please use the link below to write back to us with a detailed description of how you'll be using these additional IDs:
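For context, a tracking ID travels with the link as the tag query parameter, which is what makes each of the (up to 100) IDs usable as a coarse custom identifier in sales reports. A small TypeScript sketch, with a placeholder ASIN and tracking ID:

// Build an affiliate link whose report rows can be attributed via the tag parameter.
function affiliateLink(asin: string, trackingId: string): string {
  return `https://www.amazon.com/dp/${asin}?tag=${encodeURIComponent(trackingId)}`;
}

// e.g. affiliateLink("B00EXAMPLE", "mysite-campaign7-20") — both values are placeholders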
I'd like to set up a ColdFusion page that will pull the status updates from my own Facebook and Twitter accounts and put them in a SQL database along with their timestamps. Whenever I run this page, it should only grab information newer than the most recent timestamp already in the database.
I'm hoping this won't be too bad because all I'm interested in is just status updates and their time stamps. Eventually I'd like to pull other things like images and such, but for a first test just status updates is fine. Does anyone have sample code and/or pointers that could assist me in this endeavor?
I'd prefer any information to relate to the current versions of the APIs (Twitter with OAuth and the Facebook Open Graph), if they are necessary. Some solutions I've seen involve creating a Twitter application and a Facebook application to interact with the APIs; is that necessary if all I want to do is access a subset of my own account information? Thanks in advance!
I would read the max(insertDate) from the database and, if the API allows it, only request updates since that date. Then insert those updates. The next time it runs, you'll just need to get the max() of the last batch of updates before calling for the next batch.
You could run it every 5 minutes using a ColdFusion scheduled task.
Communication with the API is usually done with <cfhttp />. One thing I always do is log every request and response, either in a text file or in a database. That can be invaluable when troubleshooting.
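The same high-water-mark pattern, sketched in TypeScript for illustration (the original context is ColdFusion; the three declared helpers are hypothetical stand-ins for the database and API calls):

type Update = { text: string; postedAt: Date };

declare function getMaxInsertDate(): Promise<Date>;                 // SELECT MAX(insertDate) FROM updates
declare function fetchUpdatesSince(since: Date): Promise<Update[]>; // API call with a since/timestamp parameter
declare function insertUpdates(updates: Update[]): Promise<void>;   // INSERT rows along with their timestamps

// One scheduled run: only ask the API for what is newer than what we already have.
async function pollOnce(): Promise<void> {
  const since = await getMaxInsertDate();
  const updates = await fetchUpdatesSince(since);
  if (updates.length > 0) {
    await insertUpdates(updates); // these timestamps become the next high-water mark
  }
}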
Hope that helps.
Use the cffeed tag to pull RSS feeds from Twitter and Facebook. Retain the date of the last feed scan somewhere (an application variable or the database) and loop over the feed entries. Any entry older than the last scan is ignored; everything else gets committed. Make sure to wrap cffeed in a try/catch, as it will throw errors if the service is down (ahem, Twitter). As mentioned in other answers, set it up as a scheduled task.
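<!--- Read the Twitter search feed for one account; each entry becomes a row in feedQuery --->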
<cffeed action="read" properties="feedMetadata" query="feedQuery"
source="http://search.twitter.com/search.atom?q=+from:mytwitteraccount" />
This is a different approach than what you're suggesting, but it worked for us. We had two live events where we asked people to post to a bespoke Facebook fan page, or to Twitter with a hashtag we endorsed for the event, in real time. Then we fetched and parsed the RSS feeds of the FB page and the Twitter search results on a short interval, extracting what was new; I think it was approximately every three minutes. CFFEED was a little error-prone and wonky; just doing a CFHTTP get of the RSS feeds and then processing the CFHTTP.filecontent struct item as XML worked fine.