Is there a way to view Coinbase Pro transactions for sub portfolios via their API? - coinbase-api

For some unknown reason I can only fetch Coinbase Pro transactions (withdrawals and deposits) for my Default Portfolio. The same requests don't seem to work for my sub portfolios. What's perhaps worse, I can't find any way to see transfers between portfolios!
Can someone else please confirm whether they can see transactions between two portfolios using the pro.coinbase.com transactions API?

I've come to realize that the issue is with CCXT-JS. That library does not seem to expose https://api.exchange.coinbase.com/accounts/{account_id}/ledger at the time of writing.
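As a workaround you can call the ledger endpoint yourself. Here is a minimal sketch, assuming the standard CB-ACCESS-* request signing used by the Exchange/Pro REST API; the key, secret, passphrase, and account id are placeholders, and note that the API key has to be created under the portfolio you want to query:

    import base64, hashlib, hmac, time
    import requests

    API_KEY, API_SECRET, PASSPHRASE = 'key', 'base64-secret', 'passphrase'
    ACCOUNT_ID = 'your-account-id'  # accounts are per currency, per portfolio

    path = '/accounts/%s/ledger' % ACCOUNT_ID
    timestamp = str(time.time())
    message = timestamp + 'GET' + path  # plus the body, but a GET has none
    signature = base64.b64encode(
        hmac.new(base64.b64decode(API_SECRET),
                 message.encode(), hashlib.sha256).digest()).decode()

    resp = requests.get(
        'https://api.exchange.coinbase.com' + path,
        headers={
            'CB-ACCESS-KEY': API_KEY,
            'CB-ACCESS-SIGN': signature,
            'CB-ACCESS-TIMESTAMP': timestamp,
            'CB-ACCESS-PASSPHRASE': PASSPHRASE,
        })
    print(resp.json())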

Related

What is the best way to debug vCloud client REST applications?

I'm building a vCloud client application via the REST APIs; however, the documentation is inconsistent and in some cases just wrong and misleading.
All I really need is a solid debug tool or even a log file. Any recommendations?
You already mentioned you have access to the message stream, which is one of the first steps. Typically if I'm using the Apache HttpClient/HttpComponents I'll go increase the log level so it logs the full HTTP requests.
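If your client happens to be Python rather than Java, the equivalent trick takes a couple of lines; a rough sketch (the vCD URL is a placeholder):

    import http.client
    import logging

    import requests

    # Dump raw request/response lines from the underlying HTTP connection
    http.client.HTTPConnection.debuglevel = 1
    # Also surface urllib3's DEBUG-level logging (connections, headers)
    logging.basicConfig(level=logging.DEBUG)

    requests.get('https://vcloud.example.com/api/versions')  # placeholder URL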
My next step is usually to cheat and to log into vCD as a system administrator and see what's going on. When vCD was designed there was a very deliberate decision to not reveal infrastructure level problems to tenants of the cloud (normal org users or org admins), as that would break the cloud abstraction. Sadly, that means as an org-level user you're often going to get "contact your cloud admin" error responses. We are aware that this isn't ideal and try to find ways to make it better when we can (IIRC the new 5.5 release that was announced last month does have some improvements in that area).
The last step is usually to cheat even more and to look at the server side logs (vcloud-container-debug.log, specifically). That usually gives me a better clue as to what went wrong. Of course, you may be unlucky and not have access to the vCD cell machine.
My workaround in the latter two cases is to try the operations via the vCD UI and see (1) if they work as expected and (2) if they do, to check the system state via the API and see if I'm sending the wrong request payloads, etc. because the doc or schema reference may not have been clear enough.
Regarding the documentation, please use the feedback links found on individual doc pages to let us know! Our technical writer reviews all the feedback and tries to address it.
My final suggestion is that you might want to post API questions to the vCloud API community forum VMware has. There are a number of experts (both users and VMware employees) that monitor it and respond to questions.

Create does not work in GAE Datastore viewer

When I try to create some entities I don't see the option to input fields; I just see the Save Entity button.
However, I can view all the existing entities.
What is very strange is that there is another entity called VideoEntity for which create did not work yesterday but works today.
Can somebody help me with this seemingly unpredictable tool?
Regards,
Sathya
I think the console knows what properties each entity has based on existing data, rather than your models, and that data is only updated periodically. When did you upload your app? Maybe waiting a few hours will give the console time to update.
Alternatively, you could use the remote API to add your entities, or write and upload a small snippet such as:
VideoStatsEntity(app='home', ip='116.89.52.67', params='tag=20130210').put()
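For reference, a rough sketch of that remote API route on the legacy Python runtime; the app id, handler path, credentials, and the Expando model are all assumptions to adapt:

    from google.appengine.ext import db
    from google.appengine.ext.remote_api import remote_api_stub

    def auth_func():
        # remote_api asks for credentials through this callback
        return ('you@example.com', 'your-password')

    remote_api_stub.ConfigureRemoteApi(
        None,                       # app id is read from the server
        '/_ah/remote_api',          # handler declared in app.yaml
        auth_func,
        'your-app-id.appspot.com')  # assumed hostname

    class VideoStatsEntity(db.Expando):
        pass  # Expando accepts arbitrary properties

    VideoStatsEntity(app='home', ip='116.89.52.67', params='tag=20130210').put()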
Writing a simple interface to the datastore to allow you to edit/create models is probably the best thing to do in this case. You know what they contain, so you can adjust your interface accordingly rather than waiting for the admin interface to "catch up", as Gwyn notes.
I believe there are some property types that are impossible to add via the admin interface you are using, so you'll probably reach the point of needing a custom interface sooner rather than later.
The admin datastore view is good for quickly checking the contents of the datastore, but have you ever tried paging through hundreds of entries? Not fun.

Automating Salesforce Security Checks

I need to create some automated method for checking certain security settings within one or more Salesforce orgs. The four big ones are:
IP Restrictions within each profile
Mobile User setting disabled
Mobile Lite disabled
Chatter Disabled
I think the first two can be accomplished through the API (SOQL to get all profiles and check that loginIpRanges[] has length > 0, and SOQL to get all users and check the isMobileUser property for each one), but I can't find anything in the API for the other two and wonder if I would have to screen scrape them.
Any suggestions on the best approach? A local Python or other script that connects remotely via the API, plus a screen scraper or Selenium script for the non-API items? An Apex or Visualforce page installed within each org?
I am new to Salesforce and Apex, so before I start down one road (doing it within Salesforce vs. via the API) I would really appreciate any guidance.
Thank you!
I think you'll have to take a mixed approach to solving this, perhaps wrapped up in some larger Python script.
Use the metadata API to get all of the Profile objects and parse them for loginIpRanges. You can use Apache ANT and the Force.com migration tool commands to do this. You can also get the SecuritySettings from the same API and method, which covers a lot of what's in the Security Health Check, if you need it. The results are returned as XML, which you can easily parse in your Python script.
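For example, once the migration tool has pulled the profiles down, the parsing step might look like this sketch (the directory layout and file extension are assumptions based on a typical retrieve):

    import glob
    import xml.etree.ElementTree as ET

    NS = {'sf': 'http://soap.sforce.com/2006/04/metadata'}

    for path in glob.glob('retrieved/profiles/*.profile'):
        profile = ET.parse(path).getroot()
        ranges = profile.findall('sf:loginIpRanges', NS)
        status = 'OK' if ranges else 'NO IP RESTRICTIONS'
        print('%s: %d login IP range(s) -> %s' % (path, len(ranges), status))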
Use the API and a SOQL query to check for the isMobileUser permission, and use Python to parse and output the results. Beatbox is a good library for connecting to the standard API.
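A rough sketch of that query with Beatbox; the credentials are placeholders, and UserPermissionsMobileUser is my assumption for the SOQL name of the mobile permission field, so verify it against your org:

    import beatbox

    sf = beatbox.PythonClient()
    sf.login('admin@example.com', 'password' + 'security_token')

    result = sf.query("SELECT Username, UserPermissionsMobileUser "
                      "FROM User WHERE IsActive = true")
    for user in result:
        if user['UserPermissionsMobileUser']:
            print('%s still has Mobile User enabled' % user['Username'])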
For the last two, I think you'll need to go with some screen scraping/browser automation and parsing. Hopefully someone has a better answer, as I'm not familiar enough to help with that aspect. The screens are in standard locations, so it should be repeatable as long as future updates don't move things.
Ideally you'll be able to combine these into one large script that fires off Beatbox, then the ANT migration tool, and finally the browser automation.

Is there a way to access to the database of Google Maps for the public transport?

I'd like access to the numbers and types of public transportation in a certain city and, for each of them, their stops. So, for instance, I'd like to have:
Number  Type   Stops
1       Metro  Stop1.1, Stop1.2, Stop1.3, ...
6       Bus    Stop6.1, Stop6.2, Stop6.3, ...
17      Tram   Stop17.1, Stop17.2, Stop17.3, ...
...     ...    ...
Of course, I don't really care about the format; I just want to know how to access the data so that I don't have to re-enter it manually into my website!
Thanks for any help :-)
GTFS Data Exchange
GTFS Exchange was retired in 2016.
It is still useful for legacy GTFS files that are no longer updated or available elsewhere. However, you should now refer to the following databases for static GTFS:
Transitland
TransitFeeds
TransitWiki
Google's own compilation: PublicFeeds.wiki
They overlap a lot but not completely, so it's always good to check them all. Also, some smaller transit agencies still don't share their data with these services, so it's worth checking their websites.
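Once you've downloaded a static GTFS feed from one of the sources above, it's just a zip of CSV files, so building the Number/Type/Stops table is straightforward. A minimal sketch ('feed.zip' is a placeholder; the route_type codes come from the GTFS spec):

    import csv
    import io
    import zipfile

    ROUTE_TYPES = {0: 'Tram', 1: 'Metro', 2: 'Rail', 3: 'Bus'}

    with zipfile.ZipFile('feed.zip') as z:
        def rows(name):
            return list(csv.DictReader(io.TextIOWrapper(z.open(name), 'utf-8-sig')))

        stops = {s['stop_id']: s['stop_name'] for s in rows('stops.txt')}
        trip_route = {t['trip_id']: t['route_id'] for t in rows('trips.txt')}

        route_stops = {}  # route_id -> set of stop names
        for st in rows('stop_times.txt'):
            route_stops.setdefault(trip_route[st['trip_id']], set()).add(
                stops[st['stop_id']])

        for r in rows('routes.txt'):
            rtype = ROUTE_TYPES.get(int(r['route_type']), r['route_type'])
            print(r['route_short_name'], rtype,
                  ', '.join(sorted(route_stops.get(r['route_id'], []))))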
Finally, to my knowledge Google Maps uses these same files as its source data. On their reference page for GTFS, they mention the following:
Submitting a Transit feed to Google
If you're at a public agency that oversees public transportation for your city, you can use the GTFS specification to provide schedules and geographic information to Google Maps and other Google applications that show transit information.
If you provide a transportation service that is open to the public, and operates with fixed schedules and routes, we welcome your participation; it is simple and free.
For live updates, though, I think Google uses GTFS Realtime.
Is there a way... in a word, no. This data isn't available through the Maps API (yet).

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus and train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants users to be able to search for a beginning and end location and determine, using the external websites' information, how best to get there, receiving a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external site's server (via API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven to be problematic due to the language barrier, mainly.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. My worry is that these sites are so static that the data isn't even kept in a database to build the pages; the pages themselves may be hard-coded and edited by hand when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot; when it does, it emails you so you can update it (see the sketch after this answer).
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
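A minimal sketch of that snapshot-and-email idea, meant to run periodically from cron; the URL, addresses, and local SMTP server are assumptions:

    import hashlib
    import smtplib
    import urllib.request
    from email.message import EmailMessage

    URL = 'http://www.example.com/train-schedule'  # placeholder
    SNAPSHOT = 'schedule.sha256'

    digest = hashlib.sha256(urllib.request.urlopen(URL).read()).hexdigest()
    try:
        old = open(SNAPSHOT).read()
    except FileNotFoundError:
        old = ''

    if digest != old:
        msg = EmailMessage()
        msg['Subject'] = 'Schedule page changed: ' + URL
        msg['From'] = 'monitor@example.com'
        msg['To'] = 'you@example.com'
        msg.set_content('The page no longer matches the snapshot; '
                        'review it and update the database.')
        smtplib.SMTP('localhost').send_message(msg)
        open(SNAPSHOT, 'w').write(digest)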
I have done this for the following site: http://www.buscatchers.com/ so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must email you if anything goes wrong during the scraping process. On the site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it was very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
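For instance, a bare-bones XPath pass with lxml; the URL and expressions are placeholders, since the real ones depend on the page's layout:

    import urllib.request
    from lxml import html

    page = urllib.request.urlopen('http://www.example.com/train-schedule').read()
    doc = html.fromstring(page)

    for row in doc.xpath('//table//tr'):
        cells = [cell.text_content().strip() for cell in row.xpath('./td')]
        if cells:
            print(cells)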
