Echonest API request limit

I use Echonest in an application, but I am restricted when I make several requests per minute.
I tried to contact Echonest but have received no answer. Do you know of any way to work around this API request limit?

The Echonest API has partly been migrated to the Spotify API and the original API will be shut down.
See: http://developer.echonest.com/
Over the past two years, following Spotify’s acquisition of The Echo Nest, it’s become clear that rather than have two separate APIs with considerable overlap in features, it makes sense to migrate functionality and focus all of our future development on the Spotify API.
In doing so, there are a couple important dates you should be aware of:
March 29th, 2016 - We are announcing three new APIs (details below) and The Echo Nest API will no longer be issuing new API keys
May 31, 2016 - As of May 31st, The Echo Nest platform will no longer serve requests and developers will need to move over to the Spotify API.
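Since the practical answer is "migrate to the Spotify API", here is a minimal sketch of the first step: obtaining an access token via Spotify's Client Credentials flow. This only builds the token request (the credentials are placeholders, and the actual HTTP call is left commented out):

```python
import base64

TOKEN_URL = "https://accounts.spotify.com/api/token"  # Spotify accounts service

def build_token_request(client_id: str, client_secret: str):
    """Build headers and form body for a Client Credentials token request."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials"}
    return TOKEN_URL, headers, body

# To actually send it (requires real credentials registered with Spotify):
# import requests
# url, headers, body = build_token_request("my_client_id", "my_client_secret")
# token = requests.post(url, headers=headers, data=body).json()["access_token"]
```

The returned bearer token is then sent in an `Authorization: Bearer ...` header on Web API calls, which have their own (documented) rate limits.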

Related

Can I use the WebSocket protocol to send and receive data from Cloud Firestore on an ESP32-S3 using ESP-IDF (C)?

Google is deprecating Cloud IoT, so that is not an option.
https://cloud.google.com/iot/docs/release-notes
Cloud IoT Core will be retired on August 16, 2023. After August 15, 2023, the documentation for IoT Core will no longer be available.
I would like to use Firebase/Firestore for my backend. It takes all the hassle out of keeping a server up and running, scalability, etc.
I managed to send data, after login and authentication, from an ESP32-S3 using ESP-IDF in C (note: not Arduino, and not C++). I would like to know whether I can instead use a WebSocket for the communication once authentication is done, and if so, whether you can give me a code example or pointers.
With a WebSocket, I can send data to my own server hosted in Europe in less than 400 ms.
With Firestore, there is a large HTTP header that includes the API key and the auth token, a large amount of data, and quite a lot of handshaking over HTTPS before the data is eventually sent. This takes more than 1400 ms.
We are weighing items in a farming scenario and need to weigh very frequently; 1400 ms on a fast internet connection is not acceptable.
So if I could still use Firebase Authentication and Firestore for data, I could probably get even faster than 400 ms with a WebSocket client connection to the Firestore document store. I can use the refresh token to refresh the auth token every 3600 s, as required by Firebase, and thus keep the socket connection up. That call also takes quite long, but it is less of a hassle, since it happens only once every, say, 55 minutes.
Any pointers or advice will be appreciated.
Firestore supports multiple SDKs and wire protocols, but none of them work over WebSockets. The closest you can get with Firestore is its REST API, which is documented here. It's not the easiest protocol to work with, though, so I recommend using the API Explorer that is built into the documentation to create examples for yourself.
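To show the shape of that REST API, here is a minimal sketch (in Python, as a language-neutral illustration; the same request would be built in C on the device). The project ID, collection, and field names are placeholders, and the block only constructs the `createDocument` request rather than sending it:

```python
import json

def build_firestore_write(project_id: str, collection: str,
                          id_token: str, weight_kg: float):
    """Build a createDocument request for the Firestore REST API.

    POSTing url with these headers and body creates a new document
    with an auto-generated ID in the given collection.
    """
    url = (f"https://firestore.googleapis.com/v1/projects/{project_id}"
           f"/databases/(default)/documents/{collection}")
    headers = {
        "Authorization": f"Bearer {id_token}",  # Firebase Auth ID token
        "Content-Type": "application/json",
    }
    # Firestore's REST API uses typed value wrappers, not plain JSON values.
    body = json.dumps({"fields": {"weight_kg": {"doubleValue": weight_kg}}})
    return url, headers, body
```

Note that the per-request overhead the question complains about (large headers, TLS handshake) can be reduced by reusing one persistent HTTPS connection for all writes, even without WebSockets.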

Get CS:GO inventory from the Steam API for my React app

Since requesting the inventory from https://api.steampowered.com/IEconItems_730/GetPlayerItems/v1/ is permanently disabled, I have to use https://steamcommunity.com/profiles/<steam_id>/inventory/json/2
Because of CORS, I have to use my backend to provide the requested data. But when I make requests too often, they get rejected and my app cannot work at a large scale.
So the question should be simple: how can I avoid the rejection of my requests?
Any ideas and suggestions welcome.
Steam inventory endpoints are pretty heavily rate-limited, but there are a few different endpoints that you can use.
Trade offer endpoint
https://steamcommunity.com/tradeoffer/new/partnerinventory
This is the endpoint that's used when you open a trade offer with someone. It can be used to fetch both your own inventory and a trade partner's. Required parameters are partner, which is the user's Steam64 ID; appid, which is 730 in the case of CS:GO; and contextid, which is 2 for most Valve games. I don't know the exact limit, but I've been hitting this endpoint about once a second for a month with minimal 429 responses. To use this endpoint, you need a valid Steam session and must send the proper cookies along with the request. It will also only return tradable items.
Inventory endpoint #1
http://steamcommunity.com/inventory/STEAM64ID/APPID/CONTEXTID
Another inventory endpoint with the same parameters, but in the URL. I use this endpoint as a fallback to the first, but I've found that if the first endpoint is rate-limited then this one will be too. That said, this one becomes rate-limited much faster, so it's best to use the first one instead.
Inventory endpoint #2
http://steamcommunity.com/profiles/STEAM64ID/inventory/json/APPID/CONTEXTID
The endpoint you're using. I don't use this one at all, but it could be worth knowing as another fallback.
Not all of them return the same data format, so be mindful. One inventory a second is a pretty solid rate for any decently sized site, especially if you limit users' ability to refresh inventories. If you need more, though, you'll have to start looking into proxies.
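The fallback strategy from the answer can be sketched like this (Python on the backend for illustration; the session-cookie trade-offer endpoint is omitted, and the injectable `get` parameter is an assumption added to keep the sketch testable without network access):

```python
def fetch_inventory(steam64_id: str, appid: int = 730, contextid: int = 2,
                    get=None):
    """Try the public inventory endpoints in order, falling back on HTTP 429.

    `get` defaults to requests.get; it is injectable so the fallback
    logic can be exercised without real network calls.
    """
    if get is None:
        import requests
        get = requests.get
    urls = [
        # Preferred public endpoint
        f"https://steamcommunity.com/inventory/{steam64_id}/{appid}/{contextid}",
        # Fallback: profile-based JSON endpoint (different response format!)
        f"https://steamcommunity.com/profiles/{steam64_id}/inventory/json/{appid}/{contextid}",
    ]
    for url in urls:
        resp = get(url)
        if resp.status_code == 429:
            continue  # rate limited -- try the next endpoint
        resp.raise_for_status()
        return resp.json()  # caller must handle the differing formats
    raise RuntimeError("all inventory endpoints are rate limited")
```

A production version would also cache responses per Steam ID, so repeated views of the same inventory don't burn through the rate limit.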

Publishing messages to GCP Pub/Sub using the API is time-consuming

I have a Node.js app on the MongoDB cloud platform which will be used for posting 1 million messages to a topic in GCP Pub/Sub. Since the platform does not support the npm package @google-cloud/pubsub, we implemented it using the API reference for Pub/Sub. Upon load testing the app, I can see that each message takes 50 seconds to post to the topic; ideally it should take less than 5 seconds. It takes around 30 seconds for the access_token API call and 20 seconds for the message-posting API call. Since each message posting is an independent event, we cannot maintain a session to store the access_token and reuse it, and the API key authentication method is not available for GCP Pub/Sub. Is the API method for GCP Pub/Sub very slow compared to using the @google-cloud/pubsub library?
Can anyone suggest a solution to improve the performance of GCP Pub/Sub using the APIs?
The Pub/Sub client libraries are heavily optimized in several ways. First, they use the gRPC protocol instead of the REST API. Then messages are batched before being pushed to Pub/Sub (500 ms of waiting by default), and various async mechanisms parallelize the processing.
So, a huge amount of great work has been done by the client library teams, and it is hard (or expensive) to reproduce on your side. But you can: the sources are public, so have a look at the client libraries!
The 30 s for access_token retrieval is far too long. Are you sure you don't have a network issue? In any case, this token is valid for 1 hour. If you can reuse it in your subsequent calls, you will save a lot of time!
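That last suggestion, reusing the token for its full lifetime, can be sketched as a small cache (Python here as a language-neutral sketch; the `fetch` callback and the 300 s safety margin are assumptions, not part of the original answer):

```python
import time

class TokenCache:
    """Reuse an OAuth access token until shortly before it expires.

    `fetch` is whatever obtains a fresh token (e.g. the slow access_token
    call from the question); it must return (token, expires_in_seconds).
    """
    def __init__(self, fetch, safety_margin: float = 300.0, clock=time.time):
        self._fetch = fetch
        self._margin = safety_margin  # refresh a few minutes early
        self._clock = clock           # injectable for testing
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = self._clock()
        if self._token is None or now >= self._expires_at - self._margin:
            self._token, ttl = self._fetch()  # the one slow call
            self._expires_at = now + ttl
        return self._token
```

With a 1-hour token lifetime, the 30-second fetch then happens roughly once per hour instead of once per message, which removes most of the 50 seconds per publish described in the question.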

Managing App Engine versions through API calls

Is there any way that I can manage App Engine versions and instances through API calls?
By managing, I mean starting/stopping/deleting versions deployed to App Engine through API calls.
Is that possible using gcloud SDK commands from the command line?
Another question: does Google provide APIs (or commands) to check the status of running instances? For example, to check whether an instance is idle and how long it has been idle.
There is a beta API for managing versions and services here:
https://cloud.google.com/appengine/docs/admin-api/
The API is still beta because it's under active development; there are still a few methods and fields which aren't implemented. Shortly after those are complete, the API will be marked "v1", though v1beta4 and v1beta5 will continue to be supported for several months in transition.
For example, the API doesn't yet include operations on instances, but I expect that List/Get/Delete will be available fairly soon. Since App Engine automatically creates instances for you, there is no create instance API.
I just noticed that the most recent documentation re-skin seems to have hidden the documentation for the REST interface, so I'll drop that link here so that you can find the currently implemented methods. (Version.Update is also implemented for a few fields, so that documentation update should be coming out very soon.)
2020 UPDATE: You can do it using the apps.services.versions API. You can stop/start a version with the PATCH method, setting the update mask to "servingStatus" and, in the body, setting the "servingStatus" field to "STOPPED"/"SERVING".
Similarly, you can use the delete/create methods to remove and launch versions.
Reference:
https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps.services.versions/patch
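The PATCH call described in the update can be sketched as follows (Python; the app, service, and version names are placeholders, and the block only builds the request rather than sending it):

```python
import json

def build_serving_status_patch(app_id: str, service: str, version: str,
                               status: str, access_token: str):
    """Build the Admin API PATCH request that starts or stops a version.

    status is "SERVING" (start) or "STOPPED" (stop); the updateMask query
    parameter restricts the patch to the servingStatus field.
    """
    url = (f"https://appengine.googleapis.com/v1/apps/{app_id}"
           f"/services/{service}/versions/{version}?updateMask=servingStatus")
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"servingStatus": status})
    return url, headers, body

# To send it (requires a valid OAuth token with App Engine admin scope):
# import requests
# url, headers, body = build_serving_status_patch(
#     "my-app", "default", "v1", "STOPPED", token)
# requests.patch(url, headers=headers, data=body)
```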

Running an Amazon Product API based application via proxy to avoid throttling

I am making an application based on the Amazon Product API (or possibly MWS), and I will need to fetch huge amounts of data again and again. So, in order to avoid throttling limits, I would like to use proxy IPs.
Is this a valid thing to do?
1. Does the throttling limit apply per MWS account or per IP address?
2. There are several proxy hosts available, free and commercial. Is it OK to use them?
Thanks
Valid? Not really.
Amazon has some sharp people working there. I'm pretty sure this scheme wouldn't last very long; your API access would be revoked pretty quickly.