I created a Logic App with an HTTP GET endpoint that retrieves data from a weather API.
What I would like to achieve is to reduce the calls to the weather API by returning a cached result, but only for identical requests.
Example: in my company there are 300 devices that call the Logic App endpoint with the same latitude and longitude in the query. As far as I can tell, the Logic App currently calls the weather API for every incoming call. Instead, I'd like it to call the weather API only the first time and then return the cached result for all identical calls.
I'm afraid that, if I use cache-control settings in the request header, the Logic App would return the same cached result even when the query is different (for example, a different location).
Thanks.
As @Thomas said in the comments above, API Management is more expensive than other App Service offerings such as Logic Apps.
However, in my experience you can implement the cache logic more cheaply with a little code. For example, Azure Table Storage is a cheap place to store the cached weather data: you can build the table partition key and row key from the query parameters (datetime, latitude and longitude) and look up cached entries by them.
Here is a simple sketch of the cache logic (shown in Python with the azure-data-tables client as one possible implementation; get_weather_data_from_remote_api stands in for your actual weather API call):
from azure.data.tables import TableClient
from azure.core.exceptions import ResourceNotFoundError

def get_weather(table: TableClient, date: str, latitude: str, longitude: str) -> dict:
    partition_key = date                  # e.g. "2017-08-01"
    row_key = f"{latitude}-{longitude}"   # e.g. "41.90-12.49"
    try:
        # An identical request was already served: return the cached entity.
        return table.get_entity(partition_key, row_key)
    except ResourceNotFoundError:
        # Cache miss: call the weather API once and store the result for later calls.
        data = get_weather_data_from_remote_api(latitude, longitude)
        table.create_entity({"PartitionKey": partition_key, "RowKey": row_key, **data})
        return data
Also, you can use other storage services such as Azure SQL Database or Redis instead of Azure Table Storage. It's up to you.
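If you go with Redis, for example, the same look-up-then-populate pattern could look roughly like the sketch below (redis-py client; the key layout, the one-day expiry and get_weather_data_from_remote_api are illustrative assumptions):

import json
import redis

r = redis.Redis(host="<your-cache>.redis.cache.windows.net", port=6380, password="<access-key>", ssl=True)

def get_weather_cached(date: str, latitude: str, longitude: str) -> dict:
    cache_key = f"weather:{date}:{latitude}:{longitude}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)                    # identical request: serve from cache
    data = get_weather_data_from_remote_api(latitude, longitude)
    r.setex(cache_key, 86400, json.dumps(data))      # cache the result for one day
    return data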
Hope it helps.
We are using a microservice-based pattern for our project, where we have Users and their Orders. Users' personal information (name, email, mobile) is stored in a User table in a relational database, while users' order data is stored in an Orders collection in a NoSQL database. We want to develop an API to get a paginated list of all placed orders, with the order details plus the finer details of the associated user (user name, mobile, email) alongside each order. We store the userId in the Orders collection.
The problem is how to get the user details for each order in this list, since the two resources live in different databases. We also thought of storing the user name, email and mobile in the Orders collection itself, but if a user updates their profile, the Orders collection will hold stale user data.
What is the best approach to address this issue?
You can use the API gateway pattern: the UI calls the API gateway endpoint, the gateway calls both APIs/services to get the results and aggregates them, and then returns the aggregated response to the UI (the caller).
https://microservices.io/patterns/apigateway.html
Well, it mostly depends on your scalability needs in terms of data size and number of requests. You may go with the API gateway if you don't have too much data and don't get many requests to that service.
Otherwise, if you really need something scalable, you should implement your own idea of duplicating the needed user fields in the Orders collection, kept up to date with event-based communication (a minimal sketch follows the link below).
I already provided an answer for a similar situation; you can take a look:
https://stackoverflow.com/a/63957775/3719412
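To make the event-based idea concrete, here is a rough sketch (not taken from the linked answer; the event shape, collection name and MongoDB usage are assumptions): the Orders service keeps a small copy of the user fields it needs and updates that copy whenever the Users service publishes a profile-change event, so the data it serves never goes stale.

from pymongo import MongoClient

# Inside the Orders service: a denormalized snapshot of the user fields it needs.
orders_db = MongoClient()["orders"]

def handle_user_updated(event: dict) -> None:
    # Called whenever the Users service publishes a "user updated" event,
    # e.g. {"userId": "42", "name": "Jane", "email": "jane@x.com", "mobile": "+1555..."}.
    orders_db["user_snapshots"].update_one(
        {"_id": event["userId"]},
        {"$set": {"name": event["name"], "email": event["email"], "mobile": event["mobile"]}},
        upsert=True,
    )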
You have two services, Orders and Users. You request the Orders service to get all orders. It returns response data that contains the user IDs (each order contains the ID of its user). Then you make a request to the Users service to get the information about those users by ID. Finally, you aggregate the two results (if needed).
As others have mentioned, a good solution is to implement an API gateway here. As a client, you send a request to a single endpoint (the gateway), and the gateway implements the aggregation logic described above (a rough sketch is shown below).
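A minimal sketch of that aggregation inside the gateway might look like this (the service URLs, field names, batch-lookup endpoint and pagination parameters are assumptions for illustration):

import requests

ORDERS_API = "http://orders-service/api/orders"   # assumed internal service URLs
USERS_API = "http://users-service/api/users"

def get_orders_with_users(page: int, size: int) -> list[dict]:
    # 1. Get one page of orders from the Orders service.
    orders = requests.get(ORDERS_API, params={"page": page, "size": size}).json()
    # 2. Collect the distinct user IDs and fetch those users in one batch call.
    user_ids = {order["userId"] for order in orders}
    users = requests.get(USERS_API, params={"ids": ",".join(user_ids)}).json()
    users_by_id = {user["id"]: user for user in users}
    # 3. Join the two results before returning the aggregated page to the caller.
    for order in orders:
        user = users_by_id.get(order["userId"], {})
        order["user"] = {k: user.get(k) for k in ("name", "email", "mobile")}
    return orders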
I use the following GCP products together for a CRM system:
Cloud SQL
App Engine
BigQuery
Once a week an external application exports data from BigQuery in this way:
The external application makes a request to App Engine with a token.
App Engine retrieves the permissions for this token from Cloud SQL and does some additional computation to obtain a list of allowed IDs.
App Engine runs a BigQuery query filtered by these IDs, something like: SELECT * FROM table WHERE id IN (ids)
App Engine responds to the external application with the unmodified query result as JSON.
The problem is that the export doesn't happen very often, but the amount of data can be large and I don't want to load App Engine with this data. What other GCP products are useful in this case? Remember that I need to retrieve the permissions via App Engine and Cloud SQL.
It's unclear whether the JSON comes directly from the BigQuery query results or whether you do additional processing in the application to render/format it. I'm assuming direct results.
An option that comes to mind is to leverage Cloud Storage. You can use the signed URL feature to provide a time-limited link to your (potentially large) results without making them publicly accessible.
This, coupled with BigQuery's ability to export results to GCS (either via an export job or using the newer EXPORT DATA SQL statement), allows you to run a query and deliver the results directly to GCS.
With this, you could simply redirect the caller to the signed URL at the end of your current flow. There are additional complementary features here, such as GCS object lifecycle rules that age out and delete files automatically, so you don't need to worry about a slow accumulation of exported results.
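A rough sketch of that flow, with the bucket name, dataset/table and one-hour expiry as illustrative assumptions:

from datetime import timedelta
from google.cloud import bigquery, storage

def export_and_sign(allowed_ids: list[str]) -> str:
    bq = bigquery.Client()
    # Run the filtered query and write the results straight to GCS as gzipped JSON.
    export_sql = """
        EXPORT DATA OPTIONS(
          uri='gs://my-export-bucket/exports/result-*.json.gz',
          format='JSON', compression='GZIP', overwrite=true) AS
        SELECT * FROM `my_dataset.my_table` WHERE id IN UNNEST(@ids)
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ArrayQueryParameter("ids", "STRING", allowed_ids)])
    bq.query(export_sql, job_config=job_config).result()   # wait for the export to finish
    # Hand back a time-limited link instead of streaming the data through App Engine.
    # (Sign the first exported shard; list the prefix if the result spans several files.)
    blob = storage.Client().bucket("my-export-bucket").blob("exports/result-000000000000.json.gz")
    return blob.generate_signed_url(expiration=timedelta(hours=1), version="v4")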
Every time someone hits an API route, I want to store that information in a database, together with the request IP.
Afterwards I would like to find some association rules based on similar searches.
Should I store some information in cookies, or use a local database?
Example on a hotel booking site:
I want to store the fact that I got a lot of requests for cheap hotels in some specific area.
Thanks.
Definitely in a database. Cookies wouldn't make sense because:
You cannot rely on cookies for persistent data. They can expire, be cleared, etc.
Cookies can hold a very limited amount of data (4093 bytes usually)
Cookies are stored locally in each client's browser, whereas you want information across all of your clients.
Tracking user behavior is a very common web feature. You may want to use a web analytics service such as Google Analytics rather than implement your own.
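If you do decide to log the hits yourself, a minimal sketch could look like the following (Flask and SQLite are only examples; the table layout is an assumption):

import sqlite3
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)
db = sqlite3.connect("search_log.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS search_log (ip TEXT, path TEXT, query TEXT, ts TEXT)")

@app.before_request
def log_request():
    # Store one row per hit: client IP, route and query string, for later rule mining.
    db.execute("INSERT INTO search_log VALUES (?, ?, ?, ?)",
               (request.remote_addr, request.path, request.query_string.decode(),
                datetime.now(timezone.utc).isoformat()))
    db.commit()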
What is the best design approach, in terms of security, performance and maintenance, for a REST API that has many subscribers (companies)?
Which of these approaches should we use?
Build a general API and a sub-API for each subscriber (company); when a request comes in, we check it and forward it to the right sub-API using an API key, then pass the data back to the general API and on to the client.
Should we build a single API with a separate database per subscriber (company)? Each company has a huge number of records, which is why we considered separate databases to improve performance. When a request comes in, we verify it and switch the database connection string based on the requesting client.
Should we build one API and one big database that holds all subscribers' data?
Do you suggest any other approach to solve this problem? We use Web API, MS SQL Server and Azure.
In the past I've had one API, secured using OAuth/JWT, where the token carries a company ID. When a request comes in, we read the company ID from the JWT and perform a lookup in a master database; this database holds global information such as the connection string for each company. We then create a unit of work that has the company's connection string associated with it, and any database lookups use that.
This means that you can start with one master database and one node database; when the node database starts getting overloaded, you can bring up another one and either add new companies to it or move existing companies across to take the pressure off. Essentially you're just scaling out as the need arises.
We had no performance issues with this setup.
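A rough sketch of that per-request lookup (shown with PyJWT and SQLAlchemy purely for illustration; the claim name, master schema and caching strategy are assumptions):

import jwt
from sqlalchemy import create_engine, text

master = create_engine("mssql+pyodbc://<master-db-dsn>")   # holds per-company settings

def open_tenant_connection(bearer_token: str, jwt_secret: str):
    # 1. Read the company id from the validated JWT.
    claims = jwt.decode(bearer_token, jwt_secret, algorithms=["HS256"])
    company_id = claims["company_id"]
    # 2. Look up that company's connection string in the master database.
    with master.connect() as conn:
        row = conn.execute(
            text("SELECT connection_string FROM companies WHERE id = :id"),
            {"id": company_id}).one()
    # 3. All further queries for this request run against the company's own database
    #    (in practice you would cache one engine per company rather than recreate it).
    return create_engine(row.connection_string).connect()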
Depending on the transaction volume and the nature of the data, you can go for a single database or a separate database for each company.
Option 2 would be the best if you have a complex data model.
I don't see any advantage in going for option 1, because the general API will be called for each request anyway.
You can use client ID verification while issuing access tokens.
What I understood from your question is that you want a REST API for multiple consumers (companies). Logically, the employees of each company will consume your API; those employees may be admins, HR, etc. So what I'd suggest for such a scenario is to go with a single REST API for providing the services to your consumers, and for security use OpenID Connect on top of OAuth 2. This resolves both authentication and authorization for you.
I have developed a standard Google App Engine backend application for my Android client. There is search functionality in the app, and for one request I plan to return 20 results, but I search for more in advance (say 100) so that on the next hit I can just search within those records and return them. So I need a mechanism to save the remaining 80 records so that the same user can get them quickly.
I found out that we can enable sessions in appengine-web.xml, but all the session access examples are done in doPost() and doGet(), while my code is entirely Google Cloud Endpoints (similar to Spring).
Another thing is that I would like to persist the data both in the Datastore and in some cache (like Memcache).
My end goal is storing this data across search sessions. Is there any mechanism that will allow me to do this?
The usual approach here is to provide a code value in the response which the user can send in the next request to "continue" viewing the same results. This is called a "cursor".
For example, you might store the 80 records under some random key in your cache, and then send that random key to the user as part of the response. Then, when the user makes a new request that includes the key, you just look up the records and return them.
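A minimal sketch of that cursor idea (shown with the App Engine Python Memcache API purely to illustrate; the key prefix, page size and one-hour expiry are assumptions):

import uuid
from google.appengine.api import memcache

def first_page(all_results):
    # Return the first 20 results plus a cursor for the remaining 80.
    cursor = uuid.uuid4().hex
    memcache.set("search:" + cursor, all_results[20:], time=3600)   # keep for one hour
    return {"results": all_results[:20], "cursor": cursor}

def next_page(cursor, offset=0, size=20):
    # Look up the cached remainder using the cursor the client sent back.
    remaining = memcache.get("search:" + cursor) or []
    return {"results": remaining[offset:offset + size], "cursor": cursor}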
Cookie-based sessions don't usually work well with APIs; they introduce unnecessary statefulness.