How can I mock the results of the Gmail API? - google-app-engine

We are using the Gmail API and developing an application on top of it. Ideally I would like to have some golden emails to test the analytics engine against. That way I can develop the analytics engine without having to worry about fetching the emails, and hence without a network connection. What is the best way to achieve this? I noticed that App Engine (which we use) now lets you stub out Datastore/memcache etc. and run nosetests, but I don't know how to make that work across local appserver restarts.
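For context, the App Engine stubbing I am referring to looks roughly like this (a sketch using the legacy first-generation SDK's testbed module; I would like something equivalent for Gmail data):

import unittest
from google.appengine.ext import testbed

class AnalyticsTestCase(unittest.TestCase):
    def setUp(self):
        # Activate local in-memory stubs so tests need no running appserver.
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()
        self.testbed.init_memcache_stub()

    def tearDown(self):
        self.testbed.deactivate()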

The HttpMock class provided by googleapis/google-api-python-client looks like a good candidate for your use case:
from googleapiclient.discovery import build
from googleapiclient.http import HttpMock
mock = HttpMock('mock-email-list.json', {'status': '200'})
gmail = build('gmail', 'v1', http=mock)
response = gmail.users().messages().list(userId='me').execute()
print(response)
Where mock-email-list.json content is of the form
{
  "messages": [
    {
      "id": "abcd",
      "threadId": "abcd"
    },
    {
      "id": "efgh",
      "threadId": "abcd"
    }
  ],
  "resultSizeEstimate": 2
}
As a side note, after your test has run, you can also check which URL the mock was called with (by the Gmail client):
assert mock.uri == 'https://gmail.googleapis.com/gmail/v1/users/me/messages?alt=json'

Neither the Gmail API nor the client libraries provide special functionality for mocking email fetching. You'll need to build out the mocks on your own.
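A minimal sketch of that approach, assuming a hypothetical analytics package with a fetch_messages() function that normally calls the Gmail API: keep a golden JSON fixture in the repo and patch the fetch layer in tests, so the engine runs entirely offline and the fixture survives local appserver restarts because it lives on disk.

import json
from unittest import mock

import analytics  # hypothetical: the package that wraps Gmail fetching and analysis

def load_golden_messages(path='golden-emails.json'):
    # Golden fixture checked into the repo, shaped like users.messages.list output.
    with open(path) as f:
        return json.load(f)['messages']

def test_report_from_golden_emails():
    # Patch the (hypothetical) fetch function so no network connection is needed.
    with mock.patch('analytics.fetch_messages', return_value=load_golden_messages()):
        report = analytics.build_report(user_id='me')
    assert report is not None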

Related

How to avoid CORS error for a React - GraphQL application

I am currently trying to use Camunda Platform, and as part of this I am building a React application that calls a GraphQL API and performs some actions. So far I have used the API with Postman and it does what I want. The GraphQL mutation is the following:
mutation claimTask($taskId: String!, $assignee: String) {
  claimTask(taskId: $taskId, assignee: $assignee) {
    id
    name
    taskDefinitionId
    processName
    creationTime
    completionTime
    assignee
    variables {
      id
      name
      value
      previewValue
      isValueTruncated
    }
    taskState
    sortValues
    isFirst
    formKey
    processDefinitionId
    candidateGroups
  }
}
And the endpoint is
http://{my_ip}:8082/graphql
which is hosted on a personal VM server. What I am trying to do now is make the same request through the React app (Apollo Client). So far, I am getting a CORS policy error:
Access to fetch at 'http://{my_ip}:8082/graphql' from origin 'http://localhost:3000' has been blocked by CORS policy
I understand that I somehow have to configure which origins the server will accept. My question is: since I am using an existing API, should I do this from the Express server (Apollo Server) configuration? Every solution I have found so far talks about implementing the API from scratch, including defining the schemas.
I have concluded that I should use the Express server to create a kind of proxy so that the React app hits the API through it, but I cannot figure out how exactly this is implemented.
I know that this is a vague question, but any suggestion could be very useful.
Thank you!!
It is a best practice not to hit the GraphQL API directly from the browser, but to create your own facade, which exposes the functionality your front end needs, possibly in a more use-case-specific way. This means connectivity only needs to be allowed server-to-server between the back ends. It is more secure, as you don't need to open the API to the public, and it also solves the cross-domain challenge you have. Your facade will be exposed under your own domain.
Here is an example NestJS client, "Generating the Tasklist service":
https://docs.camunda.io/docs/apis-clients/tasklist-api/tasklist-api-tutorial/#generating-the-tasklist-service
On your Express backend you would do something similar.
(This example uses a Java back end with React, but I am guessing you want JS: https://github.com/camunda-community-hub/camunda-8-lowcode-ui-template/blob/main/src/main/java/org/example/camunda/process/solution/facade/TaskController.java .)

Testing UI with locust

I have been able to load test the backend APIs of my web application using locust in my CIT environment.
Since CIT is a protected environment, I had to manually log in with my user via the browser, copy the required request headers, and use them in the code to send successful requests.
My current code:
import time
from locust import HttpUser, task, between

class TestUser(HttpUser):
    wait_time = between(0.1, 0.2)

    @task
    def client_dashboard(self):
        print('load testing "Client Dashboard" screen')
        client_req_headers = {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJMT3Etc0JueEtvYkZHbWNzaU9FT0pzV3VHSjB5MEJreDlISU5xUjZQbUEwIn0.eyJleHAiOjE2MDiJ9.xNIZqrZjvVkauLUVHv1dSb9vqOHtb1-kfBG94hZqGqhXWaK06IfYuYsFJlpmSa4mcauW',
            'realm': 'Client1'
        }
        self.client.get("/order-service/orders/summary/12", headers=client_req_headers)
        self.client.get("/order-service/clients/12/", headers=client_req_headers)

    def on_start(self):
        print('In on_start method')

    def on_stop(self):
        print('In on stop method')
I am new to Locust, and so far I have identified that with Locust I can send many concurrent requests by defining tasks in my User classes; whatever behavior I define in my tasks will be load tested.
I wanted to know if I can also check the UI flow using Locust, i.e. open the login page via code, enter the credentials and submit, then get directed to the dashboard page on successful login and navigate the different links. (Our frontend is an SPA (single-page application) built using Quasar.)
Does locust provide this functionality?
Would I have to include another python library for this case like selenium etc. ?
What would be the recommended way to achieve such behavior?
Locust doesn't have built-in support for anything like that. It's primarily designed for just API and endpoint testing. But it's flexible enough to run any code in an orchestrated and distributed manner so it is possible. There are community modules that should be able to get you what you want. I haven't tried it but you could look at this:
https://github.com/nickboucart/realbrowserlocusts
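If you would rather wire something up yourself, here is a rough sketch of driving a real browser from a Locust user with Selenium. This assumes Locust 2.x, the selenium package and a chromedriver on PATH; the URL and element selectors are placeholders for your Quasar login form:

import time
from locust import User, task, between
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

class UIUser(User):
    wait_time = between(1, 3)

    def on_start(self):
        options = Options()
        options.add_argument('--headless')  # run without a visible browser window
        self.driver = webdriver.Chrome(options=options)

    @task
    def login_and_open_dashboard(self):
        start = time.time()
        self.driver.get('http://localhost:3000/login')  # placeholder URL
        self.driver.find_element(By.NAME, 'username').send_keys('testuser')      # placeholder selector
        self.driver.find_element(By.NAME, 'password').send_keys('testpassword')  # placeholder selector
        self.driver.find_element(By.CSS_SELECTOR, 'button[type=submit]').click()
        # Report the whole flow as one "request" so it shows up in Locust's stats (Locust 2.x event API).
        self.environment.events.request.fire(
            request_type='UI', name='login_flow',
            response_time=(time.time() - start) * 1000,
            response_length=0, exception=None)

    def on_stop(self):
        self.driver.quit()

Each simulated user then drives its own browser instance, so keep the user count low compared to a pure HTTP test.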

How can I call a Google Cloud Function from Google App Engine?

I have an App Engine project.
I also have a Google Cloud Function.
And I want to call that Google Cloud Function from the App Engine project. I just can't seem to get that to work.
Yes, if I make the function full public (i.e. set the Cloud Function to 'allow all traffic' and create a rule for 'allUsers' to allow calling the function) it works. But if I limit either of the two settings, it stops working immediately and I get 403's.
The App and Function are in the same project, so I would at least assume that setting the Function to 'allow internal traffic only' should work just fine, provided that I have a rule for 'allUsers' to allow calling the function.
How does that work? How does one generally call a (non-public) Google Cloud Function from Google App Engine?
You need an auth header on the request to the function URL. It should look like:
headers = {
    ....
    'Authorization': 'Bearer some-long-hash-token'
}
Here is how to get the token:
import requests

token_response = requests.get(
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=' +
    'https://[your zone]-[your app name].cloudfunctions.net/[your function name]',
    headers={'Metadata-Flavor': 'Google'})
return token_response.content.decode("utf-8")
'Allow internal traffic only' does not work as expected. My App Engine app is in the same project as the Functions, and it does not work. I had to turn on 'Allow all traffic', and use the header method.
Example:
import requests

def get_access_token():
    token_response = requests.get(
        'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=' +
        'https://us-central1-my_app.cloudfunctions.net/my_function',
        headers={'Metadata-Flavor': 'Google'})
    return token_response.content.decode("utf-8")

def test():
    url_string = "https://us-central1-my_app.cloudfunctions.net/my_function?message=it%20worked"
    access_token = get_access_token()
    print(
        requests.get(url_string, headers={'Authorization': f"Bearer {access_token}"})
    )
As mentioned in the docs, the Allow internal traffic only setting means the following:
Only requests from VPC networks in the same project or VPC Service Controls perimeter are allowed. All other requests are rejected.
Please note that since App Engine standard is a serverless product, it is not part of the VPC, so requests made from it are not considered "internal" calls; the calls are actually made from the public IPs of the instances, and for this reason you get an HTTP 403 error message.
Also, using a Serverless VPC Access connector won't work, since that is more of a bridge to reach resources inside the VPC (like VMs or Memorystore instances); it won't reach a Cloud Function, because the function is also a serverless product and does not have an IP in the VPC.
I think there are three options:
Using App Engine Flex:
Since App Engine Flex uses VM instances, these instances will be part of the VPC and you'll reach the Function even when setting the "Allow internal traffic only" option.
Use a VM as a proxy:
You can create a Serverless VPC Access connector and assign it to the app in App Engine. Then you can create a VM and reach the function using the VM as a proxy. This is not the best option because of the cost, but in the end it is an option.
The last option assumes that the function can keep the Allow all traffic setting:
You can set some security on the Cloud Function to only allow a particular Service Account and you can use this sample code to authenticate.
EDITED:
A good sample of the code for this option was shared by @GAEfan in the other answer.
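For reference, roughly the same thing can be done with the google-auth Python library instead of calling the metadata server by hand. This is only a sketch, assuming the google-auth and requests packages are available and using the function URL as the token audience:

import requests
import google.auth.transport.requests
from google.oauth2 import id_token

FUNCTION_URL = 'https://us-central1-my_app.cloudfunctions.net/my_function'  # placeholder

def call_function():
    # fetch_id_token uses the environment's service account (for example the
    # App Engine default service account via the metadata server) to mint an
    # ID token for the given audience.
    auth_request = google.auth.transport.requests.Request()
    token = id_token.fetch_id_token(auth_request, FUNCTION_URL)
    return requests.get(FUNCTION_URL, headers={'Authorization': f'Bearer {token}'})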
@GAEfan is correct.
As an addition: I used the official Google Auth library to give me the necessary headers.
const {GoogleAuth} = require('google-auth-library');

// Instead of specifying the type of client you'd like to use (JWT, OAuth2, etc)
// this library will automatically choose the right client based on the environment.
const googleCloudFunctionURL = 'https://europe-west1-project.cloudfunctions.net/function';

(async function() {
  const auth = new GoogleAuth();
  let googleCloudFunctionClient = await auth.getIdTokenClient(googleCloudFunctionURL);
  console.log(await googleCloudFunctionClient.getRequestHeaders(googleCloudFunctionURL));
})();

Invoke Lambda function from Amplify-generated React App without using API Gateway

I used Amplify to generate a static website and the underlying React app. Initially I also generated an API endpoint but, because my lambda function may run over the API Gateway timeout limit (29 seconds), I need to invoke the lambda function directly from the generated React App, instead of going through API Gateway.
The code looks as follows, for the React page to authenticate using Cognito:
import Auth from '@aws-amplify/auth';
import { withAuthenticator } from 'aws-amplify-react';
import awsconfig from './aws-exports';

Auth.configure(awsconfig);
The above lines wrap the App (root) object and work as advertised. But since I do not want to use the API Gateway, how do I invoke the AWS Lambda function directly from React App?
The answers I could find talk about importing AWS etc., which seems to conflict with what we are trying to do here. I need to use the authenticated connection (which already works using the above code) when invoking Lambda, so I cannot use the generic invocation given in this example.
The Invoke API docs do not provide any examples either.
Any advice is appreciated.
Note: if you do not need a response from your long-running Lambda, then consider API Gateway's asynchronous invocation.
Amplify calls this approach "working with service objects".
To do this you'll have to ensure that the role Cognito gives your authenticated users includes permission for lambda:InvokeFunction, as well as any additional permissions needed within the function. I'll assume you can do that for now; if not, see the Role-Based Access Control documentation or ask another question.
To access these roles within Amplify you need to use the Auth.currentCredentials function, which returns a promise with a credentials object, which can then be used on an aws-sdk client.
For example:
import Auth from '@aws-amplify/auth';
import Lambda from 'aws-sdk/clients/lambda'; // npm install aws-sdk

Auth.currentCredentials()
  .then(credentials => {
    const lambda = new Lambda({
      credentials: Auth.essentialCredentials(credentials)
    });
    return lambda.invoke({
      FunctionName: 'my-function',
      Payload: JSON.stringify({ hello: 'world' }),
    });
  });
You can see the full documentation for invoking lambdas on the AWS-SDK javascript documentation.
However, you should be aware that the event payload delivered via API Gateway is constructed by AWS and includes much more information than just the body the endpoint was called with; when you invoke directly, all the function receives is the payload you send, so you'll have to build that payload object accordingly.

Getting No response from google drive javascript api

I'm using the Google Drive JavaScript API (v2) in my ReactJS project.
And I'm using this function to get files from google drive.
this.getProjectObjects = function(query, callback)
{
    var request = gapi.client.drive.files.list({
        corpus: 'DEFAULT',
        q: query,
        fields: 'items(id,description,title,properties)'
    });
    request.then(function(resp) {
        callback(resp.result.items);
    }, function(err) {
        console.log(err);
    });
};
After authenticating the user, I can get the files using this function.
But when I navigate to another React component, it no longer works.
There is no response or error; the API call just hangs.
The silly thing is that when I refresh the page, it works again. Then, as soon as I navigate to another component, it stops working.
I'm using the same query; nothing changes while navigating.
Anyone have any ideas what the issue could be?
Thanks
You may want to check Chrome DevTools to see if you are encountering any problems when switching pages. Also, from the documentation - Getting Started:
There are several ways to use the JavaScript client library to make API requests, but they all follow the same basic pattern:
The application loads the JavaScript client library.
The application initializes the library with API key, OAuth client ID, and API Discovery Document(s).
The application sends a request and processes the response.
The following sections show 3 common ways of using the JavaScript client library.
Option 1: Load the API discovery document, then assemble the request.
Option 2: Use gapi.client.request
Option 3: Use CORS
See the sample for code implementation on effectively loading and making API calls.
Hope this helps.
