Can Google Cloud Endpoints Be Hit Using Task Queues? - google-app-engine

I have an App Engine task queue that tries to call a Cloud Endpoint, but when the task fires off it gets a 404. I verified the endpoint is configured for POST:
@ApiMethod(name = "sendemail", path = "sendemail", httpMethod = HttpMethod.POST)
and I am queueing like this:
TaskOptions lOptions = TaskOptions.Builder.withUrl("/_ah/api/email/v1/sendemail");
I can hit the endpoint using the API Explorer, so what am I missing? Thanks!

It's probably defaulting to the GET method. Try adding method POST, or make the endpoint's method GET:
TaskOptions lOptions = TaskOptions.Builder.withUrl("/_ah/api/email/v1/sendemail")
        .method(TaskOptions.Method.POST);

Related

Google Cloud Tasks cannot authenticate to Cloud Run

I am trying to invoke a Cloud Run service using Cloud Tasks as described in the docs here.
I have a running Cloud Run service. If I make the service publicly accessible, it behaves as expected.
I have created a Cloud Tasks queue and I schedule the task with a local script that runs under my own account. The script looks like this:
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

project = 'my-project'
queue = 'my-queue'
location = 'europe-west1'
url = 'https://url_to_my_service'

parent = client.queue_path(project, location, queue)

task = {
    'http_request': {
        'http_method': 'GET',
        'url': url,
        'oidc_token': {
            'service_account_email': 'my-service-account@my-project.iam.gserviceaccount.com'
        }
    }
}

response = client.create_task(parent, task)
print('Created task {}'.format(response.name))
I see the task appear in the queue, but it fails and retries immediately. The reason (found by checking the logs) is that the Cloud Run service returns a 401 response.
My own user has the roles "Service Account Token Creator" and "Service Account User". It doesn't have the "Cloud Tasks Enqueuer" explicitly, but since I am able to create the task in the queue, I guess I have inherited the required permissions.
The service account "my-service-account@my-project.iam.gserviceaccount.com" (which I use in the task to get the OIDC token) has - amongst others - the following roles:
Cloud Tasks Enqueuer (Although I don't think it needs this one as I'm creating the task with my own account)
Cloud Tasks Task Runner
Cloud Tasks Viewer
Service Account Token Creator (I'm not sure whether this should be added to my own account - the one who schedules the task - or to the service account that should perform the call to Cloud Run)
Service Account User (same here)
Cloud Run Invoker
So I did a dirty trick: I created a key file for the service account, downloaded it locally, and impersonated the account locally by adding it to my gcloud config with the key file. Next, I ran
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://url_to_my_service
That works! (By the way, it also works when I switch back to my own account)
Final tests: if I remove the oidc_token from the task when creating the task, I get a 403 response from Cloud Run! Not a 401...
If I remove the "Cloud Run Invoker" role from the service account and try again locally with curl, I also get a 403 instead of a 401.
If I finally make the Cloud Run service publicly accessible, everything works.
So, it seems that the Cloud Task fails to generate a token for the service account to authenticate properly at the Cloud Run service.
What am I missing?
I had the same issue; here was my fix:
Diagnosis: Generating OIDC tokens currently does not support custom domains in the audience parameter. I was using a custom domain for my Cloud Run service (https://my-service.my-domain.com) instead of the Cloud Run generated URL (found in the Cloud Run service dashboard), which looks like this: https://XXXXXX.run.app
Masking behavior: In the task being enqueued to Cloud Tasks, if the audience field for the oidc_token is not explicitly set, then the task's target URL is used as the audience in the request for the OIDC token.
In my case this meant that when enqueueing a task to be sent to the target https://my-service.my-domain.com/resource, the audience for generating the OIDC token was set to my custom domain https://my-service.my-domain.com/resource. Since custom domains are not supported when generating OIDC tokens, I was receiving 401 Not Authorized responses from the target service.
My fix: Explicitly populate the audience with the Cloud Run generated URL, so that a valid token is issued. In my client I was able to globally set the audience for all tasks targeting a given service with the base URL: 'audience' : 'https://XXXXXX.run.app'. This generated a valid token. I did not need to change the URL of the target resource itself; it stayed the same: 'url' : 'https://my-service.my-domain.com/resource'
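To illustrate, here is the same fix sketched with the Java Cloud Tasks client (the question uses the Python client, where the equivalent is an 'audience' key next to 'service_account_email' inside the oidc_token dict; all URLs below are placeholders):
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.OidcToken;
import com.google.cloud.tasks.v2.Task;

// The audience must be the Cloud Run generated URL, not the custom domain...
OidcToken oidcToken = OidcToken.newBuilder()
        .setServiceAccountEmail("my-service-account@my-project.iam.gserviceaccount.com")
        .setAudience("https://XXXXXX.run.app")
        .build();

// ...while the target URL of the task can keep using the custom domain.
Task task = Task.newBuilder()
        .setHttpRequest(HttpRequest.newBuilder()
                .setHttpMethod(HttpMethod.GET)
                .setUrl("https://my-service.my-domain.com/resource")
                .setOidcToken(oidcToken)
                .build())
        .build();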
More Reading:
I've run into this problem before when setting up service-to-service authentication: Google Cloud Run Authentication Service-to-Service
1. I created a private Cloud Run service using this code:
import os

from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/index', methods=['GET', 'POST'])
def hello_world():
    target = os.environ.get('TARGET', 'World')
    print(target)
    return str(request.data)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
2. I created a service account with --role=roles/run.invoker that I will associate with the Cloud Task:
gcloud iam service-accounts create SERVICE-ACCOUNT_NAME \
    --display-name "DISPLAYED-SERVICE-ACCOUNT_NAME"
gcloud iam service-accounts list
gcloud run services add-iam-policy-binding SERVICE \
    --member=serviceAccount:SERVICE-ACCOUNT_NAME@PROJECT-ID.iam.gserviceaccount.com \
    --role=roles/run.invoker
3. I created a queue:
gcloud tasks queues create my-queue
4. I created a test.py:
from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2
import datetime

# Create a client.
client = tasks_v2.CloudTasksClient()

# TODO(developer): Uncomment these lines and replace with your values.
project = 'your-project'
queue = 'your-queue'
location = 'europe-west2'  # App Engine location
url = 'https://helloworld/index'
payload = 'Hello from the Cloud Task'

# Construct the fully qualified queue name.
parent = client.queue_path(project, location, queue)

# Construct the request body.
task = {
    'http_request': {  # Specify the type of request.
        'http_method': 'POST',
        'url': url,  # The full url path that the task will be sent to.
        'oidc_token': {
            'service_account_email': 'your-service-account'
        },
        'headers': {
            'Content-Type': 'application/json',
        }
    }
}

# Convert "seconds from now" into an rfc3339 datetime string.
d = datetime.datetime.utcnow() + datetime.timedelta(seconds=60)

# Create Timestamp protobuf.
timestamp = timestamp_pb2.Timestamp()
timestamp.FromDatetime(d)

# Add the timestamp to the task.
task['schedule_time'] = timestamp
task['name'] = 'projects/your-project/locations/app-engine-location/queues/your-queue/tasks/your-task'

# Add the payload to the request.
converted_payload = payload.encode()
task['http_request']['body'] = converted_payload

# Use the client to build and send the task.
response = client.create_task(parent, task)
print('Created task {}'.format(response.name))
# return response
5. I ran the code in Google Cloud Shell with my user account, which has the Owner role.
6. The response received has the form:
Created task projects/your-project/locations/app-engine-location/queues/your-queue/tasks/your-task
7. I checked the logs: success.
The next day I am no longer able to reproduce this issue. I can reproduce the 403 responses by removing the Cloud Run Invoker role, but I no longer get 401 responses with exactly the same code as yesterday.
I guess this was a temporary issue on Google's side?
Also, I noticed that it takes some time before updated policies are actually in place (1 to 2 minutes).
For those like me struggling through the documentation and Stack Overflow after continuous UNAUTHORIZED responses on Cloud Tasks HTTP requests:
As was written in this thread, you had better provide an audience for the oidcToken you send to Cloud Tasks, and ensure it exactly matches your resource.
For instance, if you have a Cloud Function named my-awesome-cloud-function and your task request URL is https://REGION-PROJECT-ID.cloudfunctions.net/my-awesome-cloud-function/api/v1/hello, you need to ensure that you set the audience to the function URL itself:
{
  "serviceAccountEmail": "SERVICE-ACCOUNT_NAME@PROJECT-ID.iam.gserviceaccount.com",
  "audience": "https://REGION-PROJECT-ID.cloudfunctions.net/my-awesome-cloud-function"
}
Otherwise it seems the full URL is used as the audience, which leads to an error.

How to use camel consul component for agent API?

As per the Camel documentation for Consul (camel.apache.org/consul-component.html), the supported HTTP APIs are kv, event and agent. There are examples for kv (key/value store) which work fine, but there is no such example for the agent API. I went through the documentation of Consul [www.consul.io/docs/agent/http/agent.html] and the corresponding Java client [github.com/OrbitzWorldwide/consul-client] as well and tried to figure out how the consul:agent component should work, but I found nothing simple there.
main.getCamelTemplate().sendBodyAndHeader(
        "consul:agent?url=http://localhost:8500/v1/agent/service/register",
        payload,
        ConsulConstants.CONSUL_ACTION, ConsulAgentActions.AGENT); // also tried with ConsulAgentActions.SERVICES, but no luck
I also checked the test cases mentioned at https://github.com/apache/camel/tree/master/components/camel-consul/src/test/java/org/apache/camel/component/consul but was unable to find anything related to the agent API.
So my question is: how do I use the consul:agent component?
UPDATE: I tried the code below and was able to get the services.
Object res = main.getCamelTemplate().requestBodyAndHeader("consul:agent", "", ConsulConstants.CONSUL_ACTION, ConsulAgentActions.SERVICES);
It seems that the consul component only works for the GET operations of the HTTP agent API. But in that case, how do I register a new service (like /v1/agent/service/register: registers a new local service) with the consul component?
This code works for me:
ImmutableService service = ImmutableService.builder()
        .id("service-1")
        .service("service")
        .addTags("camel", "service-call")
        .address("127.0.0.1")
        .port(9011)
        .build();

ImmutableCatalogRegistration registration = ImmutableCatalogRegistration.builder()
        .datacenter("dc1")
        .node("node1")
        .address("127.0.0.1")
        .service(service)
        .build();

ProducerTemplate template = main.getCamelTemplate();
Object res = template.requestBodyAndHeader("consul:catalog", registration,
        ConsulConstants.CONSUL_ACTION, ConsulCatalogActions.REGISTER);
But this looks somewhat inelegant (like a workaround), and I think there are other solutions.
One can use
.to("consul:agent?action=SERVICES")
to retrieve the registered services as a Map<String, Service>, with the service id as the map key.
And
.to("consul:catalog?action=REGISTER")
to write registrations, expecting an ImmutableCatalogRegistration as the body.
Note that you can employ a CamelServiceRegistrationRoutePolicy to register Camel routes as services automatically.
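Putting the two endpoints together, a minimal route sketch (the direct: endpoint names are illustrative; the registration body is the ImmutableCatalogRegistration shown above):
import org.apache.camel.builder.RouteBuilder;

public class ConsulRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Expects an ImmutableCatalogRegistration as the message body.
        from("direct:register")
                .to("consul:catalog?action=REGISTER");

        // Returns the registered services as a Map<String, Service>.
        from("direct:services")
                .to("consul:agent?action=SERVICES");
    }
}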

Is it possible to delete data from Firebase by invoking servlets using Google App Engine cron jobs?

I have developed a simple chat application using AngularJS and Firebase, and I have hosted it on the Google App Engine platform. Now, I want to delete the Firebase database containing chat messages on a schedule (every night).
Is there any way to achieve this using a servlet, so that it can be invoked as a cron job? Thanks.
PS: The Firebase documentation is given only for Android and I am new to this, so I am specifically looking for servlet code.
It's not entirely clear from your question or description what you need assistance with. This answer assumes you'd like a daily cron job to send a request to an App Engine handler which then deletes data from a Firebase database.
From the documentation, Firebase has a REST API. Therefore, data can be added, removed and updated using standard HTTP requests (GET, PUT, POST, PATCH, DELETE). Any application capable of issuing HTTP requests can make changes to the data when properly authenticated and authorized.
Given your request to use a cron job and Java servlet on App Engine, I'd advise the following:
Define a cron job that issues requests to a specific URL in your cron.xml:
<cron>
  <url>/firebase_cleanup</url>
  <description>Delete all chat messages of the day</description>
  <schedule>every day 23:00</schedule>
</cron>
Deploy a servlet that will handle such requests and issue the appropriate HTTP request to Firebase. In your case, it should issue a DELETE request. This can be done using HttpURLConnection.
URL url = new URL("firebase-url-formatted-for-delete-request");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("DELETE");
int responseCode = connection.getResponseCode();
// Act upon responseCode accordingly
Note that the above servlet code does not include the authentication you'll need to issue the DELETE request. That will require your Firebase secret or a generated token. As I do not have a Firebase account, I cannot test the above, so it will likely require some modifications.
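As a starting point, a sketch of the authenticated variant, assuming the legacy database-secret scheme in which the Firebase REST API takes the credential as an auth query parameter (the database URL and the /chats path are hypothetical placeholders):
// Hypothetical database URL and path; the trailing ".json" is required by
// the Firebase REST API, and "auth" carries the secret or generated token.
String firebaseSecret = "YOUR-FIREBASE-SECRET"; // placeholder; keep it out of source control
URL url = new URL("https://your-app.firebaseio.com/chats.json?auth=" + firebaseSecret);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("DELETE");
int responseCode = connection.getResponseCode(); // 200 indicates the data was deleted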

Restlet CorsFilter with ChallengeAuthenticator

I'm building a RESTful API with the Restlet framework and need it to work with cross domain calls (CORS) as well as basic authentication.
At the moment I'm using the CorsFilter, which does the job of making my web service support CORS requests. But when I try to use it together with a simple ChallengeAuthenticator with HTTP Basic Authentication, it won't work as I want it to (from a web site).
When I access the web service directly via Chrome it works as intended, but when I try to access it from a small web application written in AngularJS (jQuery/JavaScript), it does not.
Basically what happens is that when an OPTIONS request is sent to my web service, it does not respond with the headers 'Access-Control-Allow-Origin', 'Access-Control-Allow-Credentials', etc. as it should. Instead it sends a response with HTTP status code 401, saying that the authentication failed. Is this because the authenticator is overriding the CorsFilter somehow?
My createInboundRoot method can be seen below.
@Override
public Restlet createInboundRoot() {
    ChallengeAuthenticator authenticator = createAuthenticator();
    RoleAuthorizer authorizer = createRoleAuthorizer();

    Router router = new Router(getContext());
    router.attach("/items", ItemsServerResource.class);
    router.attach("/items/", ItemsServerResource.class);

    Router baseRouter = new Router(getContext());
    authorizer.setNext(ItemServerResource.class);
    authenticator.setNext(baseRouter);
    baseRouter.attach("/items/{itemID}", authorizer);
    baseRouter.attach("", router);
    // router.attach("/items/{itemID}", ItemServerResource.class);

    CorsFilter corsFilter = new CorsFilter(getContext());
    corsFilter.setNext(authenticator);
    corsFilter.setAllowedOrigins(new HashSet(Arrays.asList("*")));
    corsFilter.setAllowedCredentials(true);
    return corsFilter;
}
(The authorizer and authenticator code is taken from the "official" restlet guide for authorization and authentication)
I've tried a lot of changes to my code, but none have given me any luck. I did notice that when setting the "optional" argument of the ChallengeAuthenticator to true (which "indicates if the authentication success is optional"), the CorsFilter does its job, but obviously the ChallengeAuthenticator then does not bother authenticating the client and lets anything use the protected resources.
Has anyone had a similar problem? Or have you solved this (CORS + Authentication in Restlet) in any other way?
Thanks in advance!
I think that it's a bug in the Restlet CORS filter. As a matter of fact, the filter uses the afterHandle method to set the CORS headers. See the source code: https://github.com/restlet/restlet-framework-java/blob/4e8f0414b4f5ea733fcc30dd19944fd1e104bf74/modules/org.restlet/src/org/restlet/engine/application/CorsFilter.java#L119.
This means that the CORS processing is done after the whole processing chain (authentication, ...) has executed. So if your authentication fails, you get a status code 401. That is exactly what happens here, since CORS preflight requests don't send authentication hints.
For more details about using CORS with Restlet, you could have a look at this link: https://templth.wordpress.com/2014/11/12/understanding-and-using-cors/. It can provide you with a workaround until this bug is fixed in Restlet itself.
I opened an issue in Github for your problem: https://github.com/restlet/restlet-framework-java/issues/1019.
Hope it helps,
Thierry
The CorsService (in 2.3.1, coming tomorrow) also contains a skippingResourceForCorsOptions property, which answers the OPTIONS request directly without transmitting the request to the underlying filters and server resources.
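As an illustration, wiring that up in the application might look like the sketch below; the setter name is inferred from the property name, so double-check it against the 2.3.1 API:
import java.util.Arrays;
import java.util.HashSet;
import org.restlet.service.CorsService;

CorsService corsService = new CorsService();
corsService.setAllowedOrigins(new HashSet<String>(Arrays.asList("*")));
corsService.setAllowedCredentials(true);
// Answer preflight OPTIONS requests directly, before the
// ChallengeAuthenticator ever sees them.
corsService.setSkippingResourceForCorsOptions(true);
getServices().add(corsService); // inside your Application subclass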

Request to App Engine Backend Timing Out

I created an App Engine backend to serve HTTP requests for a long-running process. The backend process works as expected when the query references an input of small size, but times out when the input size is large. The query parameter is the URL of an App Engine Blobstore blob, which is the input data for the backend process. I thought the whole point of using App Engine backends was to avoid the timeout restrictions that App Engine frontends possess. How can I avoid getting a timeout?
I call the backend like this, setting the connection timeout length to infinite:
HttpURLConnection connection = (HttpURLConnection)(new URL(url + "?" + query).openConnection());
connection.setRequestProperty("Accept-Charset", charset);
connection.setRequestMethod("GET");
connection.setConnectTimeout(0);
connection.connect();
InputStream in = connection.getInputStream();
String json = "";
int ch;
while ((ch = in.read()) != -1)
    json = json + String.valueOf((char) ch);
System.out.println("Response Message is: " + json);
connection.disconnect();
The stack trace (edited for anonymity) is:
Uncaught exception from servlet
java.net.SocketTimeoutException: Timeout while fetching URL: http://my-backend.myapp.appspot.com/somemethod?someparameter=AMIfv97IBE43y1pFaLNSKO1hAH1U4cpB45dc756FzVAyifPner8_TCJbg1pPMwMulsGnObJTgiC2I6G6CdWpSrH8TrRBO9x8BG_No26AM9LmGSkcbQZiilhC_-KGLx17mrS6QOLsUm3JFY88h8TnFNer5N6-cl0iKA
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:142)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:43)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:417)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:296)
at org.someorg.server.HUDXML3UploadService.doPost(SomeService.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
As you can see, I'm not getting the DeadlineExceededException, so I think something other than Google's limits is causing the timeout, which also makes this a different issue from similar Stack Overflow posts on the topic.
I humbly thank you for any insights.
Update 2/19/2012: I see what's going on, I think. I should be able to have the client wait indefinitely, using a GWT async handler [or any other type of client-side async framework] for any client request to complete, so I don't think that is the problem. The problem is that the file upload calls the _ah/upload App Engine system endpoint, which then (once the blob is stored in the Blobstore) calls the upload service's doPost backend to process the blob. The client request to _ah/upload is what is timing out, because the backend doesn't return in a timely fashion. To make this timeout problem go away, I attempted to make the _ah/upload service itself a public backend accessible via http://backend_name.project_name.appspot.com/_ah/upload, but I don't think Google allows a system service (like _ah/upload) to be run as a backend. Now my next approach is to have _ah/upload immediately return after triggering the backend processing, and then call another service to get the original response I wanted, after processing is finished.
The solution was to start the backend process as a task and add it to the task queue, then return a response to the client before the backend task (which can take a long time) is processed. If I could have assigned _ah/upload to a backend, that would also have solved the problem, since the client's async handler could wait forever for the backend to finish, but I do not think Google permits assigning system servlets to backends. The client will now have to poll the persisted backend process response data, as Paul C mentioned, since tasks cannot respond like a normal servlet.
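In code, that pattern looks roughly like this (a sketch; the /process-blob handler path and the blobKey variable are placeholders for whatever the upload callback receives):
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Inside the _ah/upload callback servlet: enqueue the long-running work
// and return to the client immediately, instead of processing the blob inline.
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder
        .withUrl("/process-blob")                 // placeholder task handler
        .param("blobKey", blobKey.getKeyString()) // placeholder Blobstore key
        .method(TaskOptions.Method.POST));
// The client then polls for the persisted result of the task.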
