My goal is to limit access to my App Engine Service to my home office IP. I have configured the App Engine Firewall with allow rules for both my IPv4 and IPv6 addresses, and set the default rule to deny.
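For reference, the equivalent gcloud commands look roughly like this (a sketch; the priorities and source ranges below are placeholders, not my real addresses):
# Allow the home office IPv4 and IPv6 addresses (placeholder ranges shown),
# then flip the default rule to deny everything else.
gcloud app firewall-rules create 100 --action=ALLOW --source-range="203.0.113.4/32" --description="Home office IPv4"
gcloud app firewall-rules create 110 --action=ALLOW --source-range="2001:db8:abcd::/64" --description="Home office IPv6"
gcloud app firewall-rules update default --action=DENY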
This works when browsing my application using the unique appspot.com address assigned to my app. But attempting to access my application through the custom domain I have configured for App Engine results in a 403.
I have further verified that the rules work as intended on the appspot.com domain: anything that isn't in my allow list gets the 403, as expected.
This tells me that my rules are "working," but I can't find any documentation on why they would not apply to requests made through the configured custom domain.
Note: when the default rule is set to allow, my application does work using the custom domain, so I am certain the custom domain configuration itself is sound.
Are custom domains simply beyond the scope of App Engine's Firewall? I was hoping to avoid digging into the VPC configuration for now.
Firewall Rules
Custom Domain Config
Edit: Tailing the log shows my IPv6 address as the requesting IP:
{
"entries": [
{
"insertId": "dlpqxpfa090t8",
"jsonPayload": {
"appLatencySeconds": "0.011",
"trace": "b7f63eb3d2fb4c52480253c224821a23",
"latencySeconds": "0.011"
},
"httpRequest": {
"requestMethod": "GET",
"requestUrl": "/users/kind/add",
"status": 200,
"responseSize": "4810",
"userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36",
"remoteIp": "2600:****:****:****:****:****:****:9936",
"referer": "https://f******s.e******t.com/users",
"latency": "0.011s",
"protocol": "HTTP/1.1"
},
"resource": {
"type": "gae_app",
"labels": {
"zone": "",
"project_id": "f*******s",
"version_id": "20220801t212517",
"module_id": "default"
}
},
"timestamp": "2022-08-09T22:11:33.869Z",
"labels": {
"appengine.googleapis.com/trace_id": "b*****************a23",
"appengine.googleapis.com/instance_name": "aef-default-2*********7-770v",
"compute.googleapis.com/resource_name": "0**********3",
"compute.googleapis.com/resource_id": "21*********29",
"compute.googleapis.com/zone": "********"
},
"logName": "projects/f********s/logs/appengine.googleapis.com%2Fnginx.request",
"trace": "projects/f*********s/traces/b7f63eb3d2fb4c52480253c224821a23",
"receiveTimestamp": "2022-08-09T22:11:38.104875464Z"
}
]
}
Edit 2: As suggested in the comments, I tried hitting a URL w/ curl. Below is the result:
Microsoft Windows [Version 10.0.22000.856]
(c) Microsoft Corporation. All rights reserved.
C:\Users\shawn>curl
curl: try 'curl --help' for more information
C:\Users\shawn>curl https://f*****s.e*******t.com/index
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Access is forbidden.</h2>
<h2></h2>
<script defer src="https://static.cloudflareinsights.com/beacon.min.js/v652eace1692a40cfa3763df669d7439c1639079717194" integrity="sha512-Gi7xpJR8tSkrpF7aordPZQlW2DLtzUlZcumS8dMQjwDHEnw9I7ZLyiOj/6tZStRBGtGgN6ceN6cMH8z7etPGlw==" data-cf-beacon='{"rayId":"738c818088a17d62","version":"2022.6.0","r":1,"token":"c070c2d4c5ad48d18815371af21d9e80","si":100}' crossorigin="anonymous"></script>
</body></html>
C:\Users\shawn>
NOTE: I thought I was on to something with IPv6 being the culprit, but I've since disabled IPv6 completely and https://whatismyipaddress.com/ is showing that I'm not broadcasting an IPv6 address any longer. Still no dice.
Cloudflare Proxied CNAME strikes again. With the proxy enabled, requests reach App Engine from Cloudflare's edge IPs rather than from my home IP, so the firewall allow rules never match. Turning off this feature in Cloudflare for the CNAME pointing at ghs.googlehosted.com resolved the issue after about 5 minutes.
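A quick way to confirm whether a custom domain is still being fronted by Cloudflare is to check what the hostname resolves to and whether responses carry Cloudflare headers. A sketch, with yourdomain.example.com standing in for the real (redacted) domain:
# With the proxy enabled, the hostname resolves to Cloudflare edge IPs
# instead of Google's ghs.googlehosted.com addresses.
dig +short yourdomain.example.com
# Proxied responses typically include Cloudflare headers such as cf-ray
# and "server: cloudflare".
curl -sI https://yourdomain.example.com/ | grep -iE '^(server|cf-ray)'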
I followed the instructions from this website to add authentication with Identity Server. The configuration is quite simple:
proxy:
title: Open Analytics Shiny Proxy
port: 8080
authentication: openid
openid:
auth-url: https://identityserverurl/connect/authorize
token-url: https://identityserverurl/connect/token
jwks-url: https://identityserverurl/.well-known/openid-configuration/jwks
logout-url: https://identityserverurl/Account/Logout?return=http://yourshinyproxy:8080/
client-id: ShinyProxy
client-secret: secret
scopes: [ "openid", "profile", "roles" ]
username-attribute: aud
roles-claim: role
The authentication seems to be working. But when I add access-groups so that an app is only displayed for a particular role, it doesn't work:
specs:
- id: 01_hello
display-name: Hello Application
description: Application which demonstrates the basics of a Shiny app
container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
container-image: openanalytics/shinyproxy-demo
container-network: sp-example-net
access-groups: 200122-user
The same configuration works with ShinyProxy 2.4.3.
Is there anything I missed for this configuration in ShinyProxy 2.6.1?
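One diagnostic worth running (a sketch, not a confirmed fix) is to decode the token issued by Identity Server and look at the role claim that access-groups are matched against. Assuming the raw JWT is in a TOKEN shell variable:
# JWTs are base64url-encoded: swap the URL-safe alphabet back and re-add
# padding before decoding the payload (the second dot-separated segment).
seg=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s' "$seg" | base64 -d | jq '{aud, role}'
If the decoded role value is not literally 200122-user (for example different casing, or roles nested under another claim), that could explain why the group check fails in 2.6.1.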
Running an App Engine Java 8 app with Google Cloud Endpoints. I've generated the openapi.json, deployed it to my Endpoints Portal, and can see the API in my portal, with the various methods and resources listed correctly.
I'm running the dev app server locally in IntelliJ using the Cloud Code plugin. When I run it, it opens a browser tab that gives me an Error 403, with the following stack trace (abbreviated):
SEVERE: direct send of a check request service_name: "my-project-redacted.appspot.com"
operation {
operation_id: "11b8f9a6-c9cb-4895-95fb-8bb39176efb9"
operation_name: "1.my_project_dot_appspot_com.MyAPI"
consumer_id: "project:my-project"
start_time {
seconds: 1596075164
nanos: 812000000
}
end_time {
seconds: 1596075164
nanos: 812000000
}
labels {
key: "servicecontrol.googleapis.com/caller_ip"
value: "127.0.0.1"
}
labels {
key: "servicecontrol.googleapis.com/user_agent"
value: "ESP"
}
labels {
key: "servicecontrol.googleapis.com/service_agent"
value: "EF_JAVA/1.0.12"
}
}
failed
endpoints.repackaged.com.google.api.client.http.HttpResponseException: 403
{
"error": {
"code": 403,
"message": "The caller does not have permission",
"errors": [
{
"message": "The caller does not have permission",
"domain": "global",
"reason": "forbidden"
}
],
"status": "PERMISSION_DENIED"
}
}
at endpoints.repackaged.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.newExceptionOnError(AbstractGoogleClientRequest.java:456)
at endpoints.repackaged.com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321)
at endpoints.repackaged.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1065)
at endpoints.repackaged.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at endpoints.repackaged.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at endpoints.repackaged.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.api.control.Client.check(Client.java:205)
It's worth noting that the API seems to work -- I have the iOS simulator connecting to my local dev app server and making Endpoints calls. I'm just tired of seeing the 403 in my browser every time I start the dev app server, and fear it may portend some similar issue in production when I eventually push this new service.
This error indicates that something is wrong with the permissions, or that the Service Control API is disabled in your project. To fix it, you can:
Make sure your service has access to servicecontrol.googleapis.com enabled by running the following command in Cloud Shell:
gcloud services enable servicecontrol.googleapis.com
Double-check the ENDPOINTS_SERVICE_NAME parameter in your appengine-web.xml file; it should look like this:
<env-var name="ENDPOINTS_SERVICE_NAME" value="$PROJECT"/>
Check whether the OpenAPI specs are deployed to the cloud by running this command in Cloud Shell:
gcloud endpoints configs list --service=$PROJECT
Double-check that the service account running your instance has the proper IAM roles.
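For the last point, a quick way to list the roles actually bound to the service account (a sketch; replace PROJECT_ID and SERVICE_ACCOUNT_EMAIL with your own values) is:
# Show only the bindings that mention the given service account.
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --format="table(bindings.role)"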
We've got an application hosted on a VM in Azure, sitting behind a WAF that is causing a lot of trouble for some users.
Some users are plagued by "HTTP Error 400. The size of the request headers is too long." The application is protected by Azure AD login.
The full response to the browser is:
{
"data": "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"\"http://www.w3.org/TR/html4/strict.dtd\">\r\n<HTML><HEAD><TITLE>Bad Request</TITLE>\r\n<META HTTP-EQUIV=\"Content-Type\" Content=\"text/html; charset=us-ascii\"></HEAD>\r\n<BODY><h2>Bad Request - Request Too Long</h2>\r\n<hr><p>HTTP Error 400. The size of the request headers is too long.</p>\r\n</BODY></HTML>\r\n",
"status": 400,
"config": {
"method": "GET",
"transformRequest": [
null
],
"transformResponse": [
null
],
"url": "api/datacontext/workbooks/876dac86e00e42878d9e239a8efb00a3/session/start",
"headers": {
"Accept": "application/json, text/plain, */*",
"x-invision-app-language": "EN",
"Authorization": "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6ImFQY3R3X29kdlJPb0VOZzNWb09sSWgydGlFcyIsImtpZCI6ImFQY3R3X29kdlJPb0VOZzNWb09sSWgydGlFcyJ9.eyJhdWQiOiI2MGMyZDcwNi02M2JmLTRhYzItOGQyZi01M2QzMzM1ZTAwMDIiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC80M2I0MWVlMy01MDllLTRmNDYtOTFmZi1hYWMxNGY5MjAwYTAvIiwiaWF0IjoxNTY5NTg0NTIyLCJuYmYiOjE1Njk1ODQ1MjIsImV4cCI6MTU2OTU4ODQyMiwiYWNyIjoiMSIsImFpbyI6IkFTUUEyLzhNQUFBQWtlMDNycUJycmJoZlNBdDB4SnY4bkF4RzA1V1VPdnd4SjZDUmdVeFdkeGc9IiwiYW1yIjpbInB3ZCJdLCJhcHBpZCI6IjYwYzJkNzA2LTYzYmYtNGFjMi04ZDJmLTUzZDMzMzVlMDAwMiIsImFwcGlkYWNyIjoiMSIsImZhbWlseV9uYW1lIjoiUmV0dmVkdCIsImdpdmVuX25hbWUiOiJFcmlrIiwiaXBhZGRyIjoiMTk0LjI0OC4xNDcuMTMiLCJuYW1lIjoiRXJpayBSZXR2ZWR0Iiwib2lkIjoiMGJjMDUyMDgtZDQ1YS00MTk4LTk3MzItMzZiN2U2MDJiYmYwIiwib25wcmVtX3NpZCI6IlMtMS01LTIxLTEzNDI3OTMzNzAtNDI1NDg1MTE1LTU5NDUyMDk5NS0xODU0Iiwic2NwIjoiRGlyZWN0b3J5LlJlYWQuQWxsIFVzZXIuUmVhZCBVc2VyLlJlYWQuQWxsIFVzZXIuUmVhZEJhc2ljLkFsbCIsInN1YiI6ImJ0M3pIOFNQcjRtMWFqenZJa2ZiMjBnVUZkR3BxblpEYmg3QWJ4M3B5N28iLCJ0aWQiOiI0M2I0MWVlMy01MDllLTRmNDYtOTFmZi1hYWMxNGY5MjAwYTAiLCJ1bmlxdWVfbmFtZSI6ImVyaWsucmV0dmVkdEBjaG9pY2Uubm8iLCJ1cG4iOiJlcmlrLnJldHZlZHRAY2hvaWNlLm5vIiwidXRpIjoiUERYS0FfSl9CRWlmc0tDLVJJbHhBQSIsInZlciI6IjEuMCJ9.aghrUBArpEvvvXBs2MBPTCL2nUPZ3aMCJ-1r3EqB5a9UaqaX7Ego5mSw1gb_68y3KhsGfO7kAv49uCB7cy80kEXV4ES4htLefQmmp-Bx-1Et_w6vstoki9ojWuKP97NsaGlQBjPYCZcbCRptBIZJIr_H71dMuFhAPWYEImcGtOrF2RNQA4AFvlx6WL2dIONHVPar3sjgLWEvFxhPFZsml3Ht3M1OtLj5drAJrkUjgxfV3-00bqCwYCm5_t_BAtxWsd-LZEpjDLpN7nDBFIJF14oFrPB7yXCBM_q-Y4FCCwGE14NoRcUrJNJPYMt5b0LKHEAbIopdq_zmFQ6XnUmcjg"
},
"withCredentials": true
},
"statusText": "Bad Request"
}
The error shows up on different paths in the application, seemingly at random. It might work fine for a while, then the user gets the error message again. We've narrowed the problem down to being the WAF: when we changed the traffic to flow directly to the IIS web server, the users see none of these errors. I can't find anywhere in the WAF to change the size limit for request headers. Anyone got any idea where to start looking for a solution?
The Azure WAF is configured as follows:
Configure
Tier: WAF
Firewall Status: Enabled
Firewall mode: Detection
There are no exclusions configured.
Global Parameters:
Inspect request body: Off
Rules
Rule set: OWASP 3.0
Advanced rule configuration: Disabled
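For what it's worth, one way to check whether the combined cookies and bearer token are actually approaching commonly cited header size limits (often in the 8-16 KB range) is to replay the failing request with curl and print how many bytes were sent. A sketch, with yourapp.example.com, $TOKEN and $COOKIES standing in for the real values:
# Replay the request with the same Authorization and Cookie headers and
# report the total bytes sent (headers included).
curl -s -o /dev/null -w 'request bytes sent: %{size_request}\n' \
    -H "Authorization: Bearer $TOKEN" \
    -H "Cookie: $COOKIES" \
    "https://yourapp.example.com/api/datacontext/workbooks/876dac86e00e42878d9e239a8efb00a3/session/start"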
We are using a service account to deploy our app to App Engine using Travis.
On every merged PR, Travis pulls the code from our GitHub repository, pulls a Docker image that contains the Google Cloud SDK, and executes the gcloud app deploy command. We use a service account with the "Project Owner" role to perform the deployment.
Everything used to work fine until I added a new service to the project which automates SSL certificate generation and renewal, along with a dispatch.yaml file to route traffic incoming from Let's Encrypt for domain verification. I needed to add more permissions to allow updating the SSL certificates we use for our custom domain. I removed the current service account, and created a new one with a new private key. I created a new role with the required permissions to update and view SSL certificates in addition to the previous permissions (all appengine.* permissions). I assigned the new role and the Project Owner role to the new account. After these changes, the deployment fails with the following error when executing the deploy command:
Permissions error fetching application [apps/hollowverse-c9cad]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
I used the same service account on my local machine with logging level set to debug. I got this error:
DEBUG: HttpError accessing <https://appengine.googleapis.com/v1/apps/hollowverse-c9cad?alt=json>: response: <{'status': '403', 'content-length': '335', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 02 Aug 2017 14:33:50 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="39,38,37,36,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 403,
"message": "Operation not allowed",
"status": "PERMISSION_DENIED",
"details": [
{
"#type": "type.googleapis.com/google.rpc.ResourceInfo",
"resourceType": "gae.api",
"description": "The \"appengine.applications.get\" permission is required."
}
]
}
}
>
DEBUG: (gcloud.beta.app.deploy) Permissions error fetching application [apps/hollowverse-c9cad]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
The description says that appengine.applications.get is required to perform the deployment. Looking at the permissions granted to the role assigned to the Travis account we use to deploy, appengine.applications.get is clearly granted.
I assigned every possible App Engine and Project role to the account, but deployment still fails with the same error. However, using the default service account, which is automatically created for every new project on GCP, seems to be working.
I removed the current service account, and created a new one with a new private key.
This is where it went wrong. The new account had the same ID as the previous one. Although I could not find this behavior documented anywhere, it looks like once an ID has been used for a service account, it cannot be reused for a new account, even if the previous one has been removed.
We created a new account with a new ID (travis2@hollowverse-c9cad.iam.gserviceaccount.com instead of travis@hollowverse-c9cad.iam.gserviceaccount.com) and the issue is now resolved.
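In gcloud terms, the fix amounted to something along these lines (a sketch; the binding shown is the broad Project Owner role mentioned above, and the key file name is arbitrary):
# Create the service account under a brand-new ID, grant it the role used
# for deployments, and mint a fresh key for Travis.
gcloud iam service-accounts create travis2 --display-name="Travis deploy account"
gcloud projects add-iam-policy-binding hollowverse-c9cad \
    --member="serviceAccount:travis2@hollowverse-c9cad.iam.gserviceaccount.com" \
    --role="roles/owner"
gcloud iam service-accounts keys create travis2-key.json \
    --iam-account="travis2@hollowverse-c9cad.iam.gserviceaccount.com"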
We have a project that runs on App Engine and creates files on Cloud Storage. The two are connected as being part of the same cloud platform project.
In App Engine we have a "Google APIs Console Project Number", and in Cloud Console -> Credentials we have that project number listed under "Client ID" (1[..........].apps.googleusercontent.com) and "Email Address" (1[..........]@developer.gserviceaccount.com).
Every morning, we have some cron jobs that upload files to our Cloud Storage bucket. This has worked flawlessly since September 2013 but as of this morning (Oct 16, 2014) we're getting "permission denied" errors from Cloud Storage.
We're using the cloudstorage client library, which raises cloudstorage.ForbiddenError. Here's the log & exception output:
Expect status [201] from Google Storage. But got status 403.
Path: u'/bucketname/icon_20141016.png'.
Request headers: {'x-goog-resumable': 'start', 'x-goog-api-version': '2', 'content-type': 'image/png', 'accept-encoding': 'gzip, *'}.
Response headers: {'alternate-protocol': '443:quic,p=0.01', 'content-length': '151', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'vary': 'Origin', 'server': 'UploadServer ("Built on Oct 9 2014 15:35:27 (1412894127)")', 'date': 'Thu, 16 Oct 2014 11:56:10 GMT', 'content-type': 'application/xml; charset=UTF-8'}.
Extra info: None.
Since we're using the Cloud platform connection between the two services, I feel like I can only diagnose the problem on my production App Engine instance. I would prefer not to deploy new versions and risk breaking a production server. This also appears to be a Cloud Storage issue this morning, but the only status page I could find says everything is working fine.
As @tx802 suggested, I checked the bucket ACLs carefully.
$ gsutil getacl gs://bucket
<Entry>
<Scope type="UserByEmail">
<EmailAddress>1[..........]@developer.gserviceaccount.com</EmailAddress>
</Scope>
<Permission>FULL_CONTROL</Permission>
</Entry>
I looked at the App Engine application settings and saw the service account is actually appname@appspot.gserviceaccount.com, so I gave that account full control:
$ gsutil chacl -u appname@appspot.gserviceaccount.com:FC gs://bucket
I'm not sure what changed since yesterday's cron run, but now it succeeds.
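For reference, getacl and chacl are the older gsutil subcommands; on newer releases the equivalent check and fix would look roughly like this (O is the OWNER/full-control shorthand):
# Inspect the bucket ACL, then grant the App Engine default service account
# owner access on the bucket.
gsutil acl get gs://bucket
gsutil acl ch -u appname@appspot.gserviceaccount.com:O gs://bucket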