I'm trying to follow this tutorial here, but I can't complete the verification step (#4). My domain provider doesn't allow me to add a DNS record of type AAAA. I contacted my domain provider, but they say it's not supported. Is there another workaround I could try? Should I switch to another cloud hosting service like Azure?
You can use the features and capabilities that Cloud DNS offers; there's no need to switch cloud hosting services.
Cloud DNS is a high-performance, resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way.
You can migrate an existing DNS domain from another DNS provider to Cloud DNS.
Then, managing records is easy: you add and remove records using a transaction that specifies the operations you want to perform. A transaction supports one or more record changes that are propagated together.
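With the gcloud CLI, a record-change transaction looks roughly like this (the zone name, record data, and TTL below are placeholders):

```shell
# Start a transaction against your managed zone
gcloud dns record-sets transaction start --zone=my-zone

# Stage one or more changes (remove works the same way with "remove")
gcloud dns record-sets transaction add "203.0.113.10" \
    --name=www.example.com. --type=A --ttl=300 --zone=my-zone

# Propagate all staged changes together
gcloud dns record-sets transaction execute --zone=my-zone
```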
Update
I would also check out Google Domains, a fairly new service (still in beta) that lets you register your domain name and works like a charm.
Pub/Sub is really easy to use from my local work station. I set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path to my .json authentication object.
But what if you need to interact with multiple different Pub/Sub projects? This seems like an odd way to do authentication. Shouldn't there be a way to pass the .json object in from the Java code?
How can I use the client libraries without setting the system's environment variable?
Thanks
You can grant access to Pub/Sub in different projects using a single service account and set it as the environment variable. See PubSub Access Control for more details.
A service account JSON key file identifies a service account on GCP. A service account is the equivalent of a user account, but for an app (no human user, only a machine).
Therefore, if your app needs to interact with several topics in different projects, you simply grant the service account's email the correct role on each topic/project.
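If you'd rather pass a key file explicitly from Java code than rely on GOOGLE_APPLICATION_CREDENTIALS, the Pub/Sub client builder accepts an explicit credentials provider. A minimal sketch, assuming the google-cloud-pubsub dependency is on the classpath; the project, topic, and key-file path are placeholders:

```java
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.TopicName;

import java.io.FileInputStream;

public class MultiProjectPublisher {
    // Build a Publisher for a given project using an explicit key file,
    // instead of the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    public static Publisher forProject(String project, String topic, String keyPath) throws Exception {
        GoogleCredentials credentials;
        try (FileInputStream stream = new FileInputStream(keyPath)) {
            credentials = GoogleCredentials.fromStream(stream);
        }
        return Publisher.newBuilder(TopicName.of(project, topic))
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();
    }
}
```

You can create one Publisher per project this way; `GoogleCredentials` and `FixedCredentialsProvider` come from the auth and gax libraries that ship with the Pub/Sub client.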
One more important point: service account key files are useful for apps outside GCP; otherwise, avoid them. Let me explain. A service account key file is a secret file that you have to keep and store securely. And being a file, it's easy to copy, to send by email, or even to commit to a public git repository. In addition, it's recommended to rotate this key at least every 90 days for security reasons.
So, for all these reasons and the security difficulties that service account key files present, I don't recommend using them:
In your local environment, use your user account (simply use the gcloud SDK and run gcloud auth application-default login). You aren't a machine (I hope!!)
With GCP components, use the component's identity: attach a service account when you deploy a service, create a VM, etc., and grant the correct roles to that service account without generating a JSON key.
With external apps (another cloud provider, on premises, Apigee, CI/CD pipelines, ...), generate a JSON key file for your service account; you can't avoid key files in this case.
I'm using AWS EC2 to run a database that supports search capabilities - similar to Elasticsearch. The database is only running in a single AWS region due to budgetary constraints.
The database is also running inside of a private subnet in a VPC. Currently there are no inbound or outbound connections that it can make.
I need to allow access to the database so that only my serverless functions can connect to it via HTTP. Users should not be allowed to access it directly from the client-side. Using Lambda is possible but is far from ideal due to long cold start times. Users will expect search results to appear very quickly, especially when the page first loads. So something else is required.
The plan is to replace Lambda with Cloudflare Workers. With faster start times and closer proximity to end users all over the world, connecting to the database this way would probably give me the speed I need while still offering all the benefits of a serverless approach. However, I'm not sure how I can configure my VPC security group to allow connections only from a specific Worker.
I know that my Workers all have unique domains, such as https://unique-example.myworkerdomain.com, and they remain the same over time. So is there a way I can securely allow inbound connections from this domain while blocking everything else? Can/should this be done by configuring security groups, an internet gateway, an IAM role, or something else entirely?
Thank you for any help and advice
There are a couple of options.
ECS
You can run an ECS cluster in the same VPC as your database and run Fargate tasks, which have sub-second start times (maybe 100 ms or less?). You can also run ECS tasks on hot cluster instances (though you then pay for them all the time), but a scale-to/from-zero approach with ECS might let you manage cost without compromising most user requests: the first request after a scale-to-zero event would see 100 ms+ of latency, but subsequent requests would not. Lambda actually does something similar under the hood, but with much more aggressive scale-down timelines. This doesn't restrict access to a specific domain, but it may solve your issue.
Self-Managed Proxy
Depending on how your database is accessed, you might be able to have a reverse proxy such as Nginx in a public subnet doing request validation to limit access to the database. This could control access by any request headers, but I'd recommend doing TLS client validation to ensure that only your functions can access the database through the proxy, and it might be possible to validate the domain this way (by limiting the trusted CA to an intermediate CA that only signs for that domain, alternatively, I think Nginx can allow a connection depending on traits of the client cert matches regexes such as domain name).
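As a rough sketch of the TLS-client-validation idea in Nginx (the directive names are real; the certificate paths, upstream address, and CN pattern are placeholder assumptions):

```nginx
server {
    listen 443 ssl;
    server_name db-proxy.example.com;

    ssl_certificate         /etc/nginx/certs/server.crt;
    ssl_certificate_key     /etc/nginx/certs/server.key;

    # Only clients presenting a cert signed by this CA are allowed in.
    ssl_client_certificate  /etc/nginx/certs/trusted-ca.crt;
    ssl_verify_client       on;

    location / {
        # Optionally also pin the client certificate's subject CN.
        if ($ssl_client_s_dn !~ "CN=unique-example.myworkerdomain.com") {
            return 403;
        }
        proxy_pass http://10.0.1.10:9200;  # database in the private subnet
    }
}
```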
Route Through Your Corporate Network
Using a VPN, you can originate the function from within your network or somehow filter the request, then the database could still be in a private subnet with connectivity allowed from the corporate network through the VPN.
Use AWS WAF
You put a public ALB in front of your database and set up AWS WAF to block all requests that don't contain a specific header (such as an API key). Note: you may also have to set up CloudFront; I forget off the top of my head whether you can apply WAF directly to an ELB or not. Also note: I don't particularly advise this, as I don't think WAF was designed with sensitive strings in its rules, so you'd have to think about who has DescribeRule / DescribeWebACL permissions on WAF, and these rules may end up in logs because AWS doesn't expect them to be sensitive. But it might be possible for WAF to filter on something you find viable. I'm pretty sure you can filter on HTTP headers, but unless those headers are secret, anyone can connect by submitting a request with those headers. I don't think WAF can do client domain validation.
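A WAFv2 header-match rule of the kind described might look roughly like this (the header name and secret value are placeholder assumptions, and the caveat about rule values showing up in describe calls and logs applies):

```json
{
  "Name": "require-shared-secret-header",
  "Priority": 0,
  "Statement": {
    "ByteMatchStatement": {
      "FieldToMatch": { "SingleHeader": { "Name": "x-api-key" } },
      "SearchString": "replace-with-long-random-value",
      "PositionalConstraint": "EXACTLY",
      "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ]
    }
  },
  "Action": { "Allow": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": false,
    "CloudWatchMetricsEnabled": false,
    "MetricName": "shared-secret-header"
  }
}
```

For this to actually gate traffic, the web ACL's default action would need to be Block, so only requests matching this rule are allowed through.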
I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similar to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, etc and then the proxy would route it to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, since we could theoretically hit a port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it's likely you will exhaust server resources (CPU + memory) before you consume even half of those ports.
What you need is possible, but it will require a script or an application responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it would just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik enables you to route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, you'll need dynamic configuration that is not provided out of the box; this is why I suggested creating a separate application to deploy these instances. At a very high level, the steps would be:
You define your service with the traefik default rules as shown here
From your application manager, you deploy a new named service of this service for the new tenant
After the instance is deployed, you configure it to listen on a specific domain by setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name
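With the Traefik Service Fabric provider, that rule is declared as a label in the service manifest's extensions. A rough sketch (the service type name and tenant host are placeholders; check the exact schema against the Traefik docs linked below):

```xml
<StatelessServiceType ServiceTypeName="MyAppType">
  <Extensions>
    <Extension Name="Traefik">
      <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
        <Label Key="traefik.enable">true</Label>
        <Label Key="traefik.frontend.rule">Host:tenant1.myapp.com</Label>
      </Labels>
    </Extension>
  </Extensions>
</StatelessServiceType>
```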
You might have to add some extra configurations, but this will lead you to the right path.
Regarding the cluster architecture, you could do it many ways. To start, I recommend keeping it simple: one FrontEnd node type containing the Traefik services and another BackEnd node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see more info on the following links:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
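A quick sanity check of that capacity math (the node count and usable-port figures below are illustrative, not real limits):

```java
public class ServiceCapacity {
    // Upper bound on services when each instance needs its own port on
    // some node: ports are a per-node resource, so capacity scales with
    // the number of nodes.
    static long maxServices(int nodeCount, int usablePortsPerNode) {
        return (long) nodeCount * usablePortsPerNode;
    }

    public static void main(String[] args) {
        // A 10-node cluster with ~60,000 usable dynamic ports per node
        System.out.println(maxServices(10, 60_000)); // prints 600000
    }
}
```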
Have a look at Azure API Management and Traefik, which have some SF integration options. These work a lot better than the limited built-in reverse proxy; for example, they offer routing rules.
While playing around with GAE custom domain setup in hopes of building a multi-tenant application, I noticed that wildcard subdomains don't quite work as documented.
For example, if one configures the domains *.dev.example.com and *.qa.example.com, you would expect dev.example.com to automatically serve the default service deployed in App Engine. However, I noticed that recently I would have to explicitly enter default.dev.example.com. This is not what has been documented.
Does anyone understand why this is now the case? The domains are verified, with DNS configured on Google's DNS service. Everything else works as expected, meaning I can reach all other services on the domain, but the default service is not automatically served.
After various attempts, I eventually purchased some Google support time. The solution is that you need to create and map both a wildcard domain and the naked domain. Therefore, one will need both
*.dev.example.com and dev.example.com
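With the gcloud CLI, creating that mapping pair looks like this (the domain names are from the example above; depending on your SDK version the command may live under `gcloud beta`):

```shell
gcloud app domain-mappings create '*.dev.example.com'
gcloud app domain-mappings create 'dev.example.com'
```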
This is of course tedious. The good news is that Google is running alpha testing on an API that allows domain mapping to happen automatically; you can register here.
Soon, multi-tenant application deployments will require no manual intervention.
Is it possible to create a google app engine program that would route http requests to a server on a local network?
What would be the best way to build a program like this?
I am trying to get away from buying a server from a hosting provider and simply use a local network server instead, using App Engine as a sort of proxy. The firewall would be configured to allow access to the server from the Google App Engine servers only.
If this has been done before in an open source project that would be excellent, but I have not been able to find one.
If all you want is a domain name that points to your dynamic IP address, you could give Dynamic DNS a try. It's designed for your use case, and you won't need to write any code; you just need either a router that supports it or a server with cron. There are lots of providers, but I've had good experiences with Dyn DNS, specifically their Remote Access plan.
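For example, with a provider that speaks the common dyndns2 update protocol, a single cron entry on a server behind the router can keep the hostname pointed at your current IP (the hostname, credentials, and update URL below are placeholders for whatever your provider documents):

```shell
# crontab entry: refresh the dynamic DNS record every 10 minutes
*/10 * * * * curl -s -u myuser:mypassword "https://members.dyndns.org/nic/update?hostname=myhost.dyndns.org" >/dev/null
```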