We have a Logic App that needs to access a Storage Account queue secured by a private endpoint. Both resources are in the same subnet (SNET). We have configured our managed identities correctly and granted the Storage Queue Data Contributor role on the storage account. When trying to put a message in the queue, we are getting the error:
SSL connection could not be established, see inner exception.
We checked this Microsoft documentation, but we are not sure which certificate it is referring to. Please check the error screenshot below.
It turns out that we needed to whitelist the IP addresses in our company's firewall. However, there is a catch: according to this community post, the Logic App and the storage account are required to be in different regions. Since moving them was not possible, we changed all our connectors from the Azure (public) connectors to the built-in connectors. Changing to the built-in connectors came with its own challenges, but we were able to achieve our purpose.
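As an aside, a quick way to test the RBAC assignment and the private-endpoint/firewall path outside of the Logic App is a small script using the Azure SDK with the same identity (a minimal sketch; the storage account and queue names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

# Placeholders: replace with your storage account and queue names.
queue_client = QueueClient(
    account_url="https://<storage-account>.queue.core.windows.net",
    queue_name="<queue-name>",
    credential=DefaultAzureCredential(),
)

# A network/SSL error here points at the private endpoint or firewall path,
# while an authorization error points at the RBAC assignment, which helps
# separate the two failure modes.
queue_client.send_message("test message")
```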
I'm looking to leverage Azure deployment slots for a production Web App (with Azure SQL DB).
I also use a Fortiweb WAF-as-a-Service for the production app.
If I use deployment slots, will I need a separate Fortiweb WAF-as-a-Service instance to point to the new name of "ProductionApp/Staging"?
I suspect I would need DNS entries as well for the new staging name, along with a separate WAF, for clients to successfully connect to the staging deployment slot.
Any comments, pointers, or other advice would be most welcome.
Regards,
Paul
Thank you to @PDorenberg for your question, and for the solution that you subsequently provided in your comment.
For the sake of the community, I'm posting your comment as an answer, as it will benefit many others who are facing the same issue and searching for a solution. I've also added some points that I feel should be included and considered in the answer.
Deployment slots can't swap custom domains, the associated private TLS/SSL certificates, or scale settings. These settings are tied to the virtual network and private endpoints, which in turn depend on the IP address space and the DNS records created for them; those are unique to every resource instance that is routable, mappable, and reachable over the internet through public IP addresses.
Also, do keep in mind that only app settings, connection strings, language framework versions, web sockets, HTTP version, and platform bitness can be swapped between a deployment slot and a production slot. Please see the documentation for all the information regarding the deployment slot configuration and swapping.
Please also note that you won't need a separate Fortiweb WAF-as-a-Service instance when pointing to the production slot for a deployment that is already in the staging slot of the same App Service. But if the production apps run on different App Service instances, then you will need the Fortiweb WAF-as-a-Service to route traffic to each App Service instance separately.
As part of a project, we are implementing Azure OAuth authentication for a SQL Server instance hosted in the Azure cloud. We are using the MS-TDS (Tabular Data Stream) protocol to create the federated authentication packet, which contains the access token data and the other parts needed in the packet.
We are now stuck at a point where, no matter what we do, we cannot get a successful response. As of now, we are getting the error below from the server:
(47089) Reason: Login failed because Azure Dns Caching feature extension is malformed
This is despite the fact that we are not knowingly populating anything related to DNS caching in the packet. A major part of the problem is that we don't understand the error itself. We have searched the internet and haven't found much help yet. Is there any resource we can refer to in order to understand this better? The packet itself is huge, and it is proving really difficult to debug without any kind of documentation from the Azure side.
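For reference, here is a minimal sketch of how we understand the LOGIN7 FeatureExt layout that the token goes into (feature IDs and field sizes are taken from our reading of the MS-TDS spec and should be double-checked; the FEDAUTH payload itself is assumed to be built elsewhere):

```python
import struct

# Feature IDs as we understand them from MS-TDS (LOGIN7 FeatureExt);
# please verify against the current spec revision.
FEATURE_FEDAUTH = 0x02
FEATURE_TERMINATOR = 0xFF

def build_feature_ext(features):
    """Concatenate (feature_id, payload) pairs into a FeatureExt block.

    Each entry is: 1-byte FeatureId, 4-byte little-endian FeatureDataLen,
    then FeatureData. The block must end with the 0xFF terminator, and each
    length must match its payload exactly -- if they drift, the server parses
    the following bytes as another feature, which could surface as an error
    about an unrelated extension such as Azure SQL DNS caching.
    """
    block = b""
    for feature_id, payload in features:
        block += struct.pack("<BI", feature_id, len(payload)) + payload
    return block + struct.pack("<B", FEATURE_TERMINATOR)

# fedauth_payload (options, workflow, token length, and token bytes) is built
# elsewhere in our code, so it is only referenced here:
# feature_ext = build_feature_ext([(FEATURE_FEDAUTH, fedauth_payload)])
```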
Any help on this is much appreciated.
I'm using AWS EC2 to run a database that supports search capabilities - similar to Elasticsearch. The database is only running in a single AWS region due to budgetary constraints.
The database is also running inside a private subnet in a VPC. Currently, no inbound or outbound connections are allowed.
I need to allow access to the database so that only my serverless functions can connect to it via HTTP. Users should not be allowed to access it directly from the client-side. Using Lambda is possible but is far from ideal due to long cold start times. Users will expect search results to appear very quickly, especially when the page first loads. So something else is required.
The plan is to replace Lambda with Cloudflare Workers. With faster start times and closer distance to end users all over the world, connecting to the database this way would probably give me the speed I need while still offering all the benefits of a serverless approach. However, I'm not sure how I can configure my VPC security group to allow connections only from a specific worker.
I know that my workers all have unique domains, such as https://unique-example.myworkerdomain.com, and they remain the same over time. So is there a way I can securely allow inbound connections from this domain while blocking everything else? Can/should this be done through configuring security groups, the internet gateway, an IAM role, or something else entirely?
Thank you for any help and advice
There are a couple of options.
ECS
You can run an ECS cluster in the same VPC as your database and run Fargate tasks, which have sub-second start times (maybe 100 ms or less?). You can also run ECS tasks on hot cluster instances (but then you pay for them all the time). A scale-to/from-zero approach with ECS may let you manage cost without compromising most user requests: the first request after a scale-to-zero event would see 100 ms+ of extra latency, but subsequent requests would not. Lambda actually does something similar under the hood, but with much more aggressive scale-down timelines. This doesn't restrict access to a specific domain, but it may solve your issue.
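For the scale-to/from-zero part, the call is just a desired-count update on the service (a sketch with boto3; the cluster and service names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

def set_search_backend_capacity(desired_count: int) -> None:
    """Scale the ECS/Fargate service in front of the search database up or down."""
    ecs.update_service(
        cluster="search-cluster",      # placeholder cluster name
        service="search-backend",      # placeholder service name
        desiredCount=desired_count,
    )

# Scale to zero when idle, back to one (or more) on demand, e.g. from a
# CloudWatch alarm or a scheduled rule.
# set_search_backend_capacity(0)
# set_search_backend_capacity(1)
```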
Self-Managed Proxy
Depending on how your database is accessed, you might be able to put a reverse proxy such as Nginx in a public subnet to validate requests and limit access to the database. This could control access based on any request header, but I'd recommend TLS client validation to ensure that only your functions can reach the database through the proxy. It might also be possible to validate the domain this way, either by limiting the trusted CA to an intermediate CA that only signs certificates for that domain, or (I think) by having Nginx accept a connection only when traits of the client certificate, such as the domain name, match a regex.
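To illustrate the idea of TLS client validation without writing out an Nginx config, here is the same concept as a minimal Python sketch (a stand-in for the proxy, not the actual deployment; the certificate file names are placeholders):

```python
import http.server
import ssl

# Stand-in for the reverse proxy: a TLS server that only accepts clients
# presenting a certificate signed by your own (private) CA.
httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="proxy.crt", keyfile="proxy.key")   # server certificate
context.load_verify_locations(cafile="trusted-client-ca.pem")        # CA your workers' certs chain to
context.verify_mode = ssl.CERT_REQUIRED  # reject connections without a valid client cert

httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```

In the Nginx case, the equivalent is enabling client certificate verification against that same CA and only forwarding verified requests to the database.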
Route Through Your Corporate Network
Using a VPN, you can originate the function's request from within your corporate network, or filter the request there; the database can then stay in a private subnet, with connectivity allowed from the corporate network through the VPN.
Use AWS WAF
You can put a public ALB in front of your database and set up AWS WAF to block all requests that don't contain a specific header (such as an API key). Note: you may also have to set up CloudFront; I forget off the top of my head whether you can apply WAF directly to an ELB or not. Also note: I don't particularly advise this, as I don't think WAF was designed to hold sensitive strings in its rules, so you would have to think about who has DescribeRule/DescribeWebACL permissions on WAF, and the rules may end up in logs because AWS doesn't expect them to be sensitive. Still, it might be possible for WAF to filter on something you find viable. I'm pretty sure you can filter on HTTP headers, but unless those headers are secret, anyone can connect by submitting a request with those headers. I don't think WAF can do client domain validation.
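If you do go this route, the rule shape would look something like the following sketch with boto3 and WAFv2: block by default and allow only requests carrying the expected header (the names, header, and value are placeholders, and as noted above the value is not really a secret once it lives in a WAF rule):

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="db-alb-acl",
    Scope="REGIONAL",                 # use "CLOUDFRONT" if attaching to CloudFront
    DefaultAction={"Block": {}},      # block everything that no rule allows
    Rules=[
        {
            "Name": "allow-expected-header",
            "Priority": 0,
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"my-shared-value",                   # placeholder value
                    "FieldToMatch": {"SingleHeader": {"Name": "x-api-key"}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "EXACTLY",
                },
            },
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": False,
                "CloudWatchMetricsEnabled": False,
                "MetricName": "allow-expected-header",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": False,
        "CloudWatchMetricsEnabled": False,
        "MetricName": "db-alb-acl",
    },
)
```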
I'm trying to follow this tutorial here, but I can't complete the verification step (#4). My domain provider doesn't allow me to add a DNS record of type AAAA. I tried contacting my domain provider, but they say it's not supported. Is there another workaround I could try? Should I switch to another cloud hosting service like Azure?
You can use the features and capabilities that Cloud DNS offers; there is no need to switch cloud hosting services.
Cloud DNS is a high-performance, resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way.
First, migrate your existing DNS domain from your current DNS provider to Cloud DNS.
Then, managing records makes it easy to add and remove records. This is done using a transaction that specifies the operations you want to perform; a transaction supports one or more record changes, which are propagated together.
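For example, adding the AAAA record for the verification step could look like this with the google-cloud-dns client library (a sketch; the project, zone, and address are placeholders, and the same can be done with gcloud or the Cloud Console):

```python
from google.cloud import dns

# Placeholders: project ID, managed zone name, and the IPv6 address from the tutorial.
client = dns.Client(project="my-project")
zone = client.zone("my-zone", "example.com.")

record = zone.resource_record_set(
    "example.com.", "AAAA", 300, ["2001:db8::1"]
)

changes = zone.changes()        # one transaction; adds and deletes can be batched
changes.add_record_set(record)
changes.create()                # submits the change set for propagation
```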
Update
I would also check out Google Domains, a fairly new service (still in beta) that allows you to register your domain name and works like a charm.
Google has recently added Firewall (beta) support for Google App Engine.
Is there a way to deny all external access but allow all internal GCP access, including GCP cloud functions running in the same project?
While the firewall allows you to allow or deny specific IP ranges, there doesn't seem to be a way to ascertain which IP ranges a function might be running from. And using the typical internal IP range and mask, e.g. 10.0.0.0/8, does not seem to allow access from GCP Cloud Functions.
The default rule is Allow from *. You can edit that rule and change it to Deny from * to close down all external access via the firewall.
Next, you're going to have to look up all of GCP's IP address blocks and add those into your Allow rules. The instructions for looking them all up are here.
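As a sketch of what that could look like with the App Engine Admin API (assuming the apps.firewall.ingressRules methods via the Google API Python client; the project ID and the source range are placeholders, and the default rule always has priority 2147483647):

```python
from googleapiclient import discovery

appengine = discovery.build("appengine", "v1")  # uses application default credentials
project = "apps/my-project"                     # placeholder project ID

# Flip the default rule from allow-all to deny-all.
appengine.apps().firewall().ingressRules().patch(
    name=f"{project}/firewall/ingressRules/2147483647",
    updateMask="action",
    body={"action": "DENY"},
).execute()

# Add an ALLOW rule for each of the GCP ranges you looked up (one shown here).
appengine.apps().firewall().ingressRules().create(
    parent=project,
    body={
        "priority": 100,
        "action": "ALLOW",
        "sourceRange": "35.199.224.0/19",  # example range; verify against Google's published blocks
        "description": "Allow traffic from GCP-internal callers",
    },
).execute()
```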
There is also an open issue logged for access via internal APIs that you can subscribe to for updates.