From my understanding of Cloudflare, the service is supposed to act as a reverse proxy for your server/website. I have added my site to Cloudflare, pointed my nameservers at Cloudflare's nameservers, and enabled proxying on my DNS records. The issue I'm having is that requests sent to my site are NOT coming from Cloudflare. The requests are coming from regular IP addresses. I can see the requests in Cloudflare's WAF event log, but when a request reaches my actual site, it shows the person's IP address. How can I set things up so that all requests come directly from Cloudflare? I tried adding rules in my .htaccess to allow Cloudflare IPs and block all other requests, but that just returns an HTTP 403 Forbidden error. Any ideas on what I may have messed up in my Cloudflare configuration, or how to fix this?
I tried adjusting firewall settings on the server and making various changes in .htaccess to force requests to come only from Cloudflare's network.
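For reference, the kind of rule I mean is roughly this (a simplified sketch; the two ranges shown are only examples taken from Cloudflare's published list at https://www.cloudflare.com/ips/):

```apache
# Apache 2.4 syntax - allow only Cloudflare's published ranges, block everyone else
# (example ranges only; use the complete, current list from https://www.cloudflare.com/ips/)
<RequireAny>
    Require ip 173.245.48.0/20
    Require ip 103.21.244.0/22
</RequireAny>
```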
I think you're mixing up two things:
If you only add your site to Cloudflare DNS, Cloudflare will just answer DNS queries, and the traffic from each client will go directly to your site.
If you want Cloudflare to proxy all the traffic to your site, you should use something like Cloudflare Tunnel.
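For example, a minimal cloudflared tunnel configuration looks roughly like this (a sketch; the tunnel ID, credentials path and hostname are placeholders, and the tunnel must already have been created with `cloudflared tunnel create`):

```yaml
# ~/.cloudflared/config.yml - minimal sketch, all values are placeholders
tunnel: <TUNNEL-ID>
credentials-file: /root/.cloudflared/<TUNNEL-ID>.json

ingress:
  # Send requests for your hostname to the local web server
  - hostname: example.com
    service: http://localhost:80
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```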
I have a Mule application deployed on CloudHub. My client is trying to trigger the mule-worker URL (http://mule-worker-appname.us-e2.cloudhub.io:8081) from his network (connected to their VPN), but it gives a Timed Out error. However, when he hits the SLB URL (http://appname.us-e2.cloudhub.io) he gets the response.
When the client disconnects from the VPN, the worker URL works as well.
Can someone explain why the worker URL is not working while the SLB URL is? I thought the worker's external URL is a public URL and can be accessed, so why is there a restriction from their network? Could there be a firewall on the client's side?
When accessing the worker directly (i.e. using http(s)://mule-worker-myappname.region.cloudhub.io:port) you have to add the default HTTP (8081) or HTTPS (8082) port to the request explicitly. Example: http://mule-worker-testapp.us-e2.cloudhub.io:8081
Also, if those ports are blocked in the VPC firewall for access from anywhere (i.e. the public Internet), you may not be able to reach the worker directly.
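For example (hypothetical app name and path, just to show where the port goes):

```
# SLB URL - standard ports 80/443 are implied
curl http://testapp.us-e2.cloudhub.io/api/ping

# Worker URL - port 8081 (HTTP) or 8082 (HTTPS) must be stated explicitly
curl http://mule-worker-testapp.us-e2.cloudhub.io:8081/api/ping
```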
I don't want software like Fiddler to be able to see the requests made from my Electron app. How should this be done? Any help is greatly appreciated.
You can't. Such information is ultimately public: if the traffic is not encrypted with HTTPS, anyone on the path can read it, and Fiddler can read even HTTPS traffic because it acts as a man in the middle. There is also more sophisticated software, like Wireshark, which lets you read any network traffic flowing through your LAN.
In case you are not using HTTPS: use HTTPS with valid, non-self-signed certificates (such as the free Let's Encrypt certificates) and leave certificate checking enabled in Electron (it is enabled by default). Electron will then reject the self-signed certificate Fiddler uses for HTTPS interception and refuse to make the request at all, which prevents Fiddler from reading it.
However, I would not worry about this issue too much. Anyone with access to the connection and to the secrets involved (for example, someone sitting on the same system) can sniff even HTTPS traffic, so there is really no way to keep request/response data secret from a third party who has access to all the secret material.
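To make the second point concrete, here is a sketch of the two places an Electron app typically disables certificate checking; the point is simply not to do either of these (the handler shown keeps the default rejection behaviour):

```js
// main process - keep Electron's default certificate validation intact
const { app } = require('electron');

// 1. Don't disable verification globally:
// app.commandLine.appendSwitch('ignore-certificate-errors');

// 2. Don't override certificate errors per request:
app.on('certificate-error', (event, webContents, url, error, certificate, callback) => {
  // Not calling event.preventDefault() keeps the default behaviour,
  // so the self-signed certificate a proxy like Fiddler presents is rejected.
  callback(false);
});
```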
On a configured AKS cluster there is a Docker container with an application that uses AAD authentication.
Based on this article an ingress is also configured. The API is working well.
When I add a reply URL with an https prefix to the Azure Active Directory application registration, I receive the error "The reply url specified in the request does not match the reply urls configured for the application", and I can see in the browser address bar that redirect_uri starts with http.
When I add a reply URL starting with http, I receive "Exception: Correlation failed".
What I have tried: adding the setting ingress.kubernetes.io/force-ssl-redirect: "true" to ingress.yaml.
Maybe there is some way to force the ingress to use https instead of http, or some AAD redirect configuration? Any ideas?
UPDATE 2: The http redirect is probably caused by ADAL.
PS: I was able to find a similar topic without an answer.
UPDATE 3:
I have decided not to use nginx as the ingress. Instead I am now using a load balancer. Soon it should also be possible to use the Azure Application Gateway Ingress Controller.
Have you tried this?
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.
More information on this is in the Ingress documentation.
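Per Ingress it would look roughly like this (a sketch; the name, host and backend are placeholders, and the apiVersion depends on your cluster version):

```yaml
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Stop the controller from redirecting HTTP clients to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /myservice
            backend:
              serviceName: myservice
              servicePort: 80
```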
You have to decide whether to use HTTPS or not. If this is just the start of a development cycle, start without it and get auth working, but implement HTTPS as soon as possible.
AAD supports both http and https, but of course the reply URLs must be added to the application registration accordingly.
As @mihail-stancescu says, ssl-redirect must be set to false if you choose not to use HTTPS. In addition to this, you also have to ensure that your app does not itself redirect from HTTP to HTTPS.
Using curl with the -L, -k and -v options will give you a lot of information about what is actually happening with your requests.
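For example (placeholder host and path):

```
curl -L -k -v https://myapp.example.com/myservice/signin-oidc
```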
When the http/https question is settled, you have to remove any rewrite annotations you have in your ingress (e.g. ingress.kubernetes.io/rewrite-target: / should be removed).
Now, if your ingress path to the service in question is e.g. /myservice, then the reply URL should also include that part of the path ([host]/myservice/signin-oidc), both in the AAD application registration and in the configuration of your app. (The path in the config should not contain the host.)
If you are using https, then you must also have a proper certificate. You can use the free Let's Encrypt (https://letsencrypt.org/) in conjunction with kube-lego (https://github.com/jetstack/kube-lego), where you can find some nice examples of how to implement it.
I am currently confused about how Angular's (jQuery's) preflight OPTIONS call is "selected", i.e. when it is chosen to be performed before a request.
I have a normal RESTful API call (api.domain.co).
I have created a hosts entry 127.0.0.1 local.domain.co in my hosts file /etc/hosts.
I've created a self-signed certificate:
http://www.akadia.com/services/ssh_test_certificate.html
I've configured the certs on my Mac as trusted:
http://abetobing.com/blog/port-forwarding-mac-os-yosemite-81.html
I've configured my Yosemite Port Forwarding Rules:
http://abetobing.com/blog/port-forwarding-mac-os-yosemite-81.html
I understand that from the browser's perspective (Chrome):
I have an Angular app loaded from https://local.domain.co with a trusted certificate, and it makes a call to https://api.domain.co/user. Everything looks green with the cert, yet I still get a preflight OPTIONS call to my api.domain.co server, which is a Node restify server with CORS support.
Everything is working... BUT
I want to get rid of the OPTIONS preflight. Any pointers?
Unfortunately, a subdomain is still affected by the preflight rule, so if you want to remove the OPTIONS request you can either use JSONP or serve both the site and the API from the same subdomain.
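If you go the JSONP route, in AngularJS 1.x it would look roughly like this (a sketch; it assumes your restify endpoint can wrap its response in the callback named by the callback query parameter):

```js
// AngularJS 1.x - JSONP request; the URL is the one from the question
$http.jsonp('https://api.domain.co/user?callback=JSON_CALLBACK')
  .then(function (response) {
    console.log(response.data);
  });
```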
You can't use localhost. I had to create an entry in my hosts file to associate 127.0.0.1 with an arbitrary name like macbook. Then it should work for you.
I'd like to use the URL Fetch service on App Engine (Java). I'm just sending a POST to one of my own servers from a servlet.
AppEngine -> post-to: https://www.myotherserver.com/scripts/log.php
I'm reading the URL Fetch docs:
Secure Connections and HTTPS
An app can fetch a URL with the HTTPS method to connect to secure servers. Request and response data are transmitted over the network in encrypted form.
The proxy the URL Fetch service uses cannot authenticate the host it is contacting. Because there is no certificate trust chain, the proxy accepts all certificates, including self-signed certificates. The proxy server cannot detect "man in the middle" attacks between App Engine and the remote host when using HTTPS.
I don't understand - the first paragraph makes it sound like everything that goes from the servlet on App Engine to my PHP script will be secure if I use HTTPS. The second paragraph makes it sound like the opposite, that it won't actually be secure. Which is it?
Thanks
There are two things HTTPS does for you. One is to encrypt your data so that as it travels over the internet, through various routers and switches, no one can peek at it. The second thing HTTPS does is authenticate that you are actually talking to a certain server. This is the part App Engine can't do. If you were trying to connect to www.myotherserver.com, it is possible that some bad guy named Bob could intercept your connection and pretend to be www.myotherserver.com. Everything you sent to Bob would be encrypted on its way to Bob, but Bob himself would be able to read the unencrypted data.
In your case, it sounds like you control both the sending server and the destination server, so you could encrypt your data with a shared secret to protect against this possibility.
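A sketch of what that could look like on the App Engine side, encrypting the payload with a shared AES key before posting it (the key is a placeholder; the PHP script would decrypt with the same key, e.g. via openssl_decrypt with aes-128-cbc):

```java
// Sketch: encrypt the POST body with a key both servers know, so a
// man-in-the-middle terminating the HTTPS connection still can't read it.
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class SharedSecretPayload {
    // Placeholder 16-byte key; in practice load it from secure configuration.
    private static final byte[] KEY = "0123456789abcdef".getBytes();

    public static byte[] encrypt(String payload) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);                      // fresh IV per message
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(payload.getBytes("UTF-8"));

        // Prepend the IV so the receiver can decrypt (iv + ciphertext).
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```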
URL Fetch over https has since been fixed to allow server certificate validation.
validate_certificate: A value of True instructs the application to send a request to the server only if the certificate is valid and signed by a trusted CA, and also includes a hostname that matches the certificate. A value of False instructs the application to perform no certificate validation. A value of None defaults to the underlying implementation of URL Fetch. The underlying implementation currently defaults to False, but will default to True in the near future.
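The option quoted above is from the Python API; in the Java URL Fetch API the equivalent is FetchOptions.validateCertificate(). A sketch of using it (the URL is the one from the question, the payload is a placeholder):

```java
import com.google.appengine.api.urlfetch.FetchOptions;
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import java.io.IOException;
import java.net.URL;

public class SecureLogPost {
    public static int postLog() throws IOException {
        URL url = new URL("https://www.myotherserver.com/scripts/log.php");
        // Fail the fetch unless the server presents a valid, CA-signed
        // certificate whose hostname matches.
        FetchOptions options = FetchOptions.Builder.withDefaults().validateCertificate();
        HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST, options);
        request.addHeader(new HTTPHeader("Content-Type", "application/x-www-form-urlencoded"));
        request.setPayload("message=hello".getBytes("UTF-8"));   // placeholder payload
        HTTPResponse response = URLFetchServiceFactory.getURLFetchService().fetch(request);
        return response.getResponseCode();
    }
}
```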