We are using Spring Cloud Gateway [Java DSL] as our API gateway. For the proxies we have multiple microservices [running on different ip:port combinations] as targets. We would like to know whether we can configure multiple targets for Spring Cloud Gateway proxies, similar to the Apache Camel Load Balancer EIP:
camel.apache.org/manual/latest/loadBalance-eip.html
We are looking for software load balancing within Spring Cloud Gateway [similar to Netflix/Apache Camel] instead of another dedicated LB.
We were able to get a load-balanced Spring Cloud Gateway route working using spring-cloud-starter-netflix-ribbon. However, when one of the server instances is down, load balancing fails. Code snippets are below.
Version: spring-cloud-gateway 2.1.1.BUILD-SNAPSHOT
Gateway Route
.route(r -> r
    .path("/res/security/")
    .filters(f -> f
        .preserveHostHeader()
        .rewritePath("/res/security/", "/targetContext/security/")
        .filter(new LoggingFilter()))
    .uri("lb://target-service1-endpoints")
)
application.yml
ribbon:
  eureka:
    enabled: false

target-service1-endpoints:
  ribbon:
    listOfServers: 172.xx.xx.s1:80, 172.xx.xx.s2:80
    ServerListRefreshInterval: 1000
    retryableStatusCodes: 404, 500
    MaxAutoRetriesNextServer: 1

management:
  endpoint:
    health:
      enabled: true
Here is the response from the Spring Cloud team:
What you've described, indeed, happens. However, it is not gateway-specific. If you just use Ribbon in a Spring Cloud project with listOfServers, the same thing will happen. This is because, unlike for eureka, the IPing for non-discovery-service scenario is not instrumented (a DummyPing instance is used).
You could probably change this behaviour by providing your own IPing, IRule or ServerListFilter implementation and overriding the setup we provide in the autoconfiguration in this way.
https://github.com/spring-cloud/spring-cloud-gateway/issues/1482
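Following that suggestion, here is a minimal sketch of one way a custom IPing could be wired in for the statically configured client. The health-check path and class names below are assumptions for illustration, not part of the original setup; alternatively, Ribbon's <client>.ribbon.NFLoadBalancerPingClassName property can point at a custom IPing class.

import com.netflix.loadbalancer.IPing;
import com.netflix.loadbalancer.PingUrl;
import org.springframework.cloud.netflix.ribbon.RibbonClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@RibbonClient(name = "target-service1-endpoints",
              configuration = TargetServicePingConfiguration.class)
public class TargetServiceRibbonConfig {
}

// Kept out of the main component scan so it only applies to this one Ribbon client.
class TargetServicePingConfiguration {

    // Replaces the DummyPing registered by the autoconfiguration, so instances
    // that stop answering the ping are marked down and skipped.
    @Bean
    public IPing ribbonPing() {
        // false = plain HTTP; "/targetContext/health" is an assumed health endpoint
        return new PingUrl(false, "/targetContext/health");
    }
}

With a real ping in place, instances that fail the check should drop out of the reachable-server list until they start responding again, which keeps requests away from a downed target.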
Related
In Google Cloud, I have an application deployed in Kubernetes in one project (call it Project-A), and another deployed in App Engine (call it Project-B). Project-A has a Cloud NAT configured with automatic IP allocation. Project-B uses the App Engine standard environment.
Project-B by default allows ingress traffic from the internet. However, I only want Project-A to communicate with Project-B. All other traffic needs to be blocked.
I currently do not have any shared VPC configured.
In Project-B, I configure the App Engine Firewall rules with the following deny rules (the list below is shown in the order of the firewall rule priority defined in App Engine Firewall):
0.0.0.1/32
0.0.0.2/31
0.0.0.4/30
0.0.0.8/29
0.0.0.16/28
0.0.0.32/27
0.0.0.64/26
0.0.0.128/25
0.0.1.0/24
0.0.2.0/23
0.0.4.0/22
0.0.8.0/21
0.0.16.0/20
0.0.32.0/19
0.0.64.0/18
0.0.128.0/17
0.1.0.0/16
0.2.0.0/15
0.4.0.0/14
0.8.0.0/13
0.16.0.0/12
0.32.0.0/11
0.64.0.0/10
0.128.0.0/9
1.0.0.0/8
2.0.0.0/7
4.0.0.0/6
8.0.0.0/5
16.0.0.0/4
32.0.0.0/3
64.0.0.0/2
128.0.0.0/1
default rule: allow *
(the CIDR blocks above correspond to 0.0.0.1 - 255.255.255.255; I used https://www.ipaddressguide.com/cidr to perform the calculation for me).
From Project-A, I am still able to reach Project-B. Is there some kind of internal network routing that Google does which bypasses the App Engine firewall? It seems like in this case, Google is using the default rule and ignoring all my other rules.
I then did the reverse. The rules for all those CIDR blocks above were changed to ALLOW, while the last default rule was changed to DENY for all IPs. I then got the reverse behaviour - Project-A is unable to reach Project-B. Again, it looks like only the default rule is being used.
How can I achieve a situation where only Project-A can communicate with Project-B and no internet ingress traffic is allowed to reach Project-B? Can I avoid using a shared VPC? If I do use a shared VPC, what should the App Engine firewall rules be for Project-B?
Sure. I ended up going with the load balancer solution. This gives me a loosely coupled setup, which is better for my scenario. It takes less than 30 minutes to set up.
Is there a way to automatically control access from a specific IP address with Google App Engine?
For example, if an API endpoint is accessed 10 times in one minute, it should stop accepting requests from that IP address.
I understand that GKE and GCE can do this with Google Cloud Load Balancing and Google Cloud Armor. I want to do the same with Google App Engine.
Cloud Endpoints with App Engine is a solution.
You can set up ESP to authenticate clients (see Choosing an Authentication Method).
Then you can limit API requests by adding a quota to your OpenAPI document (see Configuring quotas).
This is an example from the official documentation:
The following example shows how to configure the x-google-quota extension in the paths section:
x-google-management:
  metrics:
    # Define a metric for read requests.
    - name: "read-requests"
      displayName: "Read requests"
      valueType: INT64
      metricKind: DELTA
  quota:
    limits:
      # Define the limit for the read-requests metric.
      - name: "read-limit"
        metric: "read-requests"
        unit: "1/min/{project}"
        values:
          STANDARD: 1000

paths:
  "/echo":
    post:
      description: "Echo back a given message."
      operationId: "echo"
      produces:
        - "application/json"
      responses:
        200:
          description: "Echo"
          schema:
            $ref: "#/definitions/echoMessage"
      parameters:
        - description: "Message to echo"
          in: body
          name: message
          required: true
          schema:
            $ref: "#/definitions/echoMessage"
      x-google-quota:
        metricCosts:
          "read-requests": 1
      security:
        - api_key: []
You can find out more about OpenAPI extensions here.
You can also use more advanced techniques, such as Proof of Work, to enforce rate limiting without needing to remember IP addresses (see this SO case); a rough sketch of the idea follows below.
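To make the Proof of Work idea concrete, here is a generic sketch (not App Engine specific, with the challenge handling simplified): the server hands out a challenge, and the client has to find a nonce whose SHA-256 hash meets a difficulty target before its request is accepted, which caps how cheaply any one client can hammer the API.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ProofOfWork {

    // Check that SHA-256(challenge + nonce) starts with `difficulty` zero hex digits.
    public static boolean verify(String challenge, String nonce, int difficulty) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest((challenge + nonce).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString().startsWith("0".repeat(difficulty));
    }

    // The brute-force search a client would run to find an acceptable nonce.
    public static String solve(String challenge, int difficulty) throws Exception {
        long nonce = 0;
        while (!verify(challenge, Long.toString(nonce), difficulty)) {
            nonce++;
        }
        return Long.toString(nonce);
    }

    public static void main(String[] args) throws Exception {
        String challenge = "abc123";         // issued by the server per request
        String nonce = solve(challenge, 4);  // work done on the client side
        System.out.println("nonce=" + nonce + " valid=" + verify(challenge, nonce, 4));
    }
}

Raising the difficulty raises the client's cost per request without the server having to keep per-IP counters, only the issued challenges.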
I did not try it myself, but it looks like there is an API to do it:
https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps.firewall.ingressRules
To programmatically create firewall rules for your App Engine app, you can use the apps.firewall.ingressRules methods in the Admin API.
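For illustration, here is a rough sketch of what that could look like as a plain REST call with Java 11's HTTP client. The project id, token handling, and source range are placeholders; the endpoint path and FirewallRule fields come from the linked reference.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateIngressRule {
    public static void main(String[] args) throws Exception {
        // Placeholders: your App Engine application id and an OAuth2 access token
        // with the cloud-platform scope (obtained however you normally do).
        String appsId = "my-project-id";
        String accessToken = System.getenv("ACCESS_TOKEN");

        // FirewallRule resource for apps.firewall.ingressRules.create
        String rule = "{"
                + "\"priority\": 100,"
                + "\"action\": \"ALLOW\","
                + "\"sourceRange\": \"203.0.113.0/24\","  // placeholder for Project-A's NAT range
                + "\"description\": \"allow Project-A\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://appengine.googleapis.com/v1/apps/"
                        + appsId + "/firewall/ingressRules"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(rule))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}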
You can use Cloud Endpoints with App Engine. Cloud Endpoints lets you develop, deploy, and manage APIs on any Google Cloud back end.
Cloud Endpoints provides quotas, which let you control the rate at which applications can call your API. Setting a quota allows you to specify usage limits to protect your API from an excessive number of requests from calling applications. The excessive requests might have been caused by a simple typo or from an inefficiently designed system that makes needless calls to your API. Regardless of the cause, blocking traffic from a specific source once it reaches a certain level is necessary for the overall health of your API. By setting a quota, you ensure that one application cannot negatively impact other applications that use your API.
You can also check out Rate limiting on Apigee Edge.
I want to put a scraping service that uses Apache HttpClient into the cloud. I have read that problems are possible with Google App Engine, as direct network access and thread creation are prohibited there. What about other cloud hosting providers? Does anyone have experience with Apache HttpClient in the cloud?
AppEngine has threads and direct network access (HTTP only). There is a workaround to make it work with HttpClient.
Also, if you plan to use many parse tasks in parallel, you might check out Task Queue or even mapreduce.
Btw, there is a "misfeature" in GAE: you cannot fully set a custom User-Agent header on your requests - GAE always appends "AppEngine" to the end of it (this breaks requests to certain sites, most notably iTunes).
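That HttpClient workaround isn't reproduced here, but for comparison, the whitelisted java.net classes (backed by URL Fetch on the classic App Engine Java runtime) already cover simple page fetches. A minimal sketch with a placeholder URL:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SimpleFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // GAE appends "AppEngine" to whatever User-Agent you set here (see the caveat above).
        connection.setRequestProperty("User-Agent", "my-scraper");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            System.out.println(body);
        }
    }
}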
It's certainly possible to create threads and access other websites from CloudFoundry; you're just time-limited for each process. For example, take a look at http://rack-scrape.cloudfoundry.com/ - it's a simple Rack application that inspects the 'a' tags from Google.com:
require 'rubygems'
require 'open-uri'
require 'hpricot'

run Proc.new { |env|
  doc = Hpricot(open("http://www.google.com"))
  anchors = (doc/"a")
  [200, {"Content-Type" => "text/html"}, [anchors.inspect]]
}
As for Apache HttpClient, I have no experience with it, but I understand it isn't maintained any more.
I wanted to know if Apache Camel can be used as a load balancer for any HTTP web server.
I am thinking of Apache Camel since I can add some customization to it.
Yes, you can use Camel for that. Something like this might do it for you (in a RouteBuilder):
from("jetty://http://0.0.0.0:8080/my/path")
.loadBalance()
.roundRobin()
.to("http://server1:8080/my/path","http://server2:8080/my/path");
You can check out more load balancing options here: http://camel.apache.org/load-balancer.html
Since you want to load balance HTTP, see this page as well, as you will need to configure the HTTP endpoints to be bridged:
http://camel.apache.org/how-to-use-camel-as-a-http-proxy-between-a-client-and-server.html
Also set matchOnUriPrefix=true so that any incoming request path is matched (see the sketch below).
If you use Jetty for all the endpoints, it can scale up using non-blocking continuations.
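Putting those pieces together, here is a rough sketch of such a proxy route (host names are placeholders; bridgeEndpoint, throwExceptionOnFailure and matchOnUriPrefix are the options described on the linked pages):

import org.apache.camel.builder.RouteBuilder;

public class HttpProxyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Accept any path under / (matchOnUriPrefix=true) and round-robin it
        // across two backends. bridgeEndpoint=true makes the HTTP producer act
        // as a bridge/proxy to the target host, and throwExceptionOnFailure=false
        // lets error responses flow back to the caller instead of throwing.
        from("jetty:http://0.0.0.0:8080?matchOnUriPrefix=true")
            .loadBalance().roundRobin()
            .to("http://server1:8080?bridgeEndpoint=true&throwExceptionOnFailure=false",
                "http://server2:8080?bridgeEndpoint=true&throwExceptionOnFailure=false");
    }
}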
Yes, of course you can use Camel as a load balancer. I have used it very successfully so far. Have a look at this discussion: Load balancing using Camel. It will be useful to get started. Have fun riding the Camel!
I'm trying to make HTTP requests from my Google App Engine webapp and discovered I have to use URLConnection, since it's the only whitelisted class. The corresponding Clojure library is clojure.contrib.http.agent, and my code is as follows:
(defroutes example
  (GET "/" [] (http/string (http/http-agent "http://www.example.com")))
  (route/not-found "Page not found"))
This works fine in my development environment - the browser displays the text from example.com. But when I test it with Google's development app server:
phrygian:example wei$ dev_appserver.sh war
2010-09-28 14:53:36.120 java[43845:903] [Java CocoaComponent compatibility mode]: Enabled
...
INFO: The server is running at http://localhost:8080/
It just hangs when I load the page - no error or anything. Any idea what might be going on?
http-agent creates threads, so that might be why it does not work.
From the API documentation:
Creates (and immediately returns) an Agent representing an HTTP request running in a new thread.
You could try http-connection, which is a wrapper around HttpURLConnection, so this should work.
Another alternative is to try clj-http. The API seems a bit more high-level, but it uses Apache HttpComponents, which might be blacklisted.
I am guessing http.async.client is a definite no-go due to its strong asynchronous approach.
You might want to try appengine.urlfetch/fetch from appengine-clj (http://github.com/r0man/appengine-clj, also on Clojars).