I am new to AppDynamics, so I am confused about how to decide on nodes and tiers.
My application is using :
Angular JS and TypeScript : Frontend
fastapi : Backend
AWS EKS cluster and S3 bucket and CloudFront : for frontend and backend deployment
I am also using some Data-Management-Platform APIs and SNOW APIs
I can't decide how many nodes I need in this application, or how to decide whether a given part should be a node or a tier.
Put simply: a Node is an instance of an application; a Tier is a collection of instances that share the same functionality.
"Angular JS and TypeScript : Frontend" - You would need Browser Real User Monitoring (BRUM) to monitor the front end. This is not organised into Tiers and Nodes, but rather into page views and browser sessions.
"fastapi : Backend" - Assuming a set of Nodes with the same functionality, here you may want a 'fastapi' Tier which contains a number of Nodes. So one might be Tier = 'fastapi', Node = 'fastapi-1' and another might be Tier = 'fastapi', Node = 'fastapi-2'. If there are different types of Node (different functionality), these should be arranged into different Tiers (e.g. "Authentication" or "Reporting").
"AWS EKS cluster and S3 bucket and CloudFront : for frontend and backend deployment" - Here you should likely be using the Cluster Agent, which uses its own concepts based on the Kubernetes architecture (clusters, namespaces, pods) rather than Tiers and Nodes.
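As a sketch of the Tier/Node naming for the FastAPI backend, a minimal AppDynamics Python agent config might look like the following (controller details, app name, and access key are placeholders, and the exact keys can vary by agent version; on EKS you would typically let each pod register as its own Node under the same Tier):

```ini
[agent]
app = MyEKSApp
tier = fastapi
node = fastapi-1

[controller]
host = mycontroller.saas.appdynamics.com
port = 443
ssl = on
account = myaccount
accesskey = <access-key>
```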
Docs:
https://docs.appdynamics.com/21.9/en/end-user-monitoring/browser-monitoring
https://docs.appdynamics.com/21.9/en/application-monitoring/tiers-and-nodes
https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent
I have Vespa.ai cluster with multiple container/content nodes. After Vespa is loaded with data, my app sends queries and gets the data from Vespa. I want to be sure that I utilize well all the nodes and I get the data as fast as possible. My app builds HTTP request and sends it to one of the nodes.
Which node/nodes should I direct my request to?
How can I be sure that all instances participate in answering queries?
What should I do to utilize all the cluster nodes?
Does Vespa know to load balance these requests to other instances for better performance?
Vespa is a 2-tier system: stateless container nodes, which process queries, and content nodes, which store and match the data.
The containers will load balance over the content nodes (if you have multiple groups), but since you are sending the requests to the containers, you need to load balance over those.
This can be done by code you write in your client, by a VIP, by another tier of nodes you host yourself (such as Nginx), or by a hosted load balancer such as AWS ELB.
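For the self-hosted option, a minimal Nginx sketch that round-robins over the container nodes could look like this (hostnames and ports are placeholders):

```nginx
# Round-robin over the Vespa container (stateless) nodes
upstream vespa_containers {
    server container-node-1:8080;
    server container-node-2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://vespa_containers;
    }
}
```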
You can debug the distributed query execution by adding &presentation.timing=true&trace.timestamps&tracelevel=5
to the search request; you'll then get a trace in the response showing how the query was dispatched and how long each node spends matching it. See also Scaling Vespa: https://docs.vespa.ai/en/performance/sizing-search.html
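The trace parameters above can also be attached programmatically. A minimal Java sketch that builds such a request URL (hostname, port, and query are placeholders, not part of the original answer):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class VespaTraceQuery {
    // Builds a Vespa search URL with query tracing enabled.
    static String traceUrl(String container, String yql) {
        String q = URLEncoder.encode(yql, StandardCharsets.UTF_8);
        return "http://" + container + ":8080/search/?yql=" + q
                + "&presentation.timing=true"
                + "&trace.timestamps=true"
                + "&tracelevel=5";
    }

    public static void main(String[] args) {
        System.out.println(traceUrl("vespa-container-1",
                "select * from sources * where true"));
    }
}
```

The response to such a request includes a trace section showing per-node dispatch and match timings.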
We are using Spring Cloud Gateway [Java DSL] as our API gateway. For the proxies we have multiple microservices [running on different ip:port] as targets. Would like to know whether we can configure multiple targets for Spring Cloud Gateway proxies, similar to the Apache Camel load balancer EIP:
camel.apache.org/manual/latest/loadBalance-eip.html
We are looking for software load balancing within Spring Cloud Gateway [similar to Netflix/Apache Camel] instead of another dedicated LB.
I was able to get a Spring Cloud Gateway load-balanced route working using spring-cloud-starter-netflix-ribbon. However, when one of the server instances is down, load balancing fails. Code snippets below.
Version :
spring-cloud-gateway : 2.1.1.BUILD-SNAPSHOT
Gateway Route
.route(r -> r
        .path("/res/security/")
        .filters(f -> f
                .preserveHostHeader()
                .rewritePath("/res/security/", "/targetContext/security/")
                .filter(new LoggingFilter()))
        .uri("lb://target-service1-endpoints"))
application.yml
ribbon:
  eureka:
    enabled: false

target-service1-endpoints:
  ribbon:
    listOfServers: 172.xx.xx.s1:80, 172.xx.xx.s2:80
    ServerListRefreshInterval: 1000
    retryableStatusCodes: 404, 500
    MaxAutoRetriesNextServer: 1

management:
  endpoint:
    health:
      enabled: true
Here is the response from the Spring Cloud team:
What you've described does indeed happen. However, it is not gateway-specific. If you just use Ribbon in a Spring Cloud project with listOfServers, the same thing will happen. This is because, unlike for Eureka, the IPing for the non-discovery-service scenario is not instrumented (a DummyPing instance is used).
You could probably change this behaviour by providing your own IPing, IRule or ServerListFilter implementation and overriding the setup we provide in the autoconfiguration in this way.
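As a concept sketch (not the actual Spring Cloud autoconfiguration; the class name and health path are assumptions), the following self-contained Java mirrors what a custom IPing does: probe each server in listOfServers and drop the dead ones. In a real gateway you would instead register a com.netflix.loadbalancer.IPing bean (e.g. PingUrl) to replace the DummyPing.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RibbonStylePing {
    // Probe one server with an HTTP GET; the /actuator/health path is an assumption.
    static boolean isAlive(String hostPort) {
        try {
            HttpURLConnection c = (HttpURLConnection)
                    new URL("http://" + hostPort + "/actuator/health").openConnection();
            c.setConnectTimeout(1000);
            c.setReadTimeout(1000);
            return c.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // connection refused or timed out => instance is down
        }
    }

    // Keep only live servers; the probe is injectable so the logic is testable offline.
    static List<String> filterLive(List<String> servers, Predicate<String> probe) {
        return servers.stream().filter(probe).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Demo with an injected probe so it runs without a network.
        System.out.println(filterLive(
                List.of("172.xx.xx.s1:80", "172.xx.xx.s2:80"),
                s -> s.contains("s1")));
    }
}
```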
https://github.com/spring-cloud/spring-cloud-gateway/issues/1482
So I got my IdentityServer project up and running, and am setting up my project to publish. Now, when I define my client in the config for IS4, I suppose I will have to set my redirect URLs to my publish domain, something like this:
new Client {
    ...
    RedirectUris = { "localhost:5002/signin-oidc", "myclient.com/signin-oidc" }
    ...
}
Is including the localhost and domain the right way to do this?
I am thinking it would be OK, since an attacker would have to have my client secret in order to log in. Or is it better to set up two separate clients (e.g. 'client' and 'client_local') and request the appropriate client at startup?
There are two ways:
1) Use a configuration file: You can store the clients in a JSON file and load them during startup, using different JSON files for different environments.
For example, clients.Development.json for the Development environment and clients.Production.json for Production. However, the clients will be in-memory clients, and any change to the client configuration will require a restart of your application.
2) Use persistent storage: Use a database server to store configuration and operational data - a local database for development and a separate database for production.
See the docs. The example uses Entity Framework for persistent storage, but you're not bound to Entity Framework or any particular ORM; you can opt to write your own data access layer for IdentityServer. This will allow you to change client configurations without restarting your application, as the data will be retrieved from a database.
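For option 1, a clients.Production.json sketch could look like the following (all values are placeholders; the property names must match IdentityServer4's Client model so they bind correctly from configuration):

```json
[
  {
    "ClientId": "client",
    "ClientSecrets": [ { "Value": "<hashed-secret>" } ],
    "AllowedGrantTypes": [ "hybrid" ],
    "RedirectUris": [ "https://myclient.com/signin-oidc" ],
    "AllowedScopes": [ "openid", "profile" ]
  }
]
```

A clients.Development.json would carry the localhost redirect URI instead, so each environment only ever exposes its own redirect URLs.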
Is it possible to do scheduled deletion using the Node SDK?
I can't find a function or parameter in the SDK that would let me do it.
If it's not possible with the SDK as-is, any pointers for a workaround (e.g. how to manually craft an HTTP request in Node that would serve the same purpose - can I use the Node SDK to prepare a template request or token?) would be really useful.
Not within the SDK itself - COS doesn't support object expiry or lifecycle policies yet.
To schedule operations within your application logic, you might want to check out the later or node-cron packages.
I'm trying to set up two Google App Engine modules where one of the modules is configured with Basic scaling so it can handle long-running computation. The front-end module interacts with the user and enqueues tasks.
I need the front-end module to be able to enqueue a task, and the back-end module to pick up the task and execute it. I've gotten it mostly to work, except that when I enqueue the task, it gets assigned to run in the front-end module rather than the back-end module.
The problem is in the development server environment. On production App Engine it seems clear how to do it, by simply setting the "Host" header:
Queue queue = QueueFactory.getDefaultQueue();
TaskOptions taskOptions = TaskOptions.Builder
        .withUrl("/longtest")
        .param("content", content)
        .header("Host", "nbsocialmetrics-backend");
log.info("SignGuestbookServlet taskOption " + taskOptions);
queue.add(taskOptions);
But in the development server, modules are addressed by port numbers rather than by module name. I don't think using the <target> parameter will work either, because it also addresses the module by name rather than by port number.
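For reference, the queue-level <target> routing mentioned above is configured in queue.xml roughly like this (the queue name and rate are placeholders; as noted, this name-based routing may not resolve on the development server):

```xml
<queue-entries>
  <queue>
    <name>backend-queue</name>
    <rate>5/s</rate>
    <target>nbsocialmetrics-backend</target>
  </queue>
</queue-entries>
```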
I've found that the approach with the Host header DOES work on the development server with App Engine SDK version 1.9.15.
Note: this was also posted as an issue on code.google.com: "Task queues don't work with multiple modules on development server".