Karaf Cellar and Managed Service

I have a cluster of two nodes with Cellar. There are managed services registered against a configuration created with the Configuration Admin service.
When I update the configuration, only one managed service is triggered instead of two.
But when I stop the first node, the managed service on the other node is triggered and receives the updated configuration.
I expect the managed services to be triggered on both nodes at the same time.
Thanks
Configuration
Karaf 3.0.7
Karaf Cellar 3.0.3
Event producer=true
Event consumer=true
bundle.listener = false
config.listener = true
feature.listener = false
default.bundle.sync = cluster
default.config.sync = cluster
default.feature.sync = cluster
default.obr.urls.sync = cluster
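For context, a minimal sketch of the kind of ManagedService involved (the PID my.config.pid and the registration code are assumptions, not from the question; only the updated() callback matters here):
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

public class MyManagedService implements ManagedService {

    // Register against the PID whose configuration Cellar distributes
    // across the cluster ("my.config.pid" is a placeholder).
    public void register(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put(Constants.SERVICE_PID, "my.config.pid");
        context.registerService(ManagedService.class.getName(), this, props);
    }

    @Override
    public void updated(Dictionary<String, ?> config) throws ConfigurationException {
        // Expected to fire on every node when the cluster configuration is
        // updated; the problem described above is that it fires on only one
        // node until the other node is stopped.
        if (config != null) {
            System.out.println("Configuration updated: " + config);
        }
    }
}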

How to decide the nodes and tiers in AppDynamics

I am new to AppDynamics, so I am confused about how to decide on nodes and tiers.
My application is using:
Angular JS and TypeScript : Frontend
fastapi : Backend
AWS EKS cluster and S3 bucket and CloudFront : for frontend and backend deployment
I am also using some Data-Management-Platform APIs and SNOW APIs
I can't decide how many nodes I need in this application, or how to decide whether a given part should be a node or a tier.
Put simply: A Node is an instance of an application, a Tier is a collection of instances which have the same functionality.
"Angular JS and TypeScript : Frontend" - You would need Browser Real User Monitoring (BRUM) in order to monitor the front end. This is not organised into Tiers and Nodes, but rather page views and browser sessions.
"fastapi : Backend" - Assuming a set of Nodes with the same functionality, here you may want to have a 'fastapi' Tier which contains a number of Nodes. So one might be Tier = 'fastapi', Node = 'fastapi-1' and another might be Tier = 'fastapi', Node = 'fastapi-2'. If there are different types of Node (different functionality) these should be arrange into different Tiers (e.g. "Authentication", or "Reporting")
"AWS EKS cluster and S3 bucket and CloudFront : for frontend and backend deployment" - Here you should likely be using the Cluster Agent which again uses different concepts based on Kubernetes architecture
Docs:
https://docs.appdynamics.com/21.9/en/end-user-monitoring/browser-monitoring
https://docs.appdynamics.com/21.9/en/application-monitoring/tiers-and-nodes
https://docs.appdynamics.com/21.9/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent

Zookeeper + Solr in TestContainers

I'm using org.testcontainers to perform integration testing with Solr.
[Using SolrJ in my unit tests]
When I start Solr in cloud mode, using an embedded ZooKeeper instance, I'm able to connect to the Solr instance from my unit test, but unable to connect to ZooKeeper from my SolrClient.
I think this is because the embedded ZooKeeper is bound to IP 127.0.0.1 and therefore inaccessible.
If I start two separate containers [using a shared network], ZooKeeper and Solr, I can connect Solr to ZooKeeper, and I can connect to ZooKeeper from my unit tests, BUT when ZooKeeper returns the active Solr node, it returns the internal server IP, which is not accessible from my unit test [in my SolrJ client].
I'm not sure where to go with this.
Maybe there is a network mode that will do address translation?
Thoughts?
UPDATE:
There is an official Testcontainers Module: https://www.testcontainers.org/modules/solr/
It seems that this problem can't be solved that easily.
One way would be to use fixed ports with Testcontainers. In this case the ports 9983 and 8983 will be mapped to the same ports on the host. This makes it possible to use the SolrCloud client. But this only works if you can ensure that tests run sequentially, which can be a bit tricky, e.g. on Jenkins with feature branches.
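A rough sketch of that fixed-port approach, assuming the official solr image (image tag and startup command are assumptions; Testcontainers' FixedHostPortGenericContainer is deprecated for exactly the parallelism reasons mentioned above):
import org.testcontainers.containers.FixedHostPortGenericContainer;

// Bind Solr (8983) and the embedded ZooKeeper (9983) to the same ports on
// the host, so the addresses ZooKeeper advertises are reachable from tests.
FixedHostPortGenericContainer<?> solr =
        new FixedHostPortGenericContainer<>("solr:8.3.0")
                .withFixedExposedPort(8983, 8983)
                .withFixedExposedPort(9983, 9983)
                .withCommand("solr-foreground", "-c");
solr.start();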
A different solution would be to use another client. Since SolrJ provides multiple clients, you can choose which one you want to use. If you only want to search or update, you can use the LBHttp2SolrClient, which load balances between multiple nodes. If you want to use a specific client for the integration tests, this example could work:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.response.SolrPingResponse;
import org.testcontainers.containers.SolrContainer;

// Create the Solr container.
SolrContainer container = new SolrContainer();
// Start the container. This step might take some time...
container.start();
// Build a plain HTTP client against the mapped port (getHost() typically
// resolves to localhost) and do whatever you want with it...
SolrClient client = new Http2SolrClient.Builder(
        "http://" + container.getHost() + ":" + container.getSolrPort() + "/solr").build();
SolrPingResponse response = client.ping("dummy");
// Stop the container.
container.stop();
Here is a list of the Solr clients available in Java: https://lucene.apache.org/solr/guide/8_3/using-solrj.html#types-of-solrclients
I ran into the exact same issue. I solved it using a proxy. In my docker-compose.yml I added:
squid:
  image: sameersbn/squid:3.5.27-2
  ports:
    - "3128:3128"
  volumes:
    - ./squid.conf:/etc/squid/squid.conf
    - ./cache:/var/spool/squid
  restart: always
  networks:
    - solr
And in the configuration of the SolrClient I added:
[...]
HttpClient httpClient = HttpClientBuilder.create()
        .setProxy(new HttpHost("localhost", 3128))
        .build();
CloudSolrClient c = new CloudSolrClient.Builder(getZookeeperList(), Optional.empty())
        .withHttpClient(httpClient)
        .build();
[...]
protected List<String> getZookeeperList() {
    List<String> zookeeperList = new ArrayList<>();
    for (Zookeepers z : Zookeepers.values()) {
        zookeeperList.add(testcontainer.getServiceHost(z.getServicename(), z.getPort()) + ":"
                + testcontainer.getServicePort(z.getServicename(), z.getPort()));
    }
    return zookeeperList;
}
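For reference, the squid.conf mounted above can stay minimal in a test setup; a permissive sketch (not production-safe) would be:
http_port 3128
http_access allow all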
But I'd still be interested in the workaround that Jeremy mentioned in his comment.

Testing multiple AppEngine services with task queues

I have an AppEngine app with 2 services, where Service A is queueing tasks for Service B using the task (push) queue. How do I test this using the development server? When running multiple services with the development server, each service gets a unique port number, and the task queue can't resolve the URL because the target URL is actually running on another port, i.e. Service A is on port 8080 and Service B is on port 8081. This all works great in production where everything is on the same port, but how do I go about testing this locally?
The push queue configuration allows for specifying the target service by name, which the development server understands. From Syntax:
target (push queues)
Optional. A string naming a service/version, a frontend version, or a
backend, on which to execute all of the tasks enqueued onto this
queue.
The string is prepended to the domain name of your app when
constructing the HTTP request for a task. For example, if your app ID
is my-app and you set the target to my-version.my-service, the
URL hostname will be set to
my-version.my-service.my-app.appspot.com.
If target is unspecified, then tasks are invoked on the same version
of the application where they were enqueued. So, if you enqueued a
task from the default application version without specifying a target
on the queue, the task is invoked in the default application version.
Note that if the default application version changes between the time
that the task is enqueued and the time that it executes, then the task
will run in the new default version.
If you are using services along with a dispatch file, your task's
HTTP request might be intercepted and re-routed to another service.
For example, a basic queue.yaml would be along these lines:
queue:
- name: service_a
  target: service_a
- name: service_b
  target: service_b
I'm not 100% certain if this alone is sufficient; personally I'm also using a dispatch.yaml file, as I need to route requests other than tasks. But for that you need a well-defined pattern in the URLs, since host-name based patterns aren't supported in the development server. For example, if the Service A requests use /service_a/... paths and the Service B requests use /service_b/... paths, then these would do the trick:
dispatch:
- url: "*/service_a/*"
  service: service_a
- url: "*/service_b/*"
  service: service_b
In your case it might be possible to achieve what you want with just a dispatch file - i.e. still using the default queue. Give it a try.
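Seen from the enqueueing side, with queues targeted as above, Service A never needs to know Service B's port. A minimal sketch using the App Engine Java task queue API (the question doesn't name a language, and the URL path here is hypothetical):
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Enqueue onto the queue whose `target` routes to service_b; the
// development server resolves the service name to the right port.
Queue queue = QueueFactory.getQueue("service_b");
queue.add(TaskOptions.Builder.withUrl("/service_b/tasks/process")
        .param("payload", "hello"));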

How to make a request in SolrCloud?

I have a Node.js app, and I used to have a standalone Solr, but then our company decided to use SolrCloud to provide failover.
With the standalone Solr I had only the one server, and all my requests looked like http://solr_server:8983/solr/mycore/select?indent=on&q=*:*&wt=json, so all requests went to the same server all the time.
But now I have 3 different instances with 1 ZooKeeper and 1 Solr node on each of them, and my requests look like this: http://solr_server_1:8983/solr/mycollection/select?q=*:*
And now the question: what if solr_server_1 goes down? How can I still get my results? How do I handle requests in this case?
If you're doing this manually: You'll have to catch the exception when the connection fails, and then retry the next server in your list.
let servers = ['ip1:8983', 'ip2:8983', 'ip3:8983']
If you're using a library that supports ZooKeeper (i.e. it connects to ZooKeeper to find out what the live nodes are), you give the client the list of ZooKeeper nodes and let it figure out the rest. node-solr-smart-client is a client that supports ZooKeeper.
const options = {
  zkConnectionString: 'ip1:2181,ip2:2181,ip3:2181',
  // etc.
};
solrSmartClient.createClient('my_solr_collection', options, function (err, solrClient) {
  // use solrClient here
});
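For comparison, the equivalent ZooKeeper-aware client in Java/SolrJ (hosts are placeholders): CloudSolrClient asks ZooKeeper for the live nodes, so a single Solr node going down is handled for you.
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Hand the client the ZooKeeper ensemble rather than individual Solr nodes.
List<String> zkHosts = Arrays.asList("ip1:2181", "ip2:2181", "ip3:2181");
CloudSolrClient client = new CloudSolrClient.Builder(zkHosts, Optional.empty()).build();
client.setDefaultCollection("mycollection");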

Apache Sticky sessions

I have configured a sticky-session setup with a load balancer (Apache) and three app nodes running JBoss 4.2.2.
The load balancer uses mod_jk with the settings mentioned in the tutorial here:
http://community.jboss.org/wiki/UsingModjk12WithJBoss
I have the jvmRoute set as node1, node2 and node3 for the three nodes, and my worker list property for the load balancer is set as
node1,node2,node3
The tutorial has been followed up to the last point, but I did not configure the UseJK parameter; the value is still set to false.
The sticky sessions are holding up, but I seem to lose sessions and get this error in my mod_jk log file:
[error] ajp_get_reply::jk_ajp_common.c (1926): (node1) Timeout with waiting reply from tomcat. Tomcat is down, stopped or network problems (errno=110)
I personally verified that a user logged in on node1 and was then moved to node2.
Does Apache redirect to another node when it fails to get a reply from node1? How does UseJK help in this situation?
---edit 01---
I changed the UseJK value to true, but a few users still experience sudden logouts, which I know is due to a change in the server node serving the user's requests.
I also wanted to know whether traffic on the nodes has any effect on sticky sessions and how to counter it (I have been experiencing high load on all the servers for a few days).
---edit 02---
I would also like to know about controlling the number of connections per worker:
the number of AJP connector connections,
the relation between the number of connections of the Apache load balancer and the number of AJP connections in the JBoss worker nodes, and
what the best configuration would be between Apache 2.2.3 and JBoss 4.2.2 worker nodes with Tomcat 5.5 connectors.
---edit 03---
http://community.jboss.org/wiki/OptimalModjk12Configuration
Using the above article, I just wanted to know the best values for Apache's
MaxClients
ThreadsPerChild
I found the following note in this article interesting. I haven't tried it, but perhaps it could be useful for someone experiencing the same problem.
If you are using mod_jk and have turned sticky sessions on, but your sessions are failing to stick, you have probably failed to set the domain, or you have failed to set the jvmRoute, or you are using a non-standard cookie name to implement the stickyness!
I think that in your workers.properties file the worker list should contain the load balancer worker, not node1, node2 & node3. It should be like this:
worker.list=loadmanager
worker.loadmanager.type=lb
worker.loadmanager.balance_workers=node1,node2,node3
I hope you have these set correctly.
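For completeness, a minimal workers.properties along those lines might look like this (host names and ports are placeholders; sticky_session defaults to true on an lb worker but is shown explicitly):
worker.list=loadmanager
worker.loadmanager.type=lb
worker.loadmanager.balance_workers=node1,node2,node3
worker.loadmanager.sticky_session=true
worker.node1.type=ajp13
worker.node1.host=node1.example.com
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=node2.example.com
worker.node2.port=8009
worker.node3.type=ajp13
worker.node3.host=node3.example.com
worker.node3.port=8009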
Also, you have to set the UseJK attribute to true for load balancing with sticky sessions combined with jvmRoute. If set to true, it will insert a JvmRouteFilter to intercept every request and replace the jvmRoute if it detects a failover.
<attribute name="UseJK">true</attribute>
in deploy/jboss-web.deployer/META-INF/jboss-service.xml
