Distributed cache using JCS - distributed

I am developing an application to manage cache consistency in a distributed environment.
I have a clustered WebLogic environment in which there are multiple managed servers (possibly on different IPs).
A Java application will be deployed on all managed servers. An application on managed server 1 can update the cache, and that update has to be reflected in the cache of managed server 2.
I found that the JCS lateral cache is suitable for this, but I am struggling to configure the .ccf file for this scenario.
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:XXXX,localhost:YYYY
jcs.auxiliary.LTCP.attributes.TcpListenerPort=ZZZZZ
Can someone explain:
How to create the above two pieces of configuration?
How can I know the ports to configure?
Thanks in advance.

Check out this link:
http://commons.apache.org/proper/commons-jcs/LateralTCPAuxCache.html
There are two types of configuration: TCP and UDP.
The TCP configuration requires an IP address and port number in the configuration file:
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:1111,localhost:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.AllowGet=false
The above link has more description on the properties and how it works.
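To make that concrete for the original two managed servers: TcpListenerPort is the port this JVM listens on, and TcpServers lists the host:listener-port pairs of the other managed servers. The port numbers are arbitrary free ports you choose; they only have to match across the files and be open between the servers. A minimal cache.ccf sketch for managed server 1 (the host names and ports 1110/1111 are assumptions):
# cache.ccf on managed server 1 (example values only)
jcs.default=LTCP
jcs.default.cacheattributes=org.apache.commons.jcs.engine.CompositeCacheAttributes
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
# host:listener-port pairs of the OTHER managed servers
jcs.auxiliary.LTCP.attributes.TcpServers=server2.example.com:1111
# the port THIS node listens on
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.AllowGet=false
On managed server 2 the values are mirrored: TcpServers=server1.example.com:1110 and TcpListenerPort=1111.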
I am going to use these configurations myself. Will let you know how it goes.
-Bini

Related

Are there any code samples for Uno platform accessing MSSQL, PostgreSQL or MySQL?

I have tried various ways to connect to different DB engines (asynchronously), but they all failed when I deployed the code and executed it via my browser in WASM format. The code worked well for UWP though, so I'm a bit baffled.
Although there's a sample for SQLite in the browser, it wasn't too helpful for me. Hopefully someone could give me a few pointers to continue. Thanks in advance.
The SQLite support is about running the database inside the browser itself, not about connecting to a remote database.
If you need such support, you will need a .NET SQL provider that works over plain HTTP/S or WebSockets, which some cloud-based databases offer.
In general, though, you may want to treat a WebAssembly app like a mobile app, for which it is best to access remote resources such as databases through a Web API.
Note that the Chrome developers are considering a raw sockets API, which would enable non-HTTP TCP connections to be created.

Best way to update Spartan Config

I've installed DC/OS on a new cluster and am learning it. Bootstrapping and installing went relatively smoothly; I chose the advanced method and found it to be the easiest to get working with our system.
Once deployed, I'm confused about how I am to go about updating the cluster configuration (the values I'd provided with bootstrap). Does DC/OS do anything to help here, or is configuration relatively static?
Specifically, I'd like to modify the configuration of Spartan to:
Only listen on the dummy device (it's listening on all of them at the moment)
Configure a zone-specific resolver (I was told it's possible: https://github.com/mesosphere/mesos-dns/pull/441)
According to the DC/OS 1.9 docs (see the "resolvers" option), upstream DNS servers cannot be changed once provided during bootstrap:
"Upstream DNS Servers [...] Caution: If you set this parameter incorrectly you will have to reinstall DC/OS."
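For reference, those upstream resolvers are the ones listed under resolvers in the genconf/config.yaml used by the advanced installer; a minimal sketch (the cluster name and addresses are examples only), and per the caution above they cannot be changed afterwards without reinstalling:
# genconf/config.yaml (advanced installer), excerpt
cluster_name: example-cluster
resolvers:
  - 8.8.8.8   # upstream DNS server 1
  - 8.8.4.4   # upstream DNS server 2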

Embedded Solr on Amazon AWS

I have developed a web application that uses an embedded Solr server for indexing. I deployed it on Tomcat 6 on Windows XP and everything was OK. Next, I tried to deploy the web application on Amazon AWS; my platform is Linux + MySQL. When I deployed it, I got an exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
How can I fix this problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try that from the start. It's probably a bit more troubleshooting up front, but a much better payoff than starting with a legacy configuration.
I found the solution. It was a permissions issue on Amazon Linux with the ec2-user, so I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance available to collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.
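If you prefer the command line over the console, the same security-group rules can be added with the AWS CLI; a sketch assuming a configured CLI, with the group ID and the 4.2.2.1/32 source address as placeholders:
# allow SSH (22) and Solr (8983) only from your own IP (replace group id and CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 4.2.2.1/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8983 --cidr 4.2.2.1/32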

Apache Mod_JK and Load Balancing

I am using Tomcat 6 and have some questions about Apache mod_jk as follows.
1. Do I have to install the Apache web server to use mod_jk?
2. If I run applications on two servers under Tomcat and load balance between them using mod_jk, will this also check the availability of the applications, i.e. will it only send requests to one server if the application is down on the other?
3. If it checks for availability, do you need to have multicast available on the network?
4. We intend to use Tomcat clustering as well; will this work with mod_jk?
5. Is there anything else I could use to load balance, with availability checking, for applications running on Tomcat?
Any help will be appreciated.
Cheers
Jeff
1. Yes.
2. Yes, unless you go out of your way to configure mod_jk not to do that.
3. No.
4. Yes, but it is not necessary.
5. Pretty much any hardware load balancer, or pretty much any web server that supports reverse proxying over HTTP or AJP.
You would be much better off using mod_proxy_ajp rather than mod_jk for this. It's much simpler to configure: none of those nasty JkMount directives or the Tomcat listener that supposedly 'auto-configures' it for you (but doesn't), and it works a lot better too. It's also not deprecated, as mod_jk has been since Tomcat 5.5.
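A minimal httpd sketch of that setup using mod_proxy_ajp with mod_proxy_balancer (the host names, AJP port 8009 and the /app context are assumptions; the route values must match each Tomcat's jvmRoute for sticky sessions to work):
# requires mod_proxy, mod_proxy_ajp, mod_proxy_balancer and mod_lbmethod_byrequests
<Proxy balancer://mycluster>
    BalancerMember ajp://tomcat1.example.com:8009 route=node1
    BalancerMember ajp://tomcat2.example.com:8009 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass        /app balancer://mycluster/app
ProxyPassReverse /app balancer://mycluster/app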
Yes, you must have Apache httpd installed on your web server; on it you can perform load balancing using mod_jk, mod_cluster, or mod_proxy. I assume you are currently using mod_jk.
You are right; this can be handled using sessions. If you want each session to go to one corresponding server instance only, you can enable session stickiness. The load balancing will be based on the "lbfactor" you set in your mod_jk worker.properties, and a "redirect" option for failover is also available in worker.properties. Failover can be done from the application-server side as well.
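A worker.properties sketch of that setup (the host names, ports and worker names are assumptions; lbfactor weights the traffic, sticky_session pins a session to one node, and redirect names the failover target):
# worker.properties (example values only)
worker.list=loadbalancer
worker.node1.type=ajp13
worker.node1.host=tomcat1.example.com
worker.node1.port=8009
worker.node1.lbfactor=1
worker.node1.redirect=node2
worker.node2.type=ajp13
worker.node2.host=tomcat2.example.com
worker.node2.port=8009
worker.node2.lbfactor=1
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=true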
As far as I know, if you enable failover in the application server, a multicast address will be available by default. The only thing you need to do is open the port.
mod_jk will work with clustering in Tomcat/JBoss perfectly.
As I mentioned above in answer 1, you can use any load balancer for Tomcat.

Solr: replication options

I've got a Solr instance running behind a firewall. I'm about to put up another instance which will not be firewalled. However, Solr appears to only support pull replication and not push replication.
What are my options with regard to maintaining the same level of security? I'd rather not open too many ports in the firewall. Would HTTP over an SSH tunnel be the best option? Would it also be possible to just replicate the index files using plain old rsync (not using any Solr-specific features), or would this break something?
Would it also be possible to just replicate the index files using plain old rsync
Solr actually supports this kind of distribution with its snappuller mechanism, documented here: http://wiki.apache.org/solr/CollectionDistribution
I would open a port and specify the IP address of the slave, and just use ordinary HTTP-based replication; that would be quite secure, I think, and easier to maintain probably. I know it's not exactly where you were angling, but it's what I'd recommend.
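For ordinary HTTP-based (pull) replication, the unfirewalled slave simply polls the master's replication handler; a solrconfig.xml sketch for the slave side (the master URL and poll interval are assumptions):
<!-- slave-side solrconfig.xml: poll the firewalled master once a minute -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://hostA:8080/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>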
I'm answering my own question, as the solution I went for is different from what the two other answers suggested. I ended up using an SSH tunnel for the HTTP traffic: I used SSH to redirect all traffic to port 8080 on host A to port 8080 on host B through an SSH tunnel.
The solution appears to be working fine. I'm using a script which validates the tunnel every 5 minutes or so.
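For anyone wanting to reproduce this, a plain local port forward is enough; a sketch with the user and host names as placeholders:
# forward local port 8080 on host A to port 8080 on host B; -N = no remote command, -f = background
ssh -f -N -L 8080:localhost:8080 user@hostB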
You could use HTTP basic authentication (see https://wiki.apache.org/solr/SolrReplication#Slave) but since the password will be passed in plain text, an SSH tunnel or secure VPN would also be required in order to deter more determined attackers.
I'll be going for a VPN solution to start with and consider an SSH tunnel before moving to production if we feel we are unable to place sufficient trust in our internal networks.
