Apache Mod_JK and Load Balancing - tomcat6

I am using Tomcat 6 and have some questions about Apache mod_jk:
1. Do I have to install the Apache web server to use mod_jk?
2. If I run applications on two servers under Tomcat and load-balance between them using mod_jk, will it also check the availability of the applications, i.e. will it only send requests to one server if the application is down on the other?
3. If it checks for availability, do you need multicast available on the network?
4. We intend to use Tomcat clustering as well; will this work with mod_jk?
5. Is there anything else I could use to load-balance, with availability checking, for applications running on Tomcat?
Any help will be appreciated.
Cheers
Jeff

1. Yes.
2. Yes, unless you go out of your way to configure mod_jk not to.
3. No.
4. Yes, but it is not necessary.
5. Pretty much any hardware load balancer, and pretty much any web server that supports reverse proxying over HTTP or AJP.

You would be much better off using mod_proxy_ajp rather than mod_jk for this. It is much simpler to configure: none of those fiddly JkMount directives or the Tomcat listener that 'auto-configures' them for you, and it works well. It also ships with Apache httpd 2.2 and later, so there is no separate connector module to build and keep up to date.
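As a rough sketch, a load-balancing setup with mod_proxy_ajp might look like the following httpd configuration, assuming mod_proxy, mod_proxy_ajp and mod_proxy_balancer are loaded; the host names, balancer name and context path are placeholders:

<Proxy balancer://mycluster>
    BalancerMember ajp://tomcat1.example.com:8009 route=node1
    BalancerMember ajp://tomcat2.example.com:8009 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /myapp balancer://mycluster/myapp

mod_proxy_balancer puts a member that stops responding into an error state and sends requests to the remaining members, which covers the availability checking asked about above.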

1. Yes, you must have Apache httpd installed on your web server; on top of it you can load-balance using mod_jk, mod_cluster or mod_proxy. It sounds like you are currently using mod_jk.
2. You are right, mod_jk can do this. If you want each session to stay on one server instance, enable session stickiness. The load balancing is weighted by the "lbfactor" you set in mod_jk's worker.properties, and worker.properties also offers a "redirect" option for failover (see the sketch after this list). Failover can be handled on the application-server side as well.
3. As far as I know, if you enable failover in the application server, a multicast address is available by default; the only thing you need to do is open the port.
4. mod_jk works with clustering in Tomcat/JBoss perfectly well.
5. As I mentioned in answer 1, you can use any of those load-balancing options with Tomcat.
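As an illustrative sketch of the worker.properties mentioned in answer 2, here is a two-node setup; the worker names, hosts and ports are placeholders, and the worker names should match each Tomcat's jvmRoute for stickiness to work:

worker.list=loadbalancer
# First Tomcat node
worker.node1.type=ajp13
worker.node1.host=tomcat1.example.com
worker.node1.port=8009
worker.node1.lbfactor=1
# Optional: send node1's sticky sessions to node2 if node1 fails
worker.node1.redirect=node2
# Second Tomcat node
worker.node2.type=ajp13
worker.node2.host=tomcat2.example.com
worker.node2.port=8009
worker.node2.lbfactor=1
# The load balancer itself
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=true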

Related

Distributed cache using JCS

I am developing an application to manage cache consistency in a distributed environment.
I have a clustered WebLogic environment in which there are multiple managed servers (possibly on different IPs).
A Java application will be deployed to all managed servers. An application on managed server 1 can update the cache, and the update has to be reflected in the cache of managed server 2.
I found that the JCS lateral cache is suitable for this, but I am struggling to configure the .ccf for this scenario:
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:XXXX,localhost:YYYY
jcs.auxiliary.LTCP.attributes.TcpListenerPort=ZZZZZ
Can someone explain:
How do I create the above two pieces of configuration?
How do I know which ports to configure?
Thanks in advance.
Check out this link:
http://commons.apache.org/proper/commons-jcs/LateralTCPAuxCache.html
There are two types of lateral configuration, TCP and UDP. The TCP configuration requires an IP address and port number in the configuration file:
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:1111,localhost:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.AllowGet=false
The link above describes these properties and how they work in more detail.
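To make the port question concrete, here is a hypothetical two-node setup: each managed server listens on its own TcpListenerPort and lists the other servers in TcpServers, so the ports are simply free ports you choose yourself (host names and ports below are placeholders).

On managed server 1 (host1):
jcs.auxiliary.LTCP.attributes.TcpServers=host2:1111
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110

On managed server 2 (host2):
jcs.auxiliary.LTCP.attributes.TcpServers=host1:1110
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1111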
I am going to use these configurations myself. Will let you know how it goes.
-Bini

Embedded Solr on Amazon AWS

I have developed a web application that uses an embedded Solr server for indexing. I deployed it on Tomcat 6 on Windows XP and everything was OK. Next, I tried to deploy the web application on Amazon AWS; my platform is Linux + MySQL. When I deployed, I got an exception related to the embedded Solr:
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
How can I fix this problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to start with that instead: probably a bit more troubleshooting up front, but a much better pay-off than starting with a legacy configuration.
I found the solution. It was a permissions issue on Amazon Linux with the ec2-user, so I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
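Note that ugo+rw makes the whole tree world-writable. A tighter alternative, assuming Tomcat on the box runs as the tomcat user (the account name varies by distribution), would be to hand ownership of just the Solr home to that user:

sudo chown -R tomcat:tomcat /usr/share/tomcat6/solrhome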
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance so you can collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.
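For example, with the AWS CLI, opening those two ports to a single address might look like the following; the security-group name and IP are placeholders:

aws ec2 authorize-security-group-ingress --group-name solr-sg --protocol tcp --port 22 --cidr 4.2.2.1/32
aws ec2 authorize-security-group-ingress --group-name solr-sg --protocol tcp --port 8983 --cidr 4.2.2.1/32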

Install Jetty or run embedded for Solr install

I am about to install Solr on a production box. It will be the only Java application running, and it will be on the same box as the web server (nginx).
It seems there are two options.
1. Install Jetty separately and configure it for use with Solr.
2. Set Solr's embedded Jetty server to start as a service and just use that.
Is there any performance benefit in having them separate?
I am a big fan of KISS, the less setup the better.
Thanks
If you want KISS there is no question: option 2. Stick to the vanilla Solr distribution with the included Jetty.
Doing the work of installing an external servlet engine would make sense if you needed Tomcat, for example, but just to run the same thing (Jetty) that Solr already includes? No way.
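For reference, running the bundled Jetty is a one-liner from the example directory of the Solr distribution (layout as in the Solr 4.x download; wrap it in an init script to run it as a service):

cd example
java -jar start.jar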
Solr still ships with Jetty 6, so there could be some benefit in getting the Solr application to run in a recent Jetty distribution. For example, you could use Jetty 9 and features like SPDY to improve your application's response times.
However, I have no idea or experience whether it is possible to run the Solr application standalone in a separate servlet engine.
Another option for running Solr while keeping it simple is Solr-Undertow, a high-performance, small-footprint server for Solr. It is easy to use on local machines for development as well as in production. It supports simple config files for running instances with different data directories, ports and more, and it can even run by pointing it at a distribution .zip file without unpacking it.
(note, I am the author of Solr-Undertow)
Link here: https://github.com/bremeld/solr-undertow with releases under the "Releases" tab.

Set up a webserver for multiple users and make PHP scripts run under their accounts (with their permissions)

I'm setting up an Apache 2.2 web server for multiple users (all with the "developers" profile). They need to execute PHP scripts/applications (both home-made and acquired) and run them under their own accounts.
I tried using mod_userdir, but the problem is that Apache (and thus the scripts) runs as "www-data" (I'm using GNU/Debian).
So I looked at suPHP, but it doesn't support the php_admin_value Apache directive.
I also saw apache2-mpm-itk mentioned, but it uses virtual hosts, which themselves require DNS.
I think I could work around that by installing a DNS server on the web server, managing a subdomain via delegation (e.g. my web server's FQDN is "testsrv.mycompany.tld" and the users' virtual hosts would be "user1.testsrv.mycompany.tld", "user2.testsrv.mycompany.tld"). But that might be a bit "too much", no?
You could use virtual hosts along with mod_auth_basic, so user1 would have a password-protected site at www.user1.example.com.
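A minimal sketch of such a virtual host, assuming mod_auth_basic is enabled; the host name and paths are placeholders:

<VirtualHost *:80>
    ServerName www.user1.example.com
    DocumentRoot /home/user1/public_html
    <Directory /home/user1/public_html>
        AuthType Basic
        AuthName "user1 development site"
        AuthUserFile /etc/apache2/htpasswd.user1
        Require valid-user
    </Directory>
</VirtualHost>

The password file would be created with htpasswd -c /etc/apache2/htpasswd.user1 user1.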
If by 'php_admin_value' you are referring to the .htaccess files, then yes, they are not supported by suPHP, but I believe there is a way around that.
Finally, I am setting up my server locally (for testing), so I just updated my /etc/hosts file. That might be a good place for you to start.
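For instance, a single /etc/hosts line covers any test names you need (the host names are examples):

127.0.0.1   testsrv.mycompany.tld user1.testsrv.mycompany.tld user2.testsrv.mycompany.tld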

Solr: replication options

I've got a Solr instance running behind a firewall. I'm about to put up another instance which will not be firewalled. However, Solr appears to only support pull replication, not push replication.
What are my options with regard to maintaining the same level of security? I'd rather not open too many ports in the firewall. Would HTTP over a SSH tunnel be the best option? Would it also be possible to just replicate the index files using plain old rsync (not using any SOLR specific features) or would this break something?
"Would it also be possible to just replicate the index files using plain old rsync?"
Solr actually supports this kind of distribution with its snappuller mechanism, documented here: http://wiki.apache.org/solr/CollectionDistribution
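If you do go the plain-rsync route, copy a snapshot rather than the live index directory, since the index files can change mid-copy. A sketch, with placeholder paths and a snapshot name in the snapshot.yyyymmddHHMMSS style the snapshooter script produces:

rsync -avz --delete master:/var/solr/data/snapshot.20130101120000/ /var/solr/data/index/

This is essentially what the snappuller scripts automate, along with taking the snapshot on the master and telling Solr to open the new index.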
I would open a port and specify the IP address of the slave, and just use ordinary HTTP-based replication; that would be quite secure, I think, and easier to maintain probably. I know it's not exactly where you were angling, but it's what I'd recommend.
I'm answering my own question, as the solution I went for is different from what the two other answers suggested. I ended up using an SSH tunnel for the HTTP traffic: SSH redirects all traffic to port 8080 on host A through a tunnel to port 8080 on host B.
The solution appears to be working fine. I'm using a script which validates the tunnel every 5 minutes or so.
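The tunnel itself is a single command, run on host A (user and host are placeholders):

ssh -f -N -L 8080:localhost:8080 user@hostB

-N opens the tunnel without running a remote command and -f backgrounds it; a tool such as autossh could take over the keep-alive role of the validation script.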
You could use HTTP basic authentication (see https://wiki.apache.org/solr/SolrReplication#Slave) but since the password will be passed in plain text, an SSH tunnel or secure VPN would also be required in order to deter more determined attackers.
I'll be going for a VPN solution to start with and consider an SSH tunnel before moving to production if we feel we are unable to place sufficient trust in our internal networks.
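For reference, the slave side of the HTTP basic authentication option is configured in the replication handler in solrconfig.xml, roughly as follows; the master URL and credentials are placeholders:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8080/solr/replication</str>
    <str name="httpBasicAuthUser">username</str>
    <str name="httpBasicAuthPassword">password</str>
  </lst>
</requestHandler>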
