Change from local to external host - database

I am running Yugabyte using yb-ctl create, with --rf 3 to create a 3-node cluster. How can I make it listen on an external IP address instead of localhost, and run on three different IPs?

yb-ctl only works for local deployments, for quick debugging or testing. To bring up Yugabyte on three separate hosts, you can follow the instructions at https://docs.yugabyte.com/latest/deploy/manual-deployment/. The commands there are written for 4 hosts, but the process is very similar for 3.
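As a rough sketch of what those instructions boil down to (the IPs and data directory here are placeholders and the flag list is trimmed, so treat this as an outline rather than a complete recipe), you start a yb-master and a yb-tserver on each host, bind each to that host's IP, and point both at all three masters:

# On host 172.16.0.1; repeat on 172.16.0.2 and 172.16.0.3,
# changing --rpc_bind_addresses to the local IP each time.
./bin/yb-master \
  --master_addresses 172.16.0.1:7100,172.16.0.2:7100,172.16.0.3:7100 \
  --rpc_bind_addresses 172.16.0.1:7100 \
  --fs_data_dirs /mnt/data >& yb-master.out &
./bin/yb-tserver \
  --tserver_master_addrs 172.16.0.1:7100,172.16.0.2:7100,172.16.0.3:7100 \
  --rpc_bind_addresses 172.16.0.1:9100 \
  --fs_data_dirs /mnt/data >& yb-tserver.out &

The shared master address list is what makes the three hosts form one cluster; binding to each host's routable IP instead of 127.0.0.1 is what makes it reachable externally.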

Indeed, yb-ctl is for local clusters on a single node and is not meant for multi-node deployments. In addition to the manual install option, there are a number of orchestrated multi-node deployment options available:
Terraform on any cloud
CloudFormation on AWS, Deployment Manager on GCP, and ARM templates on Azure
If Kubernetes is of interest, that's another easy way to deploy, using Operators or Helm charts.

Related

Change "Solr Cluster" in Lucidworks Fusion 4

I am running Fusion 4.2.4 with external Zookeeper (3.5.6) and Solr (7.7.2). I have been running a local set of servers and am trying to move to AWS instances. All of the configuration from my local Zookeepers has been duplicated to the AWS instances so they should be functionally equivalent.
I am to the point where I want to shut down the old (local) Zookeeper instances and just use the ones running in AWS. I have changed the configuration for Solr and Fusion (fusion.properties) so that they only use the AWS instances.
The problem I have is that Fusion's Solr cluster (System->Solr Clusters) associated with all of my collections is still set to the old Zookeepers (:9983,:9983,:9983), so if I turn off all of the old instances of Zookeeper, my queries through Fusion's Query API no longer work. When I try to change the "Connect String" for that cluster, it fails because the cluster is currently in use by collections. I am able to create a new cluster, but there is no way that I can see to associate the new cluster with any of my collections.
In a test environment set up similar to production, I have changed the searchClusterId for a specific collection using Fusion's Collections API; however, after doing so, the queries still fail when I turn off all of the "old" Zookeeper instances. It seems like this is the way to go, so I'm surprised that it doesn't seem to work.
So far, Lucidworks's support has not been able to provide a solution - I am open to suggestions.
This is what I came up with to solve this problem.
I created a test environment with an AWS Fusion UI/API/etc., local Solr, AWS Solr, local ZK, and AWS ZK.
1. Configure Fusion and Solr to only have the AWS ZK configured
2. Configure the two ZKs to be an ensemble
3. Create a new Solr Cluster in Fusion containing only the AWS ZK
4. For each collection in Solr (see the curl sketch after this list)
a. GET the json from <fusion_url>:8764/api/collections/<collection>
b. Edit the json to change “searchClusterId” to the new cluster defined in Fusion
c. PUT the new json to <fusion_url>:8764/api/collections/<collection>
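A minimal curl sketch of step 4 (the collection name, credentials, and new cluster id are placeholders, and I am assuming basic auth against the API):

# a. fetch the collection definition
curl -u admin:password -o collection.json http://<fusion_url>:8764/api/collections/<collection>
# b. edit collection.json so "searchClusterId" points at the new cluster, then
# c. push it back
curl -u admin:password -X PUT -H "Content-Type: application/json" \
  -d @collection.json http://<fusion_url>:8764/api/collections/<collection>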
After doing all of this, I was able to change the “default” Solr cluster in the Fusion Admin UI to confirm that no collections were using it (I wasn’t sure if anything would use the ‘default’ cluster so I thought it would be wise to not take the chance).
I was able to then stop the local ZK, put the AWS ZK in standalone mode, and restart Fusion. Everything seems to have started without issues.
I am not sure that this is the best way to do it, but it solved the problem as far as I could determine.

How to build a Libra TestNet with two servers?

I want to build a Libra TestNet with two servers.
I don't know how to use config-builder to configure the program.
This answer might be a bit late, but it might help someone who is looking for a solution.
I was able to set up a local test network with single or multiple nodes based on the following.
For a single node, the libra-swarm package is well documented at https://developers.libra.org/docs/run-local-network and gives easy steps to set up your local test network with a defined number of nodes.
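As a sketch of what that page walks you through (assuming a Rust toolchain and a checkout of the Libra repo; the exact flags have changed between versions, so verify against the linked page):

# from the root of the libra repo: start a local swarm of 4 validator
# nodes and attach a CLI client to it
cargo run -p libra-swarm -- -n 4 -s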
If you are planning to use multiple nodes, you can use the Dockerfiles and shell scripts from Libra's GitHub repo to create Docker images and use those images with a container-orchestration system like Kubernetes to set up your network. I was able to do this and have it set up in this GitHub repository.

Distributed cache using JCS

I am developing an application to manage cache consistency in a distributed environment.
I have a clustered WebLogic environment in which there are multiple managed servers (possibly on different IPs).
A Java application will be deployed in all managed servers. An application in managed server 1 can update the cache, and this has to be reflected in the cache of managed server 2.
I found that the JCS lateral cache is suitable for this, but I am struggling to configure the .ccf for this scenario.
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:XXXX,localhost:YYYY
jcs.auxiliary.LTCP.attributes.TcpListenerPort=ZZZZZ
Can someone explain:
How to create the above two pieces of configuration?
How can I know the ports to configure?
Thanks in advance.
Check out this link:
http://commons.apache.org/proper/commons-jcs/LateralTCPAuxCache.html
There are two types of configuration: TCP and UDP.
The TCP configuration requires an IP address and port number in the configuration file:
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:1111,localhost:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.AllowGet=false
The above link has more description on the properties and how it works.
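To make this concrete for your two-managed-server case, here is a sketch (the IPs and ports are placeholders): TcpListenerPort is the port this JVM listens on, and TcpServers lists the listener addresses of the other managed servers.

# cache.ccf on managed server 1 (say 10.0.0.1)
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=10.0.0.2:1110
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110

# cache.ccf on managed server 2 (say 10.0.0.2)
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=10.0.0.1:1110
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110

As for which ports to configure: you choose them yourself; any port that is free on each managed server and reachable between them will do.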
I am going to use these configurations myself. Will let you know how it goes.
-Bini

Embedded Solr on Amazon AWS

Currently, I have developed a web application. In my web application, I used an embedded Solr server for indexing. After that, I deployed it onto Tomcat 6 on Windows XP, and everything was OK. Next, I tried to deploy my web application on Amazon AWS. My platform is Linux + MySQL. When I deployed, I got an exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
How can I fix this problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
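A quick way to confirm (the path is taken from your stack trace; the process name may differ on your setup):

# which user is Tomcat, and therefore embedded Solr, running as?
ps aux | grep tomcat
# who owns the Solr home, and with what permissions?
ls -ld /usr/share/tomcat6/solrhome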
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try using that from the start. It is probably a bit more troubleshooting to start with, but a much better payoff than starting with a legacy configuration.
I got the solution. It was a permissions issue on Amazon Linux with the ec2-user. So, I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
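A tighter alternative, assuming your distribution runs Tomcat as a dedicated tomcat user (check with the ps command above), is to hand ownership of just the Solr home to that user instead of opening the whole Tomcat tree for read/write:

# give the Tomcat user ownership of the Solr home only
sudo chown -R tomcat:tomcat /usr/share/tomcat6/solrhome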
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance available to collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.
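If you prefer the command line to the EC2 console, a sketch with the AWS CLI (the group name is a placeholder; 4.2.2.1/32 is the example IP from above):

# allow SSH and the Solr port only from your own address
aws ec2 authorize-security-group-ingress --group-name my-solr-group --protocol tcp --port 22 --cidr 4.2.2.1/32
aws ec2 authorize-security-group-ingress --group-name my-solr-group --protocol tcp --port 8983 --cidr 4.2.2.1/32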

Set up a webserver for multiple users and makes PHP scripts run under their account (with their permissions)

I'm setting up an Apache 2.2 webserver for multiple users (with the "developers" profile).
They need to execute PHP scripts/applications (both home-made and acquired) and run them under their own accounts (with their permissions).
I tried using *mod_userdir*, but the problem is that Apache (and thus the scripts) runs as "www-data" (I'm using Debian GNU/Linux).
So I looked at suPHP, but it doesn't support the *php_admin_value* Apache directive.
I also saw apache2-mpm-itk mentioned, but it uses virtual hosts, which in turn require DNS.
I think I could work around that by installing a DNS server on the webserver, managing a subdomain via delegation (e.g. my webserver's FQDN is "testsrv.mycompany.tld" and the users' virtual hosts' FQDNs would be "user1.testsrv.mycompany.tld", "user2.testsrv.mycompany.tld"). But that might be a bit "too much", no?
You could use virtual hosts along with mod_auth_basic so user1 would have a password-protected site at www.user1.example.com.
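A minimal Apache 2.2 sketch of that idea (the hostname, paths, and password file are examples; the commented AssignUserID directive is the apache2-mpm-itk feature you mentioned, which is what would actually make the scripts run as the user):

NameVirtualHost *:80
<VirtualHost *:80>
    ServerName www.user1.example.com
    DocumentRoot /home/user1/public_html
    # Only with apache2-mpm-itk installed: run this vhost as user1
    # AssignUserID user1 user1
    <Directory /home/user1/public_html>
        AuthType Basic
        AuthName "user1 development site"
        AuthUserFile /etc/apache2/htpasswd-user1
        Require valid-user
    </Directory>
</VirtualHost>

Create the password file once with: htpasswd -c /etc/apache2/htpasswd-user1 user1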
If by 'php_admin_value' you are referring to the .htaccess files, then yes, they are not supported by suPHP, but I believe there is a way around that.
Finally, I am setting up my server locally (for testing), so I just updated my /etc/hosts file. That might be a good place for you to start.
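For local testing, an entry like this in /etc/hosts (the IP and hostname are examples) makes the virtual host resolvable without any DNS setup:

127.0.0.1    www.user1.example.com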
