I've installed DC/OS on a new cluster and am learning it. Bootstrapping and installation went relatively smoothly; I chose the advanced method and found it to be the easiest to get working with our system.
Now that it's deployed, I'm confused about how I should go about updating the cluster configuration (the values I provided at bootstrap). Does DC/OS do anything to help here, or is the configuration relatively static?
Specifically, I'd like to modify the configuration of Spartan to:
Only listen on the dummy device (it's currently listening on all devices)
Configure a zone-specific resolver (I was told it's possible: https://github.com/mesosphere/mesos-dns/pull/441)
According to the DC/OS 1.9 docs (see the "resolvers" option), upstream DNS servers cannot be changed once provided during bootstrap.
"Upstream DNS Servers [...] Caution: If you set this parameter
incorrectly you will have to reinstall DC/OS."
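For reference, the resolvers in question are the ones I set in the genconf/config.yaml used by the advanced installer; a minimal excerpt (the addresses below are just the common example values, not necessarily what you would use):
resolvers:
- 8.8.4.4
- 8.8.8.8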
I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that) - so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas will help.
Bluemix has 3 production environments: ng, eu-gb, and au-syd. I tested with ng and eu-gb, both using 2 applications bound to the same M&A service, and also tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report from someone who claims to be using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using (ng, eu-gb, au-syd)?
3. Are you using any environment variables in your application (either created in code or set as USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are caught in a previous fault of M&A?
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL and apparently did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting the Node.js version to 4.2.4 in package.json worked for me; however, this is a workaround that bypasses the issue. The actual fix is being handled by the core team. Thanks.
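In case it helps anyone, this is roughly what the relevant part of package.json looks like (the other fields are placeholders); the Node.js buildpack picks the runtime from the engines entry:
{
  "name": "my-app",
  "version": "0.0.1",
  "engines": {
    "node": "4.2.4"
  }
}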
I am developing an application to manage cache consistency in a distributed environment.
I have a clustered WebLogic environment in which there are multiple managed servers (possibly on different IPs).
A Java application will be deployed on all managed servers. An application on managed server 1 can update the cache, and that update has to be reflected in the cache of managed server 2.
I found that the JCS lateral cache is suitable for this. I am struggling with configuring the .ccf file for this scenario.
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:XXXX,localhost:YYYY
jcs.auxiliary.LTCP.attributes.TcpListenerPort=ZZZZZ
Can someone explain:
How do I create the above two pieces of configuration?
How do I know which ports to configure?
Thanks in advance.
Check out this link:
http://commons.apache.org/proper/commons-jcs/LateralTCPAuxCache.html
There are two types of configuration: TCP and UDP.
The TCP configuration requires an IP address and port number in the configuration file:
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=localhost:1111,localhost:1112
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.AllowGet=false
The above link has more description of the properties and how they work.
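To make the port question concrete, here is a rough sketch for two managed servers (host names and port numbers are made up): TcpListenerPort is the port the local instance listens on, and TcpServers lists the peers it should push updates to. The factory and attributes class lines shown above stay the same on both servers.
# cache.ccf on managed server 1 (listens on 1110, sends to server 2)
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110
jcs.auxiliary.LTCP.attributes.TcpServers=server2.example.com:1111
# cache.ccf on managed server 2 (listens on 1111, sends to server 1)
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1111
jcs.auxiliary.LTCP.attributes.TcpServers=server1.example.com:1110
The ports themselves are not fixed by JCS; pick any free ports that are reachable between the managed servers, and make sure each server's TcpServers entries match the listener ports configured on its peers.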
I am going to use these configurations myself. Will let you know how it goes.
-Bini
I need to check a group of servers (Unix, Linux) to find out what kind of services and software (including versions) are running there (check it once in a while and store the results in a database).
The idea is to always have fresh information about the whole environment - it's constantly changing. Perhaps you can suggest a solution that already exists?
Currently I am thinking about using Nagios or Cacti + plugins, but I am not sure whether this solution would be optimal.
Nagios is a very powerful monitoring solution (the best one, in my opinion): open source, compatible with both Linux & Windows, reporting & notifications via email/SMS, a nice interface, many, many plugins... etc. I've already worked with it and was very satisfied.
Check Nico Largo's forum for installation instructions. If you are not familiar with the Linux command line, search for FAN (Fully Automated Nagios), which is a .iso with Nagios already included.
If you have any trouble during installation or configuration, post your questions here: https://serverfault.com/
Given that you want to poll for information on the system that can change dynamically, I would look at Check_MK.
It originally started as a plugin for Nagios that would poll a server for running services and generate the necessary configs for monitoring anything it discovered. Since then, it has evolved into a complete monitoring solution that provides its own complete UI (still based on the Nagios core), so you are safe running it if you are already familiar with Nagios.
See the website: http://mathias-kettner.com/checkmk_monitoring_system.html
You may need to select that you wish to view the "English" perspective of the site on first visit.
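As a rough illustration of the discovery side (the host name below is made up): the Check_MK agent normally listens on TCP port 6556 and, when you connect, dumps a plain-text inventory of the host's services and software, which the monitoring server parses to generate its checks.
nc server01.example.com 6556 | head -20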
Currently, I have developed a web application. In my web application, I used an embedded Solr server for indexing. After that, I deployed it onto Tomcat 6 on Windows XP. Everything was OK. Next, I tried to deploy my web application on Amazon AWS. My platform is Linux + MySQL. When I deployed it, I got an exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
So how can I fix my problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try using that from the start. Probably a bit more troubleshooting to begin with, but a much better payoff than starting with a legacy configuration.
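If it helps, a quick way to confirm the permission guess (the paths match the error above; the exact service user depends on your install):
# see which user the Tomcat/Java process runs as
ps -o user= -C java
# see who owns the directory Solr is trying to create its index under
ls -ld /usr/share/tomcat6 /usr/share/tomcat6/solrhome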
I got the solution. It was because of a permissions issue on Amazon Linux with the ec2-user. So I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance so you can collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.
I want to test my thick client against my RESTful App Engine application. I regularly increment the App Engine version number, so I need to keep updating my test config. Is there an equivalent of http://latest.application.appspot.com that I could point my config to?
Thanks
Skirting around your question, but in my head I've stopped thinking of the "version" as the typical software release version (which, like you, is how I started out thinking of it), and instead think of it as "a different application using the same datastore".
I found that the software release version (1.0, 1.1, 1.2, etc.) doesn't make much sense because 1) I don't tend to use older versions, and 2) my main usage would be regression testing, but this doesn't work well because it's quite possible for a change in your model in v1.1 to break the code in v1.0.
The versions feature comes in handy for having different functional versions. For example, maybe the default application.appspot.com runs production-level code, but debug.application.appspot.com has more logging enabled. Perhaps a third version has administrator functionality enabled, etc.
No, there's no way to do this. Versions aren't sequenced - they're all entirely distinct deploys, only one of which is set as the default.
What you are likely looking for is the CURRENT_VERSION_ID environment variable. It stores the deployed version as a dot-separated string, version_name.deployment_revision, e.g. staging.12345678910111213141516. You could just use it directly in your config:
import os
API_VERSION = os.environ['CURRENT_VERSION_ID'].split('.')[1]
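If the goal is ultimately to point the thick client at whichever version is currently deployed, a rough follow-on sketch (the "application" app ID is taken from the question; "-dot-" is the hostname separator App Engine accepts, so the versioned URL also works over HTTPS):
import os

# The part before the dot is the version name, e.g. "staging";
# use it to build the hostname for this specific deployment.
version_name = os.environ['CURRENT_VERSION_ID'].split('.')[0]
BASE_URL = 'https://%s-dot-application.appspot.com' % version_name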