Where are the YAML defaults in WildFly Swarm? - wildfly-swarm

According to the official reference guide, YAML is the recommended way to configure an application in WildFly Swarm, and the user-provided YAML file is "applied overtop the absolute defaults that WildFly Swarm provides".
What are these absolute defaults? The documentation does not say anything about them.
EDIT:
A few defaults are shown in https://wildfly-swarm.gitbooks.io/wildfly-swarm-users-guide/configuration_properties.html, but the defaults for most fractions are missing: Logging, Batch, Mail, EE, EJB, IO, Remoting, Transactions, Webservices, etc.

The WildFly CLI reference specifies the defaults for each WildFly subsystem, e.g. for the datasources subsystem.
The settings roughly correspond to the WildFly Swarm YAML configuration. To see how they correspond, check http://docs.wildfly-swarm.io/2018.1.0/
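For illustration only, here is a rough sketch of a project-defaults.yml that overrides a couple of those defaults. The exact keys for each fraction should be checked in the fraction documentation linked above; the values below (the port and the MyDS datasource) are just placeholders, not the shipped defaults.
swarm:
  http:
    port: 8080
  datasources:
    data-sources:
      MyDS:
        driver-name: h2
        connection-url: jdbc:h2:mem:test
        user-name: sa
        password: sa
Anything not listed in the file keeps whatever absolute default the corresponding fraction ships with.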

Related

Programmatically getting Apache Camel components' operations, parameters, options descriptions

Is there a way to get any Apache Camel component's "metadata" using Java code, like the list of options and other parameters and their types? I think some automatic help builder was mentioned somewhere that might be of use for this task without using reflection.
A way to get the registered components of all types (including data formats and languages) with Java code is also sought. Thanks
Yeah, take a look at the camel-catalog JAR, which includes all such details. This JAR is what the tooling uses, such as some of the Maven tooling itself or the IDE plugins for IntelliJ or Eclipse. The JAR has both a Java API and metadata files embedded in it that you can load.
At runtime you can also access this catalog via RuntimeCamelCatalog, which you can obtain from the CamelContext. The runtime catalog is a little more limited than CamelCatalog, as it only has a view of what is actually available at runtime in the current Camel application.
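To make that concrete, a minimal sketch of using the catalog Java API, assuming the camel-catalog JAR is on the classpath (method names may vary slightly between Camel versions):
import org.apache.camel.catalog.CamelCatalog;
import org.apache.camel.catalog.DefaultCamelCatalog;

public class CatalogDump {
    public static void main(String[] args) {
        CamelCatalog catalog = new DefaultCamelCatalog();
        // Names of all components, data formats and languages known to the catalog
        System.out.println(catalog.findComponentNames());
        System.out.println(catalog.findDataFormatNames());
        System.out.println(catalog.findLanguageNames());
        // JSON metadata for one component: options, parameter types, descriptions
        System.out.println(catalog.componentJSonSchema("file"));
    }
}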
I also cover this in my book Camel in Action, 2nd edition, where there is a full chapter devoted to Camel tooling and how to build custom tooling.
This is what I've found so far
http://camel.apache.org/componentconfiguration.html

Embedded Solr on Amazon AWS

Currently, I have developed a web application. In my web application, I used an embedded Solr server for indexing. After that I deployed it onto Tomcat 6 on Windows XP, and everything was OK. Next, I tried to deploy my web application on Amazon AWS. My platform is Linux + MySQL. When I deployed, I got an exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
So how can I fix this problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try using that from the start. It is probably a bit more troubleshooting to begin with, but a much better payoff than starting with a legacy configuration.
I found the solution. It was a permission issue on Amazon Linux with the ec2-user, so I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
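A narrower alternative is to give ownership of just the Solr home to the user Tomcat runs as. The tomcat user name below is an assumption and may differ on your distribution:
sudo chown -R tomcat:tomcat /usr/share/tomcat6/solrhome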
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance available to collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.

Install Jetty or run embedded for Solr install

I am about to install Solr on a production box. It will be the only Java application running, and it will be on the same box as the web server (nginx).
It seems there are two options:
1. Install Jetty separately and configure it to use with Solr
2. Set Solr's embedded Jetty server to start as a service and just use that
Is there any performance benefit in having them separate?
I am a big fan of KISS; the less setup, the better.
Thanks
If you want KISS there is no question: option 2. Stick to the vanilla Solr distribution with the included Jetty.
Doing the work of installing an external servlet engine would make sense if you needed Tomcat, for example, but just to use the same thing (Jetty) that Solr already includes... no way.
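For what it's worth, starting the bundled Jetty from the example directory of a Solr 3.x/4.x download is just the following (the directory name is a placeholder for whatever version you unpack):
cd solr-4.x.x/example
java -jar start.jar
Wrapping that command in an init script or a process supervisor is usually all that is needed to run it as a service.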
Solr is still using Jetty 6, so there would be some benefit if you could get the Solr application to run in a more recent Jetty distribution. For example, you could use Jetty 9 and use features like SPDY to improve the response times of your application.
However, I have no idea or experience whether it's possible to run the Solr application standalone in a separate servlet engine.
Another option for running Solr and keeping it simple is to use Solr-Undertow, which is a high-performance, small-footprint server for Solr. It is easy to use on local machines for development and also in production. It supports simple config files for running instances with different data directories, ports and more. It can also run by just pointing it at a distribution .zip file, without needing to unpack it.
(note, I am the author of Solr-Undertow)
Link here: https://github.com/bremeld/solr-undertow with releases under the "Releases" tab.

How to use Cacti to monitor remote hosts

I have Nagios installed on a server and it is monitoring different remote hosts using different plugins. However, I am not able to view the processes of each system in graph format. Is it possible to use Cacti for this purpose? I just installed Cacti on the same machine, but I am not sure how to install plugins and monitor different servers. Also, can I use Cacti as a frontend tool for Nagios? How does Cacti work?
Can someone help me with this, please?
Thanks
I'm not sure how Cacti interacts with Nagios, but I do have the pnp4nagios plugin/extension installed and configured for one of my Nagios instances, which gives me a great overview in graphs for the services I monitor (not all of them, only those that are variable and useful to see in a graph). It's a really nice tool and not that hard to set up. I compiled it from source, and its install.php gives you great feedback on what to do next in the installation procedure. One thing they didn't mention is that you have to enable Includes in your Nagios instance's apache2 config file; a sketch of that change is shown below. (This is necessary if you want to use the SSI include in the Nagios CGI files. The SSI file contains jQuery JavaScript definitions that enable the pop-up PNG graphs when you mouse over a graph in Nagios.)
It also uses rrdtool (Round Robin Database files), which uses fixed-size storage (this could be beneficial if you have little space on your hard drive).
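The Includes change mentioned above is roughly the following in the Apache config that serves the Nagios web UI (the directory path is an assumption based on a default source install; adjust it to your layout):
<Directory "/usr/local/nagios/share">
    Options +Includes
</Directory>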
For Nagios there is nagiosgraph; it generates a graph for each service defined in Nagios. You just have to add the configuration for nagiosgraph.
As for Cacti, there is a plugin called NPC; it generates a new tab in Cacti which contains the services defined in Nagios.

Use Different WCF Services in Dev and Production for WPF Application

I have a WPF application that accesses a number of WCF services. The WCF service addresses used in dev are different than those used in production (though their WSDL signatures are identical). What is the best way to setup the config files so that the proper service url is used for each type of build?
You can consider using a NAnt task that runs after the build and changes the config; here is a discussion about that. An alternative is to create an MSBuild custom task, as discussed here.
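As a rough sketch (the endpoint name, contract, and addresses below are made up), the WCF client endpoint lives in the WPF application's app.config, so a build step like the NAnt or MSBuild approaches above only has to rewrite the address attribute per environment:
<system.serviceModel>
  <client>
    <endpoint name="OrderService"
              address="http://dev-server/OrderService.svc"
              binding="basicHttpBinding"
              contract="MyApp.IOrderService" />
  </client>
</system.serviceModel>
In the production build the same endpoint entry would simply point at the production URL; since the WSDL signatures are identical, nothing else in the config needs to change.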
