WebLogic managed server autostart - WebLogic 11g

Friends
I have configured a WebLogic cluster with 2 managed servers and set crashRecoveryEnabled to 'true' in nodemanager.properties so that, in case of a server crash, the managed servers start again automatically. The Node Manager and Admin Server are set up as Windows services so that they start automatically on server reboot. I have 2 questions:
1. How can I make sure that the managed servers will start automatically after a server reboot? (I know adding the managed servers as Windows services is one option.)
2. In nodemanager.properties, do I need to set startScriptEnabled to true in production environments?
thanks

Setting up a service to have the managed servers start on system reboot is the preferred approach.

I always set startScriptEnabled=true in production environments. This just makes Node Manager use the start scripts to bring up the managed servers.
Provided crashRecoveryEnabled is set to true and you have started each of your managed servers through Node Manager at least once, they will be restarted after a crash.
You can use WLST to check whether they are running (or to start them) through some sort of scheduled task if you wish.
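For example, a small WLST script along the following lines could be run from a scheduled task to check the managed servers through Node Manager and start any that are not running. This is only a sketch - the credentials, host, port, domain name, domain directory and server names are placeholders for your own environment:
# check_servers.py - run with: $WL_HOME/common/bin/wlst.sh check_servers.py
# All values below are placeholders for your own environment.
nmConnect('weblogic', 'welcome1', 'localhost', '5556', 'mydomain', '/u01/domains/mydomain', 'plain')
for name in ['managed1', 'managed2']:
    status = nmServerStatus(serverName=name)
    if status != 'RUNNING':
        nmStart(serverName=name)   # start the server via Node Manager
nmDisconnect()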
EDIT: From the Oracle documentation, 4.2.4 Configuring Node Manager to Start Managed Servers:
If a Managed Server contains other Oracle Fusion Middleware products, such as Oracle SOA Suite, Oracle WebCenter Portal, or Oracle JRF, the Managed Servers environment must be configured to set the correct classpath and parameters. This environment information is provided through the start scripts, such as startWebLogic and setDomainEnv, which are located in the domain directory.
If the Managed Servers are started by Node Manager (as is the case when the servers are started by the Oracle WebLogic Server Administration Console or Fusion Middleware Control), Node Manager must be instructed to use these start scripts so that the server environments are correctly configured. Specifically, Node Manager must be started with the property StartScriptEnabled=true.
There are several ways to ensure that Node Manager starts with this property enabled. As a convenience, Oracle Fusion Middleware provides the following script, which adds the property StartScriptEnabled=true to the nodemanager.properties file:
(UNIX) ORACLE_COMMON_HOME/common/bin/setNMProps.sh.
(Windows) ORACLE_COMMON_HOME\common\bin\setNMProps.cmd
For example, on Linux, execute the setNMProps script and start Node Manager:
ORACLE_COMMON_HOME/common/bin/setNMProps.sh
MW_HOME/wlserver_n/server/bin/startNodeManager.sh
When you start Node Manager, it reads the nodemanager.properties file with the StartScriptEnabled=true property, and uses the start scripts when it subsequently starts Managed Servers. Note that you need to run the setNMProps script only once.
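For reference, after running setNMProps and enabling crash recovery, the relevant lines in nodemanager.properties end up looking something like this (an excerpt only; the remaining properties keep their defaults):
# nodemanager.properties (excerpt)
StartScriptEnabled=true
StartScriptName=startWebLogic.sh
CrashRecoveryEnabled=true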

Related

Pentaho "kettle.properties" file - which one am I using?

When one runs a job from the server (by selecting the server as below), does PDI pick up the kettle.properties file from the server or from the local computer the job is being run from? What about the Pentaho User Console portal - where is the file picked up from when one runs jobs from there? Is there any way to tell PDI which kettle.properties file to use?
AFAIK, there is no way to pick a kettle.properties file location from within the Spoon interface right before executing a job/transformation.
The kettle.properties file used is always linked to the instance of Kettle that executes the job/transformation.
When running a job locally with the PDI Client (Spoon), the kettle.properties file used is the one contained in the directory pointed to by the -DKETTLE_HOME JVM option (defined when running the spoon.sh or Spoon.bat launch scripts).
When running a job/transformation on the Pentaho Server (by either scheduling it explicitly on the Server from Spoon, or by running it from the PUC), the kettle.properties file used is the one located in the directory pointed to by the -DKETTLE_HOME JVM option defined when running the start-pentaho.sh or the start-pentaho.bat launch scripts.
Both the PDI Client and the Pentaho Server set the default location of KETTLE_HOME to ~/.kettle.
If you want to use a kettle.properties file located somewhere else, you will have to define the location of the Kettle home directory yourself before starting the PDI Client or the Pentaho Server (see the example below):
By setting an environment variable called KETTLE_HOME. It has to be set before running the Spoon launch scripts or the Pentaho Server launch scripts.
For the Pentaho Server, you can also add the option -DKETTLE_HOME to CATALINA_OPTS (if the Pentaho Server uses Tomcat) by editing the launch script.
You can find this information on the Customize the Pentaho Server page.
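As an illustration, on Linux this could look something like the following (the /opt/pentaho/kettle-config path is just a placeholder; KETTLE_HOME must point to the directory that contains the .kettle folder with kettle.properties inside it):
# PDI Client (Spoon)
export KETTLE_HOME=/opt/pentaho/kettle-config   # contains .kettle/kettle.properties
./spoon.sh

# Pentaho Server on Tomcat - set before (or inside) the launch script
export CATALINA_OPTS="$CATALINA_OPTS -DKETTLE_HOME=/opt/pentaho/kettle-config"
./start-pentaho.sh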

Knime with database

How do I add a new database driver through the KNIME preferences? Generally it is:
File -> Preferences -> Add File / Add Directory
Only *.jar or *.zip files are accepted.
MY QUESTION
I have installed ODBC64 on my PC. Now I need to add that file in the KNIME preferences and use the driver in the Database Connector node.
How do I add and use the file in KNIME?
And what is meant by the database URL jdbc:mysql://host:port/database_name?
What are host and port?
Can anyone please briefly explain and help me out?
I'm assuming, based on your database URL of jdbc:mysql://, that you want to connect to a MySQL database? Based on that, there is a thread on the KNIME forum which covers pretty much all of your question, but the process is the same for any other sort of database. The steps are as follows:
Download the JDBC driver (e.g. from https://dev.mysql.com/downloads/connector/j/ for MySQL). NB: KNIME now comes bundled with several drivers already installed - MySQL is one of those - and the installed drivers are listed in the Database Connector node.
In the database URL, you need to change the parts in <> - i.e. the hostname, port number and database name. The hostname may be localhost if it is a local database. The port number you will need to get from your database administrator, or it will be whatever you set it to if you are running a local database (3306 is the default for MySQL). So for a database called 'myDB' on the default port on your local machine, the URL should be jdbc:mysql://localhost:3306/myDB (a couple of worked examples are shown after these steps).
For some of the shipped drivers there are also dedicated connector nodes, e.g. MySQL Connector, SQLite Connector, PostgreSQL Connector etc., which still require the server name/port and database name, but take them as individual inputs rather than requiring you to edit the URL.
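A couple of illustrative URLs (the host and database names here are just examples):
jdbc:mysql://localhost:3306/myDB (local MySQL on the default port 3306)
jdbc:mysql://dbserver.example.com:3306/salesdb (the same database type on a remote host)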
Recent versions of KNIME are based on Java 8, which dropped support for the JDBC-ODBC bridge, so you should first find an alternative JDBC driver for your database; only then can you connect to it from KNIME, as described on the KNIME documentation page for DB connectors.
There are several nodes which allow you to connect to a DB (especially MySQL).
I remember there was a dedicated MySQL node for connecting to the DB.
Just remember this: you have to enter the IP address and port, then enter your credentials and point to the DB you want to open by default.

.svc handler for IIS Server

While configuring SQL Server 2012 Master Data Services, I am having the following problem:
The required .svc handler mappings are not installed in IIS.
What I want to do is query my database using a URL, so that I can retrieve data directly using the URL itself, just like we can store query-string parameters in SQL Server.
How do I deal with this? I have followed several documents but have not had any luck.
To fix this issue, open a command prompt and go to the .NET directory
(for example %windir%\Microsoft.NET\Framework64\v4.0.30319).
Run the command: aspnet_regiis -i
For further details, check: SVC Handler mapping error in MDS Configuration Manager
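Putting that together, the commands look something like this (assuming a 64-bit machine with .NET 4.0 - use Framework instead of Framework64 on a 32-bit system, and adjust the version folder to match your installation):
cd /d %windir%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis.exe -i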
I've come across these types of errors a few times when installing MDS. The problem usually comes about because just having IIS installed is not enough; there are loads of other role services and features that you need to enable and install as well, which the setup program doesn't tell you about.
Thankfully they are all documented here:
Web Application Requirements (Master Data Services)
And, if you've missed any, you can go back, install them and then re-launch the configuration tool to complete the setup without having to re-install MDS from scratch.

Running batch file remotely using Hudson

What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master/slave setup. I created a dumb slave, but I am not sure what the parameters should be so that I can trigger the batch file on the remote slave machine.
Basically, I am trying to run 2 different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but it does not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble with is configuring this to run on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there -
Created a new node from Manage Nodes -> New Node -> set # of Executors to 1, set Remote FS root to '/var/hudson', set Launch method to JNLP, set the slave name and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location: java -jar slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully (the commands are written out below).
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to the slave name.
Checked 'Add Build Step' to add my batch job script.
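For reference, the slave-side part of step 2 written out as commands (masterserver, port and slavename are the placeholders from my setup; the jar can also be downloaded with a browser):
# on the slave machine, from the directory where the jar is saved
wget http://masterserver:port/jnlpJars/slave.jar
java -jar slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp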
What I am still missing now is a way to connect to two slaves from one job in sequence - is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were working locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want to run on. This is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start-up and shut-down of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server: just get the new client.jar from the server after the upgrade and put it on the slave, then restart the slave and you are done.
I had trouble using the 'install as a service' option for the slave, even though I did it as a local administrator. I then used srvany to wrap the jar into a service; here is a blog about it. The command that you need to wrap you will get from your Hudson server, on the slave's page. For all of this to work, you should set the slave launch method to JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work like a charm for me; I use them with my Unix slaves. So far the SSH option on Unix has been less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to do this with built-in Hudson features, but in one of my project builds I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use - download, unpack and run the command with the right parameters - hence I opted to use this.
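For example, a single PsExec call from such a batch file might look like this (the server name, credentials and remote path are placeholders):
psexec \\remoteserver -u DOMAIN\builduser -p secret cmd /c C:\scripts\nightly_build.bat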

How do I use a different database connection for package configuration?

I have an SSIS package that sets some variable data from a SQL Server package configuration table (selecting the "Specify configuration settings directly" option).
This works well when I'm using the database connection that I specified when developing the package. However, when I run it on a (64-bit) server in the testing environment (either as an Agent job or by running the package directly) and I specify the new connection string in the connection managers, the package still reads the settings from the DB server that I specified during development.
All the other connections pick up the correct connection strings; it only seems to be the package configuration that reads from the wrong place.
Any ideas, or am I doing something really wrong?
The only way I was able to do this was to use Windows Environment Variables. You can specify things like connection strings and user preferences in environment variables, and then pick up those environment variables from your SSIS Task.
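As a sketch of that approach: create a machine-level environment variable holding the connection string, then add an "Environment variable" package configuration that maps it to the ConnectionString property of the relevant connection manager (the variable name and connection string below are made up):
rem run from an elevated prompt; /M creates a machine-level variable
setx SSIS_CONFIGDB_CONNSTR "Data Source=TESTSQL01;Initial Catalog=SSISConfig;Integrated Security=SSPI;" /M
rem restart the SQL Server Agent service afterwards so it picks up the new variable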
I prefer to use Server Aliases in the SQL Client Configuration. That way, when you decide to point the package to another SQL Server, it is as simple as editing the alias to point to the new server; no editing is necessary in the SSIS package. When moving the package to a live server, you need to add the aliases, and it works.
This also helps when you have a really painful naming convention for servers: the alias can be a more descriptive name than the actual machine name.
I didn't actually understand your question completely, but I store my connection settings in configuration files, usually one for each environment (dev, production, etc.). The packages read the connection settings from the config files when they are run.
When you're creating a job to call the SSIS package and you're setting up the step, there is a tabbed area. The default tab is where you set the package name, and the next tab over is where you can set the configuration file. Have a config file for each package, and change it for each server (dev, test, prod). The config files can be put directly on the dev, test and prod servers, and then pointed to when setting up the job.
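If you run the package from the command line instead of through the job-step UI, the equivalent is the /ConfigFile switch of dtexec (the paths below are examples):
dtexec /File "D:\packages\LoadSales.dtsx" /ConfigFile "D:\config\LoadSales.Test.dtsConfig"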
If you are using a SQL Server package configuration, then all the properties of the package will come from the SQL Server table - please check that.
SSIS security, the way it stands, is terrible. No one will be able to support things when I am out of the office. The job never reads from the configuration file... I give up. It only works when I edit the string in the Data Sources tab; however, the password gets lost if you happen to go into the job a second time. Terrible design, absolutely horrible. You would think that when you specify an XML file in the job step it would read the connection string that is defined there, but it does not. Does this really work for anyone else?
Go to the package properties and set deployment to True. This should work for what you have done.
I had the identical question, and got the same answer, i.e. you cannot edit the connection string used for package configurations hosted in SQL Server, unless you specify that the SQL Server connection string should come from an environment variable.
This unfortunately does not work in my dev setup, where two environments are hosted on the same machine. I ended up following Scott Coleman's approach as detailed on SQL Server Central [free sign-up and a good site]. The trick is that you create a view to store your configuration settings on one central server, and then use the machine that connects to it to determine which environment is active.
I used that approach, but also used the user connecting to the environment to make the determination, because my test and dev setups run on the same SSIS instance, but as different user names. Scott suggests in the comments that the application name should be set, but this cannot be changed in the package execution job step, so it was not an option.
One other caveat I found was that I had to add INSTEAD OF triggers to my view to handle the inserts, updates and deletes for the configuration variables.
We want to keep our package configs in a database table; we know it gets backed up with our other data and we know where to find it. Just a preference.
I have found that to get this to work I can use an environment variable configuration to set the connection string of the connection manager that I am reading my package config from. (Although I had to restart the SQL Server Agent before it could find the new environment variable - not ideal when I deploy this to production.)
It looks like when you run an SSIS package as a step in a scheduled job, it works in this order:
Load each of the package configs in the order they appear in the Package Configurations Organiser
Set the Connection Strings from the Data sources tab in the Job Step properties of the Scheduled Job
Start running package.
I would have expected the first two to be the other way around, so that I could set the data source for my package config from the scheduled job. That is where I would expect other people to look for it when maintaining the package.
