What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master/slave setup. I created a dumb slave, but I am not sure what the parameters should be so that I can trigger the batch file on the remote slave machine.
Basically, I am trying to run 2 different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but they do not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there -
Created a new node from Manage Nodes -> New Node: set # of executors to 1, set Remote FS root to '/var/hudson', set Launch method to JNLP, set the slave name and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded _slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location (the commands are summarized after these steps) -> java -jar _slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to slavename.
Used "Add Build Step" to add my batch job script.
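For reference, the slave connection boiled down to these two commands on the slave machine (the hostname, port and slave name are of course placeholders from my setup; any other way of downloading the jar works just as well):
wget -O _slave.jar http://masterserver:port/jnlpJars/slave.jar
java -jar _slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp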
What I am still missing is a way to connect to 2 slaves from one job in sequence; is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were working locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want to run on. That is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start up and shut down of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server: just get the new slave.jar from the server after the upgrade and put it on the slave. Then restart the slave and you are done.
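A rough sketch of that upgrade step, assuming the slave was installed as a Windows service (the service name here is only an example; use whatever name your slave service was registered under):
net stop hudsonslave
:: overwrite the old jar with the slave.jar freshly downloaded from http://masterserver:port/jnlpJars/slave.jar
net start hudsonslave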
I had trouble using the install-as-a-service option for the slave even though I did it as a local administrator. I then used srvany to wrap the jar into a service. Here is a blog about it. The command that you need to wrap you will get from your Hudson server on the slave page. For all of this to work, you should set up the slave launch method as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm. I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this job, but in one of my project builds I run a batch file that in turn uses PsTools
to run the job on a remote server. I found PsTools extremely easy to use: download, unpack and run the command with the right parameters, hence I opted to use this.
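For example, a single PsExec call of that sort looks roughly like this (server name, credentials and batch file path are placeholders; check the PsExec documentation for the exact switches your setup needs):
psexec \\remoteserver -u DOMAIN\builduser -p secret cmd /c C:\scripts\nightly-build.bat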
I have lots of files in a project on a remote host and I want to find out from which file another PHP file is called. Is it possible to use Ctrl+Shift+F search on a remote host project?
Currently it's not possible. (2022-06-09: it is now possible with remote development using JetBrains Gateway; see the end of this answer.)
In order to search file contents in a locally running IDE, the file must be read first. For that, the IDE must download it... which can be quite a time- and connection-consuming task over (S)FTP connections (depending on how far away the server is, how fast your connection is, bandwidth limits, etc.).
Even if the IDE could do it transparently for search, like it does with the Remote Edit functionality (where it downloads a remote file but, instead of placing it in the actual project, stores it in a temp location), it still needs to download it.
If you execute one search (one term) and then need to do another search (a slightly modified term or a completely different search string), the IDE would need to re-download those files again (a waste of time and connection).
Therefore it makes much more sense to download your project (all or desired files only) locally and then execute such search(es) on local files.
If it has to be a purely remote search (where nothing gets downloaded locally)... then you just establish an SSH/RDP/etc. connection to that remote host (BTW: PhpStorm has built-in SSH Console functionality) and execute the search directly on the remote server with OS-native tools (find/grep and the like) or some remote software (e.g. mc or notepad++).
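As a quick illustration (host, path and search term are placeholders), such a purely remote search over SSH can be as simple as:
ssh user@remotehost "grep -rn --include='*.php' 'some_function(' /var/www/project"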
P.S. (on a related note)
Some of the disadvantages when doing Remote Edit: https://stackoverflow.com/a/36850634/783119
EDIT 2022-06-09:
BTW, JetBrains now has JetBrains Gateway for remote development, where you run the IDE core on a remote server and connect to it via SSH using a local dedicated app or a plugin for your IDE (PhpStorm has come bundled with such a plugin since version 2021.3).
For more on JetBrains Gateway:
https://www.jetbrains.com/remote-development/gateway/
https://blog.jetbrains.com/blog/2021/11/29/introducing-remote-development-for-jetbrains-ides/
I have two Jenkins masters, namely A and B. I am wondering how a slave from master A would copy data from master B? Is there any plugin available to do this kind of job?
There are a few plugins that can help:
Publish via SSH
Publish to an FTP server
Publish to a Windows file share
You may also try this Python script to download the last successful artifacts from Jenkins via the REST API. We use it in our production and it works very well.
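If you do not want a full script, the same idea works as a one-liner against the REST API; this is just a sketch with a placeholder host and job name, and it assumes anonymous read access (otherwise add --user username:apitoken):
curl -O "http://master-b.example.com:8080/job/MyJob/lastSuccessfulBuild/artifact/*zip*/archive.zip"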
I'm trying to redirect from my domain to my localhost; the issue is that I have a dynamic IP address, so it changes periodically.
Is there any app that saves my IP into my online MySQL database? (So I can then set up the redirect using PHP.)
If you know any other solution it will be welcome! :)
Thanks!
PS: I've tried No-IP, but I don't want to pay to use my own domain.
If you are on Windows you can set a scheduled task to run on startup, but it would be better to make it run periodically, because your IP address can change even without a restart.
Make the scheduled task run a script; it can be PHP, Ruby or Python, since they all have MySQL adapters and can be run without a web server. From a .bat script you can pass the IP address as an argument and have the script send it to MySQL.
If it were Linux you could do it with a bash script.
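A minimal bash sketch of that idea (the IP lookup service, database host, credentials and table are all placeholders, and your MySQL server must accept remote connections):
#!/bin/bash
# fetch the current public IP and store it in the online MySQL database
IP=$(curl -s https://api.ipify.org)
mysql -h mysql.example.com -u dynuser -p'secret' mydb \
  -e "REPLACE INTO current_ip (id, ip) VALUES (1, '$IP');"
A cron entry (or a Windows scheduled task calling an equivalent script) can then run it every few minutes.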
Even dynamic IPs can be used with DNS services; you should look into that too.
I'm a novice trying to install Postgresql on Cygwin as a service. I have been following the steps listed in this URL: http://www.smartpixie.com/wiki/Tech/CygwinPostgreSQL.twiki.html
Everything was working fine until I got to the step where I had to create a user and a database for myself; in my /usr/sbin directory the "createuser" file exists but the "createdb" file does not. So, as suggested by the steps, I attempted to connect to the database as the SYSTEM user in order to create the database/user roles later. However, I get this error whenever I try to connect to the database:
$ psql -U SYSTEM postgres
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Any help is appreciated, thanks.
First, I would recommend against running PostgreSQL over Cygwin. There isn't a real use case I can see, since there is now a native port and Cygwin ends up adding quite a bit of overhead to things like IPC calls. You won't get good performance out of it, and I can't actually think of any case where Cygwin would be a better fit than MinGW for C-language stored procedures. So please question whether this is really a requirement and explore other options first.
Now if you still need to do so, the process isn't easy, but is documented at http://www.postgresql.org/message-id/3DC76EA4.7090503#usa.net
Basically you have to install the IPC service first, then use cygrunsrv to create a Windows service for PostgreSQL that depends on it. Then you can run net start ipc-daemon and then net start postgresql.
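Roughly, the commands look like this (binary names, the data directory and the service names are from memory of the cygipc-based setup and may differ for your versions, so treat this strictly as a sketch):
cygrunsrv --install ipc-daemon2 --path /usr/bin/ipc-daemon2
cygrunsrv --install postgresql --path /usr/bin/postmaster --args "-D /usr/share/postgresql/data" --dep ipc-daemon2
net start ipc-daemon2
net start postgresql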
Friends
I have configured a WebLogic cluster with 2 managed servers and set CrashRecoveryEnabled to 'true' in nodemanager.properties so that, in case of a server crash, the managed servers can start automatically. The Node Manager and Admin Server are set up as Windows services so that they can start automatically on server reboot. I have 2 questions:
1. How can I make sure that the managed servers will start automatically after a server reboot? (I know adding the managed servers as Windows services is one option.)
2. In nodemanager.properties, do I need to set StartScriptEnabled to true in production environments?
thanks
Setting up a service to have the managed servers start on system reboot is the preferred approach.
I always set StartScriptEnabled=true in production environments. This just uses the start script to start up the managed servers.
Provided CrashRecoveryEnabled is set to true and you have started each of your managed servers, they will be started again after a crash.
You can use WLST to check if they are running (or to start them) through some sort of scheduled task if you wish, for example:
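As a sketch, you could put a few WLST commands in a small script (say checkManaged.py) and run it with MW_HOME/wlserver_n/common/bin/wlst.sh checkManaged.py; the credentials, port, domain name and paths below are placeholders for your environment:
# checkManaged.py - hypothetical example, adjust names for your domain
nmConnect('nodemanager_user', 'nm_password', 'localhost', '5556', 'mydomain', '/path/to/domains/mydomain', 'ssl')
print nmServerStatus('ManagedServer1')   # e.g. RUNNING or SHUTDOWN
nmStart('ManagedServer1')                # start it if it is down
nmDisconnect()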
EDIT: From the Oracle Documentation 4.2.4 Configuring Node Manager to Start Managed Servers
If a Managed Server contains other Oracle Fusion Middleware products, such as Oracle SOA Suite, Oracle WebCenter Portal, or Oracle JRF, the Managed Servers environment must be configured to set the correct classpath and parameters. This environment information is provided through the start scripts, such as startWebLogic and setDomainEnv, which are located in the domain directory.
If the Managed Servers are started by Node Manager (as is the case when the servers are started by the Oracle WebLogic Server Administration Console or Fusion Middleware Control), Node Manager must be instructed to use these start scripts so that the server environments are correctly configured. Specifically, Node Manager must be started with the property StartScriptEnabled=true.
There are several ways to ensure that Node Manager starts with this property enabled. As a convenience, Oracle Fusion Middleware provides the following script, which adds the property StartScriptEnabled=true to the nodemanager.properties file:
(UNIX) ORACLE_COMMON_HOME/common/bin/setNMProps.sh.
(Windows) ORACLE_COMMON_HOME\common\bin\setNMProps.cmd
For example, on Linux, execute the setNMProps script and start Node Manager:
ORACLE_COMMON_HOME/common/bin/setNMProps.sh
MW_HOME/wlserver_n/server/bin/startNodeManager.sh
When you start Node Manager, it reads the nodemanager.properties file with the StartScriptEnabled=true property, and uses the start scripts when it subsequently starts Managed Servers. Note that you need to run the setNMProps script only once.
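For reference, the relevant nodemanager.properties entries for this question end up looking like this (StartScriptName is shown with its usual default, and CrashRecoveryEnabled comes from the original question; adjust both for your environment):
StartScriptEnabled=true
StartScriptName=startWebLogic.sh
CrashRecoveryEnabled=true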