I have two Jenkins masters, namely A and B. I am wondering how a slave of master A could copy data from master B. Is there any plugin available to do this kind of job?
There are a few plugins that can help:
Publish via SSH
Publish to an FTP server
Publish to a Windows file share
You may also try this Python script to download the last successful artifacts from Jenkins via the REST API. We use it in production and it works very well.
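If the script link is lost, a rough equivalent can be put together with curl against the Jenkins REST API; the host, job name, and credentials below are placeholders:
#!/bin/sh
# Minimal sketch: fetch all artifacts of the last successful build of a job
# on master B as a single zip, using Jenkins' built-in artifact zip endpoint.
# master-b:8080, myjob, and user:apitoken are placeholder values.
curl -fsS -u user:apitoken \
  -o artifacts.zip \
  "http://master-b:8080/job/myjob/lastSuccessfulBuild/artifact/*zip*/archive.zip"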
I want to create an Apache Flink standalone cluster with several TaskManagers. I would like to use HDFS and Hive, so I have to add some Hadoop dependencies.
After reading the documentation, the recommended way is to set the HADOOP_CLASSPATH environment variable. But how do I add the Hadoop files? Should I download them to some directory like /opt/hadoop on the TaskManagers and set the variable to that path?
I only know the old, now deprecated way of downloading an uber JAR with the dependencies and placing it under the /lib folder.
Normally you'd do the standard Hadoop installation, since (for HDFS) you need DataNodes running on every server (with appropriate configuration), plus the NameNode running on your master server.
So then you can do something like this on the master server where you're submitting your Flink workflow:
export HADOOP_CLASSPATH=`hadoop classpath`
export HADOOP_CONF_DIR=/etc/hadoop/conf
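To make the same environment available on each TaskManager as well, one option is a small profile script on every node; this is a sketch assuming Hadoop is installed under /opt/hadoop (paths are placeholders):
# /etc/profile.d/flink-hadoop.sh (hypothetical location on each TaskManager)
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`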
I have a Windows server with MS SQL Server running on it.
On the SQL Server developers have created stored procedures, views, tables, triggers.
On the Windows server developers created shell scripts.
I would like to start versioning the code described above in a BitBucket repository. I have a repository created in BitBucket.
1. How should the branches be organized in this repository? i.e. "SQL Server\Database\..." and "Windows Server\shell_script\..."
2. Can I connect BitBucket to SQL Server and Windows Server and specify which code needs to be versioned?
Are both options 1 and 2 above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to BitBucket and am using its web front end. I do not know how to configure command-line access, so please try not to reference Bitbucket commands. Sorry if I sound confusing.
Please help.
I know this is an old question but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a Git repo linked to your Bitbucket repo
Add a server shell script to export what you want version controlled from the SQL db (a sketch of such a script appears at the end of this answer)
The export from the SQL db should be to text files so they are easily 'diffable'
You might as well make the export to a sub-directory within the shell scripts repo so that everything is in one place and can't get out of sync
So you only have one branch, not a separate one for server shell scripts and db
Make sure people run the export script and then commit everything when they make a change
Ideally you have a test server, which means you'd want a way to push changes from the repo into the SQL db. I presume you can do this with a script by deleting the server setup and re-creating it from the text files.
So basically, you can't connect an SQL db to bitbucket directly. You need scripts to read and write to the db from a repo.
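As a rough illustration of that export script, here is a minimal sketch using sqlcmd; the server name, credentials, and output directory are placeholders, and it simply dumps each procedure/view/trigger definition from sys.sql_modules into its own .sql file:
#!/bin/sh
# Hypothetical export: one .sql file per programmable object so Git can diff them.
SQLCMD="sqlcmd -S myserver -d mydb -U deploy -P secret -h -1 -W"
OUTDIR=./db-schema
mkdir -p "$OUTDIR"
$SQLCMD -Q "SET NOCOUNT ON; SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) FROM sys.sql_modules" |
while read obj; do
  [ -z "$obj" ] && continue
  $SQLCMD -y 0 -Q "SET NOCOUNT ON; SELECT definition FROM sys.sql_modules WHERE object_id = OBJECT_ID('$obj')" > "$OUTDIR/$obj.sql"
done
Developers run the script, then commit and push everything, so the repository always reflects the current database objects.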
Our objective is as follows:
a) Pick up a file "Test.csv" from a Secure FTP location.
b) After picking up the file we need to insert the contents of the file into an object in Salesforce.
I created the following connection for the remote SFTP location (the one that will contain "Test.csv"):
Step 1
Step 2
Then I started to build a Data Synchronization Task as below
What we want is for Informatica Cloud to connect to the secure FTP location and extract the contents of a .csv file from that location into our object in Salesforce.
But as you can see in Step 2, it does not allow me to choose .csv from that remote location.
Instead, the wizard prompts me to choose a file from a local directory (on my machine, where the Secure Agent is running), and this is not what I want.
What should I do in this scenario?
Can someone help?
You can write a UNIX script to transfer the file to your Secure Agent machine and then use Informatica to read the file. Although I have never tried using SFTP in the cloud, I have used Informatica Cloud and I do know that all files are tied to the location of the Secure Agent (either a server or a local computer).
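A minimal sketch of such a transfer script, assuming it runs on the Secure Agent machine with key-based SFTP authentication (host, user, and paths are placeholders):
#!/bin/sh
# Pull Test.csv from the remote SFTP server into the local directory
# that the Data Synchronization task reads from.
sftp ftpuser@sftp.example.com <<'EOF'
get /outbound/Test.csv /opt/infaagent/data/Test.csv
bye
EOF
Scheduled via cron shortly before the task runs, this keeps a current copy of the file in a directory the wizard can see.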
The local directory is used for template files. The idea is that you set up the task using a local template and then IC will connect to the FTP site when you actually run the task.
The Informatica video below shows how this works at around 1:10:
http://videos.informaticacloud.com/2FQjj/secure-ftp-and-salesforececom-using-informatica-cloud/
Can you elaborate on the Secure Agent OS, i.e. Windows or Linux?
For a Windows environment you will have to call the script using the WinSCP or Cygwin utility; I recommend the former.
For Linux, the basic commands in the script should work.
What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master/slave setup. I created a dumb slave, but I am not sure what the parameters should be so that I can trigger the batch file on the remote slave machine.
Basically, I am trying to run two different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but they do not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there:
Created a new node from Manage Nodes -> New Node -> set # of Executors to 1, Remote FS root to '/var/hudson', Launch method to JNLP, set the slave name and saved.
Once the slave was set up (from the master machine), I logged into the physical slave machine, downloaded slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location:
java -jar slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp
The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to the slave name.
Checked "Add Build Step" for adding my batch job script
What I am still missing now is a way to connect to 2 slaves from one job in sequence; is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were working locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want the job to run on. This is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage starting up and shutting down the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server: just get the new slave.jar from the server after the upgrade and put it on the slave. Then restart the slave and you are done.
I had trouble using the "install as a service" option for the slave even though I did it as a local administrator. I then used srvany to wrap the JAR into a service. Here is a blog about it. You will get the command that you need to wrap from the slave page on your Hudson server. For all of this to work, you should set up the slave launch method as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm. I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to do this with built-in Hudson features, but in one of my project builds I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use: download, unpack, and run the command with the right parameters, hence I opted to use it.
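For illustration only, the PsExec call inside such a batch file might look like this (host, credentials, and script path are all placeholders, not my actual setup):
psexec \\buildserver -u DOMAIN\builduser -p secret cmd /c C:\jobs\nightly.bat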
Friends
I have configured a WebLogic cluster with 2 managed servers and set CrashRecoveryEnabled to 'true' in nodemanager.properties so that in case of a server crash the managed servers can start automatically. The Node Manager and Admin Server are set up as Windows services so that they start automatically on server reboot. I have 2 questions:
1. How can I make sure that the managed servers start automatically after a server reboot (I know adding the managed servers as Windows services is one option)?
2. In nodemanager.properties, do I need to set StartScriptEnabled to true in production environments?
thanks
Setting up a service to have the managed servers start on system reboot is the preferred approach.
I always set StartScriptEnabled=true in production environments. This just uses the start scripts to bring up the managed servers.
Provided CrashRecoveryEnabled is set to true and you have started each of your managed servers, they will start again after a crash.
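For reference, the relevant nodemanager.properties entries discussed here are just these two lines:
# nodemanager.properties
CrashRecoveryEnabled=true
StartScriptEnabled=true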
You can use WLST to check if they are running (or to start them) through some sort of scheduled task if you wish.
EDIT: From the Oracle Documentation 4.2.4 Configuring Node Manager to Start Managed Servers
If a Managed Server contains other Oracle Fusion Middleware products, such as Oracle SOA Suite, Oracle WebCenter Portal, or Oracle JRF, the Managed Servers environment must be configured to set the correct classpath and parameters. This environment information is provided through the start scripts, such as startWebLogic and setDomainEnv, which are located in the domain directory.
If the Managed Servers are started by Node Manager (as is the case when the servers are started by the Oracle WebLogic Server Administration Console or Fusion Middleware Control), Node Manager must be instructed to use these start scripts so that the server environments are correctly configured. Specifically, Node Manager must be started with the property StartScriptEnabled=true.
There are several ways to ensure that Node Manager starts with this property enabled. As a convenience, Oracle Fusion Middleware provides the following script, which adds the property StartScriptEnabled=true to the nodemanager.properties file:
(UNIX) ORACLE_COMMON_HOME/common/bin/setNMProps.sh.
(Windows) ORACLE_COMMON_HOME\common\bin\setNMProps.cmd
For example, on Linux, execute the setNMProps script and start Node Manager:
ORACLE_COMMON_HOME/common/bin/setNMProps.sh
MW_HOME/wlserver_n/server/bin/startNodeManager.sh
When you start Node Manager, it reads the nodemanager.properties file with the StartScriptEnabled=true property, and uses the start scripts when it subsequently starts Managed Servers. Note that you need to run the setNMProps script only once.