I am using Jenkins, Ant, Flex, and Java for my web application.
Currently I update a build version file in the Flex src and commit it before starting the Jenkins build.
I want to avoid this manual process and let a script do it for me.
Contents of file:
Build=01_01_2013_10:43
Release=2.01
Question 1:
I want to update this file's contents, compile my code, and then commit the file back to SVN, so that SVN has the latest build version number.
How do I commit this changed file to SVN? It would be great if the commit happened only after a successful build.
Question 2: I want to send an email to all developers an hour before the build starts: "Please commit your changes. The build will start in 1 hour." Can I set up a delay between the email and the actual svn export + ant build, or do I have to schedule two jobs an hour apart, one to send the email and one to do the build?
You can use the Subclipse svn ant integration to commit changed files to SVN, including authentication:
<svnSetting
    svnkit="true"
    username="bingo"
    password="bongo"
    id="svn.settings"
/>
<svn refid="svn.settings">
    <commit file="your.file" />
</svn>
To get the username and password into the build file you have different options. One would be a parametrized build, where you define the username and password as build parameters which can then be referenced in the build file:
username="${parameter.svn.username}"
password="${parameter.svn.password}"
A second option is the Jenkins Config File Provider plugin. With this you can use the same parameters as in the parametrized build, but you import the credentials from the provided config file; e.g., a properties file can be imported via
<property file="config.file" />
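For illustration, the contents of such a properties file, reusing the parameter names and example credentials from above:

# config.file: SVN credentials provided by the Config File Provider plugin
parameter.svn.username=bingo
parameter.svn.password=bongo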
Alternatively, you can use Ant's exec task to run the command-line svn client and commit the file.
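A minimal sketch of such an exec task, assuming the svn command-line client is installed on the build machine (the commit message and file name are illustrative):

<exec executable="svn" failonerror="true">
    <arg value="commit" />
    <arg value="-m" />
    <arg value="update build version" />
    <arg value="your.file" />
</exec>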
For sending an e-mail one hour before actually building, you should set up two jobs scheduled one hour apart. But I don't think notifying before a build is good practice; consider building more often, maybe even on every commit to SVN.
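For illustration, the two jobs' "Build periodically" schedules could simply sit one hour apart (the times here are arbitrary):

# notification job: 22:00 on weekdays
0 22 * * 1-5
# build job: 23:00 on weekdays, one hour later
0 23 * * 1-5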
You can also use the Post build Task plugin (https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task) to execute svn as a shell script (svn must be installed and authenticated from the shell once for the user that runs Jenkins).
Then the svn commit runs as a post build action. The plugin has an option (checkbox) to run the script only if the previous build/steps were successful.
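For example, a minimal post-build script along these lines (the file path and commit message are illustrative; $WORKSPACE is set by Jenkins):

cd "$WORKSPACE"
# commit the regenerated version file back to SVN
svn commit -m "update build version after successful build" flex/src/buildversion.properties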
The plugin is also mentioned here: Execute Shell Script after post build in Jenkins
I have set up a very basic task in a VSTS build definition with the following simple steps and objective:
Setup and successfully test an endpoint to our Artifactory repository.
Implement a VSTS "Artifactory Generic Download" Task to retrieve a single jar file from the Artifactory repository.
Drop the jar file in staging directory of the build agent.
The file spec source, based on an example from the JFrog website (www.jfrog.com) and set up as a Task Configuration, is very basic and is depicted below:
Unfortunately, triggering this build job fails with the error below, and I simply can't figure out why. I would appreciate some help with this.
It seems that no artifacts were found and the task fails due to the configured "Fail task if no dependencies were downloaded" flag. If you wish to change this behavior, you can uncheck the flag in your task configuration.
As for the artifacts not being downloaded, make sure a repository called "list" exists and that a jar file matching the provided pattern exists in it.
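For reference, a minimal file spec of the shape the task expects, downloading jar files from a repository named list into the build's staging directory (the pattern is illustrative; $(Build.ArtifactStagingDirectory) is the standard staging path variable):

{
  "files": [
    {
      "pattern": "list/*.jar",
      "target": "$(Build.ArtifactStagingDirectory)/"
    }
  ]
}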
More information about file-specs can be found here.
I'd like to run a batch job using Jenkins, where the build status depends on the number of files created in a specific folder. My question is: how can I manage the Jenkins build status depending on the number of files created?
You can execute a shell script that counts the files and returns 1 if the count isn't what is expected.
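A minimal sketch, assuming the files land in a folder inside the workspace and that exactly five files are expected (folder name and count are placeholders):

#!/bin/sh
# fail the build if the number of created files isn't what we expect
OUTPUT_DIR="$WORKSPACE/output"
EXPECTED=5
COUNT=$(find "$OUTPUT_DIR" -maxdepth 1 -type f | wc -l)
if [ "$COUNT" -ne "$EXPECTED" ]; then
    echo "expected $EXPECTED files in $OUTPUT_DIR, found $COUNT"
    exit 1
fi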
Another way would be to use the Text-finder Plugin searching for a pattern in the console log.
Groovy Postbuild Plugin is another alternative:
buildUnstable() - sets the build result to UNSTABLE.
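For example, a Groovy Postbuild script could downgrade the result based on a line your batch prints to the console (the log pattern is illustrative; manager is the object the plugin provides):

// mark the build unstable if the batch reported too few files
if (manager.logContains(".*0 files created.*")) {
    manager.buildUnstable()
}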
If you'd like to use the CLI, you can use the following command:
java -jar jenkins-cli.jar -s http://...:8080/ set-build-result
Sets the result of the current build. Works only if invoked from within a build.
When I try to call the command line to get the latest updates for my TFS-mapped folder,
TF get /recursive
the call completes successfully from the Run window of Windows 7.
However, when I put the call into a .bat file and run it via InvokeProcess within a build definition (guide to do that), it fails.
The message is as follows:
Unable to determine the workspace. You may be able to correct this by
running 'tf workspaces /collection:TeamProjectCollectionUrl'
What is happening? How can I work around it?
The reason this is happening is because when you run the command locally, the folder you're downloading is mapped to a workspace. When you execute the command in your build, it's running under the build service account, which doesn't have a workspace mapped for the folder you're specifying.
You shouldn't need to use the tf get command as part of your build. When you configure your build, you can specify which folders should be mapped in the workspace on the "Source Settings" tab. The build process will handle making sure the contents of the folders you map in your workspace are present automatically.
The cause of my issue is that the build agent executes my .bat file under a different Windows account (something like NT SERVICE); when I run the call directly, my TFS login account is used (previously remembered in the Windows 7 Credential Manager).
So the solution is to pass the login name and password, as in TF get /login:SomeTFSUsername,SomePassword (see more here).
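Put together, the .bat file could look like this (the path and credentials are placeholders):

rem run the get from the folder that is mapped to the workspace
cd /d C:\Builds\MyMappedFolder
tf get /recursive /login:SomeTFSUsername,SomePassword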
My aim is to minimize the steps needed to locally clone my website + database.
I have a central Git repository on a webserver and a local clone. When I pull updates on my local machine, not only should I get the latest file versions from the remote repository, but a script should also run on the webserver to dump the live database and add it to the repository before the pull is delivered.
My guess is that the following actions need to happen on the remote machine, before the pull is delivered, when I run git pull on the local machine:
Create a database dump file, e.g. dump.sql (by executing mysqldump)
Add dump.sql to repository
Commit dump.sql to repository
… and only then deliver the pull to the local machine.
What kind of git hook should I use for this?
I'd also appreciate any additional experience with such a scenario.
git help hooks lists the types of hooks and how they work, but there isn't a hook that you can use to do what you want (you'd need something like pre-upload that would be executed by git-upload-pack).
However, you could create a wrapper script around git-upload-pack on the server that performs the necessary actions and then executes the real git-upload-pack command:
- find the git-upload-pack executable
- rename it to git-upload-pack.real
- create a new script called git-upload-pack somewhere in PATH that does the following:
  - use the arguments to find the Git repository
  - cd into the Git repository
  - if hooks/pre-upload exists and is an executable file, run it; if the hook exits with a non-zero status, print an error message to standard error and exit with a non-zero return value
  - run git-upload-pack.real with the original command-line arguments
- create a hooks/pre-upload script in your Git repository that does whatever you want (sketches of both scripts follow)
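A sketch of such a wrapper in shell (the repository path is taken from the last command-line argument, which is how Git invokes git-upload-pack; treat this as illustrative, not battle-tested):

#!/bin/sh
# installed in PATH as git-upload-pack; the real binary was renamed
# to git-upload-pack.real
for arg in "$@"; do repo="$arg"; done   # last argument = repository path
if [ -x "$repo/hooks/pre-upload" ]; then
    if ! (cd "$repo" && hooks/pre-upload); then
        echo "pre-upload hook failed, refusing to serve the pull" >&2
        exit 1
    fi
fi
exec git-upload-pack.real "$@"

And a hooks/pre-upload performing the dump-and-commit from the question (database name and credentials are placeholders; this assumes the served repository has a working tree, otherwise commit from a separate checkout and push):

#!/bin/sh
# dump the live database and commit it before the pull is served
mysqldump -u dbuser -pdbpass mydatabase > dump.sql
git add dump.sql
# the commit is a harmless no-op when the dump hasn't changed
git commit -m "automatic database dump before pull" || true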
While working on my GAE project in my dev environment, whenever I upload data to my dev datastore, the log files are stored in my current directory, for instance:
C:\dev\ls
bulkloader-log-20090912.104643
bulkloader-log-20090912.104648
bulkloader-log-20090912.104731
bulkloader-log-20090912.105526
bulkloader-log-20090912.110428
bulkloader-progress-20090912.104648.sql3
bulkloader-progress-20090912.104731.sql3
bulkloader-progress-20090912.105526.sql3
bulkloader-progress-20090912.110428.sql3
project
project is my GAE app. The above is generated when I run the command appcfg.py upload_data. Is there a way to tell GAE where to store those log files, for instance in a log folder?
Use the --log_file=... option to appcfg.py, as documented here: with this command-line option you can give the complete path to the log file, including folder and name. (You cannot give JUST the folder and let it figure out the name; for that, you need to write a tiny script that figures out the name and then calls appcfg.py.)
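Such a tiny wrapper might look like this (the logs folder and the upload flags are placeholders; --log_file is the option described above):

#!/bin/sh
# compute a timestamped log path inside logs/, then upload as usual
mkdir -p logs
LOG="logs/bulkloader-$(date +%Y%m%d.%H%M%S).log"
# --filename=data.csv stands in for your usual upload_data arguments
appcfg.py upload_data --log_file="$LOG" --filename=data.csv project/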