Until Gradle 3, I used this to write Gradle's output to a log file:
def fileLogger = [
    onOutput : {
        File logfile = new File( 'gradle.log' )
        logfile << it
    }
] as org.gradle.api.logging.StandardOutputListener
gradle.useLogger( fileLogger )
This does not work with Gradle 4.
Update for Gradle 5:
It works when using logging.addStandardOutputListener instead of gradle.useLogger and adding it to all tasks:
// logger
def fileLogger = [
    onOutput: {
        File logfile = new File('gradle.log')
        logfile << it
    }
] as org.gradle.api.logging.StandardOutputListener
// for configuration phase
logging.addStandardOutputListener(fileLogger)
// for execution phase
gradle.taskGraph.whenReady { taskGraph ->
    taskGraph.allTasks.each { Task t ->
        t.doFirst {
            logging.addStandardOutputListener(fileLogger)
        }
    }
}
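If error output should end up in the same log file, the listener can also be registered for stderr. This is a minimal sketch, assuming Gradle's LoggingOutput interface (which logging implements) with its addStandardErrorListener method:
// same listener as above, additionally attached to standard error
def fileLogger = [
    onOutput: {
        new File('gradle.log') << it
    }
] as org.gradle.api.logging.StandardOutputListener
// configuration phase: stdout and stderr
logging.addStandardOutputListener(fileLogger)
logging.addStandardErrorListener(fileLogger)
// execution phase: register on every task
gradle.taskGraph.whenReady { taskGraph ->
    taskGraph.allTasks.each { Task t ->
        t.doFirst {
            logging.addStandardOutputListener(fileLogger)
            logging.addStandardErrorListener(fileLogger)
        }
    }
}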
Related
I'm a newbie to Jenkins DSL Groovy scripting. I have a parameterized Jenkins job which takes two inputs, $param1 and $param2.
I have two stages. The first stage generates an output.txt file in the workspace with contents like below. The content of output.txt changes based on the shell script execution in stage 1, so the values are dynamic.
output.txt
svn-test
svn_prod
svn_dev
The second stage has to take its input from the file output.txt and iterate over it in a parallel loop, dynamically creating stages. I have the code below, but it doesn't take input from the output.txt file; I'm unable to override the array in the stage and iterate in parallel.
def jobs = []
def parallelStagesMap = jobs.collectEntries {
    ["${it}" : generateStage(it)]
}
def generateStage(job) {
    return {
        stage("stage: ${job}") {
            script {
                git credentialsId: 'github', url: 'ssh://github.com/promp/${job}.git', branch: master
                echo "This is ${job}."
                sh ''' make ${parameter1}#"${paramete2}" '''
            }
        }
    }
}
pipeline {
    agent any
    parameters {
        string(name: 'parameter1', defaultValue: 'UIAutomation', description: 'Please enter the value')
        string(name: 'parameter2', defaultValue: 'UISite', description: 'Please enter the value')
    }
    stages {
        stage('non-parallel stage') {
            steps {
                script {
                    echo 'This stage will be executed first.'
                    sh '''./rotate_script.sh output.txt'''
                }
            }
        }
        stage('parallel stage') {
            failFast false
            steps {
                script {
                    def filePath = readFile('output.txt').trim()
                    def lines = filePath.readLines()
                    lines.each {
                        // I have tried to read the lines and pass the values. It didn't work out.
                    }
                    parallel parallelStagesMap
                }
            }
        }
    }
}
Ideally, this is how one of the second-stage entries should look; multiple parallel stages like it should be created based on the output.txt file:
stage('svn-test') {
    steps {
        sh 'mkdir -p svn-test'
        dir("svn-test") {
            script {
                git credentialsId: 'github', url: 'ssh://github.com/promp/svn-test.git', branch: master
                sh ''' make ${parameter1}#"${parameter2}"
                '''
            }
        }
    }
}
I finally got something like this to work. I had to move my Groovy list and the parallelStagesMap into my pipeline. The stage map is built at the start of your script, when your list is still empty, so you never get any results. If you move it AFTER the list is populated, it will work.
def generateStage(job) {
    return {
        stage("stage: ${job}") {
            script {
                git credentialsId: 'github', url: "ssh://github.com/promp/${job}.git", branch: 'master'
                echo "This is ${job}."
                sh ''' make ${parameter1}#"${parameter2}" '''
            }
        }
    }
}
pipeline {
    agent any
    parameters {
        string(name: 'parameter1', defaultValue: 'UIAutomation', description: 'Please enter the value')
        string(name: 'parameter2', defaultValue: 'UISite', description: 'Please enter the value')
    }
    stages {
        stage('non-parallel stage') {
            steps {
                script {
                    echo 'This stage will be executed first.'
                    sh '''./rotate_script.sh output.txt'''
                }
            }
        }
        stage('parallel stage') {
            failFast false
            steps {
                script {
                    def jobs = []
                    def filePath = readFile('output.txt').trim()
                    def lines = filePath.readLines()
                    lines.each { job ->
                        jobs.add("${job}")
                    }
                    def parallelStagesMap = jobs.collectEntries {
                        ["${it}" : generateStage(it)]
                    }
                    parallel parallelStagesMap
                }
            }
        }
    }
}
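As a side note, the intermediate jobs list isn't strictly required; this is a sketch of the same idea (using only the readFile, readLines and collectEntries calls already shown above) that builds the map directly inside the script block:
// inside the 'parallel stage' script block, after output.txt exists
def parallelStagesMap = readFile('output.txt')
        .trim()
        .readLines()
        .collectEntries { job -> [(job): generateStage(job)] }
parallel parallelStagesMap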
I work on a scripted Jenkins pipeline to shut down environments within our cloud providers. Since the environments are independent of each other, I want to process them in parallel.
Here is the code:
def environmentsArray = ["ZULI-TestCenter", "ZULI-FinanceCenter"]
node("someNode") {
    def parEx = [:]
    for (def item : environmentsArray) {
        def environment = item
        parEx[environment] = {
            // Identify environments
            environmentLowerCase = environment.split('-')[0].toLowerCase()
            appNameLowerCase = environment.split('-')[1].toLowerCase()
            stage("...") {
            }
            stage("...") {
            }
        } // End of parEx
    } // End of for
    parallel parEx
} // End of node
Unfortunately, the values environmentLowerCase and appNameLowerCase are not updated per iteration, i.e. they always have the same value:
[Pipeline] { (STOPPING DMS tasks: ZULI-TestCenter) // correct
[Pipeline] stage
[Pipeline] { (STOPPING DMS tasks: ZULI-FinanceCenter) // correct
[Pipeline] echo
zuli, financecenter // wrong, should be testcenter
[Pipeline] withCredentials
[Pipeline] echo
zuli, financecenter // correct
What am I doing wrong?
As mentioned by @daggett, I simply had to add def:
def environmentsArray = ["ZULI-TestCenter", "ZULI-FinanceCenter"]
node("someNode") {
    def parEx = [:]
    for (def item : environmentsArray) {
        def environment = item
        parEx[environment] = {
            // Identify environments
            def environmentLowerCase = environment.split('-')[0].toLowerCase() // added def
            def appNameLowerCase = environment.split('-')[1].toLowerCase() // added def
            stage("...") {
            }
            stage("...") {
            }
        } // End of parEx
    } // End of for
    parallel parEx
} // End of node
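For background, here is a plain Groovy sketch (not Jenkins-specific, names are just illustrative) of why def matters here: assigning to a variable that was never declared with def stores it in the single shared script binding, so all parallel branches read and write the same slot, while def makes it local to each closure:
def environments = ["ZULI-TestCenter", "ZULI-FinanceCenter"]
def branches = [:]
for (def item : environments) {
    def environment = item                // local copy per iteration
    branches[environment] = {
        // Without 'def' the next assignment would go to the shared script binding
        // and concurrent branches could overwrite each other's value:
        // environmentLowerCase = environment.split('-')[0].toLowerCase()
        def environmentLowerCase = environment.split('-')[0].toLowerCase()
        def appNameLowerCase = environment.split('-')[1].toLowerCase()
        println "${environmentLowerCase}, ${appNameLowerCase}"
    }
}
branches.each { name, body -> body() }    // Jenkins would run these via 'parallel'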
I have the following simple Flink application running within the IDE. It takes a checkpoint every 5 seconds, and I would like the checkpoint data written to the directory file:///d:/applog/out/mycheckpoint/. After running it for a while I stop the application, but I don't find anything under the directory file:///d:/applog/out/mycheckpoint/.
The code is:
import java.util.Date
import io.github.streamingwithflink.util.DateUtil
import org.apache.flink.api.common.state.{ListState, ListStateDescriptor}
import org.apache.flink.api.scala._
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

object SourceFunctionExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(4)
    env.getCheckpointConfig.setCheckpointInterval(5 * 1000)
    env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
    env.setStateBackend(new FsStateBackend("file:///d:/applog/out/mycheckpoint/"))

    val numbers: DataStream[Long] = env.addSource(new ReplayableCountSource)
    numbers.print()

    env.execute()
  }
}

class ReplayableCountSource extends SourceFunction[Long] with CheckpointedFunction {
  var isRunning: Boolean = true
  var cnt: Long = _
  var offsetState: ListState[Long] = _

  override def run(ctx: SourceFunction.SourceContext[Long]): Unit = {
    while (isRunning && cnt < Long.MaxValue) {
      ctx.getCheckpointLock.synchronized {
        // increment cnt
        cnt += 1
        ctx.collect(cnt)
      }
      Thread.sleep(200)
    }
  }

  override def cancel(): Unit = isRunning = false

  override def snapshotState(snapshotCtx: FunctionSnapshotContext): Unit = {
    println("snapshotState is called at " + DateUtil.format(new Date) + s", cnt is ${cnt}")
    // remove previous cnt
    offsetState.clear()
    // add current cnt
    offsetState.add(cnt)
  }

  override def initializeState(initCtx: FunctionInitializationContext): Unit = {
    // obtain operator list state to store the current cnt
    val desc = new ListStateDescriptor[Long]("offset", classOf[Long])
    offsetState = initCtx.getOperatorStateStore.getListState(desc)
    // initialize cnt variable from the checkpoint
    val it = offsetState.get()
    cnt = if (null == it || !it.iterator().hasNext) {
      -1L
    } else {
      it.iterator().next()
    }
    println("initializeState is called at " + DateUtil.format(new Date) + s", cnt is ${cnt}")
  }
}
I tested the application on Windows and Linux and in both cases the checkpoint files were created as expected.
Note that the program keeps running if a checkpoint fails, for example due to some permission errors or invalid path.
Flink logs a WARN message with the exception that caused the checkpoint to fail.
I am indexing text documents using Flume. I do not see any error or warning messages, but the data is not getting ingested into Solr. The log level for both Solr and Flume is set to TRACE, ALL.
Flume version : 1.5.2.2.3
Solr Version : 5.5
Config files are as below.
Flume config:
agent.sources = SpoolDirSrc
agent.channels = FileChannel
agent.sinks = SolrSink
# Configure Source
agent.sources.SpoolDirSrc.channels = fileChannel
agent.sources.SpoolDirSrc.type = spooldir
agent.sources.SpoolDirSrc.spoolDir = /home/flume/source_emails
agent.sources.SpoolDirSrc.basenameHeader = true
agent.sources.SpoolDirSrc.fileHeader = true
agent.sources.SpoolDirSrc.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
agent.channels.FileChannel.type = file
agent.channels.FileChannel.capacity = 10000
agent.sinks.SolrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent.sinks.SolrSink.morphlineFile = /etc/flume/conf/morphline.conf
agent.sinks.SolrSink.batchsize = 1000
agent.sinks.SolrSink.batchDurationMillis = 2500
agent.sinks.SolrSink.channel = fileChannel
agent.sinks.SolrSink.morphlineId = morphline1
agent.sources.SpoolDirSrc.channels = FileChannel
agent.sinks.SolrSink.channel = FileChannel
"
Morphline Config
solrLocator: {
  collection : gsearch
  zkHost : "codesolr-as-r3p:21810,codesolr-as-r3p:21811,codesolr-as-r3p:21812"
}
morphlines :
[
  {
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
    commands :
    [
      { detectMimeType { includeDefaultMimeTypes : true } }
      {
        solrCell {
          solrLocator : ${solrLocator}
          captureAttr : true
          lowernames : true
          capture : [_attachment_body, _attachment_mimetype, basename, content, content_encoding, content_type, file, meta]
          parsers : [ { parser : org.apache.tika.parser.txt.TXTParser } ]
        }
      }
      { generateUUID { field : id } }
      { sanitizeUnknownSolrFields { solrLocator : ${solrLocator} } }
      { logDebug { format : "output record: {}", args : ["#{}"] } }
      { loadSolr: { solrLocator : ${solrLocator} } }
    ]
  }
]
Please help me figure out what the issue could be.
Regards,
~Sri
Normally, in the Flume logs you can see more details about your error; could you paste the trace?
Maybe Morphlines doesn't find your Solr configuration; you can add this property to your morphline.conf:
solrHomeDir : "/your_solr_config_files"
I hope this helps.
I'm currently storing images within the webapp folder of my Lift project, which I know will cause problems in the future.
val path = "src/main/webapp/files/"
And the code I'm using to save it:
case Full(file) =>
  val holder = new File(path, "test.txt")
  val output = new FileOutputStream(holder)
  try {
    output.write(file)
  } finally {
    output.close()
  }
}
What I'm trying to do is save the files to the server root in an easily manageable folder called files, i.e. SERVER_ROOT/files, outside of the project folder.
Firstly how would I access the path to the root of the server so I can save them there?
Secondly how would I serve these files from my app, so I can display them on a page?
Thanks in advance, any help much appreciated :)
You have to store the file in an exact place on the filesystem according to an absolute path. I have written this code and it works, so maybe it helps you:
def storeFile (file : FileParamHolder): Box[File] =
{
  getBaseApplicationPath match
  {
    case Full(appBasePath) =>
    {
      var uploadDir = new File(appBasePath + "RELATIVE PATH TO YOUR UPLOAD DIR")
      val uploadingFile = new File(uploadDir, file.fileName)
      println("upload file to: " + uploadingFile.getAbsolutePath)
      var output = new FileOutputStream(uploadingFile)
      try
      {
        output.write(file.file)
      }
      catch
      {
        case e => println(e)
      }
      finally
      {
        output.close
        output = null
      }
      Full(uploadingFile)
    }
    case _ => Empty
  }
}
And this is my getBaseApplicationPath function, which finds the absolute path on the local machine (server or your development PC):
def getBaseApplicationPath: Box[String] =
{
  LiftRules.context match
  {
    case context: HTTPServletContext =>
    {
      var baseApp: String = context.ctx.getRealPath("/")
      if (!baseApp.endsWith(File.separator))
        baseApp = baseApp + File.separator
      Full(baseApp)
    }
    case _ => Empty
  }
}