Build with parameters for GitHub organization - jenkins-plugins

Is there a way to add the "Build with Parameters" option in Jenkins for a GitHub Organization project?
I'd appreciate your answers.

First, if you're using the GitHub Organization Folder plugin: it is deprecated and hasn't been updated in two years.
https://plugins.jenkins.io/github-organization-folder
So you might want to consider using a Multibranch Pipeline (MBP) in conjunction with Job DSL instead.
Next, the idea is that the MBP dynamically creates branch 'jobs' when it scans the repo you provide. I'm not sure whether you can put a parameters block in the pipeline you create, but even if you can and it works, I wonder whether the job can still be triggered automatically.
https://jenkins.io/doc/book/pipeline/syntax/#parameters
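As a minimal sketch (using the declarative syntax from the link above; the parameter names and stage are made up for illustration), a parameters block in the branch's Jenkinsfile could look like this. Note that in an MBP the parameters usually only show up in the UI after the branch job has run at least once:
pipeline {
    agent any
    parameters {
        // Hypothetical parameters, purely for illustration
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test stage')
    }
    stages {
        stage('Build') {
            steps {
                echo "Building for ${params.DEPLOY_ENV} (tests: ${params.RUN_TESTS})"
            }
        }
    }
}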


What are my options for merging subsets of repositories?

This is more of an application architecture and source control question.
I have two GitHub repositories: one is a React single-page application and the other is a React website. For my single-page application, I am making the code publicly available, and the application links to its repository. For my website, I want to keep the repository private but incorporate the single-page application into it so people can use it without having to download and build the code.
Can I get some options on how to merge changes to the single page application repository with the website repository?
So far I am just merging code to the website manually by copying it over and pushing the code, but that is a problematic way of doing things. Neither repo is completely up and running yet, so there is still time for me to make architecture changes. Maybe there are git commands to handle everything?
Any help is appreciated, including suggested architecture/repo changes.
I think the best option here would be to use git cherry-pick, but in an automated way:
Build a simple script that listens for push events from your single-page-application repo via GitHub webhooks; that way you are notified of the merge-into-master event automatically.
Get the hash of that commit.
Plug that hash into a git cherry-pick run against your private website repo. You can apply the commit on a separate branch in that repo and merge it into master when you think it's appropriate (see the sketch below).
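A minimal sketch of the cherry-pick step, assuming the website repo has the public single-page-application repo added as a remote (the remote name, URL, branch name and COMMIT_HASH variable are all placeholders):
cd website-repo
# One-time setup: track the public SPA repo as an extra remote
git remote add spa https://github.com/your-user/spa-repo.git
# On each webhook notification: fetch the new commits, then cherry-pick the one from the event
git fetch spa master
git checkout -b spa-sync
git cherry-pick "$COMMIT_HASH"
# Review the spa-sync branch and merge it into master when appropriate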

How to create Salesforce incremental package.xml automatically?

Has anyone experimented with creating a Salesforce package.xml automatically for continuous integration? If there is any script or idea, please share.
As you know, an incremental package.xml helps deploy only the modified files, rather than a complete package.xml that redeploys unmodified files as well, which takes a lot of time.
Thanks in advance!
Tricky. And it's not really a programming-related problem; consider cross-posting this to https://salesforce.stackexchange.com/ or maybe even https://devops.stackexchange.com/
I don't think there's a clear answer; you'll have to experiment. Especially since you tagged "migration tool" (the old-school, battle-tested but lower-priority Metadata API; it seems all the focus is now on the SFDX style of deployments). Do you use any version control (ideally Git), or do you hope to somehow compare the source & target orgs, figure out the deltas and deploy only them?
Remember that SF often gets better at detecting "no changes" with every release (how old is your migration tool's jar file?). For example, when I deploy my current project to an empty sandbox (an exact copy of prod, with no custom objects, code etc. yet), the initial deploy takes ~7 minutes, but any subsequent deploy with the same content or slight changes takes just 3-4. So try to calculate the time lost in the grand scheme of things and decide what gains you want to see / how much time you want to spend on experimenting and tweaking the solution.
You could look into dedicated deployment solutions such as Gearset, Autorabit, or Odaseva (I'm not affiliated with any of them, and this list is not exhaustive). They are often capable of running a comparison for you.
There are several projects that try to compose package.xml based on the Git diff(erence) between two commits. Of course you need to have a repo first and some discipline around it:
https://github.com/cloudsandbox/sfdx-gen-pack - I saw a presentation about it at Cloudforce London 2019
https://github.com/Accenture/sfpowerkit seems to have a "diff" command (disclaimer: I used to work for Accenture but am not affiliated now, haven't worked on the tool, and haven't used it personally)
https://cumulusci.readthedocs.io/en/latest/ - this seems to be interesting and mature. Built by SF employees, it's not an official tool but is used to CI-deploy the non-profit packages they build (maybe you've heard about the Non Profit Starter Pack, especially if you ever considered enabling Person Accounts). I'm not sure if they do delta deployments as such, but there seems to be a command that updates package.xml with the files in the repository, so it's a start? https://cumulusci.readthedocs.io/en/latest/tutorial.html#part-4-running-tasks
I'm not saying CumulusCI will be a silver bullet, but of these three it seems to be the most actively maintained ;) It sounds like you'd have to get familiar with SFDX, though (if not the whole thing, then at least the commands to convert the project back and forth between the "source" (SFDX) structure and the Metadata API structure).
Answering my own question: I found that git diff master feature/vat | force-dev-tool changeset create vat works!
Thanks to Roman, who answered it in https://salesforce.stackexchange.com/questions/184332/is-there-a-pre-build-solution-for-generating-a-package-xml-from-a-git-repo
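As a minimal sketch of how the same idea could be wired into CI (the commit range and changeset name are placeholders; force-dev-tool reads the diff from stdin, exactly as in the command above):
# Build an incremental package from the changes between the last deployed commit and the current one
git diff "$LAST_DEPLOYED_COMMIT" "$CURRENT_COMMIT" | force-dev-tool changeset create incremental
# The generated changeset includes a package.xml listing only the changed components,
# which can then be deployed with your usual Metadata API tooling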

Using Flink LocalEnvironment for Production

I wanted to understand the limitations of LocalExecutionEnvironment and whether it can be used to run in production.
Appreciate any help/insight. Thanks
LocalExecutionEnvironment spins up a Flink MiniCluster, which runs the entire Flink system (JobManager, TaskManager) in a single JVM. So you're limited to the CPU cores and memory available on that one machine. You also don't have HA from multiple JobManagers. I haven't looked at other limitations of the MiniCluster environment, but I'm sure more exist.
A LocalExecutionEnvironment doesn't load a config file on startup, so you have to do all of the configuration in the application. By default it also doesn't offer a REST endpoint. You can solve both these issues by doing something like this:
// Load flink-conf.yaml from the current working directory (not done automatically for a local environment)
String cwd = Paths.get(".").toAbsolutePath().normalize().toString();
Configuration conf = GlobalConfiguration.loadConfiguration(cwd);
// Create a local environment that also starts the web UI / REST endpoint
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);
Logging may be another issue that will require a workaround.
I don't believe you'll be able to use the Flink CLI to control the job, but if you create the Web UI (as shown above) you can at least use the REST API to do things like triggering savepoints (after first using the REST API to get the job ID).
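For example, a sketch against the standard Flink REST API (assuming the web UI is listening on its default port 8081; the job ID and savepoint directory are placeholders):
# List running jobs to find the job ID
curl http://localhost:8081/jobs
# Trigger a savepoint for that job
curl -X POST http://localhost:8081/jobs/<job-id>/savepoints \
  -H "Content-Type: application/json" \
  -d '{"target-directory": "file:///tmp/savepoints", "cancel-job": false}'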

Run Map Reduce on non-default versions?

I have a couple of questions about the App Engine Map Reduce API. First of all there's a mapreduce package in the SDK, and there's a separate mapreduce bundle here:
https://developers.google.com/appengine/downloads
Which one should I be using? Should I be using the bundle, or is the documentation out of date and I should actually use the SDK version?
Second, I'd like to be able to run mapreduces on a non-default version to make sure that the requests from the mapreduce don't interfere with user requests.
What's the best way to do this? Can I start the pipeline with a task queue, and set the target version of that queue to be my non-default version?
We recommend using the open source version of Map Reduce for GAE at http://code.google.com/p/appengine-mapreduce/
The stale bundle link in the docs is a bug. That'll get cleaned up soon.
A few of our SDKs have bits of MapReduce (for historic reasons), but the open source version is the way to go for now.
As for using a separate version, this is kind of "it depends". If you're thinking of interference in terms of competition for the processor, that's not likely to be a noticeable issue. Depending on queue processing rates you've set up, more instances of your app will be spun up to handle mapping tasks as needed. I'd try some experiments first. Make sure you have a problem before you invest time and effort solving it.
A mapreduce can be started on a non-default version, and after it starts it will continue to run on that version automatically.
In my case I just deploy the code to a non-default version and trigger the mapreduce via version_id.app_id.appspot.com/path_to_start_a_job.
A cron job can also trigger the mapreduce on a non-default version without problems.
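For the task-queue part of the question, a minimal sketch (Python runtime; the handler path, version name and queue name are placeholders) of enqueuing the start request so it runs on a non-default version:
from google.appengine.api import taskqueue

# 'target' routes the task to the named (non-default) version of the app,
# so the MapReduce start handler runs there instead of on the default version.
taskqueue.add(
    url='/path_to_start_a_job',
    target='mapreduce-version',
    queue_name='default')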

Scripting responses for use in the Maven Release Plugin

We are an SVN/Maven/Hudson shop. We are experimenting with using the Maven Release Plugin to help automate our very laborious tagging and releasing process. We are happy with what we have seen and researched thus far with regard to this plugin.
Our question is - if we need to have different tags for some of the modules / applications being built, is there a way to script the responses?
We have waded through the interactive dry runs successfully; however, we are looking to script these out to further our automation.
Has anyone tried this or know if it is possible?
Does the "Batch Mode" allow this functionality?
Thanks
Joe R
You can use -B (batch mode), but it will use default version names (removing -SNAPSHOT at the end).
Regarding tags per module, you can have a look at the autoVersionSubmodules parameter; see the example below.
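A minimal sketch of a non-interactive release, assuming the standard maven-release-plugin properties (version numbers and tag name are placeholders):
# Batch mode, with the answers to the interactive prompts supplied as properties
mvn -B release:prepare release:perform \
  -DreleaseVersion=1.2.0 \
  -DdevelopmentVersion=1.3.0-SNAPSHOT \
  -Dtag=my-app-1.2.0 \
  -DautoVersionSubmodules=true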
/Olivier
