How can I start an RTC build via the command line? - clearcase

The build definition is created but in order to automate the build process I need to start the build via the command line.
How is this done? Reading the documentation on the scm command-line client, this doesn't seem to be described:
http://pic.dhe.ibm.com/infocenter/rtc/v1r0m0/index.jsp?topic=%2Fcom.ibm.team.scm.doc%2Ftopics%2Fc_scm_cli.html

I don't think scm is involved at all in launching a build.
You can check out the Java API: see "Automated Build Output Management Using the Plain Java Client Libraries".
Or, you can use the Build System Toolkit and its requestTeamBuild Ant task:
The requestTeamBuild task requests a build by using a specified build definition.
There must be an active engine that supports the build definition in order for the request to succeed.
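For illustration, a minimal Ant file driving that task could look roughly like this (the taskdef class name and the attribute names are recollections of the Build System Toolkit and should be checked against its documentation; the repository address, user, password file, and build definition id are placeholders):

<project name="request-build" default="request">
  <!-- Load the RTC Build Toolkit Ant tasks; the class name is an assumption, verify it in your toolkit. -->
  <taskdef name="requestTeamBuild"
           classname="com.ibm.team.build.ant.task.RequestBuildTask"/>
  <target name="request">
    <!-- Request a build of the given build definition; all values below are placeholders. -->
    <requestTeamBuild repositoryAddress="https://rtc.example.com:9443/ccm"
                      userId="builduser"
                      passwordFile="build-password.txt"
                      buildDefinitionId="my.build.definition"/>
  </target>
</project>

It would then be run from the command line with something like ant -lib <path-to-build-toolkit-jars> -f request.xml, where the -lib path points at the Build System Toolkit installation (again an assumption to verify locally).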

Related

Why might Apache Flink write files on a Windows box, but not write files on a Linux container using simple FileSink and SimpleStringEncoder?

I'm working with the examples provided in 'flink-training' in the GitHub repository here. Specifically, I'm working on the 'ride-cleansing' example.
I've replaced the PrintSinkFunction with a simple FileSink configured as follows:
FileSink<String> fileSink =
        FileSink.forRowFormat(new Path(args[0]),
                        new SimpleStringEncoder<String>("UTF-8"))
                .withRollingPolicy(DefaultRollingPolicy.builder()
                        .withRolloverInterval(Duration.ofMinutes(1))
                        .withInactivityInterval(Duration.ofSeconds(30))
                        .withMaxPartSize(512 * 512 * 512)
                        .build())
                .build();
When I run this example on my local machine in IntelliJ, the expected directories are created and files are written to reflect the data streamed to the sink.
However, when I run this same example on a Linux box (on Google Colab), the directory is created, but no files are created, regardless of how long I leave it running (I've tried 10+ minutes).
On the Linux Container, I'm running the example using the gradle setup and the following command:
./gradlew :ride-cleansing:runJavaSolution --args="/content/datastream"
On the Windows box, I'm just executing the RideCleansingSolution 'main' with a simple 'Application' run configuration.
What might be different about my setup on the two systems that would decide whether data is written?
It might not work, but if you set up MonoDevelop on whatever *nix you're using and write it all in C# via Xamarin in VS.NET23 it MIGHT work seamlessly across all platforms and arches... but I'm just spitballing here, so \_o_/

Setting up a workflow for autoformatting a git repository (C)

I want to set up a workflow that allows me to have a git repository with uniform/consistent formatting. The developers (approx. 30) should be able to commit properly formatted changes to their local repository easily, independent of their operating system (either some Linux or Windows 10) and independent of their IDE. Changes shall be pushed to a Linux server which administers the remote repository.
From my point of view there are two steps necessary to ensure that the remote repository is properly formatted:
Format the current state of repository according to a set of rules.
Format the files affected by every new commit according to these rules.
The first step can be implemented easily by running an auto-formatting tool (e.g. clang-format) on the complete repository. The implementation of the second step can be further divided into two substeps:
2a) Client side: Format a commit properly before pushing it to the server.
2b) Server side: Check if the repository will be properly formatted after the changes of the commit are applied.
The second substep (2b) can be implemented easily (similar to step 1). However, the implementation of the first substep (2a) is more demanding and I would like to reach out to the community for tips/tricks/ideas.
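For example, substep 2a could be handled with a client-side Git pre-commit hook along these lines (a rough sketch: it assumes clang-format is on the PATH and a .clang-format file sits in the repository root, and distributing the hook to all developers still has to be solved separately):

#!/bin/sh
# .git/hooks/pre-commit (sketch): reformat staged C files before the commit is recorded.
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(c|h)$')
[ -z "$files" ] && exit 0
for f in $files; do
    clang-format -i -style=file "$f"   # apply the repository's .clang-format rules
    git add "$f"                       # re-stage the reformatted file
done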
So far I've had a closer look at the Eclipse autoformatter and clang-format:
The Eclipse autoformatter can only be used when Eclipse is installed; I haven't found a standalone Eclipse autoformatter application. Is it possible to run the Eclipse autoformatter from the command line without a GUI?
clang-format is a Unix tool which I cannot install and run standalone on a Windows system. I've seen there is an LLVM executable for Windows, but I am not sure whether the installation will cause any undesired changes to my system. Is anybody using LLVM/clang-format on Windows?
Are there other auto-formatting tools for C which work on Linux and Windows 10? Is anybody successfully using Python scripts for this purpose?

Using nix in a continuous delivery workflow

Can nix be used in a continuous-delivery workflow?
We're using semaphore as our continuous integration service, and now I'm looking into building packages upon a successful build. For this I'm considering using nix.
I don't know what would be the right way of setting up a continuous delivery pipeline with this package manager. It seems that such an automated process would involve:
Making a branch of the nixpkgs repository (in the CI server).
Updating the rev field of fetchFromGitHub (see the sketch below).
(automatically) submitting a pull-request.
But I don't know if this makes sense, and I'm also concerned that the continuous-delivery process would involve a manual step (a human approving the pull request).
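For reference, the rev field from step 2 lives inside a fetchFromGitHub call roughly like this (owner, repo, rev, and sha256 are placeholder values):

src = pkgs.fetchFromGitHub {
  owner = "your-org";      # placeholder
  repo = "your-package";   # placeholder
  rev = "0123456789abcdef0123456789abcdef01234567";  # the commit the pipeline would bump
  sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder hash
};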
Can nix be used in a continuous-delivery workflow?
Yes. It's typically done with Hydra, a CI system built with Nix. But, it may be possible to do this with Semaphore.
Semaphore CI provides build environments that are language specific, but... it's running Ubuntu, so theoretically you can do something like this:
Install Nix as if it were a dependency. See this article.
Add your Nix package, which I suppose you can do with Git. You don't really need to clone Nixpkgs.
Use nix-build to build your package. This will create a result symbolic link to the build output.
Deploy using git-deploy.
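Roughly, those steps could look like this in a Semaphore job (a sketch: the install command and profile path follow the standard single-user Nix install, and the repository URL is a placeholder):

# 1. Install Nix as if it were a dependency of the build.
curl -L https://nixos.org/nix/install | sh
. "$HOME/.nix-profile/etc/profile.d/nix.sh"   # make nix-build available in this shell
# 2. Fetch the package sources (no need to clone Nixpkgs).
git clone https://github.com/your-org/your-package.git && cd your-package
# 3. Build: nix-build evaluates default.nix and leaves a ./result symlink to the output.
nix-build
# 4. Deploy the contents of ./result, e.g. with git-deploy.
ls -l result/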
If you do something like this with your package you can call it directly from nix-build because you won't have to provide the package dependencies as arguments:
{ pkgs ? import <nixpkgs> {} }:
let
  stdenv = pkgs.stdenv;
  ...
in
stdenv.mkDerivation {
  ..
}
Optimization
Installing Nix for every build is wasteful, but perhaps you can cache the Nix store. See this article.

Background process launched by TFS is killed when switching to the next step

I have some trouble keeping a background process alive when it is launched by TFS.
Usually I use a batch file that launches a Java server (in a new window); as long as I keep this window open, it works properly.
C:\Users\TFSService\mbs-iot-sdk\osgi\bin\vms\jdk\server.bat
In order to make my process automatic, I include this in TFS. In the step I call a batch that contains the following:
cd C:\Users\TFSService\mbs-iot-sdk\osgi\bin\vms\jdk // necessary to find the batch
start C:\Users\TFSService\mbs-iot-sdk\osgi\bin\vms\jdk\server.bat
In my task manager, I can see in the background tasks that java is launched (no new window is opened), exactly as it behaves when I launch the batch file directly. But after a few seconds, when TFS switches to the next step, it stops.
Then the next step carries on but fails, as it requires the server to be running.
Is there a particular way of doing this in TFS?
thank you
Alexandre
It's suggested to launch the .bat file from a relative path rather than using cd with a hard-coded path.
It's also recommended to use a Run Batch File task rather than a Run Command Line task to launch the .bat file.
According to your description, it seems you are using a Run Command Line task in your build pipeline. It runs the command under the working directory c:\Build_work\5\s; the command cds to C:\Users\TFSService\mbs-iot-sdk\osgi\bin\vms\jdk\ on the build agent, finds server.bat, and runs it.
First, check whether the .bat file is located at the path you are specifying on the build agent. It is not clear whether the .bat file has to run under C:\Users\TFSService\mbs-iot-sdk\osgi\bin\vms\jdk\; presumably the path is also hard-coded inside your server.bat file. It's suggested to change all the paths to relative paths; you can use the built-in variables in TFS.
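As a sketch of that suggestion (the mbs-iot-sdk sub-path is a placeholder, and the use of BUILD_SOURCESDIRECTORY assumes the sources live under $(Build.SourcesDirectory), which TFS exposes to scripts as that environment variable):

REM Launch server.bat relative to the sources checked out by the agent,
REM instead of the hard-coded C:\Users\TFSService\... path.
cd /d "%BUILD_SOURCESDIRECTORY%\mbs-iot-sdk\osgi\bin\vms\jdk"
start "osgi-server" server.bat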
As for the workaround in your comment, it seems you want to chain builds in TFS. The official docs literally say "not yet" and there is a UserVoice item planned. However, you could use a workaround, such as creating (or reusing someone else's) custom extension that calls another build through the REST API. For details, please refer to huserben's answer to this question: How to chain builds in TFS 2015?
Not sure you have to go deep into this area for your original issue; this is just some related info in case you are interested or need it.
Well,
Just in case someone else goes through the same kind of issue, I found a workaround:
I want to mix different command-line steps, some of them launching Python scripts:
I have one step for launching the server that is required by my testing tool, one step for my testing tool, and one Python step for differential testing.
I realized that I could embed everything in a Python script.
It can handle the server launching process in a separate window (with subprocess), launch my Python part, and launch another process for my validation tool.
I still have to test the whole chain but, at least, I solved my problem of launching a background process and detaching it from TFS.
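A minimal sketch of that wrapper script might look like this (the paths and helper script names are placeholders, not from the original setup):

# One Python script driven by a single TFS step.
import subprocess
import sys

# Launch the Java server via its batch file in a separate console window,
# so it keeps running while the rest of this script (and the TFS step) continues.
server = subprocess.Popen(
    ["cmd", "/c", "server.bat"],
    cwd=r"C:\path\to\osgi\bin\vms\jdk",           # placeholder path
    creationflags=subprocess.CREATE_NEW_CONSOLE,  # Windows-only flag
)

# Run the testing tool and the differential-testing part from the same script.
subprocess.check_call([sys.executable, "run_testing_tool.py"])  # placeholder
subprocess.check_call([sys.executable, "diff_testing.py"])      # placeholder

# Optionally stop the server once testing is done.
server.terminate()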

Using TF command-line get latest, how do I run SonarQube analysis only if new code has been checked-in?

I use a BAT file to run TF command-line get latest and then call the Sonar runner BAT file to run analysis on the latest code.
I have automated this every hour using Task Scheduler.
I want to run Sonar analysis only if TF gets new code, else it should skip.
How do I achieve this? I tried searching for exit codes, but nothing tells me if any new code was fetched.
Visual Studio Team Services or TFS 2015+
You can configure a build to be triggered on Continuous Integration (CI), i.e. whenever changes are checked in, the build is triggered. A build can be whatever you want, from invoking the proper msbuild/mstest and SonarQube tasks to doing your own thing with cmd/PowerShell/bash.
To learn more about integrating SonarQube analysis with TFS Build see: https://blogs.msdn.microsoft.com/visualstudioalm/2015/08/24/build-tasks-for-sonarqube-analysis/
TFS 2013
You can configure a XAML build to run on Continuous Integration.
Through scripts
Continuous Integration builds are the recommended way of running logic when a change happens in your code. However, if you'd like to use "tf get", you could capture the output of "tf get"; if it matches "All files are up to date", do nothing, else trigger the SonarQube execution.
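That check could be scripted along these lines (a sketch: the SonarQube runner command is a placeholder for whatever your existing BAT file calls):

REM Fetch the latest sources and capture the output of "tf get".
tf get /recursive > tf_get_output.txt 2>&1
findstr /C:"All files are up to date" tf_get_output.txt > nul
if %ERRORLEVEL% EQU 0 (
    echo No new code fetched - skipping SonarQube analysis.
) else (
    call run-sonar-analysis.bat
)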
