Jenkins File System Trigger (FS Trigger) - should there be 2 polling mechanisms? How to incorporate %WORKSPACE%?

I am using the FileSystem Trigger to monitor whether the code in a folder (from a Git repo) has changed.
My thought is:
use the normal Poll SCM every 2 minutes
after step 1 polls every 2 minutes, check the actual downloaded source folder in question for changes.
Questions:
Is the flow correct? Should it poll SCM every 2 minutes, and then the actual folder every 2 minutes? Should it directly poll the folder in the repo on Bitbucket/GitHub? Currently the build runs every time the trigger fires - it bypasses the folder check.
I tried setting the folder path to %WORKSPACE%/MyProjectToMonitorFolder, and the
[FSTrigger] - Monitor folder log said that it could not find the folder. If I hardcode the actual full folder path as in the image, then the folder and its changes are found. How can I incorporate %WORKSPACE% into the folder path?

Both triggers serve very different use cases and usually don't go together. I assume what you want to achieve is a trigger that runs your job whenever a specific folder has changed in your Git repository.
You can achieve this by configuring your Git SCM build stage to monitor only a specific folder in your repository. That eliminates the need for the file system trigger, since the job will then be triggered only when the configured folders have changed. You can find more info in the Official Documentation under the "Polling ignores commits in certain paths" section.
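For reference, a rough sketch of that configuration with the Git plugin (the folder name is taken from the question; the patterns are regular expressions, one per line):

    # Job config -> Source Code Management -> Git -> Additional Behaviours ->
    # "Polling ignores commits in certain paths"
    #
    # Included Regions:
    #     MyProjectToMonitorFolder/.*
    #
    # With this set, "Poll SCM" triggers a build only when a commit touches
    # that folder, so the FSTrigger step is no longer needed.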
You can also check out This Answer for more info.
In addition, it is highly recommended to move from a scheduled SCM polling mechanism to a hook-based Git trigger that runs your job whenever new code is pushed to the repository, avoiding the need to constantly check for changes; see This Answer for more info on the Git hooks configuration.
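As a minimal sketch of such a hook (the Jenkins and repository URLs are placeholders), a server-side post-receive hook only needs to hit the Git plugin's notifyCommit endpoint:

    #!/bin/sh
    # post-receive: notify Jenkins that this repository changed, so it polls
    # once now instead of polling on a fixed 2-minute schedule
    curl -s "https://jenkins.example.com/git/notifyCommit?url=https://github.com/team/repo.git"

Jenkins will then poll only the jobs configured for that repository URL and trigger them if the included paths changed.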
Furthermore, every major Git repository manager (GitHub, Bitbucket, GitLab...) has dedicated Jenkins integration plugins for Git hooks and other operations, so you can use one of them to make the integration easy.

Related

How to implement a tool to validate content and push it to the cloud

I am currently committing text files and images to a GitHub repository that need to have a certain format and content style, with validation done via pre-commit hooks, so all files committed to the repository are valid. I also need git to keep track of when files are updated and the versions that existed previously.
In the future I want to move to storing the files in a cloud service instead of the repository. This is the solution I thought of (a rough sketch follows the list):
Have a script that takes the directory you are trying to upload; the name would need to follow a certain format, e.g. <City><Street>.
If the folder already exists in the cloud, the script compares the folder contents to the cloud copy; if not, the whole folder gets uploaded.
Before upload we run content format validation; if it doesn't pass, we throw errors to the user.
If there were previous file versions, we store them in a different folder, appending the date/time to the filename.
Cloud now has the latest version.
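A rough shell sketch of that flow, assuming an S3-style target via the AWS CLI (the bucket name, naming pattern, and validate-content.sh script are hypothetical placeholders):

    #!/bin/sh
    DIR="$1"
    NAME=$(basename "$DIR")

    # 1. Enforce the <City><Street> naming convention (placeholder pattern)
    echo "$NAME" | grep -Eq '^[A-Z][a-z]+[A-Z][a-z]+$' \
        || { echo "invalid directory name: $NAME" >&2; exit 1; }

    # 2. Run the same content validation the pre-commit hooks performed
    ./validate-content.sh "$DIR" || exit 1

    # 3. Keep previous versions by copying them aside with a timestamp
    STAMP=$(date +%Y%m%d-%H%M%S)
    aws s3 sync "s3://my-bucket/$NAME/" "s3://my-bucket/archive/$NAME-$STAMP/"

    # 4. Upload; sync only transfers new/changed files
    aws s3 sync "$DIR" "s3://my-bucket/$NAME/"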
I would lose a lot of the advantages that I had with version control and the pre-commit hooks before. I would gain the ability to just pull a specific folder, something that GitHub doesn't allow me to do. What would be a better way to implement this? Is there a tool that would be good for this?

Meteor unwatch folder

I am trying to make a folder not watched. Is there a way to make one of the folders not watched in Meteor? I don't want my project to reload if I change content in that folder.
Not exactly. Meteor assumes that if the folder content changes, it also needs to rebuild/restart the server, because the business logic of the application might have changed. Therefore it reloads these files and restarts the server.
However, you might be able to "abuse" the tests/ directory or any of the directories/files mentioned below for that purpose. As explained in the Meteor guide on Application Structure, paragraph "Special directories":
Any directory named tests/ is not loaded anywhere. Use this for any test code you want to run using a test runner outside of Meteor’s built-in test tools.
The following directories are also not loaded as part of your app code:
Files/directories whose names start with a dot, like .meteor and .git
packages/: Used for local packages
cordova-build-override/: Used for advanced mobile build customizations
programs: For legacy reasons
So the reasonable choice would be to create a dot directory, e.g. .myStuff, and place anything there that you might need to update but that should not trigger a server restart.
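A minimal sketch of the resulting layout (.myStuff is a hypothetical name):

    myapp/
      .meteor/     # Meteor internals (ignored by the watcher)
      .myStuff/    # unwatched content goes here; edits won't trigger a restart
      client/      # watched, loaded on the client
      server/      # watched, loaded on the server
      tests/       # not loaded as app code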
Just build your app in a package so you can decide which files you want to make available or not :)

Merging a folder in SVN records only the folder in the log, and not the files inside it

The scenario is as follows.
We're running a CI server which scans a repository for any .sql changes, then executes them against a target database.
Currently it's failing because SVN is not recording file changes within a folder that has been merged from a branch. The merge info was committed too.
Example:
Developer branches "/Trunk" to "/Branches/CR1"
Developer adds a new folder "CR1/Scripts"
Developer adds two new files "Scripts/Script1.sql" and "Scripts/Script2.sql"
Developer commits the folder and files together
Developer merges from CR1 to Trunk, commit dialog displays status "Normal"
CI server detects no changes
Developer examines the log and sees no mention of Script1.sql or Script2.sql
All this is displayed via TortoiseSVN on Windows, the CI Server is using SharpSvn .NET library.
Any help figuring out how to get the *.sql files to show up would very much be appreciated.
It's nearing a year, and during this time we've used a workaround to find the missing files: using the CLI command svn log -v, we scanned for any directory with COPY-FROM-PATH text and listed the contents of that directory from disk rather than from SVN.
While this does provide us with a full list of files in that folder, we should really be able to get this info remotely, without checking out a copy of the repository. When a co-worker also encountered this issue recently, they found the answer courtesy of the IRC channel #svn on freenode.
Using the CLI command svn diff <url>[old rev] <url>[new rev] --summarize, you get a difference between the revisions; the --summarize flag makes it list all the changed files, which finally answered the original question.
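For example (the URL and revision numbers are placeholders):

    # list every file changed between the two revisions, including files added
    # inside a folder that was merged/copied in
    svn diff --summarize -r 1200:1250 https://svn.example.com/repo/Trunk

    # the earlier workaround: find copied directories in the verbose log, then
    # list their contents separately
    svn log -v -r 1200:1250 https://svn.example.com/repo/Trunk | grep "(from "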

How to simulate multi developer scenario with RTC source control

Is it possible to simulate a multi-developer scenario with RTC source control, so that when I make code changes I can test accepting change sets, for example? This is just so I can test a multi-developer environment using just one user.
I've tried creating multiple Eclipse workspaces and loading the same project area into each Eclipse workspace. Using this method I am unable to accept change sets, as RTC source control will just ask me to resync my workspace once I make a change in one Eclipse workspace:
It seems the only method of accepting incoming changes is to:
1. Right click on the stream from within the 'Pending Changes' view
2. Select Load
3. Select the following option:
Make sure you use the Stream (i.e. make sure you don't deliver directly to another repo workspace simulating another user)
(Note: this is entirely different in ClearCase, where the "out of sync" can happen between the configuration of an UCM view and the one of a Stream after a rebase)
If you create different repo workspaces (loaded in different Eclipse workspaces), this can cause some confusion when used within the same Eclipse instance.
As said in this thread
repository workspaces are meant to isolate changes - being your private stream.
There is no automatic accepting of changes, so you are in full control of what flows in. You can also run private builds on them. That is the whole idea.
If you want to run several repository workspaces with shared code, you should use a Stream, I think.
The clean repo workspace would be used to accept the changes you decide to deliver to your stream.
So you are trying to use a repository workspace as a stream. While they are almost identical, I am not sure how they would react to changes delivered to them, especially while being loaded.
You should use two Eclipse instances. I am concerned about having the same Eclipse projects loaded multiple times in the same sandbox and the same Eclipse instance.
That "confusion" is explained in the same thread:
This is expected behavior.
When you change WS1 by delivering to it, the content you've loaded to disk for WS1 isn't updated. So you have to reload.
For this reason, you are not allowed to deliver to other users' workspaces. You can't alter someone else's workspace, but you can alter your own, because you would know why it went out of sync.
Check out points 7 and 10 of "Good practices and key workflows for Rational Team Concert Source Control users".
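For what it's worth, the two-repository-workspace setup described above can also be scripted with the SCM command-line client. This is only an approximate sketch: the exact lscm flags vary between RTC versions, and the URLs, names, and paths are placeholders.

    # log in once
    lscm login -r https://clm.example.com/ccm -u alice -P secret -n ccm

    # two repository workspaces for the same user, both flowing to one stream
    lscm create workspace -r ccm --stream "My Stream" "Dev1 WS"
    lscm create workspace -r ccm --stream "My Stream" "Dev2 WS"

    # load each one into its own sandbox (and its own Eclipse workspace)
    lscm load -r ccm -d /sandboxes/dev1 "Dev1 WS"
    lscm load -r ccm -d /sandboxes/dev2 "Dev2 WS"

    # "developer 1" delivers to the stream, "developer 2" accepts from it
    (cd /sandboxes/dev1 && lscm checkin . && lscm deliver -r ccm)
    (cd /sandboxes/dev2 && lscm accept -r ccm)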
Note: the article "Loading Content from a Jazz Source Control Repository in Rational Team Concert 2.0" (also valid for RTC 3.0) mentions, in the section "Reloading Out-of-sync Shared Folders", advice similar to that given by the OP:
The local workspace can become out of sync with the remote workspace due to a couple of reasons:
The remote workspace is loaded multiple times and changes have been checked in or accepted from another client session.
An error was encountered during an operation (e.g. Accept) that modifies both the local and remote workspaces.
When the local workspace became out of sync with the remote workspace in RTC 1.0, the user was forced to run the Load wizard and reselect the folders that needed to be reloaded.
In RTC 2.0, this new option will automatically select the out of sync folders and reload them so they are no longer out of sync.
Also new in RTC 2.0 is an indication in the Pending Changes view that there are projects out of sync, as shown below.
Clicking on the Reload out of sync link in the Pending Changes view will open the Load wizard.
The reload option will be selected by default and clicking next will then allow you to select which folders to reload.
As you can see in the following screen shot, all the projects in the Foundation component are out of sync and need to be reloaded.
Clicking Finish will reload these folders and bring them back in sync.
Also the thread "How to handle project out of sync " provides an interesting illustration of that mechanism (even though it isn't exactly your situation).

ClearCase: Working offline hijacking files, then checking out / merging

I'm looking at a scenario where I have an offline ClearCase view, and I modify files in this view by clearing the read-only attribute (hijacking) on the files I modify. Several days later I take the view online and need to get my offline changes into the stream.
What I would do is check out the hijacked files and check them back in (merging when necessary).
Is it always safe to work this way?
Is it possible that while adding my changes I would accidentally overwrite other people's changes done while I was working offline?
Any recommendations on how to use ClearCase offline?
Thanks!
(I'm asking because a colleague says that this offline way of working can lead to overwriting others' changes, specifically in cases when one updates one's view after working offline for a while before converting the hijacked files into checkouts. He says it won't even propose a merge in some cases, just completely overwrite the contents of the element being converted with the contents of the hijacked file.)
No, you won't overwrite anything while working offline.
ClearCase has a reconciliation mechanism for a snapshot view which, when you get back online, will allow you to:
search for all hijacked files
checkout those files
then check them in, which is when ClearCase will prompt you for a merge, if any new version has been created on that file during your time offline.
That merge will be a three-way merge with:
root version: the version before any modification by you or others
source version: the latest checked-in version (done while you were offline)
destination version: your current file
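In command-line terms, that reconciliation looks roughly like this (file names are placeholders):

    # list the hijacked files in the snapshot view
    cleartool ls -recurse | grep "\[hijacked\]"

    # convert a hijack into a checkout, keeping your offline edits
    cleartool checkout -nc -usehijack src/foo.c

    # check in; if a new version was created while you were offline,
    # ClearCase prompts for the three-way merge described above
    cleartool checkin -nc src/foo.c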
What about setting up a private branch, working on it, hijacking files there, and then merging your private branch into the main branch?
