Bump a specific version number on SVN using Grunt - angularjs

I have an AngularJS module, built with Grunt. The version of this module is managed in the package.json file.
What I need to do
I need to create a grunt task to release the module when needed. Here is what this task must do:
Create a tag of the current module files on SVN (and it has to be SVN, not Git).
Upgrade the version in package.json to a given version (for example, I will pass --version=X.Y.Z as an option to the grunt task). I don't want a solution based only on "patch", "minor" or "major" upgrades.
Commit the change to the package.json file.
What I found so far
grunt-bump allows me to pass a specific version, using the --setversion option. But it cannot commit the change on SVN; it only works with Git.
grunt-svn-bump allows me to commit on SVN, but I can't find a way to specify the next version. And it cannot perform the "tag" part.
grunt-svn-tag allows me to tag the files on the SVN repository.
Do you know of another Grunt plugin that could fit? Any help would be appreciated.

I grew tired of looking for an existing task that would fit, so I finally created one, based on the code of grunt-bump and grunt-svn-bump:
grunt.registerTask('custom_bump', 'Custom task for bumping version after a release', function(){
    var semver = require('semver');
    var shell = require('shelljs');

    function shellRun( cmd ){
        if (grunt.option('dryRun')) {
            grunt.log.writeln('Command (not running because of dryRun option): ' + cmd);
        } else {
            grunt.verbose.writeln('Running: ' + cmd);
            var result = shell.exec(cmd, {silent: true});
            if (result.code !== 0) {
                grunt.log.error('Error (' + result.code + ') ' + result.output);
            }
        }
    }

    // Options
    var options = this.options({
        filepath: 'package.json',
        commit: true,
        commitMessage: 'New version following a release'
    });

    // Recover the next version of the component
    var nextVersion = grunt.option('nextVersion');
    if( !nextVersion ){
        grunt.fatal( 'Next version is not defined.', 3 );
    }
    else if( !semver.valid( nextVersion ) ){
        grunt.warn( 'Next version is invalid.', 3 );
    }

    // Upgrade the version in package.json
    var filepath = options.filepath;
    var file = grunt.file.readJSON( filepath );
    var currentVersion = file.version;
    if( semver.lte( nextVersion, currentVersion ) ){
        grunt.warn( 'Next version is less than or equal to the current version.' );
    }
    file.version = nextVersion;
    grunt.log.write( 'Bumping version in ' + filepath + ' from ' + currentVersion + ' to ' + nextVersion + '... ' );
    grunt.file.write( filepath, JSON.stringify( file, null, 2 ) );
    grunt.log.ok();

    // Commit the changed package.json file
    if( options.commit ){
        grunt.log.write( 'Committing ' + filepath + '... ' );
        shellRun( 'svn commit "' + filepath + '" -m "' + options.commitMessage + '"' );
        grunt.log.ok();
    }

    // Update the config for next tasks
    var configProperty = 'pkg';
    grunt.log.write( 'Updating version in ' + configProperty + ' config... ' );
    var config = grunt.config( configProperty );
    if( config ){
        config.version = nextVersion;
        grunt.config( configProperty, config );
        grunt.log.ok();
    } else {
        grunt.log.warn( 'Cannot update pkg config!' );
    }

    grunt.log.ok( 'Version updated from ' + currentVersion + ' to ' + nextVersion + '.' );
});
My 'release' task uses grunt-svn-tag and my custom bump task.
grunt.initConfig({
    // ...
    svn_tag: {
        current: {
            options: {
                tag: '<%= pkg.name %>-<%= pkg.version %>'
            }
        }
    }
});
grunt.registerTask('release', [
    'svn_tag:current',
    'custom_bump'
]);
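With this in place, a release to a specific version is run with the nextVersion option; the dryRun option makes custom_bump print its svn command instead of executing it:

grunt release --nextVersion=1.4.0 --dryRun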

Small disclaimer: I don't use SVN or Grunt, but Git and Gulp, so I don't really know the exact syntax for either of those.
That being said, I would not put this in a grunt/gulp task; I would create an npm script that just runs a small shell script: npm run release -- X.Y.Z
The shell script could contain something like this:
#!/usr/bin/env bash
# the version arrives as the first argument when invoked via: npm run release -- X.Y.Z
VERSION=$1
echo "gulp bump $VERSION"
gulp bump "$VERSION"
echo "staging package.json"
git add package.json
echo "commit release"
git commit -m "release $VERSION"
git tag -a "$VERSION" -m "release $VERSION"
git push origin --tags
Now I haven't tested this syntax, but something along these lines is how I would try it.
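For completeness, the npm wiring might look like this (a sketch; the script path is an assumption, adjust to your layout):

{
  "scripts": {
    "release": "./scripts/release.sh"
  }
}

npm run release -- 1.2.3 then forwards 1.2.3 to the shell script as its first argument.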

Related

How to add options to ntpd

I'd like to add a new option to ntpd; however, I couldn't find out how to generate ntpd/ntpd-opts{.c,.h} after adding some lines to ntpd/ntpdbase-opts.def, e.g.:
$ git diff ntpd/ntpdbase-opts.def
diff --git a/ntpd/ntpdbase-opts.def b/ntpd/ntpdbase-opts.def
index 66b953528..a790cbd51 100644
--- a/ntpd/ntpdbase-opts.def
+++ b/ntpd/ntpdbase-opts.def
@@ -479,3 +479,13 @@ flag = {
the server to be discovered via mDNS client lookup.
_EndOfDoc_;
};
+
+flag = {
+ name = foo;
+ value = F;
+ arg-type = number;
+ descrip = "Some new option";
+ doc = <<- _EndOfDoc_
+ For testing purpose only.
+ _EndOfDoc_;
+};
Do you have any ideas?
how to generate ntpd/ntpd-opts{.c, .h} after adding some lines to ntpd/ntpdbase-opts.def
It is handled by the build scripts. Just compile it normally (see https://github.com/ntp-project/ntp/blob/master-no-authorname/INSTALL#L30) and make will pick it up.
https://github.com/ntp-project/ntp/blob/master-no-authorname/ntpd/Makefile.am#L304
https://github.com/ntp-project/ntp/blob/master-no-authorname/ntpd/Makefile.am#L183
In addition to @KamilCuk's answer, we need to do the following to add custom options:
Edit the *.def file
Run the bootstrap script
Run the configure script with the --disable-local-libopts option
Run make
For example,
$ git diff ntpd/ntpdbase-opts.def
diff --git a/ntpd/ntpdbase-opts.def b/ntpd/ntpdbase-opts.def
index 66b953528..a790cbd51 100644
--- a/ntpd/ntpdbase-opts.def
+++ b/ntpd/ntpdbase-opts.def
@@ -479,3 +479,13 @@ flag = {
the server to be discovered via mDNS client lookup.
_EndOfDoc_;
};
+
+flag = {
+ name = foo;
+ value = F;
+ arg-type = number;
+ descrip = "Some new option";
+ doc = <<- _EndOfDoc_
+ For testing purpose only.
+ _EndOfDoc_;
+};
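The rebuild itself (steps 2 through 4) would then look roughly like this (a sketch; additional configure options may be needed for your environment):

$ ./bootstrap
$ ./configure --disable-local-libopts
$ make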
This change yields:
$ ./ntpd --help
ntpd - NTP daemon program - Ver. 4.2.8p15
Usage: ntpd [ -<flag> [<val>] | --<name>[{=| }<val>] ]... \
[ <server1> ... <serverN> ]
Flg Arg Option-Name     Description
 -4 no  ipv4            Force IPv4 DNS name resolution
                                - prohibits the option 'ipv6'
...
 -F Num foo             Some new option
    opt version         output version information and exit
 -? no  help            display extended usage information and exit
 -! no  more-help       extended usage information passed thru pager
Options are specified by doubled hyphens and their name or by a single
hyphen and the flag character.
...

For Flink v1.10.1 or later, how can you programmatically edit the absolute path contained in a savepoint metadata file?

I am exploring how to change the absolute path contained in a Flink savepoint's metadata file.
We are looking to migrate a Flink stream across AWS regions; however, we expect to run into problems because of this absolute path. The Flink documentation alludes to this problem and suggests using the SavepointV2Serializer to edit the path:
https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#can-i-move-the-savepoint-files-on-stable-storage
Can anyone help me identify an example that illustrates how to do this? I have not been able to find a reference online.
Also, although I see an absolute path when looking in the _metadata file, I have not found any reference to it in the resulting deserialized objects, nor is it written back when I serialize to a file.
Thanks in advance for any guidance.
Here's the main file I wrote:
// NOTE: imports elided in the original; the "/" path syntax and the
// contentAsString/inputStream/outputStream calls suggest a file library
// such as better-files, plus Flink's savepoint classes
// (SavepointV2Serializer, SavepointV2, OperatorState).
object Main extends App {
  val meta = "src" / "main" / "resources" / "_metadata"
  println( s"meta: ${meta.path}: ${meta.exists}" )

  val contents = meta.contentAsString
  println( contents )

  // val serde1 = SavepointV1Serializer.INSTANCE
  val serde2 = SavepointV2Serializer.INSTANCE

  import scala.jdk.CollectionConverters._

  val data = meta.inputStream() { in =>
    val dis = new java.io.DataInputStream( in )
    serde2.deserialize( dis, Thread.currentThread().getContextClassLoader )
  }

  println( s"META: ${data}" )
  println( s"METADATA.version: ${data.getVersion}" )
  println( s"METADATA.checkpointId: ${data.getCheckpointId}" )
  println( s"METADATA.masterStates: ${Option( data.getMasterStates ).map( _.asScala.mkString( "[", ", ", "]" ) )}" )
  println(
    s"METADATA.operatorStates: ${Option( data.getOperatorStates ).map( _.asScala.mkString( "[", ", ", "]" ) )}"
  )
  println( s"METADATA.taskStates: ${Option( data.getTaskStates ).map( _.asScala.mkString( "[", ", ", "]" ) )}" )

  val newMeta = "src" / "main" / "resources" / "_NEW_metadata"
  val newData = new SavepointV2(
    data.getCheckpointId,
    Seq.empty[OperatorState].asJava,
    data.getMasterStates
  )
  println( s"NEW_DATA:OpStates: ${newData.getOperatorStates}" )

  newMeta.outputStream() { out =>
    serde2.serialize( newData, new java.io.DataOutputStream( out ) )
  }
}
The underlying issue was actually fixed in Flink 1.11 (see FLINK-5763): savepoints are now relocatable and no longer contain absolute paths. The only exception seems to be if you use the GenericWriteAheadLog sink.
The documentation needs to be updated, see FLINK-19381.
So if you can upgrade to 1.11.x first, then you should be able to avoid the problem.

How to copy files based on environment in gulp

I have a folder with files for different build environments like production, stage, and test.
example:
src/config.prod.js
src/config.stage.js
src/config.test.js
What I want is to copy the config file based on the environment I get. For getting the environment name from the command line, I am using the following code:
var nopt = require('nopt')
  , knownOpts = {
      "env" : [String, null]
    }
  , shortHands = {
      "test"  : ["--env", "test"]
    , "dev"   : ["--env", "dev"]
    , "stage" : ["--env", "stage"]
    , "prod"  : ["--env", "prod"]
    };
var flags = nopt(knownOpts, shortHands, process.argv, 2);
and when I run the command
gulp build --env dev
I get the environment name. Now what I want is to copy the config file to the dist (build) folder based on the environment. I have this task for copying files, but it copies all files, as I don't know how to filter them:
gulp.task('copyConfig', function(){
    gulp.src(['src/*.js'])
        .pipe(gulp.dest('dist/'))
})
I am new to gulp; if someone has any suggestions, please help.
Make a new configs directory and mv your three existing configs (prod, stage, test) into it: mkdir configs
Then, two related changes to gulp: incorporate the build environment, and add a config task that knows about your three different config files...
var gulp = require('gulp');
var rename = require('gulp-rename');  // needed for the rename() step below

var settings = {
    /*
     * Environment: development | production
     * matches './configs' files of the same name
     */
    environment : process.env.NODE_ENV || 'development',
    /*
     * Where is our config folder?
     */
    configFolder : 'configs',
    /*
     * Where is our code?
     */
    srcFolder : 'app/scripts',
    /*
     * Where are we building to?
     */
    buildFolder : 'dist'
};

/**
 * Config Task
 *
 * Get the configuration file (dev or prod), rename it
 * and move it to be built.
 */
gulp.task('config', function() {
    return gulp.src(settings.configFolder + '/' + settings.environment + '.js')
        .pipe(rename('config.js'))
        .pipe(gulp.dest(settings.srcFolder));
});
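To drive this from the nopt flags in the question instead of NODE_ENV, the environment line could be (a sketch, reusing the flags variable from the question's snippet):

// prefer the --env flag, fall back to NODE_ENV, then to development
environment : flags.env || process.env.NODE_ENV || 'development',

With the config files renamed to match the flag values (configs/dev.js, configs/prod.js, configs/stage.js, configs/test.js), gulp config --env dev then picks up configs/dev.js and writes it out as config.js.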

Openshift action hook can't access environment variables

For my application on Openshift, I am trying to write a pre_build script that accesses the database. The goal is to have migration scripts between database versions that are executed when the code is deployed. The script would compare the current database version with the version needed by the application code and then run the correct script to migrate the database.
Now the problem is that the pre_build script is apparently executed on Jenkins rather than on the destination cartridge, so the environment variables with the database connection arguments are not available.
This is the pre_build script that I've written so far:
#!/usr/bin/env python
import os
import sys

print "*** Database migration script ***"

# get goal version
homedir = os.environ["OPENSHIFT_HOMEDIR"]
migration_scripts_dir = homedir + "app-root/runtime/repo/.openshift/action_hooks/migration-scripts/"
f = open(migration_scripts_dir + "db-version.txt")
goal = int(f.read())
f.close()
print "I need database version " + str(goal)

# get database connection details
# TODO: find a solution of not hard coding the connection details here!!!
# Maybe by using jenkins environment variables like OPENSHIFT_APP_NAME and JOB_NAME
db_host = "..."
db_port = "..."
db_user = "..."
db_password = "..."
db_name = "..."

import psycopg2
try:
    conn = psycopg2.connect("dbname='" + db_name + "' user='" + db_user + "' host='" + db_host + "' password='" + db_password + "' port='" + db_port + "'")
    print "Successfully connected to the database"
except:
    print "I am unable to connect to the database"
    sys.exit(1)  # without a connection, nothing below can work

cur = conn.cursor()

def get_current_version(cur):
    try:
        cur.execute("""SELECT * from db_version""")
    except:
        # table does not exist yet: create it and start at version 0
        conn.set_isolation_level(0)
        cur.execute("""CREATE TABLE db_version (db_version bigint NOT NULL)""")
        cur.execute("""INSERT INTO db_version VALUES (0)""")
        cur.execute("""SELECT * from db_version""")
    current_version = cur.fetchone()[0]
    print "The current database version is " + str(current_version)
    return current_version

def recursive_execute_migration(cursor):
    current_version = get_current_version(cursor)
    if current_version == goal:
        print "Database is on the correct version"
        return
    elif current_version < goal:
        sql_filename = "upgrade" + str(current_version) + "-" + str(current_version + 1) + ".sql"
        print "Upgrading database with " + sql_filename
        cursor.execute(open(migration_scripts_dir + sql_filename, "r").read())
        recursive_execute_migration(cursor)
    else:
        sql_filename = "downgrade" + str(current_version) + "-" + str(current_version - 1) + ".sql"
        print "Downgrading database with " + sql_filename
        cursor.execute(open(migration_scripts_dir + sql_filename, "r").read())
        recursive_execute_migration(cursor)

conn.set_isolation_level(0)
recursive_execute_migration(cur)
cur.close()
conn.close()
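For reference, when a hook does run on the gear itself (for example a deploy hook, rather than pre_build running on Jenkins), the PostgreSQL cartridge publishes the connection details as environment variables, so the hard-coded block above could be replaced by something like this (a sketch; variable names as documented for OpenShift v2's PostgreSQL cartridge):

import os

db_host = os.environ.get("OPENSHIFT_POSTGRESQL_DB_HOST")
db_port = os.environ.get("OPENSHIFT_POSTGRESQL_DB_PORT")
db_user = os.environ.get("OPENSHIFT_POSTGRESQL_DB_USERNAME")
db_password = os.environ.get("OPENSHIFT_POSTGRESQL_DB_PASSWORD")
db_name = os.environ.get("OPENSHIFT_APP_NAME")  # the default database is named after the app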
Is there another way of doing automatic database migrations?
Thanks for your help.

Python FTP "Not a directory" error

I am trying to download files from an FTP server and import the data into Django. So I created a dict containing the server address, login details, path, file name pattern, and the path where the file should be downloaded, and passed it to a function which does the downloading. It works fine on my system, but when I move it to the client server it shows an error like:
"error downloading C_VAR1_31012014_1.DAT - [Errno 20] Not a directory: 'common/VARRate/C_VAR1_31012014_1.DAT'"
This is how the dict looks:
self.fileDetails = {
    'NSE FO VAR RATE FILE': ('ftp.xxx.com', username, passwd, 'common/VARRate', 'C_VAR1_\d{4}201[45]_\d{1}.DAT', 'Data/samba/Ftp/Capex10/NSECM/VAR RATE'),
}

for fileType in self.fileDetails:
    self.ftpDownloadFiles(fileType)
These details are passed to the following function:
def ftpDownloadFiles(self, fileType):
    logging.info('Started ' + str(fileType))
    try:
        ftpclient = ftplib.FTP(self.fileDetails[fileType][FDTL_SRV_POS],
                               self.fileDetails[fileType][FDTL_USR_POS],
                               self.fileDetails[fileType][FDTL_PSWD_POS],
                               timeout=120)
        #ftpclient.set_debuglevel(2)
        ftpclient.set_pasv(True)
        logging.info('Logged in to ' + self.fileDetails[fileType][FDTL_SRV_POS] +\
                     time.asctime())
        logging.info('\tfor type: ' + fileType)
    except BaseException as e:
        print e
        return
    remotepath = self.fileDetails[fileType][FDTL_PATH_POS]
    #matched, unmatched, downloaded = 0
    try:
        ftpclient.cwd(remotepath)
        ftpclient.dir(filetimestamps.append)  # filetimestamps is presumably defined elsewhere
    except BaseException as e:
        logging.info('\tchange dir error : ' + remotepath + ' ' +\
                     e.__str__())
    self.walkTree(ftpclient, remotepath, fileType)
    #logging.info('\n\tMatched %d, Unmatched %d, Downloaded %d'
    #             % (matched, unmatched, downloaded))
    ftpclient.close()
From here it calls the next function, where the download process starts:
def walkTree(self, ftpclient, remotepath, fileType):
    # process files inside remotepath; cwd already done
    # remotepath to be created if it doesnt exist locally
    copied = matched = downloaded = imported = 0
    files = ftpclient.nlst()
    localpath = self.fileDetails[fileType][FDTL_DSTPATH_POS]
    rexpCompiled = re.compile(self.fileDetails[fileType][FDTL_PATRN_POS])
    for eachFile in files:
        try:
            ftpclient.cwd(remotepath + '/' + eachFile)
            self.walkTree(ftpclient, remotepath + '/' + eachFile + '/', fileType)
        except ftplib.error_perm:  # not a folder, process the file
            # every file to be saved in same local folder as on ftp srv
            saveFolder = remotepath
            saveTo = remotepath + '/' + eachFile
            if not os.path.exists(saveFolder):
                try:
                    os.makedirs(saveFolder)
                    print "directory created"
                except BaseException as e:
                    logging.info('\tcreating %s : %s' % (saveFolder, e.__str__()))
            if (not os.path.exists(saveTo)):
                try:
                    ftpclient.retrbinary('RETR ' + eachFile, open(saveTo, 'wb').write)
                    #logging.info('\tdownloaded ' + saveTo)
                    downloaded += 1
                except BaseException as e:
                    logging.info('\terror downloading %s - %s' % (eachFile, e.__str__()))
                except ftplib.error_perm:
                    logging.info('\terror downloading %s - %s' % (eachFile, ftplib.error_perm))
            elif (fileType == 'NSE CASH CLOSING FILE'):  # spl case if file exists
                try:
                    # rename file
                    yr = int(time.strftime('%Y')) - 1
                    os.rename(saveTo, saveTo + str(yr))
                    # download it
                    ftpclient.retrbinary('RETR ' + eachFile, open(saveTo, 'wb').write)
                    downloaded += 1
                except BaseException as e:
                    logging.info('\terror rename/ download %s - %s' % (eachFile, e.__str__()))
Can anyone help me resolve this problem?
Try using os.path.join() instead of the hardcoded slashes as path separators for the local download path; whether / or \ is the divider depends on the local OS.
e.g. in your code:
saveTo = remotepath + '/' + eachFile
would become:
saveTo = os.path.join(remotepath, eachFile)
see https://docs.python.org/2/library/os.path.html
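Note also that walkTree reads the configured destination into localpath but then builds saveTo from remotepath. If the intent is to mirror the remote layout under the configured local folder, a sketch (reusing the names from the question's code) would be:

import os

# mirror the remote directory layout under the configured local destination
saveFolder = os.path.join(localpath, *remotepath.split('/'))
if not os.path.isdir(saveFolder):
    os.makedirs(saveFolder)
saveTo = os.path.join(saveFolder, eachFile)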
