OK, here is the problem: I'm developing a Java application built with Gradle.
I have a Gradle task that adds a license header to the top of each file if one is not already present.
I wanted to add a pre-commit hook so that when I commit, the Gradle task runs and updates the license header on any files that need it. Keep in mind that the Gradle licenseFormat task may change nothing, or it may change more than ten files at once, so I have no way of knowing which files were changed in order to add them to the commit manually.
I tried this hook:
[hooks]
pre-commit.licenseFormat=C:/Users/pc/Dropbox/{REPOSITORIES}/{PETULANT}/format.bat
It simply calls a batch file that runs the Gradle command but, as I suspected, because some files are changed that are not part of the current commit, the commit gets stuck. It seems to fall into an infinite loop, calling the batch file again and again, and each time it fires the command.
On the next run nothing should change, but when the first run modified more than a few files, I think the commit fires the batch file more than twice.
So the question is: how can I stop the commit hook after the very first run of the batch file and add the changed files to the current (or a new) commit?
Thanks.
The batch file contains only the command:
gradlew licenseFormat
As I said, it runs a Gradle task that adds license comments to the top of the files that need them. In other words, it first checks the header of each file and compares it to the one that should be there; if they are the same, the file is not touched, but if they differ, it removes the old header and adds the license text as a comment at the top. If you want a more in-depth look, the actual task is this:
buildscript {
    repositories {
        mavenCentral()
        jcenter()
        maven { url = "http://files.minecraftforge.net/maven" }
        maven { url = "https://oss.sonatype.org/content/repositories/snapshots" }
    }
    dependencies {
        classpath 'net.minecraftforge.gradle:ForgeGradle:1.2-SNAPSHOT'
        classpath 'org.ajoberstar:gradle-git:0.10.1'
        classpath 'nl.javadude.gradle.plugins:license-gradle-plugin:0.11.0'
    }
}

apply plugin: 'license'

license {
    ext.name = project.name
    ext.organization = project.organization
    ext.url = project.url
    ext.year = project.inceptionYear
    exclude '**/*.info'
    exclude '**/*.json'
    exclude '**/*.ma'
    exclude '**/*.mb'
    exclude '**/*.png'
    header new File(projectDir, 'HEADER.txt')
    sourceSets = project.sourceSets
    ignoreFailures = false
    strictCheck = true
    mapping { java = 'SLASHSTAR_STYLE' }
}
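For anyone hitting the same loop: one common pattern is "format and fail" rather than trying to inject files into the in-flight commit. The hook runs the formatter once and aborts the commit if anything changed, so you can review and commit again; on the second attempt the formatter changes nothing and the commit goes through. A minimal sketch, assuming a Mercurial repository (given the [hooks] syntax above), hg on the PATH, and arbitrary temp-file names:

@echo off
rem Snapshot the list of modified files before formatting.
hg status -mn > "%TEMP%\pre_format_status.txt"
call gradlew licenseFormat || exit /b 1
rem Snapshot again and compare; a difference means licenseFormat touched new files.
hg status -mn > "%TEMP%\post_format_status.txt"
fc "%TEMP%\pre_format_status.txt" "%TEMP%\post_format_status.txt" > nul
if errorlevel 1 (
    echo licenseFormat changed files; commit aborted. Review the changes and commit again.
    exit /b 1
)
exit /b 0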
I have a Laravel queue set up with a database connection. Note this problem also occurs on Redis, but I am currently using the database connection so the failed_jobs table can help me check any errors that occur during the queue process.
The problem I have is that the queue stops working after a few jobs without any message showing why. But when I restart the command (php artisan queue:work), it picks up the remaining jobs and continues (but stops again later).
The job is configured with these values:
public $tries = 1;
public $timeout = 10;
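(For context, these are public properties on the job class; a minimal sketch, with a hypothetical class name:)

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class CreateZipJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 1;    // give up after the first failed attempt
    public $timeout = 10; // seconds the worker allows before killing the job
}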
The job code is (not the original code):
public function handle()
{
    try {
        $file = //function to create file;
        $zip = new ZipArchive();
        $zip->open(//zip_path);
        $zip->addFile(//file_path, //file_name);
        $zip->close();
        #unlink(//remove file);
    } catch (\Exception $e) {
        Log::error($e);
    }
}
And the failed() function is set up like this:
public function failed(\Exception $exception)
{
    Log::error($exception);
    $this->fail($exception);
    $this->delete();
}
But there is no failed_jobs row, and my log is empty.
Edit: I added simple info logs after every line of code, and every time I start the queue, it stops after the last line. So the code runs correctly; Laravel just doesn't start the next job after that.
What you need to do to solve the issue is the following:
Go to bootstrap/cache/ and remove all .php files.
Go to the project root and run php artisan queue:restart.
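The "snippet" referred to below is the Supervisor program block for the worker; it is not shown in the original answer, so here is an illustrative one following the pattern from the Laravel documentation (the paths, program name, and process count are placeholders). It would typically live in a file such as /etc/supervisor/conf.d/laravel-worker.conf:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/app/artisan queue:work --sleep=3 --tries=1
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/your/app/worker.log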
Now, after adding the snippet, we need to trigger the following commands respectively:
sudo supervisorctl reread (to check the file content and make sure that the snippet is correctly set)
sudo supervisorctl update (to release the config changes under the supervisor)
sudo supervisorctl restart all (to re-trigger the queues so that the newly created queue gets initialized and starts picking up messages)
Did you try queue:listen?
php artisan queue:listen
Also, I guess you need Supervisor to keep your worker alive.
I have literally no experience in VB script or C#. I created this SSIS package using an online tutorial; it serves my purpose, but I have to fine-tune it to fit my requirements.
Current Scenario:
I'm trying to run an SSIS package that has a for-each loop container which imports the files with the *.txt extension from a directory, since the file names are not constant. This for-each loop container is followed by some other SQL tasks.
The package executes successfully even when there are no files in the directory (maybe I did something wrong while creating the container, data flow tasks, or file system tasks). This causes the SQL script at the end of the for-each loop container to execute successfully, which results in wrong data.
Requirement:
The package should fail if there is no file in the directory. I have to implement a script before the for-each loop container but am not sure how to do it. Any leads would be appreciated!
I did something like this, but I'm not sure how to search by extension rather than by file name:
Public Sub Main()
    '
    ' Add your code here
    '
    Dim fileName As String
    fileName = "filename.txt"
    If System.IO.File.Exists(fileName) Then
        Dts.Variables("User::bolFileExists").Value = True
    Else
        Dts.Variables("User::bolFileExists").Value = False
    End If
    Dts.TaskResult = ScriptResults.Success
End Sub
You should use the System.IO.Directory.GetFiles() function.
If System.IO.Directory.GetFiles(<your path goes here>, "*.txt", SearchOption.AllDirectories).Length = 0 Then
    Dts.Variables("User::bolFileExists").Value = False
Else
    Dts.Variables("User::bolFileExists").Value = True
End If
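If the requirement is that the package itself should fail (rather than just setting a flag), a variant of the same check can set the task result directly. A sketch, with a placeholder path and the same variable name as above:

Public Sub Main()
    If System.IO.Directory.GetFiles("C:\YourFolder", "*.txt", SearchOption.AllDirectories).Length = 0 Then
        Dts.Variables("User::bolFileExists").Value = False
        Dts.TaskResult = ScriptResults.Failure
    Else
        Dts.Variables("User::bolFileExists").Value = True
        Dts.TaskResult = ScriptResults.Success
    End If
End Sub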
Below would be my suggestion. I did the same in one of my requirements using the event handling section: if the underlying DFT does not run, a script on the event handler page raises an error. The point to note is that the DFT runs at least once if there is any file in the directory, so raising an error when it does not run is simpler than writing a complex script.
Thanks,
Srinivas
I would like to monitor all of the files in a given directory for changes, i.e. an updated timestamp. This use case seems natural for Camel's file component, but I can't seem to find a way to configure this behavior.
A uri like:
file:/some/directory
will consume the files in the provided directory but will delete them.
A uri like:
file:/some/directory?noop=true
consumes each file once when it is added or when the route is started.
It's surprising that there isn't an option along the lines of
consumeOnChange=true
Is there a straightforward way to monitor file changes and not delete the file after consuming?
You can do this by setting the idempotentKey to tell Camel how a file is considered changed, for example if the file size changes, or its timestamp changes, etc.
See more details at the Camel file documentation at: https://camel.apache.org/components/latest/file-component.html
See the section Avoiding reading the same file more than once (idempotent consumer). And read about idempotent and idempotentKey.
So something like:
from("file:/somedir?noop=true&idempotentKey=${file:name}-${file:size}")
Or
from("file:/somedir?noop=true&idempotentKey=${file:name}-${file:modified}")
You can read here about the various ${file:xxx} tokens you can use: http://camel.apache.org/file-language.html
Setting noop to true will result in Camel setting idempotent=true as well, despite the fact that idempotent is false by default.
Simplest solution to monitor files would be:
.from("file:path?noop=true&idempotent=false&delay=60s")
This will monitor changes to all files in the given directory every minute.
This can be found in the Camel documentation at: http://camel.apache.org/file2.html.
I don't think Camel supports that specific feature, but with the existing options you can come up with a similar solution for monitoring a directory.
What you need to do is set a small delay value for checking the directory and maintain a repository of the already-read files. Depending on how you configure the repository (by size, by filename, by a mix of them...), this solution can provide you with information about new files and modified files. As a caveat, it would consume the files in the directory very often.
Maybe you could use solutions other than Camel, like Apache Commons VFS2 (I wrote an explanation about how to use it for this scenario: WatchService locks some files?).
I faced the same problem, i.e. I wanted to copy updated files as well as new files. Below is my configuration:
public static void main(String[] a) throws Exception {
    CamelContext cc = new DefaultCamelContext();
    cc.addRoutes(createRouteBuilder());
    cc.start();
    Thread.sleep(10 * 60 * 1000);
    cc.stop();
}

protected static RouteBuilder createRouteBuilder() {
    return new RouteBuilder() {
        public void configure() {
            from("file://D:/Production"
                    + "?idempotent=true"
                    + "&idempotentKey=${file:name}-${file:size}"
                    + "&include=.*.log"
                    + "&noop=true"
                    + "&readLock=changed")
                .to("file://D:/LogRepository");
        }
    };
}
My testing steps:
Run the program: it copies a few .log files from D:/Production to D:/LogRepository and then continues to poll the D:/Production directory.
I opened an already-copied log, say A.log, in D:/Production (since noop=true, nothing is moved) and edited it with an editor tool, which doubled the file size, then saved it.
At this point I think Camel is supposed to copy that particular file again, since its size is modified and my route definition uses "idempotent=true&idempotentKey=${file:name}-${file:size}&readLock=changed". But Camel ignores the file.
When I use TRACE logging it says "Skipping as file is already in progress...", but I did not find any lock file in the D:/Production directory when I edited and saved the file.
I also checked that Camel still ignores the file if I replace A.log (same name but bigger size) in the D:/Production directory from outside.
But I found that everything works as expected if I remove the noop=true option.
Am I missing something?
If you want to monitor file changes in Camel, use the file-watch component.
Example: recursively watch all events (file creation, file deletion, file modification):
from("file-watch://some-directory")
.log("File event: ${header.CamelFileEventType} occurred on file ${header.CamelFileName} at ${header.CamelFileLastModified}");
You can see the complete documentation here:
Camel file-watch component
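A minimal runnable skeleton in the same style as the earlier example, assuming the camel-file-watch dependency is on the classpath (the directory and the events filter are illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileWatchDemo {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // events=MODIFY narrows the route to modification events only.
                from("file-watch://some-directory?events=MODIFY&recursive=true")
                    .log("Changed: ${header.CamelFileName}");
            }
        });
        context.start();
        Thread.sleep(10 * 60 * 1000); // keep the JVM alive while watching
        context.stop();
    }
}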
I have a WPF application that uses a web server for downloading the app and executing it on the client... I've also created a staging environment for the application, where I publish each release as soon as new features are added or bugs are fixed.
I have not found a reasonable way of promoting from staging to production, since the app.config is hashed... so I can't change my endpoints (DB/Services) by editing it...
My current approach is to publish for staging, increase the publish version by one, and publish for production... but this is quite frustrating, since I have to do the work twice. Any suggestion?
Thanks
Our team encountered the same situation a year ago. We solved it by following these steps:
Determine the latest ClickOnce application version;
Removing the *.deploy extensions;
Making the necessary *.config file changes;
Updating the manifest file (*.manifest) by using 'Mage.exe' and your certificate (see also: MSDN);
Update the deployment manifest (*.application) in the application version directory and in the root directory, again by using 'Mage.exe';
Adding back the *.deploy extensions.
Here is a short code sample for calling Mage; it's really not that complicated:
// Compose the arguments to start the Mage tool.
string arguments = string.Format(
    @"-update ""{0}"" -appmanifest ""{1}"" -certfile ""{2}""",
    deploymentManifestFile.FullName,
    applicationManifestFile.FullName,
    _certificateFile);

// Add the password to the list of arguments if necessary.
arguments += !string.IsNullOrEmpty(_certificateFilePassword) ? string.Format(" -pwd {0}", _certificateFilePassword) : null;

// Start the Mage process.
ProcessStartInfo startInfo = new ProcessStartInfo(_mageToolPath, arguments);
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
startInfo.RedirectStandardOutput = true;
Process mageProcess = Process.Start(startInfo);

// Read all output of the Mage tool before waiting, to avoid a full-buffer deadlock.
string output = mageProcess.StandardOutput.ReadToEnd();
mageProcess.WaitForExit();

// Determine whether the update of the manifest was a success.
bool isSuccessfullyConfigured = output.ToLower().Contains("successfully signed");
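For reference, the two Mage invocations behind the manifest-update steps above would look roughly like this on the command line (manifest paths, version directory, and certificate are placeholders; the flags are the same ones used in the code above):

rem Re-sign the application manifest after editing the .config file.
mage.exe -update "Application Files\MyApp_1_0_0_2\MyApp.exe.manifest" -certfile mycert.pfx -pwd <password>

rem Point the deployment manifest at the updated application manifest and re-sign it.
mage.exe -update "MyApp.application" -appmanifest "Application Files\MyApp_1_0_0_2\MyApp.exe.manifest" -certfile mycert.pfx -pwd <password>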
I have developed an ETL that consumes flat files. The size of the flat files varies from 250 MB to 300 MB.
It works absolutely fine when the file is present in the folder, but it fails when the file is still being generated.
Example: this ETL package runs from 8 AM to 10 AM to check whether the file is present in the folder. Now, at any instant (say 9 AM), if the file has started generating and has reached only 10 MB so far, the ETL starts processing the file and then just hangs and fails after 4-5 minutes (it hangs at the script task which checks whether the file is present in the folder).
What is the best way to trigger SSIS package only when the file generation is completely done?
Note: I have no control over the file generation.
Add a For Loop Container with a Boolean variable bFileAccessible:
The Init expression is @bFileAccessible = False
The Eval expression is @bFileAccessible == False
Inside the For Loop Container add a Script Task with a ReadWriteVariable User::bFileAccessible and the following C# script (showing only the Main() method):
public void Main()
{
    try
    {
        // Opening the file fails while another process is still writing it.
        using (Stream stream = new FileStream(@"Path\to\your\file", FileMode.Open))
        {
            Dts.Variables["bFileAccessible"].Value = true;
        }
    }
    catch
    {
        Dts.Variables["bFileAccessible"].Value = false;
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
You should also use a variable for the file name and maybe a short wait interval, as in the sketch below. For more information about the script, see here.
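A sketch of the same Main() with the file name driven by a variable and a small back-off between attempts (User::FilePath is a hypothetical variable; add it to the task's ReadOnlyVariables):

public void Main()
{
    string path = Dts.Variables["User::FilePath"].Value.ToString();
    try
    {
        // An exclusive open fails while the producer is still writing the file.
        using (Stream stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            Dts.Variables["bFileAccessible"].Value = true;
        }
    }
    catch
    {
        Dts.Variables["bFileAccessible"].Value = false;
        System.Threading.Thread.Sleep(5000); // wait a bit before the next loop iteration
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}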
Check the file's modified time each time and compare it with the previous one...
It's not good logic, but it's a workable idea if there is no perfect alternative.
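If you go that route, the check could be as simple as polling until the file's timestamp and size stop changing between two reads (a sketch for a Script Task; the path and interval are placeholders, and it assumes using System.IO):

// Poll until the file stops growing, i.e. generation appears to be done.
string path = @"C:\Input\yourfile.txt"; // placeholder path
DateTime lastWrite;
long lastSize;
do
{
    lastWrite = File.GetLastWriteTimeUtc(path);
    lastSize = new FileInfo(path).Length;
    System.Threading.Thread.Sleep(5000); // placeholder interval
}
while (File.GetLastWriteTimeUtc(path) != lastWrite || new FileInfo(path).Length != lastSize);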