Laravel queue stops randomly without exception - database

I have a Laravel queue set up with a database connection. Note that this problem also occurs on Redis, but I am currently using the database connection for the failed_jobs table to help me check any errors that occur during the queue process.
The problem I have is that the queue stops working after a few jobs without any message showing why. But when I restart the command (php artisan queue:work), it picks up the remaining jobs and continues (but stops again later).
The job is configured with these values:
public $tries = 1;
public $timeout = 10;
The job code is (not the original code):
public function handle()
{
    try {
        $file = /* function to create the file */;
        $zip = new ZipArchive();
        $zip->open(/* zip_path */);
        $zip->addFile(/* file_path */, /* file_name */);
        $zip->close();
        // unlink(/* remove file */);
    } catch (\Exception $e) {
        Log::error($e);
    }
}
And the failed function is set up like this:
public function failed(\Exception $exception)
{
    Log::error($exception);
    $this->fail($exception);
    $this->delete();
}
But there is no failed_jobs row, and my log is empty.
Edit: I added simple info logs after every line of code, and every time I start the queue, it stops after the last line. So the code runs correctly; Laravel just doesn't start the next job after that.

What you need to do here to solve the issue is the following:
Go to bootstrap/cache/ and remove all .php files.
Go to the project source and run php artisan queue:restart.
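Then add the Supervisor program snippet for the worker. A minimal sketch could look like the following (the /var/www/project path, the process count and the log location are assumptions, adjust them to your setup):
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; assumes the project lives in /var/www/project and the queue uses the database connection
command=php /var/www/project/artisan queue:work database --sleep=3 --tries=1
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/project/storage/logs/worker.log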
Now, after adding the snippet, we need to run the following commands in order:
sudo supervisorctl reread (to check the file content and make sure that the snippet is correctly set)
sudo supervisorctl update (to release the config changes under Supervisor)
sudo supervisorctl restart all (to re-trigger the queues so that the newly created queue gets initialized and starts picking up messages)

Did you try queue:listen?
php artisan queue:listen
Also, I guess you need Supervisor to keep your worker alive.

Related

Slow blockchain syncing on TRON Nile Testnet

I am trying to sync the Nile chain.
Starting node with command:
java -jar /data/FullNode/FullNode.jar --witness -c /data/FullNode/nile_net_config.conf
using the config from nileex.io.
Syncing is very slow and often stops altogether.
Just about 100-1000 blocks per day.
In tron.log I see: P2P_DISCONNECT reason TOO_MANY_PEERS
I tried the /wallet/listnodes HTTP API command on public Nile nodes and put all 68 of those IPs into the seed section of the config.
When I got to about 1600000 blocks, syncing stopped again, with the same errors in the log.
Now I have downloaded and unpacked a backup output-directory and am trying to sync to the end of the chain, but I have the same problem.
Blocks come in very slowly; sometimes, maybe once an hour, I get 50-100 blocks.
What am I doing wrong?
I used an old version of FullNode.jar.
Now it's OK with version 4.4.1.
That's correct; after bumping the version, the error is gone. Looking at the java-tron repository, it seems there are several different conditions for throwing TOO_MANY_PEERS, one of which is below:
@Override
public void run() {
  List<PeerConnection> peerConnectionList = pool.getActivePeers();
  List<PeerConnection> willDisconnectPeerList = new ArrayList<>();
  for (PeerConnection peerConnection : peerConnectionList) {
    NodeStatistics nodeStatistics = peerConnection.getNodeStatistics();
    if (!nodeStatistics.nodeIsHaveDataTransfer()
        && System.currentTimeMillis() - peerConnection.getStartTime() >= CHECK_TIME
        && !peerConnection.isTrustPeer()
        && !nodeStatistics.isPredefined()) {
      // if xxx minutes not have data transfer, disconnect the peer,
      // exclude trust peer and active peer
      willDisconnectPeerList.add(peerConnection);
    }
    nodeStatistics.resetTcpFlow();
  }
  if (!willDisconnectPeerList.isEmpty() && peerConnectionList.size()
      > Args.getInstance().getNodeMaxActiveNodes() * maxConnectNumberFactor) {
    Collections.shuffle(willDisconnectPeerList);
    for (int i = 0; i < willDisconnectPeerList.size() * disconnectNumberFactor; i++) {
      logger.error("{} does not have data transfer, disconnect the peer",
          willDisconnectPeerList.get(i).getInetAddress());
      willDisconnectPeerList.get(i).disconnect(ReasonCode.TOO_MANY_PEERS);
    }
  }
}

Specflow - Log steps - Given/When/Then steps logging

We run nightly regression scripts using SpecFlow. I was wondering if there is a way to log the SpecFlow console output to a file. Since it runs at night, we are not sure about the step where the failure occurred. We do use ReportUnit to convert the NUnit XML to HTML. It would be good to have those console logs in the HTML too.
You can add hooks which execute before and after scenario steps ([BeforeStep] and [AfterStep]) and log in there. You can access the ScenarioStepContext to get details of the current step.
A lot has changed with SpecFlow in the last 6 years. Specifically SpecFlow logs each step (along with whether it passes or fails) to the standard output. You can also generate test results files using whichever unit test framework you want. That being said, I did come across a use case where the existing logging that SpecFlow does was not working with Azure DevOps. For my case, I had passing tests, which periodically ran extremely slow. In Azure DevOps release pipelines, a passing test does not get the standard output saved for viewing later. I needed to log the date/time for when a step began, and when it finished.
A before/after step hook using the ScenarioContext object was how I got this logging to work:
[Binding]
public class Hooks
{
    [BeforeStep]
    public void BeforeStep(ScenarioContext scenario)
    {
        var stepInfo = scenario.StepContext.StepInfo;
        var stepText = $"{stepInfo.StepDefinitionType} {stepInfo.Text}";
        // log 'stepText' some place
    }

    [AfterStep]
    public void AfterStep(ScenarioContext scenario)
    {
        var stepInfo = scenario.StepContext.StepInfo;
        var stepText = $"{stepInfo.StepDefinitionType} {stepInfo.Text}";
        // log 'stepText' some place
    }
}
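As for where to log stepText, one simple option is appending a timestamped line to a file, which also covers recording when each step began and finished. This is only a sketch with a hypothetical log path:
using System;
using System.IO;

public static class StepLogger
{
    // Hypothetical path; point it somewhere the nightly run can write to.
    private const string LogPath = @"C:\logs\specflow-steps.log";

    public static void Log(string stepText)
    {
        // Timestamp each entry so slow or failing steps are easy to spot later.
        File.AppendAllText(LogPath, $"{DateTime.Now:O} {stepText}{Environment.NewLine}");
    }
}
Calling StepLogger.Log(stepText) from both hooks then gives one line per step with its start and end time.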

Mercurial precommit infinite loop

OK, here is the problem: I'm developing an application in Java using Gradle.
I have a Gradle task that adds a license on top of each file if it does not exist.
I wanted to add a precommit hook so that when I commit the files, the Gradle task runs and changes the license on top of the files if needed. Keep in mind that the Gradle licenseFormat task may change nothing or more than 10 files at the same time, so I have no way of knowing which files were changed in order to add them to the commit manually.
I tried this hook:
[hooks]
pre-commit.licenseFormat=C:/Users/pc/Dropbox/{REPOSITORIES}/{PETULANT}/format.bat
It simply calls a batch file that runs the Gradle command. But, as I suspected, because some files that are not in the current commit get changed, the commit gets stuck: it seems to fall into an infinite loop, calling the batch file time and time again and firing the command each time.
On the next run of the command nothing should be changed, but when the first run changed more than a few files, I think the commit fires the batch file more than twice.
So the question is: how can I stop the commit hook after the very first run of the batch file and add the changed files to the current or a new commit?
Thanks.
The batch file contains only the command:
gradlew licenseFormat
As I said, it runs a Gradle task that adds license comments on top of the files that need them. In other words, it first checks the header of the file and compares it to the one that should be there; if they are the same, the file is not touched, but if they are not, it removes the header and adds the license text as a comment at the top of the file. If you want a more in-depth look, the actual task is this:
buildscript {
    repositories {
        mavenCentral()
        jcenter()
        maven { url = "http://files.minecraftforge.net/maven" }
        maven { url = "https://oss.sonatype.org/content/repositories/snapshots" }
    }
    dependencies {
        classpath 'net.minecraftforge.gradle:ForgeGradle:1.2-SNAPSHOT'
        classpath 'org.ajoberstar:gradle-git:0.10.1'
        classpath 'nl.javadude.gradle.plugins:license-gradle-plugin:0.11.0'
    }
}

apply plugin: 'license'

license {
    ext.name = project.name
    ext.organization = project.organization
    ext.url = project.url
    ext.year = project.inceptionYear
    exclude '**/*.info'
    exclude '**/*.json'
    exclude '**/*.ma'
    exclude '**/*.mb'
    exclude '**/*.png'
    header new File(projectDir, 'HEADER.txt')
    sourceSets = project.sourceSets
    ignoreFailures = false
    strictCheck = true
    mapping { java = 'SLASHSTAR_STYLE' }
}

How to execute an SSIS package when a file arrives at a folder

The requirement is to execute an SSIS package when a file arrives at a folder; I do not want to start the package manually.
The file arrival timing is not certain, and the files can arrive multiple times. Whenever the files arrive, they have to be loaded into a table. I think a solution like a file watcher task still expects the package to be started.
The way I have done this in the past is with an infinite loop package called from SQL Server Agent. For example, this is my infinite loop package:
Set these variables:
IsFileExists - Boolean - 0
FolderLocation - String - C:\Where the file is to be put in\
For the For Loop container:
Set the IsFileExists variable as above.
Set up a C# script task with the ReadOnlyVariables property set to User::FolderLocation and have the following:
public void Main()
{
    int fileCount = 0;
    string[] FilesToProcess;
    while (fileCount == 0)
    {
        try
        {
            System.Threading.Thread.Sleep(10000);
            FilesToProcess = System.IO.Directory.GetFiles(Dts.Variables["FolderLocation"].Value.ToString(), "*.txt");
            fileCount = FilesToProcess.Length;
            if (fileCount != 0)
            {
                for (int i = 0; i < fileCount; i++)
                {
                    try
                    {
                        // Open and close each file to make sure it is no longer locked by whatever dropped it.
                        System.IO.FileStream fs = new System.IO.FileStream(FilesToProcess[i], System.IO.FileMode.Open);
                        fs.Close();
                    }
                    catch (System.IO.IOException ex)
                    {
                        // The file is still being written; go back to waiting.
                        fileCount = 0;
                        continue;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
What this will do is essentially keep an eye on the folder location for a .txt file. If the file is not there, it will sleep for 10 seconds (you can increase this if you want). If the file does exist, the script will complete and the package will then execute the load package. However, it will continue to run, so the next time a file is dropped in it will execute the load package again.
Make sure to run this forever-loop package as a SQL Server Agent job so it runs all the time; we have a similar package running and it has never caused any problems.
Also, make sure your input package moves/archives the file away from the drop folder location.
As others have already suggested, using either a WMI task or an infinite loop are two options to achieve this, but IMO SSIS is resource-intensive. If you let a package constantly run in the background, it could eat up a lot of memory and CPU and cause performance issues with other packages, depending on how many other packages you have running. So another option you may want to consider is scheduling an Agent job every 5 or 10 minutes and calling your package in the job. Configure the package to continue only when a file is there, or quit otherwise.
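For illustration only, a script task along those lines, reusing the FolderLocation and IsFileExists variable names from the loop example above (with IsFileExists listed as a ReadWriteVariable), might be as simple as:
public void Main()
{
    // Look for any .txt file in the drop folder; assumes User::FolderLocation is set as before.
    string folder = Dts.Variables["FolderLocation"].Value.ToString();
    bool fileExists = System.IO.Directory.GetFiles(folder, "*.txt").Length > 0;

    // Store the result; a precedence constraint on this variable lets the rest of the
    // package run only when a file is actually there, otherwise the job simply quits.
    Dts.Variables["IsFileExists"].Value = fileExists;
    Dts.TaskResult = (int)ScriptResults.Success;
}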
You can create a Windows service that uses WMI to detect file arrival and launch packages. Details on how to do that are located here: http://msbimentalist.wordpress.com/2012/04/27/trigger-ssis-package-when-files-available-in-a-folder-part2/?relatedposts_exclude=330
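In case that link goes stale, the general idea can be sketched with .NET's FileSystemWatcher in place of WMI, with a plain console host standing in for the real Windows service and hypothetical folder and package paths:
using System;
using System.Diagnostics;
using System.IO;

class PackageLauncher
{
    static void Main()
    {
        // Hypothetical paths: adjust to the real drop folder and package location.
        var watcher = new FileSystemWatcher(@"C:\DropFolder", "*.txt");
        watcher.Created += (sender, e) =>
        {
            // Kick off the load package with dtexec each time a new file shows up.
            Process.Start("dtexec", "/FILE \"C:\\Packages\\LoadPackage.dtsx\"");
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for files; press Enter to stop.");
        Console.ReadLine();
    }
}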
What about the SSIS File Watcher Task?

Neo4j store is not cleanly shut down; Recovering from inconsistent db state from interrupted batch insertion

I was importing TTL ontologies to DBpedia following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task. It mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks as it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future purposes, is there a way to commit after importing every file so that reverting to the last state would be trivial? I thought of Git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from unclean shutdowns when using the batch inserter API; please note that its package name, org.neo4j.unsafe.batchinsert, contains the word unsafe for a reason. The intention of the batch inserter is to operate as fast as possible.
If you want to guarantee a clean shutdown, you should use a try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // do your insertion work here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook. See the following snippet as an example:
BatchInserter batch = BatchInserters.inserter(<dir>);
// register the shutdown hook first, so it is in place if an operation throws or the JVM is interrupted
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions
