Change data on a specific condition in Sequelize - database

I am using PostgreSQL with Sequelize in a JavaScript project. It is about managing project schedules, and I want to update my data on a specific condition. For example:
1. There is a Project with 'due_date' and 'current_status'.
2. If I set 'due_date', then 'current_status' becomes 'ongoing'.
3. When the current date passes 'due_date', 'current_status' should become 'delayed'.
I want step 3 to happen automatically, but I have no idea where to handle it. I am currently looking at hooks, but I cannot find any hook that satisfies this. Is there any solution for this?

You will need to schedule a job to actively look for rows that need updating.
The job can be fired by some external scheduler (e.g. cron) or by the pg_cron extension if you're able to install it.
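If you schedule the job from the Node side, a minimal sketch might look like this (assuming the node-cron package, and a Project model with the 'due_date' and 'current_status' columns from the question):

const cron = require('node-cron');
const { Op } = require('sequelize');
const { Project } = require('./models'); // hypothetical model module

// Every day at midnight, flip overdue 'ongoing' projects to 'delayed'.
cron.schedule('0 0 * * *', async () => {
  await Project.update(
    { current_status: 'delayed' },
    {
      where: {
        current_status: 'ongoing',
        due_date: { [Op.lt]: new Date() },
      },
    }
  );
});

With pg_cron instead, you would schedule the equivalent UPDATE ... SET current_status = 'delayed' WHERE due_date < now() AND current_status = 'ongoing' directly inside PostgreSQL, with no application code involved.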

Related

How do I run occasional tasks that update data in a database?

Occasionally I need to run tasks that update data in a database. I may or may not ever need to run them again, possibly not even on a new server - no idea yet. For now I need to run them once, in a certain release only. And they should stay outside of the git index.
Some tutorials suggest that I run them as a "custom migration": a second directory for migrations is created, called "custom_migrations", and they are run from there via Ecto.Migrator. But this will cause a problem: I run all of the custom_migrations, then delete all of the migration files (because I won't need them anywhere else, not on a new server either, once I've run them), then create new ones when a need arises - and then Ecto.Migrator will complain about the absence of the migrations that I've deleted.
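As I understand it, that approach boils down to running something like this on application boot (path and repo name assumed):

# Run every migration in priv/repo/custom_migrations that hasn't been run yet.
Ecto.Migrator.run(MyApp.Repo, "priv/repo/custom_migrations", :up, all: true)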
I'm also aware of ./bin/my_app eval MyApp.Tasks.custom_task1, but it's not convenient: I'd have to call it manually, and passing arguments to a function via the command line isn't convenient either.
What I want is to create several files that will be run once, in the current release. Store them in a certain directory of the application. Deploy the application. They get run automatically, probably on application boot, and then I remove them. Then, after some time, I may want to create new ones, and only those new ones will need to get run.
How do I do this? What's the recommended way in Elixir/Phoenix?

Automate the execution of C# code that uses Entity Framework to process data?

I have code that uses Entity Framework to process data (it retrieves data from multiple tables, then performs operations on it before saving the result to a SQL database). The code was supposed to run when a button is clicked in an MVC web application that I created. But now the client wants the data processing to run automatically every day at a set time (like an SSIS package). How do I go about this?
In addition to adding a job scheduler to your MVC application as @Pac0 suggests, here are a couple of other options:
Leave the code in the MVC project and create an API endpoint that you can invoke on some sort of schedule. Give the client a PowerShell script that calls the API and let them take it from there.
Or
Refactor the code into a DLL, or copy/paste it into a console application, that can be run on a schedule using Windows Task Scheduler, SQL Agent, or some other external scheduler.
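For the API-endpoint option above, the PowerShell script can be a one-liner scheduled with Windows Task Scheduler (the endpoint URL is a placeholder):

# Call the MVC endpoint that kicks off the data processing.
Invoke-RestMethod -Uri "https://yourapp.example.com/api/run-daily-treatment" -Method Post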
You could use a tool/library that does this for you. I can recommend Hangfire; it works fine (there are others, but I have not tried them).
The example on their homepage is pretty explicit:
RecurringJob.AddOrUpdate(
    () => Console.WriteLine("Recurring!"),
    Cron.Daily);
The above code needs to be executed once when your application has started up, and you're good to go. Just replace the lambda with a call to your method.
Adapt the time parameter as you wish, or even better, make it configurable - because we know customers like to change their minds.
Hangfire needs to create its own database, which usually stays pretty small for this kind of thing. You can also monitor whether the jobs ran well or not, and check some useful stats on the Hangfire server.
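Pulled together, a minimal console host might look like this - a sketch assuming the Hangfire and Hangfire.SqlServer packages, with a placeholder connection string and job id:

using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Hangfire keeps its jobs in its own storage; SQL Server here.
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("Server=.;Database=Hangfire;Trusted_Connection=True;");

        // Register the recurring job once at startup. The explicit id means
        // a redeploy updates the schedule instead of duplicating it.
        RecurringJob.AddOrUpdate(
            "daily-data-processing",
            () => Console.WriteLine("Recurring!"),
            Cron.Daily());

        // Keep a job server alive so the schedule actually fires.
        using (var server = new BackgroundJobServer())
        {
            Console.ReadLine();
        }
    }
}

Replace the Console.WriteLine with the call to your Entity Framework processing method.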

How do I seed data or run scripts when installing a Salesforce package

I'm developing a Salesforce package that depends on some prepopulated data to work correctly (i.e. a list of countries to populate a custom setting).
Is there a way to prepopulate these objects at installation/upgrade time? (e.g. uploading a csv with the data I need to insert into some custom objects).
Is there a way to run a custom script at installation/upgrade time? (e.g. have the script update information on new fields, or adapt existing data to a modified object structure).
Thanks in advance.
This is actually a new piece of functionality coming in the Summer '12 (API Version 25.0) release. There are two new interfaces to implement, InstallHandler and UninstallHandler, which can be set up to run on install and uninstall of a package respectively. You could implement InstallHandler and populate the objects/custom settings in that class.
An alternative is to use a custom settings value to know if the installation procedure was run. Then you can use your package's point of entry to check for it and do the procedure if the value indicates it needs to run. It's a little complicated if you don't have a single point of entry.
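A minimal sketch of the InstallHandler approach (Country__c is a hypothetical custom setting):

global class PackageInstallHandler implements InstallHandler {
    global void onInstall(InstallContext context) {
        // previousVersion() is null on a fresh install, set on upgrades.
        if (context.previousVersion() == null) {
            // Seed the custom setting with the prepopulated data.
            insert new List<Country__c>{
                new Country__c(Name = 'AR'),
                new Country__c(Name = 'US')
            };
        }
    }
}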

Trigger Jenkins Build on Database Change

As the subject suggests I'm interested in triggering Jenkins on changes involving a pre-configured database table. For example, whenever the number of records changes I want Jenkins to perform some particular action. Is there a plugin out of the box available for this scenario?
Thank you!
Regards,
Alex
Either you have a command-line client for your database, or you can write a script (Perl, Ruby, Groovy, Java, whatever) to get this functionality. This script can be executed by Jenkins. Since you haven't said which database we are talking about, I can't give you a more detailed hint.
What database are you using?
Most of them have some kind of triggers that can be fired after table insert, update or delete.
A logical alternative to database triggers is polling: write a script that polls the database and stores the results you are watching. If they change, the script can modify a file, which will trigger a Jenkins build via the FSTrigger plugin.
Probably the easiest way is to use ScriptTrigger, which can easily use an embedded Groovy or shell/Windows batch script to poll the database with a query and verify the state of the given data.
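For example, a polling script for ScriptTrigger could look like this (database, table, and state-file path are assumptions; the build fires when the script exits with the expected code):

#!/bin/sh
# Compare the current row count against the count saved on the last poll.
COUNT=$(psql -At -d mydb -c "SELECT count(*) FROM watched_table")
LAST=$(cat /var/lib/jenkins/watched_table.count 2>/dev/null)
echo "$COUNT" > /var/lib/jenkins/watched_table.count
# Exit 0 only when the count changed since the last poll.
[ "$COUNT" != "$LAST" ]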

SSIS Package - track calling job

I'm looking for ideas on how to automatically track the job that calls the package.
We have some generic packages that are called from different jobs; each job passes in a different file path as a parameter, and the package therefore processes very different file sizes depending on the path.
In the package I have some custom auditing setup which basically tracks the package start time and end time, and therefore the duration of execution. I want to be able to also track the job that called the package so if the package is running long, I can determine which job called it.
Also note that I would prefer this to be automatic, possibly using some sort of system variable, so that human error is not an issue. I also want these auditing tasks built into all of our packages as a template, so I would prefer not to use a user variable either, as different packages may use different variables.
Just looking for some ideas - I appreciate any input.
We use parent and child packages instead of different jobs calling the same package. You could send the information about which parent called it to the child package, and then have the child package record that data to a table along with the start date and end date.
Our solution has a whole meta database that records all the details through logging of each step. The parent tells the child which configuration to use and logs details against that configuration. The jobs call the parent package - never the child package, which doesn't have a configuration in the config table since it is always configured through variables sent in by the parent package. No human intervention is needed (except for initial development, or research when a failure occurs).
Edit for existing jobs.
Consider that jobs can have multiple steps. Make the first step a SQL script that inserts the auditing information into a table, including the start time of the package, the name of the job that called it, and the name of the SSIS package being called. Have the second step call the SSIS package, and make the last step a SQL script that inserts the same data, only with the end datetime.
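As a sketch, with illustrative table, job, and package names:

-- Job step 1: record the start of the run.
INSERT INTO dbo.PackageAudit (JobName, PackageName, StartTime)
VALUES ('NightlyLoadJob', 'GenericLoad.dtsx', GETDATE());

-- Job step 3 (after the SSIS step): stamp the end of the run.
UPDATE dbo.PackageAudit
SET EndTime = GETDATE()
WHERE JobName = 'NightlyLoadJob'
  AND EndTime IS NULL;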
A simple way to do this is to set up a string (varchar) variable on your SSIS package and use an expression to set its value to @[System::ParentContainerGUID] when the package starts. SQL Agent won't set the value, so when the package is run as an individual job the variable will be an empty string; but if the package is called by another package, it will contain the GUID of the calling package. You can test for that value and use a precedence constraint to control the program logic.
We have packages that run as part of a big program, but sometimes we need to run them individually. Each package has an email-on-failure task, but we only want that to execute when the package is run individually. When it is part of the big run, we collect the names of all packages that error and send them as one email from the master package. We don't want individual emails and a summary email going out on the same run.
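Concretely, that looks something like this (the user variable name is hypothetical):

Variable User::ParentGuid (String), EvaluateAsExpression = True:
@[System::ParentContainerGUID]

Precedence constraint expression guarding the email-on-failure task:
@[User::ParentGuid] == ""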
