This is somewhat of a design question for my current Laravel side project.
Currently I have a table that stores a status value in one column and, in another, the date when that status should be altered. Now I want to alter that status value automatically once the stored date is the current date. Since the table will gain more rows over time, I have to perform the altering process in a repeating manner, and I want to perform some constraint checks on the data as well.
I'm sure Laravel is capable of doing that, but how?
Laravel has commands and a scheduler; combining these two gives exactly what you want.
Create your command in the Console\Commands folder with your desired logic. Your question is sparse, so most of this is pseudo logic that you can adjust for your case.
namespace App\Console\Commands;

use App\Models\YourModel;
use Illuminate\Console\Command;

class StatusUpdater extends Command
{
    protected $signature = 'update:status';

    protected $description = 'Update status on your model';

    public function handle()
    {
        // Fetch every row whose status change is due today.
        $models = YourModel::whereDate('date', now())->get();

        $models->each(function (YourModel $model) {
            // Pseudo logic: put your own constraint checks here.
            if ($model->status === 'wrong') {
                $model->status = 'new';
                $model->save();
            }
        });
    }
}
For this command to run daily, you can use the scheduler to schedule it. Go to app/Console/Kernel.php, where you will find a schedule() method.
namespace App\Console;

use App\Console\Commands\StatusUpdater;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        $schedule->command(StatusUpdater::class)->daily();
    }
}
For the scheduling to work, you have to add the following cron entry on your server, as described in the Laravel documentation:
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
We have a requirement to create a kind of user session. Our front end is React, the backend is a .NET Core 6 API, and the DB is Postgres.
When one user clicks on a delete button, they should not be allowed to delete that item while another user is already using it and performing some actions.
Can you suggest an approach, or any kind of service that is available, to achieve this? Please help.
I would say don't make it too complicated. A simple approach could be to add the properties 'BeingEditedByUserId' and 'ExclusiveEditLockEnd' (a datetime) to the entity and check them when performing any action on it. When an action is performed on the entity, the user's id is assigned and a time slot (for example, 10 minutes) is reserved for that user. If any other user tries to perform an action during that slot, you block them. Once the slot has expired, anyone can edit again.
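A minimal sketch of that check, assuming the two properties above live on a plain entity class (all names here are made up for illustration):

using System;

public class Item
{
    public Guid Id { get; set; }
    public Guid? BeingEditedByUserId { get; set; }
    public DateTime? ExclusiveEditLockEnd { get; set; }
}

public static class EditLock
{
    // Returns true if the given user may act on the item right now,
    // and reserves the time slot for them if so.
    public static bool TryAcquire(Item item, Guid userId, TimeSpan slot)
    {
        var now = DateTime.UtcNow;

        // Locked by someone else and the slot has not expired yet: deny.
        if (item.BeingEditedByUserId.HasValue
            && item.BeingEditedByUserId.Value != userId
            && item.ExclusiveEditLockEnd > now)
        {
            return false;
        }

        // Free, expired, or already ours: (re)take the lock.
        item.BeingEditedByUserId = userId;
        item.ExclusiveEditLockEnd = now.Add(slot);
        return true;
    }
}

Remember to persist the entity after a successful TryAcquire so other API instances see the lock.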
I have had to do something similar in Java (also backed by a Postgres DB).
There are some pitfalls to avoid with a custom lock implementation, like forgetting to unlock when finished: there is no guarantee that a client makes a 'goodbye, unlock the table' call when they finish editing a page; they could simply close the browser tab or have a power outage. Here is what I decided to do:
Decide if the lock should be implemented in the API or the DB.
Is this a distributed/scalable application? Does it run as just a single instance or as multiple? If multiple, then you cannot (as easily) implement an API lock (you could use something like a shared cache, but that might be more trouble than it is worth).
Is there a record in the DB that could be used as a lock, guaranteed to exist for each editable item in the DB? I would assume so, but if the app is backed by multiple DBs, maybe not.
API locking is fairly easy; you just need to handle thread safety, as most (if not all) REST/SOAP implementations are heavily multithreaded.
If you implement it at the DB, consider looking into a 'row-level lock', which allows you to request a lock on a specific row in the DB that you could use as a write lock; a sketch of that route follows.
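For the DB route, a rough sketch using the Npgsql driver (table and column names are made up; note that the row lock only lasts for the duration of the open transaction, and FOR UPDATE NOWAIT makes Postgres fail immediately instead of waiting if the row is already locked):

using System;
using Npgsql;

public static class RowLock
{
    // Tries to take a write lock on a single row and do the work inside
    // that same transaction. Returns false if someone else holds the row.
    public static bool TryDeleteWithLock(string connString, Guid itemId)
    {
        using var conn = new NpgsqlConnection(connString);
        conn.Open();
        using var tx = conn.BeginTransaction();
        using var cmd = new NpgsqlCommand(
            "SELECT id FROM items WHERE id = @id FOR UPDATE NOWAIT", conn, tx);
        cmd.Parameters.AddWithValue("id", itemId);
        try
        {
            if (cmd.ExecuteScalar() == null)
            {
                tx.Rollback();
                return false; // row not found
            }
            // ... perform the delete/update inside this same transaction ...
            tx.Commit();
            return true;
        }
        catch (PostgresException e) when (e.SqlState == "55P03") // lock_not_available
        {
            tx.Rollback();
            return false; // another session is working on this row
        }
    }
}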
If you want to implement it in the API, consider something like this:
class LockManager
{
    private static readonly object writeLock = new();

    // the key is whatever you want to use as the ID of the resource being
    // locked, probably a UUID/GUID but it could be a string too;
    // `holder` is an ID of the person/system that owns the lock.
    // Static, so every request sees the same set of locks.
    private static readonly Dictionary<object, _lock> locks = new();

    public _lock acquireLock(object id, string holder)
    {
        _lock lok = new _lock();
        lok.id = id;
        lok.holder = holder;
        lock (writeLock)
        {
            if (locks.ContainsKey(id))
            {
                if (locks[id].release < DateTime.Now)
                {
                    // the existing lock has expired, so it may be replaced
                    locks.Remove(id);
                }
                else
                {
                    throw new InvalidOperationException(
                        "Resource is already locked, lock held by: " + locks[id].holder);
                }
            }
            lok.allocated = DateTime.Now;
            lok.release = lok.allocated.AddMinutes(5);
            locks.Add(id, lok); // actually store the new lock
        }
        return lok;
    }

    public void releaseLock(object id)
    {
        lock (writeLock)
        {
            locks.Remove(id);
        }
    }

    // called by .js code via an ajax call to renew the lock
    // if the user is determined to be active
    public void extendLock(object id)
    {
        lock (writeLock)
        {
            if (locks.ContainsKey(id))
            {
                locks[id].release = DateTime.Now.AddMinutes(5);
            }
        }
    }
}

class _lock
{
    public object id;
    public string holder;
    public DateTime allocated;
    public DateTime release;
}
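A hypothetical call site for the class above (itemId and currentUserName would come from the request):

var manager = new LockManager();
try
{
    // take the lock before touching the item
    manager.acquireLock(itemId, currentUserName);

    // ... delete or edit the item here ...

    manager.releaseLock(itemId); // unlock as soon as the work is done
}
catch (InvalidOperationException ex)
{
    // item is in use; surface this to the client, e.g. as 409 Conflict
    Console.WriteLine(ex.Message);
}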
This is what I did because it does not depend on the DB or the client, and it was easy to implement. Also, it does not require configuring any lock timeouts or cleanup tasks to release items with expired locks, as that is taken care of in the locking step.
I'm trying to stop my job with a savepoint, then start it again from the same savepoint. In my case, I update my job and create a new version of it with a new jar. Here is a code example:
class Reader(bla bla) {
  def read() = {
    val ds = readFromKafka()
    transform(ds)
  }

  def transform(ds: DataStream[]) = {
    ds.map()
  }
}

object MyJob {
  def run() = {
    val data = new Reader().read()
    data.keyBy(id).process(new MyStateFunc).uid("my-uid") // then write to kafka
  }
}
In this case, I stopped the job with a savepoint and then started it from the same savepoint with the same jar, and that worked. Then I added a filter to my Reader like this:
class Reader(bla bla) {
  def read() = {
    val ds = readFromKafka()
    transform(ds)
  }

  def transform(ds: DataStream[]) = {
    ds.map().filter() // FILTER ADDED HERE
  }
}
I stopped my job with a savepoint, which works. Then I tried to deploy the new version of the job (with the filter method) from the same savepoint; it cannot match the operators and the job does not deploy. Why?
Unless you explicitly provide UIDs for all of your stateful operators before taking a savepoint, then after changing the topology of your job, Flink will no longer be able to figure out which state in the savepoint belongs to which operator.
I see that you have a UID on your keyed process function ("my-uid"), but you also need UIDs on the Kafka source and the sink, and on anything else that's stateful. These UIDs need to be attached to the stateful operators themselves and need to be unique within the job (but not across all jobs). (Furthermore, each state descriptor needs to assign a name to each piece of state, using a name that is unique within the operator.)
Typically one does something like this
env
  .addSource(...)
  .name("KafkaSource")
  .uid("KafkaSource")

results
  .addSink(...)
  .name("KafkaSink")
  .uid("KafkaSink")
where the name() method is used to supply the text that appears in the web UI.
I'm trying to manage a decentralized DB spread across a huge number of partial DB instances. Each instance has a subset of the whole data, and they are all both nodes and clients, so a query for some data must be spread to every (group of) instance(s), and whichever one has the data returns it.
To avoid losing data if one instance goes down, I figured the instances must replicate their contents with some others. How can this scenario be configured with Ignite?
Suppose I have a table with the name and last access datetime of users in a distributed application, like ...
class UserLogOns
{
    string UserName;
    DateTime LastAccess;
}
Now, when the program starts, I prepare Ignite to work as a decentralized DB ...
static void Main(string[] args)
{
    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

    // Override local port.
    commSpi.LocalPort = 44444;
    commSpi.LocalPortRange = 0;

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Override default communication SPI.
    cfg.CommunicationSpi = commSpi;

    using (var ignite = Ignition.Start(cfg))
    {
        var cfgCache = new CacheConfiguration("mio");
        cfgCache.AtomicityMode = CacheAtomicityMode.Transactional;

        var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);
        cache.Put(Environment.MachineName,
            new UserLogOns { UserName = Environment.MachineName, LastAccess = DateTime.UtcNow });
    }
}
And now I want to get the LastAccess of another computer ("computerB"), whenever it was set.
Is this correct? How can it be implemented?
It depends on the exact use-case that you want to implement. In general, Ignite provides out of the box everything that you mentioned here.
This is a good way to start with using SQL in Ignite: https://apacheignite-sql.readme.io/docs
Create the table with "template=partitioned" instead of "replicated", as shown in the example here: https://apacheignite-sql.readme.io/docs/getting-started#section-creating-tables. Configure the number of backups, select a field to be the affinity key (a field that is used to map specific entries to cluster nodes), and just run some queries.
Also check out the concept of baseline topology if you are going to use native persistence: https://apacheignite.readme.io/docs/baseline-topology.
In-memory mode will automatically trigger rebalancing between nodes on each server topology change (i.e. when a node that can store data joins or leaves).
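As a rough sketch in Ignite.NET terms, mirroring the cache-based API from the question rather than SQL (the cache name and backup count are just examples):

using Apache.Ignite.Core.Cache.Configuration;

var cfgCache = new CacheConfiguration("userLogOns")
{
    CacheMode = CacheMode.Partitioned, // each node holds only a slice of the data
    Backups = 1                        // one backup copy per partition, so one node can fail safely
};
var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);

// Any node can now read entries written by any other node, e.g. "computerB":
var other = cache.Get("computerB");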
I have a WPF application where the user creates entities in the database. Each entity has some metadata and an interval field. For each entity I want to create a job with the provided interval and store it in the AdoJobStore.
Now, since the WPF app will not always be running, I want to create a Windows Service that reads the job data from the AdoJobStore and runs those jobs.
So essentially there are these two tiers. I have already set up the Quartz tables in my existing database. My questions are:
How to create/edit/delete jobs from my WPF application?
How to inform my Windows Service to run the jobs (every time an entity is created in the database)?
I have read through a lot of blogs, but these two primary questions are a bit unclear to me. I would really appreciate some example code on how to achieve this and maybe how to structure my solution.
Thanks
You can use a Zero Thread Scheduler in the WPF application to create and schedule jobs; with a zero-size thread pool it only writes to the job store and never executes anything itself. Example scheduler initialization code:
var properties = new NameValueCollection();
properties["quartz.scheduler.instanceId"] = "AUTO";
properties["quartz.threadPool.type"] = "Quartz.Simpl.ZeroSizeThreadPool, Quartz";
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
properties["quartz.jobStore.useProperties"] = "true";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.jobStore.tablePrefix"] = tablePrefix;
properties["quartz.jobStore.clustered"] = "false";
properties["quartz.dataSource.default.connectionString"] = connectionString;
properties["quartz.dataSource.default.provider"] = "SqlServer-20";
schedFactory = new StdSchedulerFactory(properties);
BaseScheduler = schedFactory.GetScheduler();
Example scheduling function:
protected ITrigger CreateSimpleTrigger(string tName, string tGroup, IJobDetail jd, DateTime startTimeUtc,
    DateTime? endTimeUtc, int repeatCount, TimeSpan repeatInterval, Dictionary<string, string> dataMap,
    string description = "")
{
    if (BaseScheduler.GetTrigger(new TriggerKey(tName, tGroup)) != null) return null;
    var st = TriggerBuilder.Create().
        WithIdentity(tName, tGroup).
        UsingJobData(new JobDataMap(dataMap)).
        StartAt(startTimeUtc).
        EndAt(endTimeUtc).
        WithSimpleSchedule(x => x.WithInterval(repeatInterval).WithRepeatCount(repeatCount)).
        WithDescription(description).
        ForJob(jd).
        Build();
    return st;
}
Obviously, you'll need to provide all the relevant fields in your UI and pass the values from those fields into the function.
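For illustration, creating a job and scheduling it with that function might look like this (MyEntityJob and the values are made up; RepeatIndefinitely comes from Quartz.Impl.Triggers.SimpleTriggerImpl):

var jd = JobBuilder.Create<MyEntityJob>()   // hypothetical IJob implementation
    .WithIdentity("entity-42", "entities")
    .Build();

var trigger = CreateSimpleTrigger(
    "entity-42-trigger", "entities", jd,
    DateTime.UtcNow,                       // start now
    null,                                  // no end time
    SimpleTriggerImpl.RepeatIndefinitely,  // repeat forever
    TimeSpan.FromMinutes(30),              // the entity's interval field
    new Dictionary<string, string>());

// Persists both to the AdoJobStore; the zero-thread scheduler never runs them locally.
BaseScheduler.ScheduleJob(jd, trigger);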
Your Windows Service will initialize a multi-thread scheduler in its OnStart() method, in a very similar fashion to the way the Zero Thread Scheduler was initialized above. That multi-thread scheduler will monitor all the triggers in your database and start your jobs as specified in those triggers; Quartz.NET does all the heavy lifting in that regard. Once you have scheduled your jobs and the triggers are in the database, all you need to do is initialize the multi-thread scheduler and connect it to the database containing the triggers, and it will keep firing those jobs and executing your code for as long as the service is running.
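A minimal sketch of that OnStart() initialization, assuming the same tablePrefix and connectionString values the WPF side used (a real thread pool this time, so jobs actually execute here):

protected override void OnStart(string[] args)
{
    var properties = new NameValueCollection();
    properties["quartz.scheduler.instanceId"] = "AUTO";
    properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
    properties["quartz.threadPool.threadCount"] = "10";
    properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
    properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
    properties["quartz.jobStore.useProperties"] = "true";
    properties["quartz.jobStore.dataSource"] = "default";
    properties["quartz.jobStore.tablePrefix"] = tablePrefix;                     // same tables as the WPF app
    properties["quartz.jobStore.clustered"] = "false";
    properties["quartz.dataSource.default.connectionString"] = connectionString; // same database
    properties["quartz.dataSource.default.provider"] = "SqlServer-20";

    scheduler = new StdSchedulerFactory(properties).GetScheduler();
    scheduler.Start(); // picks up the stored triggers and fires the jobs
}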
I want to test one of my Model classes, so I have to insert, update, and delete data from my database in order to test whether my methods work well.
I am working with a defined test database where I already have some data.
To test all methods I use two roles, the admin one and the user one, so I get their data using the setUp method like this:
public function setUp() {
    parent::setUp();
    $this->User = ClassRegistry::init('User');

    $admin = $this->User->query("select * from users where admin = 1");
    $this->testUser['admin']['id'] = $admin[0]['users']['id'];
    $this->testUser['admin']['username'] = $admin[0]['users']['username'];
    $this->testUser['admin']['password'] = $admin[0]['users']['password'];
    $this->testUser['admin']['verified'] = $admin[0]['users']['verified'];
    $this->testUser['admin']['created'] = $admin[0]['users']['created'];
    $this->testUser['admin']['nick'] = $admin[0]['users']['nick'];
    $this->testUser['admin']['admin'] = $admin[0]['users']['admin'];

    $user = $this->User->query("select * from users where admin = 0 and verified = 0");
    $this->testUser['user']['id'] = $user[0]['users']['id'];
    $this->testUser['user']['username'] = $user[0]['users']['username'];
    $this->testUser['user']['password'] = $user[0]['users']['password'];
    $this->testUser['user']['verified'] = $user[0]['users']['verified'];
    $this->testUser['user']['created'] = $user[0]['users']['created'];
    $this->testUser['user']['nick'] = $user[0]['users']['nick'];
    $this->testUser['user']['admin'] = $user[0]['users']['admin'];
}
When I want to test methods like the banAccess one, which moves data from the users table to the bannedUsers table, I have a problem, because the test won't run well the next time, as the user I selected for the test won't be in the same table anymore.
It seems that the setUp() and tearDown() methods are only executed once for all the test methods.
This way, if the banAccess test is executed before the testGetUserName method, for example, the latter will fail, as the user is no longer in the users table.
For the moment I am testing the method and deleting the user afterwards in order to solve this problem, but I am sure there has to be a better way to do it:
public function testBanAccess() {
    $result = $this->User->banAccess($this->testUser['user']['id'], 'spam', '42');
    $expected = true;
    $this->assertEquals($expected, $result);

    $this->User->query("delete from banUsers where id = ".$this->testUser['user']['id']);
}
Thanks.
Your whole test setup is not good. You should use fixtures for that and have the records present in the fixtures. See http://book.cakephp.org/2.0/en/development/testing.html#fixtures
setUp() and tearDown() are executed only one time, while startTest() and endTest() run for each test*() method.
Further, you should not use query(), because it is potentially unsafe with regard to SQL injection. The CakePHP ORM will take care of that if you use it... Seeing query() used in the test makes me think you've used it in the app too and built a pretty unsafe app.
Also, why do you have to copy users to another table instead of simply flagging them as banned with a simple tinyint field?