I successfully connected jBPM to my database (PostgreSQL) and I store logs in it. I did it like this:
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession("WorkflowSession");
EntityManagerFactory emf = new EnvironmentProducer().getEntityManagerFactory();
AbstractAuditLogger auditLogger = AuditLoggerFactory.newJPAInstance(emf);
kSession.addEventListener(auditLogger);
I would like to restore all active processes after a server crash. For example:
Start a scenario (start a process)
The server goes down (the process is registered as active in the database)
After the server is turned on again, have this process loaded into my new KieSession
Please help me with this problem.
Thanks
There's no need to reload process instances after server shutdown. Process instances are always stored in the database, and whenever they are needed, they are loaded from there. This includes user requests related to the process instance (e.g. tasks completed, signals sent), but also timers firing etc.
The only thing you should do upon application initialization, in case you embed the engine yourself, is make sure the runtime manager is instantiated (so it keeps track of timers). If you use the execution server (part of the jbpm-console), it will do that for you automatically.
I have a list of process names retrieved from a server, and I want to block them completely: if a process is not executing, do not allow it to open; if it is already executing, close it.
The first part is solved by subscribing to ES_EVENT_TYPE_AUTH_EXEC and sending a deny response if the process path contains the process name I want to block.
However, I cannot figure out whether there is an event that all processes execute, so that an already-running matching process can be killed. That way I could subscribe to it and perhaps make a system call with a 'kill' command.
The full list of events provided by the Endpoint Security Framework API is this
If the framework does not provide such capability, how would you approach this scenario?
Thanks.
Is it possible to change the process instance create/start date-time once it has been instantiated?
The process instance create/start date-time is generated automatically by the system, as are those of all BPM transactions (flow node, case, process, task, ...).
All engine APIs (or apiAccess) only have read access to this data in the database.
But you can modify it directly in the database ;)
I'm wondering if there is a way to recognize that an OfflineCommand is being executed, or an internal flag or something to indicate that the command has been passed, or to mark that it has executed successfully. With an unstable internet connection, I have trouble recognizing whether a command went through. I keep retrieving the records from the database and comparing them every time to see whether the command has been passed or not, but due to the flow of my application I find it very difficult to avoid duplicates. Is there any automatic mechanism to make sure commands are executed, or something else?
Second question: on forms I can use a UITimer to check isOffline() to see whether the internet is connected. Is there something equivalent on the server page, or where the queries are written, to detect that the internet is disconnected? When control moves to the queries and the internet is disconnected, the dialog opened from the form page freezes indefinitely and never ends; I have to close and re-open the app to continue the synchronization process. At the same time, I cannot set a timeout for the dialog because I'm not sure how long the synchronization process will take. Please advise.
Extending on the same topic, I have created a new issue to give more clarity to my questions:
executeOfflineCommand skips a command while executing from storage on Android
There is no way to know if a connection will stay stable, as that would require knowledge of the future. You can work like transaction services do, where the server side processes an offline command as a transaction using a two-phase-commit approach.
In this approach you have an algorithm similar to this:
Client sends command to server
Server returns a special unique ID for the command
Client asks the server to perform the command with that unique ID
Server acknowledges that the command was performed
If the first 2 stages didn't complete, you just do them again. The worst thing that could happen is some orphan commands on the server.
If the 3rd stage didn't complete, you just do it again. The server knows whether it processed the command and will simply acknowledge it if it was already processed.
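The retry scheme above can be sketched in plain Java. This is an illustrative simulation, not part of any Codename One API: the CommandServer class and its method names are invented to show why retries are safe once the server deduplicates by unique ID.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simulated server side that deduplicates commands by their unique ID.
class CommandServer {
    private int nextId = 0;
    private final Map<Integer, String> pending = new HashMap<>();
    private final Set<Integer> executed = new HashSet<>();

    // Stages 1-2: client sends the command, server returns a unique ID.
    // Calling this twice for the same logical command only creates an
    // orphan entry; nothing is executed yet.
    int register(String command) {
        pending.put(nextId, command);
        return nextId++;
    }

    // Stages 3-4: client asks the server to perform the command with
    // that ID; the server acknowledges. The call is idempotent, so the
    // client can safely retry after a dropped connection.
    boolean perform(int id) {
        if (executed.contains(id)) {
            return true; // already processed: just acknowledge again
        }
        String command = pending.remove(id);
        if (command == null) {
            return false; // unknown ID
        }
        // ... actually execute `command` here ...
        executed.add(id);
        return true;
    }
}

public class OfflineCommandDemo {
    public static void main(String[] args) {
        CommandServer server = new CommandServer();
        int id = server.register("syncRecord");
        // The connection may drop after stage 3; retrying is harmless
        // because the server remembers which IDs it already ran.
        System.out.println(server.perform(id)); // executed now
        System.out.println(server.perform(id)); // acknowledged, not re-run
    }
}
```

The key design point is that the command is never executed twice: a retried `perform` for an already-executed ID only repeats the acknowledgment.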
Our scheduled jobs started failing since yesterday with the following error message:
CustomUpdate.Execute - System.NullReferenceException: Object reference not set to an instance of an object.
   at System.Web.Security.Roles.GetRolesForUser(String username)
   at EPiServer.Security.PrincipalInfo.CreatePrincipal(String username)
The scheduled job uses anonymous execution and logs in programmatically using the following call:
if (PrincipalInfo.CurrentPrincipal.Identity.Name == string.Empty)
{
PrincipalInfo.CurrentPrincipal = PrincipalInfo.CreatePrincipal(ApplicationSettings.ScheduledJobUsername);
}
I have put some more logging around the PrincipalInfo.CreatePrincipal call (which is in EPiServer.Security) and noticed that PrincipalInfo.CreatePrincipal calls System.Web.Security.Roles.GetRolesForUser(username), and that Roles.GetRolesForUser(username) returns an empty string array.
There were no changes code wise or on the server (updates, etc).
I checked that the user name used to run the task is in the database and has roles associated with it.
I checked that the application name is set up correctly and is associated with the user.
If I run the job manually using the same user, it executes with no issues (I know there is a difference between running the job manually and using the scheduler).
I also tried creating a new user; that didn't work either.
Has anyone come across the same or similar issue? Any thoughts how to resolve this issue?
I finally found the problem: the application pool was running with more than one worker process (in my case, two). Once I set the worker process limit to one, everything started working again.
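For reference, the worker-process limit lives on the application pool's process model in IIS. A minimal sketch of the relevant applicationHost.config fragment, where the pool name "MyEPiServerPool" is a placeholder for your own pool:

```xml
<!-- applicationHost.config: limit the app pool to a single worker process -->
<applicationPools>
    <add name="MyEPiServerPool">
        <processModel maxProcesses="1" />
    </add>
</applicationPools>
```

The same setting is reachable in IIS Manager under the application pool's Advanced Settings ("Maximum Worker Processes").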
We are building an application which, every week, makes a very large number of requests to the database, concurrently.
We have ~15-20 threads which query the database concurrently.
We are currently encountering a lot of problems:
On the database side (not enough RAM): being resolved.
But on the client side too: we get exceptions when trying to get a connection or execute commands. These commands are made through Entity Framework.
The application has two parts: a website and a console application.
So can anyone tell me how to increase the following values?
Connection Timeout
Command Timeout
Connection pool size
I think there are several things that have to be done on the server side (SQL Server or IIS), but I can't find where.
Command timeout can be set on the ObjectContext instance. Connect timeout and connection pool size are configured in the connection string, but if you only have 15-20 threads, your problem is most probably somewhere else, because the default connection pool size is 100.
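As an illustration, the connection-string keywords in question look like this in App.config or Web.config. The server, database, and context names are placeholders, and the values shown are examples, not recommendations:

```xml
<!-- Connect Timeout is in seconds; Max Pool Size raises the 100-connection default -->
<connectionStrings>
  <add name="MyContext"
       connectionString="Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Connect Timeout=60;Max Pool Size=200"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

The command timeout, by contrast, is set in code on the context, e.g. `objectContext.CommandTimeout = 120;` (also in seconds).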
Enclose your ObjectContext in a using block so the context is disposed after you have done your work.
You can make a method to pass into a thread which uses your entity context to do the work you want, and then disposes the connection after the work is finished. You can use the stateInfo object variable to pass different parameters in for use during the life of your method.
void DoContextWork(object stateInfo)
{
    // wrap your context in a using block so it is disposed when done
    using (var objectContext = new YourEntity())
    {
        // Do work here
    }
}
You can have multiple threads call this method, and each time your connection is used you can do your work on your DB without hitting the issues you mentioned above.