C# .NET in-memory persistence - static

I would like to have a "user message" available for every request sent back by the server. If there is no user message, the message goes back blank. If there is one, an icon is activated on each user's screen after their request completes.
[edit]
The "user message" is something that is being set by an administrator for the application I'm deploying. The administrator can enter text into a field and click a button to send this message to every other user of the system. Any time another user performs any kind of action, the current user message is attached to the JSON response and handled by the front end.
In order to optimize this, I want the message to be stored in memory (not in the database).
I have tried to use a static. I have tried to use HttpApplicationState. In both cases, the value of the user message is "blanked out" after some period of time. After some research, it appears that both statics and HttpApplicationState are at the mercy of IIS and when it decides to recycle the application pool (or some such).
This volatility of a static is mysterious: it should be static - so long as IIS itself lives, this variable should live. It should not be dependent on some kind of "reset" or whatever. The HttpApplicationState is some other situation that I don't fully understand.
I would like a way to store a value in a non-volatile variable that I can rely on. If I set this value today, it should be there tomorrow, or next week, so long as IIS is not stopped and restarted.
Any help?
Here is what I have done to solve the problem, as per the accepted answer below:
The user message is an occasional thing. So when the message gets set by an administrator, store the message in the database at that point in time and also store it in the Application["UserMessage"] object.
When round-trips from users come in, the in-memory text for the user message gets added to the JSON return value.
The message can be cleared by the administrator at any time, which clears both the in-memory message and the database field.
When IIS decides enough is enough and recycles the application, the Application_Start() method (among other tasks) will also re-seed the user message from the database value that was stored when the user message was set (as per the first step).
Now the application works as expected. No extra price is paid going to the database on every user request into the system - the user message always comes from memory. In addition, the database is updated or read for the user message very few times.
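A minimal sketch of those steps in ASP.NET (the UserMessageRepository data-access calls are hypothetical; only Application["UserMessage"] comes from the workflow above):

using System.Web;

// A small helper around Application state -- a sketch of the workflow above.
// UserMessageRepository is a hypothetical data-access class; substitute your own DAL.
public static class UserMessageStore
{
    private const string Key = "UserMessage";

    // Administrator sets the message: persist it and cache it in Application state.
    public static void Set(HttpApplicationState application, string message)
    {
        UserMessageRepository.Save(message);        // hypothetical DB write
        application[Key] = message;
    }

    // Administrator clears the message: clear both the cache and the database field.
    public static void Clear(HttpApplicationState application)
    {
        UserMessageRepository.Save(string.Empty);
        application[Key] = string.Empty;
    }

    // Attached to every JSON response on each round-trip.
    public static string Current(HttpApplicationState application)
    {
        return application[Key] as string ?? string.Empty;
    }
}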

The application cache is a good place for it. The problem for you is that you think you cannot rely on it. Please see the later part of my answer, where you will find how to make sure that the value is always there even after IIS restarts or recycles your application.
You can store the value in the application cache. It can be done as follows:
Application.Add(name,object)
Later you can retrieve it in each request by using this code
Application[name]
It works like Session, but the difference is that it is application-wide: all requests from all users will see it. When you first set the value, store it in the database as well as in the application cache, so that you can later query the database and repopulate the application cache if the value is not there, and then retrieve it from the application cache.
You should restore the application cache from the database in the Application_Start() event, which fires every time the application starts or restarts. This way you can ensure that the value is always in the application cache.
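A short sketch of that re-seeding in Global.asax (UserMessageRepository.Load() is a hypothetical database read):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Fires whenever IIS starts or recycles the application:
        // re-seed the application cache from the value stored in the database.
        Application["UserMessage"] = UserMessageRepository.Load() ?? string.Empty;
    }

    // Defensive per-request read: fall back to the database if the value is missing.
    public static string GetUserMessage(HttpApplicationState application)
    {
        var message = application["UserMessage"] as string;
        if (message == null)
        {
            message = UserMessageRepository.Load() ?? string.Empty;   // hypothetical DB read
            application["UserMessage"] = message;
        }
        return message;
    }
}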

I would like a way to store a value in a non-volatile variable that I
can rely on. If I set this value today, it should be there tomorrow,
or next week, so long as IIS is not stopped and restarted.
In this case you cannot store this value in memory. The memory is something that is allocated for you by IIS to host the AppDomain of your application. IIS could recycle your application at any time and wipe out that memory. While IIS continues to live, your application might not, so you cannot rely on it. The only reliable solution in this case is to persist this information in some non-volatile storage such as a file, a database, ... the choice is really up to you, but it should be outside the process hosting your AppDomain.
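As a minimal illustration of the file option (the path and message text are arbitrary placeholders):

using System.IO;

// Write the value somewhere that survives an application pool recycle...
File.WriteAllText(@"C:\AppData\user-message.txt", "Maintenance window tonight at 10pm");

// ...and read it back on a later request, even after the AppDomain was torn down.
string userMessage = File.Exists(@"C:\AppData\user-message.txt")
    ? File.ReadAllText(@"C:\AppData\user-message.txt")
    : string.Empty;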

Related

CN1 stop() method not working every time when issuing a REST API call

Is it sensible to use the stop() method to issue a REST API call and send data up to the cloud (which may take 0.1 s to 5 s depending on connectivity)?
requestBuilder.acceptJson().body(jsonDataBody).getAsJsonMap()
I ask because I can consistently reproduce an issue on the simulator where no data is sent when I close the app, but it is sent if I trigger the same process via a button. On real devices it seems to work fine, but I get occasional customer feedback that it isn't always working, i.e. data isn't being sent to the cloud (though with no errors). I cannot reproduce it using my own real devices.
For now I'm having to code blind and force it by issuing a new async REST call when I do screen navigation, which does the same as stop() except that it uses this method:
requestBuilder.acceptJson().body(jsonDataBody).fetchAsJsonMap()
Background:
I have my data in a cloud database, fronted by REST APIs. My app uses storage to record the datetime of the last upload and download of data. When I open my app, via start(), it issues a REST call and gets all data with a datetime stamp greater than the last download datetime. When I close my app, I issue another call, via stop(), to send all data locally changed since the last upload datetime up to the cloud. Each record has a lastUpdateDatetime entity property.
Thanks
That's problematic for two reasons. First, the simpler case:
The OS can invoke stop()/start() in quick succession, so your app will stop and start almost immediately, and this might trigger data corruption if you don't guard against it.
The worse problem is that if an operation takes a bit longer, some OSes might kill it. You can use background fetch to perform downloads/uploads while your app isn't running, and that would solve the technical problem here.
Personally, I would just send data on change. If changes are too rapid, I'd add a time threshold for sending, but send while the app is running and not in stop(). Notice that on the device the situation is far more complex, as the OS can suddenly decide to kill the app to make room for the phone app or another critical app. You need to program defensively and try to avoid assumptions where possible.
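A rough sketch of that "send on change, with a time threshold" idea (shown in C# for consistency with the rest of this page; the real app would be Codename One/Java, and SendChangedRecords() is a hypothetical upload call):

using System;

public class ThrottledUploader
{
    private readonly TimeSpan _minInterval = TimeSpan.FromSeconds(30);  // tune to your traffic
    private DateTime _lastSent = DateTime.MinValue;
    private bool _dirty;

    // Call this whenever local data changes.
    public void OnLocalChange()
    {
        _dirty = true;

        // Send while the app is running rather than waiting for stop();
        // skip if we already sent very recently (a real version would also
        // schedule a delayed retry so the last change is never left behind).
        if (DateTime.UtcNow - _lastSent < _minInterval)
            return;

        SendChangedRecords();                       // hypothetical async REST upload
        _lastSent = DateTime.UtcNow;
        _dirty = false;
    }

    private void SendChangedRecords()
    {
        // Issue the REST call for records changed since the last upload datetime.
    }
}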

Does JMeter scripts actually creates records in database

Let's say I run a recorded script for the "New User Registration" function of a web site to evaluate the response time for the entire scenario. When I run the recorded script from JMeter, is a new user record created in the application database for each run of the registration script?
Yes, if you record the registration and correlate it (meaning you create a valid unique name for every request), you will create a real user in your environment.
JMeter simulates a real scenario, which affects your environment.
That is part of the reason JMeter is usually executed in an environment other than production (such as staging).
A well-behaved JMeter script must represent a real user using a real browser as closely as possible.
Browsers execute HTTP requests and render the response.
JMeter executes the same HTTP requests but doesn't render the response; instead it records performance metrics like response time, connect time, latency, throughput, etc.
HTTP is a stateless protocol, therefore given the same request you will get the same response. So if there are no mistakes in your script, it should either create a new user or fail with a non-unique username error.
Yes, if your script accurately represents the full set of data flows associated with the business process, "New User Registration," then the end state of that process should be identical to that of the user behavior so modeled.
A record will be created in the database. If not, then your script does not accurately model the user's behavior.

Programmatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to cause all instances to be stopped. This now adds additional delay on my end as a deployment takes some time.
You could use the management API to list instances for a given version, but I'd suggest that you'd probably want to use something like the Pub/Sub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message sent to the monitored topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful), and then delete it at shutdown (using the /_ah/stop endpoint).
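A rough sketch of that approach using the Google.Cloud.PubSub.V1 client (written in C# for consistency with the rest of this page; the project, topic, and LocalCache names are hypothetical, and the exact client API may differ by version):

using System;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

public static class CacheInvalidationListener
{
    private const string ProjectId = "my-project";                        // hypothetical project
    private static readonly TopicName Topic =
        TopicName.FromProjectTopic(ProjectId, "cache-invalidation");
    private static readonly SubscriptionName Sub =
        SubscriptionName.FromProjectSubscription(ProjectId, "flush-" + Guid.NewGuid().ToString("N"));

    private static SubscriberClient _subscriber;

    // Call from your /_ah/start handling: one subscription per running instance.
    public static async Task StartAsync()
    {
        var admin = await SubscriberServiceApiClient.CreateAsync();
        await admin.CreateSubscriptionAsync(Sub, Topic, pushConfig: null, ackDeadlineSeconds: 30);

        _subscriber = await SubscriberClient.CreateAsync(Sub);
        _ = _subscriber.StartAsync((msg, ct) =>
        {
            LocalCache.Flush();                                           // hypothetical local cache
            return Task.FromResult(SubscriberClient.Reply.Ack);
        });
    }

    // Call from your /_ah/stop handling: stop pulling and remove the subscription.
    public static async Task StopAsync()
    {
        await _subscriber.StopAsync(TimeSpan.FromSeconds(10));
        var admin = await SubscriberServiceApiClient.CreateAsync();
        await admin.DeleteSubscriptionAsync(Sub);
    }
}

// Stand-in for whatever per-instance cache the application keeps.
public static class LocalCache
{
    public static void Flush() { /* discard locally cached data here */ }
}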

Laravel: Graceful Fallback if the Session Database is Not Available

Is there a middleware, package, or general approach for having Laravel gracefully fall back to a session-less state if the session storage engine isn't available?
That is, let's say you have a system using the database session engine. If that database goes down, Laravel is going to throw an exception whenever it can't connect to the database. I'd like a way to, instead, have Laravel not throw an exception and just continue on without a working session engine.
(I realize this will mean careful coding at the application level to never assume sessions are available, but thank you in advance for all the warnings.)
Use Case to Correct For:
Session storage system goes down temporarily (maintenance window, unexpected outage, etc).
Logged in user hits a page, sees Laravel error page because session engine can't connect
User is sad
I'd rather the user see some sort of normal web-page instead of a generic error message, even if that means we can't include stateful session data on the page.
That depends. Laravel does not require a session engine everywhere, only on pages that actually use it. So a fallback would basically not help; in fact, an exception is the best thing Laravel can do to help you here.
Why? Because an exception can be caught and, if that is what you want to do (even though it makes little to no sense), be ignored.
Maybe I'm misunderstanding you: what exactly do you want to fall back to?
For me it's really hard to imagine how this could work and what you need it for. For example, when a user needs to be logged in to access some page, what should happen if the session database, or the whole database, is down? To me, the only solution is to show the user that something has gone wrong, because it will be hard to pretend the website is working when it's not. So the application throws an exception, you catch it and display an error page for the user (and send the site admin an e-mail or SMS).
If you try to pretend, you will probably make your users angry: they would try to log in and not be logged in, with no explanation; they would try a second time, a third time, and finally conclude that your site is broken and never come back. In my opinion it's better to tell them something is wrong and "hey, come back in about 2 hours".

Do I need a database to ensure immediate consistency with a message-oriented middleware?

App A wants to send domain events to App B through a middleware like RabbitMQ.
Let's take the example of one domain event called UserHasBeenRegistered, triggered by the creation of the User entity.
By sending this event, A would inform B that it should send a welcome email.
I have in mind two workflows:
First:
- App A registers the user and the event is generated.
- App A sends the event directly to B through a queue provided by RabbitMQ
Second:
- App A registers the user and the event is generated.
- App A saves the event in some kind of event store, such as a database table (if relational), in the same local transaction used to persist the new user in the database.
- An asynchronous scheduler queries the event store, finds this new user registration, and sends the message through the RabbitMQ queue.
Do you see the difference?
Yes, one is longer than the other... but the second is far safer, albeit less performant.
Indeed, what if, in the first case, the registration is rolled back due to an exception thrown just after the event was published? The mail would be sent even though the user was never persisted.
This could be fixed by implementing a global XA transaction (two-phase commit), but it is well known that some middleware doesn't support it.
Therefore, is the second workflow mostly used in critical applications?
What are its drawbacks?
I plan to implement one of the two solutions for my project.
I had the same task and it was done as a mix of your two workflows:
App A registers the user and the event is generated.
App A sends the event, which has a TTL set to a non-zero value, directly to B through a queue provided by RabbitMQ.
App B receives the event, sends the welcome message to the user, and stores a flag indicating that the welcome message was sent.
A background script checks whether there are newly registered users from the last TTL + 1 time interval who did not receive the message.
You can remove the background script and the flag storage and stick with the first workflow from your question. The cases where messages are lost (or anything else goes wrong) are extremely rare (for welcome messages it might be 1 failure per billion users), and unnecessary application complexity may give you more errors.
The second workflow also looks stable, but why are you using RabbitMQ then?
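For reference, a minimal sketch of the second (outbox-style) workflow using the RabbitMQ .NET client; the EventOutbox table, SQL, queue name, and broker address are hypothetical placeholders, and the exact client API may differ by version:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text;
using RabbitMQ.Client;

public class OutboxRelay
{
    // App A stores each domain event in an EventOutbox table inside the same transaction
    // that persists the new User. This relay runs on a schedule and forwards unsent rows.
    public void RelayPendingEvents(string connectionString)
    {
        var pending = new List<(long Id, string Payload)>();

        using (var db = new SqlConnection(connectionString))
        {
            db.Open();

            var select = new SqlCommand(
                "SELECT Id, Payload FROM EventOutbox WHERE SentAt IS NULL", db);
            using (var reader = select.ExecuteReader())
                while (reader.Read())
                    pending.Add((reader.GetInt64(0), reader.GetString(1)));

            var factory = new ConnectionFactory { HostName = "localhost" };   // hypothetical broker
            using (var rabbit = factory.CreateConnection())
            using (var channel = rabbit.CreateModel())
            {
                channel.QueueDeclare("user-events", durable: true, exclusive: false,
                                     autoDelete: false, arguments: null);

                foreach (var evt in pending)
                {
                    // e.g. a serialized UserHasBeenRegistered event
                    channel.BasicPublish(exchange: "", routingKey: "user-events",
                                         basicProperties: null,
                                         body: Encoding.UTF8.GetBytes(evt.Payload));

                    var markSent = new SqlCommand(
                        "UPDATE EventOutbox SET SentAt = SYSUTCDATETIME() WHERE Id = @id", db);
                    markSent.Parameters.AddWithValue("@id", evt.Id);
                    markSent.ExecuteNonQuery();
                    // Delivery is at-least-once: App B must handle the event idempotently.
                }
            }
        }
    }
}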
