Is there a way to run some cleanup code when Gatling meets a failure, before an action like exitHereIfFailed? - gatling

I want to clean up data in the DB when the Gatling scenario fails, before exitHereIfFailed runs, so that when I run Gatling again the broken data is already gone.
Thanks a lot!
I see there are actions like tryMax and exitHereIfFailed, but they don't take a callback to do any cleanup work.

IMO, the proper solution for this is automating a database restore.
That said, if you really want to implement such a rollback from the Gatling side, you shouldn't try to roll back individual changes from each virtual user. Performing the rollbacks while the test is running would hurt performance and skew your test results.
You should store the changes (in memory or on disk) so that you can roll them all back once the test is done, e.g. with a dedicated scenario triggered after your real one with andThen.
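To make that concrete, here is a minimal sketch using the Gatling Java DSL. The shared createdIds queue and the deleteFromDb call are assumptions for illustration; your real requests would go where the placeholder exec is:

```java
// Sketch: record created rows during the load, delete them in a cleanup
// scenario chained with andThen so rollback never competes with the test.
import static io.gatling.javaapi.core.CoreDsl.*;

import io.gatling.javaapi.core.*;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class CleanupAfterLoadSimulation extends Simulation {

  // Thread-safe record of every row the virtual users create during the test.
  static final Queue<String> createdIds = new ConcurrentLinkedQueue<>();

  ScenarioBuilder load = scenario("load")
      .exec(session -> {
        // ... your real requests go here; after each create, remember the id:
        createdIds.add("id-" + session.userId());
        return session;
      });

  ScenarioBuilder cleanup = scenario("cleanup")
      .exec(session -> {
        // Runs once, after the load scenario has finished.
        for (String id : createdIds) {
          // deleteFromDb(id); // hypothetical JDBC/REST cleanup call
        }
        return session;
      });

  {
    setUp(
        load.injectOpen(atOnceUsers(100))
            .andThen(cleanup.injectOpen(atOnceUsers(1)))
    );
  }
}
```

Because the cleanup scenario only starts once the load scenario has terminated (success or failure), the delete traffic never skews the measured results.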

Related

Persistent job queue?

The internet says using a database for queues is an anti-pattern, and that you should use something like RabbitMQ or Beanstalkd instead.
But I want all requests stored, so I can later look up how long they took, any failed attempts or errors or notes logged, who requested them and with what metadata, what the end result was, etc.
It looks like none of the queue libraries have this option; you can't persist the data in a way that lets you query it later.
I want what those queues do, but with a "persist to database" option. Does this not exist? How do people deal with this? Do you use a queue library and copy over all request information into your database when the request finishes?
(the language/database I'm using is anything, whatever works best for this)
If you want to log requests, and metadata about how long they took etc., then do so: log them to the database once you know the relevant results, and run your analytic queries as you would expect to.
The reason not to use the database as a temporary store is that under high traffic, searching for and locking unprocessed jobs, and then updating or deleting them when they are complete, can take a great deal of effort. That is especially true if you don't remove jobs from the active table, and so have to search through ever more completed jobs to find those that have yet to be done.
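As a rough sketch of that approach (the job_log table and its columns are made up for illustration): the worker writes one row per finished job, so nothing ever has to search or lock an "active jobs" table under load.

```java
// Sketch: log each completed job to the database for later analysis,
// after the queue library has finished processing it.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class JobLogger {

  private final Connection connection;

  public JobLogger(Connection connection) { this.connection = connection; }

  // Called once per job, after the worker has finished, so the hot path
  // of the queue itself never touches this table.
  public void logResult(String jobId, String requestedBy, Instant started,
                        Instant finished, boolean succeeded, String notes)
      throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO job_log (job_id, requested_by, started_at, finished_at, succeeded, notes) "
            + "VALUES (?, ?, ?, ?, ?, ?)")) {
      ps.setString(1, jobId);
      ps.setString(2, requestedBy);
      ps.setTimestamp(3, Timestamp.from(started));
      ps.setTimestamp(4, Timestamp.from(finished));
      ps.setBoolean(5, succeeded);
      ps.setString(6, notes);
      ps.executeUpdate();
    }
  }
}
```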
You can implement a task queue yourself, using a persistent backend (like a database) to persist the tasks in queues. The problem is that it may not scale well, and it is always better to use a proven implementation instead of reinventing the wheel. These are tough problems to solve, and it is better to use existing frameworks.
For instance, if you are implementing in Python, the typical choice is Celery with a Redis/RabbitMQ backend.

Spring: Sharing transaction between threads for integration testing

I have a class that uses a ThreadPoolTaskExecutor to spin off tasks that interact with a database. My integration tests for this class are failing, I think because they rely on test data inserted inside a transaction managed by DataSourceTransactionManager, and the spun-off threads don't see the main thread's transaction, so they retrieve nothing from the database. Is there any way to get the threads to see the inserted test data, without having to commit the transaction and delete the test data afterwards?
It's a pretty common pattern to insert test data into a test database which is cleared in the finally block at the end of the test. If you edit your question and provide additional reasons why you are against that solution, I'll edit my answer.
The setUp() method, where the test database tables are typically created, is another good place to make sure those tables are cleared of any leftover data from previous tests before the new test data gets inserted.
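For example, here is a minimal sketch of that pattern with JUnit 4 and plain JDBC; the accounts table and the DataSource wiring are assumptions:

```java
// Sketch: commit test data in setUp(), clear it again in tearDown(),
// so spawned threads can see the rows and each run starts clean.
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.junit.After;
import org.junit.Before;

public class LoaderIntegrationTest {

  private DataSource dataSource; // wired up by your test context in practice

  @Before
  public void setUp() throws Exception {
    try (Connection c = dataSource.getConnection();
         Statement s = c.createStatement()) {
      // Clear leftovers from previous runs before inserting fresh data.
      s.executeUpdate("DELETE FROM accounts");
      s.executeUpdate("INSERT INTO accounts (id, name) VALUES (1, 'test-user')");
    }
  }

  @After
  public void tearDown() throws Exception {
    try (Connection c = dataSource.getConnection();
         Statement s = c.createStatement()) {
      // Remove the committed test data so the next run starts from a known state.
      s.executeUpdate("DELETE FROM accounts");
    }
  }
}
```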
Hope this helps.

Store Application Progress to Database

I have a multi-threaded loading service that is launched daily. I would like to keep track of the percentage progress of the loader. I was thinking it would be good to have an update column on a database table that records the % progress. Is this a good idea, or will there be a large overhead (5k updates per minute)? Is there a better way to do it?
The overhead in my opinion would be much too great; a much better solution would be to just keep the progress in memory on the server and make it available by exposing a web service request that returns the current progress.
I agree with @scripni - expose the progress as a web service. However, if you need to keep a log of the actual run, or of any errors, then you can selectively store things like the start time, any pertinent events, and the end time in the database for later review. (Just try to avoid posting every single step of the process.)
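As a sketch of the in-memory approach, here is a progress counter exposed over HTTP with the JDK's built-in server; the /progress path, the port, and the idea that worker threads call increment() are assumptions for illustration:

```java
// Sketch: worker threads bump an in-memory counter; a tiny HTTP endpoint
// reports the percentage, so the database sees zero progress writes.
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicLong;

public class ProgressReporter {

  private final AtomicLong done = new AtomicLong();
  private final long total; // assumed > 0, known up front

  public ProgressReporter(long total) { this.total = total; }

  // Worker threads call this instead of writing to the database.
  public void increment() { done.incrementAndGet(); }

  public int percent() { return (int) (100 * done.get() / total); }

  // Expose GET /progress, returning e.g. "42".
  public void serve(int port) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
    server.createContext("/progress", exchange -> {
      byte[] body = String.valueOf(percent()).getBytes();
      exchange.sendResponseHeaders(200, body.length);
      exchange.getResponseBody().write(body);
      exchange.close();
    });
    server.start();
  }
}
```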

What does a database log keep track of?

I'm quite new to SQL Server and was wondering what the difference is between the SQL Server log and a custom log (in my case, using log4net)? I guess there's more choice about what to log using log4net, but what things are automatically logged by the database? For example, if a user signs up to my site, would I have to log that transaction manually, or would it be recorded in the database's log automatically? I'm currently starting a project and would like to figure out exactly what I should bother logging.
Thanks
Apples and Oranges.
Log4net and other custom 'logging' is just a way to capture events an application is reporting. 'Log' in this context refers to whatever store that infrastructure uses to persist information about those events.
The database log, on the other hand, is something completely different. In order to maintain consistency and atomicity, databases use a so-called Write-Ahead Logging (WAL) protocol. With WAL, all changes are first durably written to a journal, or log, before being applied to the data. This allows recovery to replay the log (the journal) and bring the data back into a consistent state, rolling back any uncommitted work.
Database logs have absolutely nothing to do with your application code. Any database update will be automatically logged by the engine, simply because this is how data is updated in a database. You cannot modify that, nor do you have any access to what's written in the log (strictly speaking you can look into the log, but you won't find any useful information for your application).
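As a toy illustration of the WAL idea (a real database log is vastly more involved), the journal-first discipline looks roughly like this:

```java
// Toy sketch of Write-Ahead Logging: every change is appended and flushed
// to a journal before the data files are touched, so a crash can be
// recovered by replaying the journal.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ToyWal {

  private final FileOutputStream journal;

  public ToyWal(String journalPath) throws IOException {
    this.journal = new FileOutputStream(journalPath, true); // append mode
  }

  public void apply(String change) throws IOException {
    // 1. Durably write the change record to the log first...
    journal.write((change + "\n").getBytes(StandardCharsets.UTF_8));
    journal.getFD().sync(); // force to disk before touching the data
    // 2. ...only then apply it to the data files (omitted here).
  }
}
```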
The SQL log handles transaction logging for rolling back or committing data. It is usually only dealt with by someone who knows what they are doing, restoring backups or shipping the logs to use for backups.
Log4net and other logging frameworks handle in-code logging of exceptions, warnings, or debug-level info that you would like to output for your own information. The output can be sent to a database table, a command window, a flat file, or a web service. Common logging scenarios are catching unhandled exceptions at the application level to help track down bugs, or writing out the stack trace in try/catch statements.
It keeps track of the transactions so it can roll them back, or replay them in case of a crash. Quite a bit more involved than simple logging.
The two are almost completely unrelated.
A database log is used to roll back transactions, recover from crashes, etc. All good things to ensure database consistency. It contains updates/inserts/deletes, not really anything about intent or what your app is trying to do unless it directly affects data in the database.
The application log on the other hand (with Log4Net) can be extremely useful when building and debugging your application. It is driven by you and should contain information that traces what your app is doing. This is something that can safely be turned off or reduced (by toggling the log level) when you no longer need it.
The SQL Server log file is actually used for maintaining its own stability, but it's not terribly useful to normal developers. It's not what you might think (and what I thought): a list of SQL statements that have been run. It's a proprietary format designed to help SQL Server recover from a crash or roll back transactions.
If you need to track what's going on in the system, the SQL transaction log won't be helpful, and it would be very difficult to get that information back out of it. Instead, I would suggest adding triggers on your tables that write information off to another table, or adding some code in your data layer that saves a log of what's going on. It could be as simple as wrapping the SQL command object with your own implementation, which saves SQL statements to log4net in addition to executing whatever it normally executes.
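Here is a rough sketch of that wrap-and-log idea, translated to Java/JDBC since the answer above assumes .NET and log4net; the LoggingStatement name and the "sql-audit" logger are made up for illustration:

```java
// Sketch: a thin wrapper that records every SQL string before delegating
// to the real Statement - an application-level audit trail, unrelated to
// the engine's transaction log.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Logger;

public class LoggingStatement {

  private static final Logger log = Logger.getLogger("sql-audit");
  private final Statement delegate;

  public LoggingStatement(Connection connection) throws SQLException {
    this.delegate = connection.createStatement();
  }

  public ResultSet executeQuery(String sql) throws SQLException {
    log.info("SQL: " + sql);
    return delegate.executeQuery(sql);
  }

  public int executeUpdate(String sql) throws SQLException {
    log.info("SQL: " + sql);
    return delegate.executeUpdate(sql);
  }
}
```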
It is the mechanism by which the RDBMS can ensure atomicity and consistency; see ACID.

How do I rollback a transaction that has already been committed?

I am implementing an undo button for a lengthy operation on a web app. Since the undo will come in another request, I have to commit the operation.
Is there a way to issue a hint on the transaction, like "maybe rollback", so that after the transaction is committed I could still roll it back from another process if needed?
Otherwise the Undo function will be as complex as the operation it is undoing.
Is this possible? Other ideas welcome!
Another option you might want to consider:
If it's possible for the user to 'undo' the operation, you may want to implement an 'intent' table where you can store pending operations. Once you go through your application flow, the user would need to Accept or Undo the operation, at which point you can just run the pending transaction and apply it to your database.
We have a similar system in place on our web application, where a user can submit a transaction for processing and has until 5pm on the day it's scheduled to run to cancel it. We store this in an intent table and process any transactions scheduled for that day after the daily cutoff time. In your case you would need an explicit 'Accept' or 'Undo' operation from the user after the initial 'lengthy operation', so that would change your process a little bit.
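A minimal sketch of such an intent table with JDBC; the pending_operations table and its columns are assumptions:

```java
// Sketch: store what the user asked for without touching the real tables;
// Accept runs the pending work, Undo just deletes the pending row.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IntentStore {

  private final Connection connection;

  public IntentStore(Connection connection) { this.connection = connection; }

  // Record the requested operation as pending.
  public void recordIntent(long userId, String payload) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO pending_operations (user_id, payload, status) VALUES (?, ?, 'PENDING')")) {
      ps.setLong(1, userId);
      ps.setString(2, payload);
      ps.executeUpdate();
    }
  }

  // 'Undo' before the cutoff is just removing the pending row.
  public void undo(long operationId) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
        "DELETE FROM pending_operations WHERE id = ? AND status = 'PENDING'")) {
      ps.setLong(1, operationId);
      ps.executeUpdate();
    }
  }
}
```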
Hope this helps.
The idea in this case is to log, for each operation, a counter-operation that does the opposite, in a special log; when you need to roll back, you actually run the commands that you've logged.
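A bare-bones sketch of that compensation log in Java; representing each inverse operation as a Runnable is an assumption for illustration:

```java
// Sketch: each committed forward operation records its inverse; "undo"
// replays the inverses newest-first so dependent changes stay consistent.
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensationLog {

  private final Deque<Runnable> inverses = new ArrayDeque<>();

  // Call this right after committing a forward operation.
  public void record(Runnable inverse) {
    inverses.push(inverse);
  }

  // Undo everything by running the inverses in reverse order of recording.
  public void rollback() {
    while (!inverses.isEmpty()) {
      inverses.pop().run();
    }
  }
}
```

For example, after committing an INSERT you would record the matching DELETE of the new row, and after an UPDATE you would record an UPDATE restoring the old values.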
Some databases have flashback technology that lets you ask the DB to go back to a certain date and time. But you need to understand how it works, and make sure it will only affect the data you want and not other stuff...
Oracle Flashback
I don't think there is a similar technology in SQL Server, and there is an SO answer that says there isn't, but SQL Server keeps evolving...
I'm not sure what technologies you're using here, but there are certainly better ways of going about this. You could start by storing the data in the session and committing only when the user reaches the final page. Many frameworks these days also allow long-running transactions that span multiple requests.
However, you will probably want to commit at the end of every page and simply set some sort of flag for when the process is completed. That way, if something goes wrong in the middle, the user can recover and all is not lost.
