Automatically stash/apply changes when changing feature branches - SmartGit

Essentially what I'm looking for is a way to treat each feature branch as a separate repository, as follows in this sample workflow:
1) If my active branch is Feature A, and I have "foo.cpp" modified and want to checkout Feature B, I want SmartGit to automatically stash my changes upon doing so.
2) After piddling around in feature B and then returning to feature A, I want it to stash my feature B changes, then apply (and likely drop) the feature A stash created at the tail of step 1.
Is there any way to do this automatically, or am I asking for something ridiculous?

SmartGit does not assign stashes to specific branches, but you can simply commit your modifications in A as a temporary WIP (work-in-progress) commit, then switch to B, continue working there, and finally commit your changes there as another WIP commit before switching back to A. Now you have several choices:
use Local|Undo Last Commit to "unstash" your previous work;
just continue working and committing with the Amend option onto the existing WIP commit; once your feature is ready, use Edit Commit Message in the Outgoing view;
continue working and committing to a new WIP commit, and once your feature is ready use Squash Commits in the Outgoing view to compact all your WIP commits into one tidy, final commit.
Personally, I prefer the last choice, because having several WIP commits lets me better review my progress (and see possible back-and-forth I did when switching between tasks).

Related

How to get notified when no record is inserted in a table for a while in SQL Server

Goal: I have a table which handles the status of a device. Whenever I don't receive the status from it for one hour or more, I want to get notified once.
The device inserts a "heartbeat" record in the table with a timestamp (NOTE: I have to stick with this implementation).
In order to get notified for any changes, I'm using a service with a queue (which is read by another program).
What I've tried: I made a job which runs every minute. It:
Looks at the last heartbeat in the table
If the timestamp is one hour ago or earlier:
  Looks in another table which stores whether I have already written the notification to the queue
  If not, writes to the queue
Else:
  Looks in that same helper table
  If the flag is set, resets it to no
Issues with that: I'm concerned about my technique for executing some code when no record has been inserted for a while:
I feel like there is a built-in (and better) way to solve this kind of problem, but I can't find it;
It doesn't notify me as soon as possible (in the worst case, after 1 minute). This could be solved by reducing the schedule's wait time, but I don't know whether that would hurt performance. I want the lightest solution possible;
I don't like having to use a helper table, so I would like to remove it;
I would prefer not to use jobs if possible (I'm using a VS DB project and I would like to remove the post-deployment script).
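For illustration only, here is a rough sketch of the same check done from a small external watcher script rather than a SQL Agent job (which would also remove the helper table). Everything in it is an assumption: pyodbc as the driver, an invented dbo.DeviceStatus table with a HeartbeatTime column, and a hypothetical notify() standing in for writing to the queue.
import time
import pyodbc  # assumed driver; the connection string below is a placeholder

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
already_notified = False  # replaces the helper table

def notify():
    pass  # hypothetical: write the message to the queue here

while True:
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    cursor.execute("SELECT DATEDIFF(MINUTE, MAX(HeartbeatTime), SYSUTCDATETIME()) FROM dbo.DeviceStatus")
    minutes_silent = cursor.fetchone()[0]
    conn.close()
    if minutes_silent is None or minutes_silent >= 60:
        if not already_notified:
            notify()                  # notify exactly once per outage
            already_notified = True
    else:
        already_notified = False      # heartbeat came back; re-arm the notification
    time.sleep(60)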

Design pattern for undoing after I have committed the changes

We can undo an action using the Command or Memento pattern.
If we are using Kafka, we can replay the stream in reverse order to go back to a previous state.
For example, Google Docs/Sheets etc. also have version history.
In the case of PCPartPicker it looks similar.
To be safe, I want to commit everything, but I also want to be able to go back to a previous state if needed.
I know we can disable auto-commit and use Transaction Control Language (COMMIT, ROLLBACK, SAVEPOINT). But I am talking about undoing even after I have committed the change.
How can I do that?
There isn't a truly generic answer to this question. It all depends on the structure of your database, the span of transactions across entities, whether transactions are distributed, how much time or how many transactions are allowed to pass before you can revert the change, etc.
Memento-like pattern
The Memento pattern is one possible approach; however, it needs to be adapted to the nature of relational databases, as follows:
You need a transaction log table/list that holds the entities and attributes (tables and columns) affected by the transaction, together with their primary key, the old and new values (the values before and after the transaction), and a timestamp. This is the same as in the command (memento) pattern.
Next you need a mechanism to identify the non-explicit updates that were triggered by stored procedures in the database as a consequence of the transaction. This is important, since a change in one table can trigger changes in other tables which were not explicitly captured by the command.
The rollback mechanism needs to build a list of subsequent transactions on the same entities and determine whether the transaction is eligible for rollback, or whether some subsequent transactions would need to be rolled back as well before this one can be.
If a rollback is allowed after a longer period of time, or the data is consumed in near real time, there should also be a list of transaction observers: processes that need to be informed that the transaction is no longer valid, since they have already read the new data and taken decisions based on it. An example would be a process generating a cumulative report: when the transaction is rolled back, the report is invalidated and needs to be generated again.
For short-term rollback, mainly used for distributed transactions, you can check the microservices Saga pattern and use it as a starting point for your solution.
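As a minimal sketch, assuming plain Python objects, the transaction-log record described above might look like this; in a real system it would be a table written in the same transaction as the change, and the rollback builder would first run the eligibility check described above.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, List

@dataclass
class ChangeRecord:
    transaction_id: str
    table: str           # entity (table) affected
    primary_key: Any
    column: str          # attribute affected
    old_value: Any       # value before the transaction
    new_value: Any       # value after the transaction
    changed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def compensating_statements(changes: List[ChangeRecord]) -> List[str]:
    # Newest change first; real code must also cover updates triggered by stored procedures
    # and check that no later transaction touched the same rows.
    return [
        f"UPDATE {c.table} SET {c.column} = :old_value WHERE pk = :primary_key"
        for c in reversed(changes)
    ]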
History tables
Another approach is to keep incremental updates, also known as history tables, where each update of a row becomes an insert of a new version into the history table. As in the previous case, you need to decide how far back in the history you can go when trying to roll back a committed transaction.
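A rough sketch of the history-table idea, with invented table and column names, written with SQLAlchemy only because it appears elsewhere on this page: an UPDATE becomes an INSERT of the next version.
from sqlalchemy import text

def update_price(engine, product_id, new_price):
    # Instead of updating the row in place, insert the next version into the history table.
    with engine.begin() as conn:
        current = conn.execute(
            text("SELECT COALESCE(MAX(version), 0) FROM product_history WHERE product_id = :p"),
            {"p": product_id},
        ).scalar()
        conn.execute(
            text("INSERT INTO product_history (product_id, version, price, valid_from) "
                 "VALUES (:p, :v, :price, CURRENT_TIMESTAMP)"),
            {"p": product_id, "v": current + 1, "price": new_price},
        )
    # Rolling back later means going back to version N-1 instead of deleting data in place.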
Regulatory issues
Finally, when you work with business data such as invoices, inventory, etc., you also need to check the regulations related to cancelling committed transactions. For example, in accounting systems it is not allowed to delete data; instead, a compensating row is added (e.g. removing a product from a shipment list does not delete the product, but adds a row with a negative quantity to cancel the effect of the original row and keep an audit trail of the change at the same time).

Updating database keys where one table's keys refer to another's

I have two tables in DynamoDB. One has data about homes, one has data about businesses. The homes table has a list of the closest businesses to it, with walking times to each of them. That is, the homes table has a list of IDs which refer to items in the businesses table. Since businesses are constantly opening and closing, both these tables need to be updated frequently.
The problem I'm facing is that, when either one of the tables is updated, the other table will have incorrect data until it is updated itself. To make this clearer: let's say one business closes and another one opens. I could update the businesses table first to remove the old business and add the new one, but the homes table would then still refer to the now-removed business. Similarly, if I updated the homes table first to refer to the new business, the businesses table would not yet have this new business' data. Whichever table I update first, there will always be a period of time where the two tables are not in sync.
What's the best way to deal with this problem? One way I've considered is to do all the updates to a secondary database and then swap it with my primary database, but I'm wondering if there's a better way.
Thanks!
Dynamo only offers atomic operations on the item level, not transaction level, but you can have something similar to an atomic transaction by enforcing some rules in your application.
Let's say you need to run a transaction with two operations:
Delete Business(id=123) from the table.
Update Home(id=456) to remove association with Business(id=123) from the home.businesses array.
Here's what you can do to mimic a transaction:
Generate a timestamp for locking the items
Let's say our current timestamp is 1234567890. Using a timestamp will allow you to clean up failed transactions (I'll explain later).
Lock the two items
Update both Business-123 and Home-456 and set an attribute lock=1234567890.
Do not change any other attributes yet on this update operation!
Use a ConditionExpression (check the Developer Guide and API) to verify that attribute_not_exists(lock) before updating. This way you're sure there's no other process using the same items.
Handle update lock responses
Check whether both updates, to Home and to Business, succeeded. If yes to both, you can proceed with the actual changes you need to make: delete Business-123 and update Home-456, removing the Business association.
For extra care, also use a ConditionExpression in both updates again, but now ensuring that lock == 1234567890. This way you're extra sure no other process overwrote your lock.
If both updates succeed again, you can consider the two items updated and consistent to be read by other processes. To do this, run a third update removing the lock attribute from both items.
When one of the operations fails, you may retry, say, X times. If it fails all X times, make sure the process cleans up the other lock that did succeed previously.
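A rough boto3 sketch of the steps above, assuming tables named Homes and Businesses with an id key; the attribute names, ids, and the way the home.businesses list is rewritten are illustrative only.
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
homes = dynamodb.Table("Homes")            # assumed table names
businesses = dynamodb.Table("Businesses")

def acquire_lock(table, key, ts):
    # Set lock=ts only if no other process already holds a lock on the item.
    try:
        table.update_item(
            Key=key,
            UpdateExpression="SET #lk = :ts",
            ConditionExpression="attribute_not_exists(#lk)",
            ExpressionAttributeNames={"#lk": "lock"},
            ExpressionAttributeValues={":ts": ts},
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise

lock_ts = int(time.time())
if acquire_lock(businesses, {"id": "123"}, lock_ts) and acquire_lock(homes, {"id": "456"}, lock_ts):
    # Real changes, each still guarded by the lock value we set.
    businesses.delete_item(
        Key={"id": "123"},
        ConditionExpression="#lk = :ts",
        ExpressionAttributeNames={"#lk": "lock"},
        ExpressionAttributeValues={":ts": lock_ts},
    )
    homes.update_item(
        Key={"id": "456"},
        UpdateExpression="SET businesses = :remaining REMOVE #lk",  # new list computed client-side
        ConditionExpression="#lk = :ts",
        ExpressionAttributeNames={"#lk": "lock"},
        ExpressionAttributeValues={":ts": lock_ts, ":remaining": ["789"]},
    )
    # The Business item was deleted, so only the Home item needed its lock removed (REMOVE #lk above).
else:
    pass  # retry up to X times, then clean up whichever lock did succeed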
Enforce the transaction lock throughout your code
Always use a ConditionExpression in any part of your code that may update/delete Home and Business items. This is crucial for the solution to work.
When reading Home and Business items, you'll need to do this (this may not be necessary in all reads, you'll decide if you need to ensure consistency from start to finish while working with an item read from DB):
Retrieve the item you want to read
Generate a lock timestamp
Update the item with lock=timestamp using a ConditionExpression
If the update succeeds, continue using the item normally; if not, wait one or two seconds and try again;
When you're done, update the item removing the lock
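A sketch of that read path, reusing acquire_lock and the time import from the previous sketch (the retry interval and attempt count are arbitrary):
def read_with_lock(table, key, max_tries=5):
    # Lock the item before working with it; retry briefly if another process holds the lock.
    for _ in range(max_tries):
        ts = int(time.time())
        if acquire_lock(table, key, ts):
            item = table.get_item(Key=key).get("Item")
            return item, ts
        time.sleep(2)  # someone else holds the lock; wait and try again
    raise RuntimeError("could not lock item for reading")

def release_lock(table, key, ts):
    # Remove the lock only if it is still ours.
    table.update_item(
        Key=key,
        UpdateExpression="REMOVE #lk",
        ConditionExpression="#lk = :ts",
        ExpressionAttributeNames={"#lk": "lock"},
        ExpressionAttributeValues={":ts": ts},
    )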
Regularly clean up failed transactions
Every minute or so, run a background process to look for potentially failed transactions. If your processes take at most 60 seconds to finish and there's an item with a lock value older than, say, 5 minutes (remember the lock value is the time the transaction started), it's safe to say that this transaction failed at some point and whatever process was running it didn't properly clean up the locks.
This background job ensures that no items stay locked forever.
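The cleanup job could be as simple as the following sketch (same boto3 assumptions as above; pagination of the scan is omitted):
STALE_AFTER_SECONDS = 5 * 60  # locks older than ~5 minutes are considered abandoned

def clean_stale_locks(table):
    cutoff = int(time.time()) - STALE_AFTER_SECONDS
    resp = table.scan(
        FilterExpression="attribute_exists(#lk) AND #lk < :cutoff",
        ExpressionAttributeNames={"#lk": "lock"},
        ExpressionAttributeValues={":cutoff": cutoff},
    )
    for item in resp.get("Items", []):
        table.update_item(
            Key={"id": item["id"]},
            UpdateExpression="REMOVE #lk",
            ConditionExpression="#lk < :cutoff",  # re-check so a fresh lock is not clobbered
            ExpressionAttributeNames={"#lk": "lock"},
            ExpressionAttributeValues={":cutoff": cutoff},
        )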
Beware that this implementation does not assure a truly atomic and consistent transaction in the sense traditional ACID DBs do. If this is mission-critical for you (e.g. you're dealing with financial transactions), do not attempt to implement it. Since you said you're OK with atomicity being broken on rare failure occasions, you may live with it happily. ;)
Hope this helps!

Using SQLAlchemy sessions and transactions

While learning SQLAlchemy I came across two ways of dealing with SQLAlchemy's sessions.
One was creating the session once globally while initializing my database like:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
and importing this DBSession instance in all the insert/update operations that follow.
When I do this, my DB operations have the following structure:
import transaction  # Zope transaction package

with transaction.manager:
    for each_item in huge_file_of_million_rows:
        DBSession.add(each_item)
        # More create, read, update and delete operations
I do not commit, flush, or roll back anywhere, assuming my Zope transaction manager takes care of it for me (it commits at the end of the transaction or rolls back if it fails).
The second way, and the one most frequently mentioned on the web, was:
create a DBSession once like
DBSession = sessionmaker(bind=engine)
and then create a session instance of this per transaction:
session = DBSession()
for row in huge_file_of_million_rows:
    for item in row:
        try:
            session.add(item)
            # More create, read, update and delete operations
            session.flush()
            session.commit()
        except:
            session.rollback()
session.close()
I do not understand which is BETTER (in terms of memory usage, performance, and overall health) and why.
In the first method, I accumulate all the objects in the session and then the commit happens at the end. For a bulky insert operation, does adding objects to the session store them in memory (RAM) or elsewhere? Where do they get stored, and how much memory is consumed?
Both ways tend to be very slow when I have about a million inserts and updates. Trying SQLAlchemy Core also takes the same time to execute. 100K rows of select, insert and update take about 25-30 minutes. Is there any way to reduce this?
Please point me in the right direction. Thanks in advance.
Here is a very generic answer, with the warning that I don't know much about Zope. Just some simple database heuristics. Hope it helps.
How to use SQLAlchemy sessions:
First, take a look at their own explanation here.
As they say:
The calls to instantiate Session would then be placed at the point in the application where database conversations begin.
I'm not sure I understand what you mean by method 1; just in case, a warning: you should not have just one session for the whole application. You instantiate a Session when a database conversation begins, but you surely have several points in the application where different conversations begin. (I'm not sure from your text whether you have different users.)
One commit at the end of a huge number of operations is not a good idea
Indeed it will consume memory, probably in the Session object of your Python program, and surely in the database transaction. How much space? That's difficult to say with the information you provide; it will depend on the queries, on the database...
You could easily estimate it with a profiler. Take into account that if you run out of resources everything will go slower (or halt).
One commit per record is also not a good idea when processing a bulk file
It means you are asking the database to persist changes every time, for every row. That is certainly too much. Try an intermediate number: commit every few hundred rows. But then it gets more complicated; one commit at the end of the file assures you that the file is either processed or not, while intermediate commits force you to take into account, when something fails, that your file is half processed - you will have to reposition.
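A rough sketch of that middle ground, reusing the names from the second approach in the question; the batch size is an arbitrary starting point to tune.
BATCH_SIZE = 500                       # a few hundred to a few thousand is typical

session = DBSession()                  # DBSession = sessionmaker(bind=engine)
try:
    count = 0
    for row in huge_file_of_million_rows:
        for item in row:
            session.add(item)
            # More create, read, update and delete operations
            count += 1
            if count % BATCH_SIZE == 0:
                session.commit()       # persist this batch; frees what the session was holding
    session.commit()                   # commit the final partial batch
except Exception:
    session.rollback()                 # only the current, uncommitted batch is lost
    raise
finally:
    session.close()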
As for the times you mention, it is very difficult to judge without knowing your database and machine. Anyway, the order of magnitude of your numbers, a select+insert+update every 15 ms, probably plus a commit, sounds pretty high but is more or less in the expected range (again, it depends on the queries, the database and the machine)... If you have to insert so many records so frequently, you could consider other database solutions; it will depend on your scenario, probably on dialect-specific features, and may not be provided by an ORM like SQLAlchemy.

Solr - old transaction logs (tlogs) never deleted

First of all, I searched for a long time but found no solution, so now I'll try with my specific problem, trying to keep it short:
solr-spec 4.0.0.2012.10.06.03.04.33
one master, three slaves
around 70,000 documents in the index
the master is triggered to do a full import / generate a completely new index about once a day
command line options for trigger are:
?command=full-import&verbose=false&clean=false&commit=true&optimize=true
the slaves poll the master for a new index; if the generation (GEN) increases (full import + hard commit as mentioned), they pull the new index
no autoCommit / autoSoftCommit set up
The problem is that on each hard commit the index (~670 MB) gets written to disk, once a day, but the old transaction logs never get deleted.
As far as I've read, Solr keeps enough tlogs to be able to restore the last 100 document changes, am I right?
In my setup I am sure at least 100 documents (or data sets in the source database) change each day, so I don't understand why Solr never deletes old tlogs.
I would be glad if someone could point me in the right direction; currently I have no clue what to try next. I also did not find a setup like this one described as having problems like this.
Thx ;)
First, you'll probably want to update your Solr version, as there have been a few transaction log reference leaks fixed since 4.0.
A hard commit should usually remove old transaction logs, since the documents are written to disk in the index anyway (IIRC), which may indicate that you're being bitten by some old references hanging around.
Another option would be to turn off the transaction log completely, since you generate a completely new index each run anyway and distribute that one.
