Bazaar Version Control deletes files when reverting to an earlier revision

I'm a single user looking into the Bazaar Explorer GUI. Consider this scenario:
Create repository.
Create FileOne and add.
Commit as rev 1.
Make changes to FileOne.
Commit as rev 2.
Create and add FileTwo.
Commit as rev 3.
Now, let's say that FileOne has problems and I want to revert to rev 1. If I do this, FileTwo will be deleted. If I want to keep FileTwo, I guess I can copy it somewhere outside of version control, revert to rev 1, and then add FileTwo back to version control. This seems clumsy to me. Is there a better way of doing this? Thanks.

You can do one of the following:
First, selectively revert FileOne, e.g.:
bzr revert -r 1 FileOne
bzr commit
This will restore FileOne to the way it was in revision 1.
Second, use reverse cherrypicking:
bzr merge -r 2..1
bzr commit
This applies the inverse of the changes made going from revision 1 to revision 2.
Either option will create a new commit, but with the changes made in revision 2 undone.

Related

Snowflake: COPY INTO after TRUNCATE loads old data?

I'm getting some unexpected behavior from Snowflake and I'm hoping someone could explain what's happening and the best way to handle it. Basically, I'm trying to do a nightly refresh of an entire dataset, but truncating the table and staging/copying data into it results in old data being loaded.
I'm using the Python connector with AUTOCOMMIT=False. Transactions are committed manually after every step.
Step 1: Data is loaded into an empty table.
put file://test.csv @test_db.test_schema.%test_table overwrite=true
copy into test_db.test_schema.test_table file_format=(format_name=my_format)
Step 2: Data is truncated
TRUNCATE test_db.test_schema.test_table
Step 3: New data is loaded into the now empty table (same filename, but overwrite set to True).
put file://test.csv @test_db.test_schema.%test_table overwrite=true
copy into test_db.test_schema.test_table file_format=(format_name=my_format)
At this point, if I query the data in the table, I see that it is the data from Step 1 and not Step 3. If in Step 2 I DROP and recreate the table, instead of using TRUNCATE, I see the data from Step 3 as expected. I'm trying to understand what is happening. Is Snowflake using a cached version of the data, even though I'm using PUT with OVERWRITE=TRUE? What's the best way to achieve the behavior that I want? Thank you!
I'm using the Python connector with AUTOCOMMIT=False. Transactions are committed manually after every step.
Are you certain you are manually committing each step, with the connection.commit() API call returning successfully?
Running your statements in the following manner reproduces your issue, understandably so because the TRUNCATE and COPY INTO TABLE statements are not auto-committed in this mode:
<BEGIN_SCRIPT 1>
[Step 1]
COMMIT
<END_SCRIPT 1>
<BEGIN_SCRIPT 2>
[Step 2]
[Step 3]
<END_SCRIPT 2>
SELECT * FROM test_table; -- Prints rows from file in Step 1
However, modifying it to always commit changes the behaviour to the expected one:
<BEGIN_SCRIPT 1>
[Step 1]
COMMIT
<END_SCRIPT 1>
<BEGIN_SCRIPT 2>
[Step 2]
COMMIT
[Step 3]
COMMIT
<END_SCRIPT 2>
SELECT * FROM test_table; -- Prints rows from file in Step 3
If in Step 2 I DROP and recreate the table, instead of using TRUNCATE, I see the data from Step 3
This is expected because CREATE is a DDL statement, and DDL statements are always auto-committed in Snowflake (regardless of the autocommit override). Doing a CREATE in place of TRUNCATE causes an implicit commit at that step, which further suggests that your tests aren't properly committing at Step 2 and Step 3 somehow.
Is Snowflake using a cached version of the data, even though I'm using PUT with OVERWRITE=TRUE?
No, if the PUT succeeds, it has performed an overwrite as instructed (assuming the filenames remain the same). The older version of the staged data no longer exists after it has been overwritten.
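For reference, here is a minimal sketch of Steps 2 and 3 with the Python connector and an explicit commit after each step; the connection parameters are placeholders, while the table and file format names are taken from your example:
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials; substitute your own account details.
conn = snowflake.connector.connect(
    user='MY_USER', password='MY_PASSWORD', account='MY_ACCOUNT',
    database='TEST_DB', schema='TEST_SCHEMA', autocommit=False)
cur = conn.cursor()

# Step 2: truncate, then commit so the deletion is actually persisted.
cur.execute("TRUNCATE TABLE test_db.test_schema.test_table")
conn.commit()

# Step 3: re-stage the file and load it, committing once both statements succeed.
cur.execute("PUT file://test.csv @test_db.test_schema.%test_table OVERWRITE=TRUE")
cur.execute("COPY INTO test_db.test_schema.test_table "
            "FILE_FORMAT=(FORMAT_NAME=my_format)")
conn.commit()

cur.execute("SELECT COUNT(*) FROM test_db.test_schema.test_table")
print(cur.fetchone()[0])  # should reflect the rows loaded in Step 3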
Can you check whether the steps below fit your requirement?
Create a staging table.
Truncate the staging table.
Load the nightly refresh of the entire data set into the staging table.
Use a MERGE statement to copy the data from the staging table into the target table (in order to merge two tables you need primary key(s)); see the sketch below.
Make sure the staging table is truncated successfully before proceeding to the next step.
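For illustration, a rough sketch of that flow using the Python connector; the staging table name test_table_stg, the key column id, and the data columns col_a and col_b are all hypothetical placeholders you would replace with your own schema:
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details, as in the earlier sketch.
conn = snowflake.connector.connect(
    user='MY_USER', password='MY_PASSWORD', account='MY_ACCOUNT',
    database='TEST_DB', schema='TEST_SCHEMA', autocommit=False)
cur = conn.cursor()

# Steps 2-3: truncate the staging table (assumed to already exist) and reload it from the nightly file.
cur.execute("TRUNCATE TABLE test_db.test_schema.test_table_stg")
cur.execute("PUT file://test.csv @test_db.test_schema.%test_table_stg OVERWRITE=TRUE")
cur.execute("COPY INTO test_db.test_schema.test_table_stg "
            "FILE_FORMAT=(FORMAT_NAME=my_format)")

# Step 4: merge the staging table into the target on the primary key.
cur.execute("""
    MERGE INTO test_db.test_schema.test_table tgt
    USING test_db.test_schema.test_table_stg src
      ON tgt.id = src.id
    WHEN MATCHED THEN UPDATE SET col_a = src.col_a, col_b = src.col_b
    WHEN NOT MATCHED THEN INSERT (id, col_a, col_b)
                          VALUES (src.id, src.col_a, src.col_b)
""")
conn.commit()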
Hope this helps.

Difference between SmartGit "revert" and "revert & commit"

I have edits and a commit I want to undo.
SmartGit offers "revert" and it also offers "revert & commit". What is the difference?
Does either of these modify the source code, or are they strictly changes within Git itself?
Both Revert and Revert & Commit will modify your source code in the working tree. Revert & Commit will, in addition, also immediately commit these modifications; with Revert you have to commit manually yourself. The advantage of Revert is that you can tweak the resulting changes before committing, if necessary. Also, Revert & Commit may be unable to actually perform the commit due to conflicts.

Neo4j: How do you find the first (shallowest) match for all relationship matches?

I have a data set that duplicates the commit history inside a git repository. Each Commit node has one or more parents, which are also Commit nodes. Commits have a commit_id property and have references to the files that changed in that commit. In other words:
ChangedFile<-[:CHANGED_IN]-Commit
Commit-[:CONTAINS]->ChangedFile
Commit-[:CHILD_OF]->Commit
I'm now trying to write a Cypher query that returns commit/file pairs where each commit contains the most recent change to the file. Since the graph has been designed to mimic git history with parent/child relationships, the query should support choosing a commit to start at, i.e. the HEAD.
Here's what I've got so far:
MATCH
(commit:Commit {commit_id: '460665895c91b2f9018e361b393d7e00dc86b418'}),
(file:ChangedFile)<-[:CHANGED_IN]-(commit)-[:CHILD_OF*]->(parent:Commit)
RETURN
file.path, parent.commit_id
Unfortunately this query returns all the commits that match at any number of levels deep within the [:CHILD_OF*] relationship. I want it to instead stop at the first match for each file. As things stand now, I end up seeing a bunch of duplicate file paths in the result set.
How do I tell Neo4j/Cypher to stop at the first match per file, regardless of depth? I've tried adding UNIQUE and a bunch of other things, but I can't seem to find something that works. Thanks in advance!
Maybe I'm misunderstanding your data model and what you are after, but why are you looking for variable length paths from the commit to its parent? Aren't you just looking for the parent?
MATCH
(commit:Commit {commit_id: '460665895c91b2f9018e361b393d7e00dc86b418'}),
(file:ChangedFile)<-[:CHANGED_IN]-(commit)-[:CHILD_OF]->(parent:Commit)
RETURN
file.path, parent.commit_id
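If it helps, that simplified query can be run from Python with the official neo4j driver along these lines; the bolt URI and credentials are placeholders:
from neo4j import GraphDatabase  # pip install neo4j

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH
  (commit:Commit {commit_id: $commit_id}),
  (file:ChangedFile)<-[:CHANGED_IN]-(commit)-[:CHILD_OF]->(parent:Commit)
RETURN file.path AS path, parent.commit_id AS parent_id
"""

with driver.session() as session:
    result = session.run(query, commit_id="460665895c91b2f9018e361b393d7e00dc86b418")
    for record in result:
        print(record["path"], record["parent_id"])

driver.close()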

TFS - Merge relationship - how to exclude?

We have a case here where a developer created a wrong branch. The branch should have been $\projectA\branch01\pg5Dev, branched from $\projectA\main\pg5Dev\, but he created $\projectA\branch01\ from $\projectA\main\pg5Dev.
We deleted the folder and created the branch again, but the merge relationship in the merge wizard remains.
We need to know the database structure of the merge relationships in order to remove $\projectA\branch01\, because every time we do a merge, the wrong branch appears in the combobox of the merge wizard.
Please help us identify the tables in the database that hold this wrong record.
If the incorrect branch isn't needed, then I would recommend destroying it. Once it is destroyed, it will no longer show up in the combobox. You can destroy it by running "tf destroy <itemspec>". Note that a destroy is non-recoverable and it will delete all of the history for that branch.

How to run a create index script that depends on a stored procedure in the correct sequence using RoundhousE

A question on RoundhousE. I have a script that calls a stored procedure to figure out how much space is required to create an index (we are using SQL Express, which has a maximum database size limit). Depending on how much space is left, it deletes rows from a whole bunch of tables and then creates the index with the usual checks (if not exists in sysindexes ... create index ...). The stored procedure will be used in other index creation scripts in the future, so unless there is no other option I would prefer to keep it as a stored procedure and not make it part of the create index script (inline). The problem is that RoundhousE runs my index creation script in the up folder first and only then processes the sprocs folder or runFirstAfterUp folder, so it cannot find the stored procedure since it has not yet been deployed to the database. Please advise if there is any solution to this sequencing problem. Thanks.
The newest RH has an indexes folder that is run after the sprocs folder: https://github.com/chucknorris/roundhouse/wiki/ConfigurationOptions
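In other words, moving the index creation script out of the up folder and into the indexes folder should make it run after the stored procedure has been deployed from sprocs. A hypothetical layout (all script names below are placeholders):
scripts\                              (whatever folder you point RoundhousE at)
  up\
    0001_SomeSchemaChange.sql         (regular change scripts)
  sprocs\
    usp_CheckIndexSpace.sql           (the stored procedure the index script calls)
  indexes\
    0002_CreateMyIndex.sql            (the index script, now run after sprocs)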
Let me know if this solves your issue. Thanks!
UPDATE for clarity: the newest version doesn't necessarily reflect what is released. The version you use needs to be greater than 324 (http://code.google.com/p/roundhouse/source/detail?r=324).
You can install RH in many ways: https://github.com/chucknorris/roundhouse/wiki/Getroundhouse
