Is it possible to create a copy of only those files that were not committed to SVN?

Sometimes I need to revert many files - around 20-50 - but I need to keep the files with local changes in case I want to use something from them in the future.
The project is big - more than 10,000 files.
Is it possible to create a copy of only the files that were not committed?
Manually finding the changes and copying them takes nearly 2 hours - the project tree has many nested folders.

You can create a diff with svn diff and then reapply the diff later with svn patch.
However, this is not really how you should work with SVN. It is better to create a branch with your changes; you can later merge that branch and share the content with your peers.
Note that creating a branch is relatively cheap in SVN. On the server the files are linked to the original until actually changed. Only your changed files will take space on the server.
Note:
svn diff only saves the changed lines of your files, not the complete files. But that is enough if you need to reapply the patch.
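For example, a minimal sketch of that diff-and-reapply workflow (the patch file name changes.patch is just an illustration):
# save all local modifications into one patch file
svn diff > changes.patch
# discard the local changes in the working copy
svn revert -R .
# ...later, reapply the saved changes
svn patch changes.patch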

If you really want copies of the files (rather than using svn diff or a branch), one approach (a version of which we use for server configuration file backups) is to check which files are modified. The notes below assume you are at the top level of your repo.
For instance, if you run svn status you might get output like this:
? plans/software/intro_jan12.log
? plans/software/intro_jan12.dvi
? plans/software/data.txt
? plans/software/intro_jan12.nav
M plans/software/intro_jan12.pdf
M plans/software/jan12.tex
? plans/software/jan12/flowRoot9298.png
? plans/software/jan12/viewE_comments.pdf
? plans/software/jan12/team.ps
? plans/software/jan12/team.png
? plans/it/plan.log
(The ? shows unknown files, the M shows modified files.)
You can then easily extract modified files and do stuff with them by doing something like svn status | egrep '^M'.
Turning that into a short shell script that copies modified files elsewhere is pretty easy:
# step 1: write the paths of locally modified files (status 'M') to a recipe file
svn status | egrep '^M' | awk '{ print $2 }' > recipe_file
# step 2: copy exactly those files, preserving their relative paths, to the destination
rsync -a --files-from=recipe_file <repo> <dest>
Naturally <dest> can be on a remote machine.
Presumably, once you have audited the copied files at <dest>, you can then do svn revert -R.

Related

How to get a list of files ignored or skipped by cloc

If the beginning of my cloc --vcs git output is something like the following:
1826 text files.
1780 unique files.
384 files ignored.
Question 1: Is there a way to get a list of all the files ignored by the cloc command?
Also, if I instead run cloc "repo_name", it shows a completely different number of files.
2106 text files.
1981 unique files.
346 files ignored.
Question 2: How can I get a list of which files are being skipped when running the --vcs command?
For #1, see the --ignored switch in the documentation (either at https://github.com/AlDanial/cloc or via cloc --help):
--ignored=<file> Save names of ignored files and the reason they
were ignored to <file>.
For #2, --vcs git uses git ls-files to get the list of files to count. Without that switch, cloc does a recursive search in the given directory. If only a subset of the files in "repo_name" is under git control, or if you have entries in .gitignore, the two methods of collecting file names will differ. This is also explained in the documentation.
git ls-files --other
will show files that aren't under git control.
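Putting the two together, a small sketch (the output file names are placeholders):
# count only git-tracked files and record which files were skipped, and why
cloc --vcs git --ignored=ignored_vcs.txt
# plain recursive directory scan, for comparison
cloc --ignored=ignored_dir.txt repo_name
# list files present on disk but not under git control
git ls-files --other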

Get files and directories affected by commit

I want to get a list of files and directories affected by a specific commit. I have no problem getting the commit itself, but I don't know how to get the affected files and directories.
Just to make it clear I need something like this:
file x - deleted
file y - added
file z - modified
Git is snapshot-based; each commit includes a full list of files and their state. Any notion of "affected" files needs another commit to compare against. This is commonly done against the commit's parents, which seems to be what you're asking about. You can figure out which files differ between two commits (or more exactly, their trees) by using the git_diff family of functions.
You can find an example of doing so in the examples listing for libgit2. There is also a more general annotated diff example. The second link also shows how to list individual files as well as their contents, if you need that. Check the reference for a full listing of the available functions for working with diffs.
Note that this won't give you affected directories by itself, as Git does not track directories, only files.
You're looking for git diff.
The same function exists in libgit2, and the documentation for it is here.
If you're analyzing older commits, "git diff [commit1] [commitAfterCommit1]" will give you a list of the changes the second commit made relative to the first. You could prune this output to get just the changed file names.
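On the command line, a quick way to get output close to the desired "file - status" list (commit1 and commit2 are placeholders):
# one status letter per path: A = added, M = modified, D = deleted
git diff --name-status commit1 commit2
# or, for a single commit compared against its parent
git show --name-status commit1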

Delete all files except

I have a folder with a few files in it; I like to keep my folder clean of any stray files that can end up in it. Such stray files may include automatically generated backup files or log files, but could be as simple as someone accidentally saving to the wrong folder (my folder).
Rather than having to pick through all this all the time, I would like to know if I can create a batch file that only keeps a number of specified files (by name and location) but deletes anything not on the "list".
[edit] Sorry, when I first saw the question I read bash instead of batch. I am not deleting this not-so-useful answer since, as was pointed out in the comments, it could be done with Cygwin.
You can list the files, exclude the ones you want to keep with grep, and then pass the rest to rm.
If all the files are in one directory:
ls | grep -v -f ~/.list_of_files_to_exclude | xargs rm
or in a directory tree
find . | grep -v -f ~/.list_of_files_to_exclude | xargs rm
where ~/.list_of_files_to_exclude is a file with the list of patterns to exclude (one per line).
Before testing it, make a backup copy and substitute rm with echo to see whether the output is really what you want.
Whitelists for file survival are an incredibly dangerous concept. I would strongly suggest rethinking that.
If you must do it, might I suggest that you actually implement it thus:
Move ALL files to a backup area (one created per run, such as a directory named after the current date and time).
Use your whitelist to copy back the files that you wanted to keep, such as with copy c:\backups\2011_04_07_11_52_04\*.cpp c:\original_dir.
That way, you keep all the non-whitelisted files in case you screw up (and you will at some point, trust me), and you don't have to worry about negative logic in your batch file (remove all files that aren't of all these types), instead using the simpler option (move back every file that is of each type).
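The original answer describes a Windows batch file; here is the same move-then-restore idea as a rough shell sketch (the directory names and the *.cpp whitelist pattern are illustrative assumptions):
# move everything into a per-run backup directory named after the current date and time
backup_dir=backups/$(date +%Y_%m_%d_%H_%M_%S)
mkdir -p "$backup_dir"
mv original_dir/* "$backup_dir"/
# copy back only the whitelisted files (example pattern: *.cpp)
cp "$backup_dir"/*.cpp original_dir/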

Clearmake rules for handling a stamp file and siblings from a large monolithic command?

Given a build process like this:
Run a VerySlowProcess that produces one output file for each given input file.
Run a SecondProcess on each output file from VerySlowProcess
VerySlowProcess is slow to start, but can handle additional input files without much extra delay, so it is invoked with several input files. VerySlowProcess may access additional files referenced from the input files, but we cannot match those file accesses to specific input files, and therefore all output files derived from VerySlowProcess will get the same Configuration Record by clearmake.
Since VerySlowProcess is invoked with several input files (including input files that have not changed), many of the output files are overwritten again with identical content. In those cases it would be unnecessary to execute SecondProcess on them, and therefore output is written to a temporary file that is only copied to the real file if the content has actually changed.
Example Makefile:
all: a.3 b.3

2.stamp:
	@(echo VerySlowProcess simulated by two cp commands)
	@(cp a.1 a.2_tmp)
	@(cp b.1 b.2_tmp)
	@(diff -q a.2_tmp a.2 || (echo created new a.2; cp a.2_tmp a.2))
	@(diff -q b.2_tmp b.2 || (echo created new b.2; cp b.2_tmp b.2))
	@(touch $@)

%.3: %.2 2.stamp
	@(echo Simulating SecondProcess creating $@)
	@(cp $< $@)
If only a.1 is changed, only a.2 is rewritten, but SecondProcess is still executed for b as well:
> clearmake all
VerySlowProcess simulated by two cp commands
Files a.2_tmp and a.2 differ
created new a.2
Simulating SecondProcess creating a.3
Simulating SecondProcess creating b.3
As a workaround we can remove '2.stamp' from the '%.3' dependencies; then it works to execute like this:
> clearmake 2.stamp && clearmake all
VerySlowProcess simulated by two cp commands
Files a.2_tmp and a.2 differ
created new a.2
Simulating SecondProcess creating a.3
Is there a better way to handle our problem with VerySlowProcess?
Your workaround seems valid.
The only other use of clearmake for supporting "incremental update" is presented here, but I am not sure if it applies in your case.
Incremental updating means that a compound object, such as a library is partially updated by the rebuild of one or more of its components, as opposed to being generated by the build of just one target.
The use of the .INCREMENTAL_TARGET is of importance here.
This special target tells clearmake to merge the entries of the target's previous configuration record with those of the latest build.
This way the build history of the object is not lost, because the object's configuration record is not completely overwritten every time the object gets modified.
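As a minimal sketch of how such a special target is declared (using the 2.stamp target from the question; whether .INCREMENTAL_TARGET actually helps with this stamp-file setup is the open question above, so treat this as an assumption to verify against the clearmake documentation):
# ask clearmake to merge 2.stamp's previous configuration record with the latest build's record
.INCREMENTAL_TARGET: 2.stamp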
Here's an alternative scenario with a similar problem... perhaps... though your description does not quite match my scenario.
I have a very slow process that may change certain files but will regenerate all the files.
I want to avoid the slow process and also want to avoid updating the files that do not change.
Check whether regeneration (the slow process) is necessary - this logic needed to be separated out of the makefile into a shell script, since there are issues with clearmake when targets are updated that .INCREMENTAL_TARGET could not resolve.
Logic overview (a sketch of the wrapper script follows these steps):
If the md5sums.txt file is empty or the md5sums do not match,
then the long process is invoked.
To check md5sums:
md5sum -c md5sums.txt
To build slow target:
clearmake {slowTarget}
this will generate to a temp dir and afterwards update the changed elements
To regenerate md5sums:
checkout md5sums.txt
cleartool catcr -ele -s {slowTarget} | sed '1,3d;s/\\/\//g;s/##.*//;s/^.//;' | xargs -i md5sum {} > md5sums.txt
checkin md5sums.txt
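A rough sketch of the wrapper script described above (the slowTarget name and file layout follow the question; treat the details as assumptions):
#!/bin/sh
# rebuild only when the checksums recorded in md5sums.txt are missing or stale
if [ ! -s md5sums.txt ] || ! md5sum -c md5sums.txt >/dev/null 2>&1; then
    # run the slow regeneration through clearmake
    clearmake slowTarget
    # refresh the checksum list from the config record of the slow target
    cleartool checkout -nc md5sums.txt
    cleartool catcr -ele -s slowTarget | sed '1,3d;s/\\/\//g;s/##.*//;s/^.//;' | xargs -i md5sum {} > md5sums.txt
    cleartool checkin -nc md5sums.txt
fi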

Script to prepare scenarios to test findmerge command

I'm testing a clearcase merge script and I'd like to be able to have another script that could produce these 2 test scenarios every time it runs:
Modify 3 files for a trivial merge (100% automatic, no diff needed)
Modify 3 files for a conflicting merge, user resolution required
What I'd like to know are the steps/clearcase commands needed to prepare these files. All must be done through command-line (using cleartool commands). I already have a dynamic view and some test files I can use. Probably I'll need to create a destination test branch too.
The merge is done using the ct findmerge command like this:
`cleartool findmerge filepath -fver version -merge -log NUL -c comment`
I need to validate the output in each of the cases, to include them in a report and also ensure that no user interaction is required.
You need:
to have two branches where you make your parallel evolutions to your files
to use simply cleartool checkout -nc myFile ; echo new modif >> myFile ; cleartool checkin -nc myFile to add an evolution with a trivial merge in one branch (leave the same file untouched in the other branch)
to use the same process in both branches, with a different echo each time, in order to add a different new line in both versions of myFile: that will result in a non-trivial merge.
Don't forget that you can also have trivial/non-trivial merges at the directory level (when files are added/removed): a non-trivial one would be the case of an evil twin.
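As a rough command sketch (branch names, file names, and the two-view setup are assumptions; run each loop in a view selecting the relevant branch):
# trivial merge scenario: modify file1..file3 in the source-branch view only
for f in file1.txt file2.txt file3.txt; do
    cleartool checkout -nc "$f"
    echo "source-branch change" >> "$f"
    cleartool checkin -nc "$f"
done
# conflicting merge scenario: modify file4..file6 in BOTH views, appending a
# different line on each branch, so the later findmerge needs user resolution
for f in file4.txt file5.txt file6.txt; do
    cleartool checkout -nc "$f"
    echo "change specific to this branch" >> "$f"
    cleartool checkin -nc "$f"
done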
