How to post review for multiple clearcase files in review board - clearcase

Old File: /vobs/code1/dir1/file1.c@@/main/branch1/4
New File: /vobs/code1/dir1/file1.c@@/main/branch1/mybranch/1
$ diff -q /vobs/code1/dir1/file1.c@@/main/branch1/4 /vobs/code1/dir1/file1.c@@/main/branch1/mybranch/1
Files /vobs/code1/dir1/file1.c@@/main/branch1/4 and /vobs/code1/dir1/file1.c@@/main/branch1/mybranch/1 differ
$ post-review --server http://reviewserver.oursite.com --revision-range='/vobs/code1/dir1/file1.c@@/main/branch1/4:/vobs/code1/dir1/file1.c@@/main/branch1/mybranch/1'
There don't seem to be any diffs!
$
Why am I getting the message above when the files clearly differ?

Generate unified diffs of all the files using the -U option of GNU diff:
diff -U 100000 file1.c@@/main/4 file1.c@@/main/10 > uni_diffs
diff -U 100000 file2.c@@/main/br1/3 file2.c@@/main/branch2/4 >> uni_diffs
diff -U 100000 file3.c@@/main/abc/4 file3.c@@/main/30 >> uni_diffs
....
Note: a context of 100000 lines is passed so that the complete file can be viewed on Review Board (this assumes every file is shorter than 100000 lines).
Post the resulting unified diff file to Review Board:
post-review --diff-filename=uni_diffs ....
Deepak

Many RBTools versions have a bug in the ClearCaseClient class, in the diff_between_revisions function. The problematic part of postreview.py looks like this:
revision_range = revision_range.split(';')
The code splits on a semicolon, while the documented delimiter for --revision-range is a colon. There are two ways to handle this while staying with the current versions of Review Board and RBTools:
1) Change the semicolon to a colon in the postreview.py code.
2) Use a semicolon as the delimiter in the command-line argument.
Choose whichever solution you prefer. ;-)
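For example, with workaround 2 the command from the question would be invoked with a semicolon instead of a colon between the two versions (server URL and paths as in the question):

```shell
post-review --server http://reviewserver.oursite.com \
  --revision-range='/vobs/code1/dir1/file1.c@@/main/branch1/4;/vobs/code1/dir1/file1.c@@/main/branch1/mybranch/1'
```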

Related

How to get package changes before update in zypper

Does a counterpart to the apt-listchanges functionality from Debian/Ubuntu exist for zypper?
At the moment I have to do the following manually for each updated package: 1) install it with zypper, 2) check the changes with rpm -q --changelog PACKAGE_NAME. This is far from the convenient way apt-listchanges does it. Most importantly for me: how do I see the changes before the installation (with the possibility to abort)?
Not with zypper, but if you can download both RPMs (the old and the new version), you can use pkgdiff to check the differences.
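Assuming the two package files are already downloaded (the filenames below are hypothetical), the comparison is a single command:

```shell
pkgdiff ffmpeg-3.4.1.rpm ffmpeg-3.4.2.rpm
```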
I couldn't find a way to see changes made to an individual package without downloading it. OpenSUSE collects packages in "patches" since bugs often need changes to several packages. To see what's in a patch:
Get the name/ID of the available patches with zypper list-patches
Get the info about a patch set using zypper info -t patch $ID where $ID is the ID from the output of the previous command.
If you want to look at a certain package, you can use zypper download to download it without installing. After that, you can use rpm -q --changelog -p $PATH to see the changelog of the downloaded file at $PATH.
(I don't know what apt-listchanges outputs)
The main problem is to get output that is easily parsable from zypper. This isn't perfect, but it may get you on the way:
First get the plain names of the patches from zypper output, omitting header and trailer lines:
zypper -t lp | awk -F'|' '/^---/ { OK=1; next } OK && NF == 7 { gsub(" ", "", $2); print $2 }'
For example you could get:
openSUSE-2018-172
openSUSE-2018-175
openSUSE-2018-176
openSUSE-2018-178
Then feed that output into zypper again, like this:
zypper patch-info $(zypper -t lp | awk -F'|' '/^---/ { OK=1; next } OK && NF == 7 { gsub(" ", "", $2); print $2 }')
Output would include information like this (truncated for brevity):
Summary : Security update for ffmpeg
Description :
This update for ffmpeg fixes the following issues:
Updated ffmpeg to new bugfix release 3.4.2
* Fix integer overflows, multiplication overflows, undefined
shifts, and verify buffer lengths.

Fastest way to create 1000 folders one inside another and put a file in the last folder

I've written a program that searches for a specific file: the user enters a starting path and a filename, and the program prints the file's details if it exists, or "not found" otherwise.
The code is based on recursion. I want to test it with a large folder hierarchy, say 1000 folders, one inside the other, with a file called david.txt inside the 1000th folder.
How can I do that without spending the next 3 hours creating 1000 folders by hand?
The code is written in C, under Ubuntu.
Thanks
Type the following in your shell:
mkdir -p folder$( seq -s "/folder" 1000 )
Then you can enter the innermost folder:
cd folder$( seq -s "/folder" 1000 )
and create a file:
touch david.txt
and come back to your previous dir:
cd -
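As a quick sanity check of the seq trick (using a depth of 10 here so the output stays readable), the generated chain can be verified with find:

```shell
# build a 10-deep chain with the same seq trick and drop the test file at the bottom
mkdir -p folder$( seq -s "/folder" 10 )
touch folder$( seq -s "/folder" 10 )/david.txt
# print the full nested path of the file to confirm the depth
find folder1 -name david.txt
```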
As some comments described, I would use the shell for such purposes:
#!/bin/sh
for i in $(seq 1000)
do
mkdir tst
cd tst
done
touch david.txt
On a related topic, let me suggest this article, which shows how shell scripting can sometimes solve your problem in much less development time, especially for ad-hoc problems like this one.
Simple bash loop:
$ pushd .
$ for i in {1..1000}; do
mkdir d$i;
cd d$i;
done
$ touch david.txt
$ popd
Use (almost) the same code to create the folders and files. Once that is working, the searching/reporting is almost done as well. It's sort of self-testing. :)

Moving things in terminal based on their name

Edit: I think this has been answered successfully, but I can't check 'til later. I've reformatted it as suggested though.
The question: I have a series of files, each with a name of the form XXXXNAME, where XXXX is some number. I want to move them all to separate folders called XXXX and have them called NAME. I can do this manually, but I was hoping that by naming them XXXXNAME there'd be some way I could tell Terminal (I think that's the right name, but not really sure) to move them there. Something like
mv *NAME */NAME
but where it takes whatever * was in the first case and regurgitates it to the path.
This is on some form of Linux, with a bash shell.
In the real life case, the files are 0000GNUmakefile, with sequential numbering. I'm having to make lots of similar-but-slightly-altered versions of a program to compile and run on a cluster as part of my research. It would probably have been quicker to write a program to edit all the files and put in the right place in the first place, but I didn't.
This is probably extremely simple, and I should be able to find an answer myself, if I knew the right words. Thing is, I have no formal training in programming, so I don't know what to call things to search for them. So hopefully this will result in me getting an answer, and maybe knowing how to find out the answer for similar things myself next time. With the basic programming I've picked up, I'm sure I could write a program to do this for me, but I'm hoping there's a simple way to do it just using functionality already in Terminal. I probably shouldn't be allowed to play with these things.
Thanks for any help! I can actually program in C and Python a fair amount, but that's through trial and error largely, and I still don't know what I can do and can't do in Terminal.
So many ways to achieve this.
I find that the old standbys sed and awk are often the most powerful.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
If you're satisfied that the commands look right, pipe the command line through a shell:
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p' | sh
I put NAME in brackets and used \2 so that if it varies more than your example indicates, you can come up with a regular expression to handle your filenames better.
To do the same thing in gawk (GNU awk, the variant found in most GNU/Linux distros):
ls | gawk '/^[0-9]{4}NAME$/ {printf("mv -iv %s %s/%s\n", $0, substr($0,1,4), substr($0,5))}'
As with the first sample, this produces commands which, if they make sense to you, can be piped through a shell by appending | sh to the end of the line.
Note that with all these mv commands, I've added the -i and -v options. This is for your protection. Read the man page for mv (by typing man mv in your Linux terminal) to see if you should be comfortable leaving them out.
Also, I'm assuming with these lines that all your directories already exist. You didn't mention if they do. If they don't, here's a one-liner to create the directories.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mkdir -p \1:p' | sort -u
As with the others, append | sh to run the commands.
I should mention that it is generally recommended to use constructs like for (in Tim's answer) or find instead of parsing the output of ls. That said, when your filename format is as simple as /[0-9]{4}word/, I find the quick sed one-liner to be the way to go.
Lastly, if by NAME you actually mean "any string of characters" rather than the literal string "NAME", then in all my examples above, replace NAME with .*.
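To preview what the sed approach generates before piping it to sh, it can be tried on a couple of made-up filenames in a scratch directory:

```shell
# scratch directory with two sample files matching the XXXXNAME pattern
mkdir demo && cd demo
touch 0001NAME 0002NAME
# print the mv commands that would be executed (nothing is moved yet)
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
# mv -iv 0001NAME 0001/NAME
# mv -iv 0002NAME 0002/NAME
```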
The following script will do this for you. Copy the script into a file on the remote machine (we'll call it sortfiles.sh).
#!/bin/bash
# Get all files in current directory having names XXXXsomename, where X is an integer
files=$(find . -name '[0-9][0-9][0-9][0-9]*')
# Build a list of the XXXX patterns found in the list of files
dirs=
for name in ${files}; do
dirs="${dirs} $(echo ${name} | cut -c 3-6)"
done
# Remove redundant entries from the list of XXXX patterns
# (uniq only removes adjacent duplicate lines, so put one entry per line and sort first)
dirs=$(printf '%s\n' ${dirs} | sort -u)
# Create any XXXX directories that are not already present
for name in ${dirs}; do
if [[ ! -d ${name} ]]; then
mkdir ${name}
fi
done
# Move each of the XXXXsomename files to the appropriate directory
for name in ${files}; do
mv ${name} $(echo ${name} | cut -c 3-6)
done
# Return from script with normal status
exit 0
From the command line, do chmod +x sortfiles.sh
Execute the script with ./sortfiles.sh
Just open the Terminal application, cd into the directory that contains the files you want moved/renamed, and copy and paste these commands into the command line.
shopt -s extglob
for file in [0-9][0-9][0-9][0-9]*; do
dirName="${file%%*([^0-9])}"
mkdir -p "$dirName"
mv "$file" "$dirName/${file##*([0-9])}"
done
This assumes all the files that you want to rename and move are in the same directory. The glob also assumes that there are at least four digits at the start of the filename. If there are more than four digits, the file will still be caught, but not if there are fewer than four; in that case, remove the appropriate number of [0-9]s from the first line. Note that the *( ) patterns require bash's extglob shell option, enabled by the shopt line.
It does not handle the case where "NAME" (i.e. the name of the new file you want) starts with a number.
See this site for more information about string manipulation in bash.
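The two parameter expansions doing the work above can be tried out on a sample name (the filename here is hypothetical; the *( ) patterns need bash's extglob option):

```shell
shopt -s extglob
file=0001GNUmakefile
echo "${file%%*([^0-9])}"   # strip the longest trailing run of non-digits -> 0001
echo "${file##*([0-9])}"    # strip the longest leading run of digits -> GNUmakefile
```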

On-the-fly compression of stdin failing?

From what was suggested here, I am trying to pipe the output from sqlcmd to 7zip so that I can save disk space when dumping a 200GB database. I have tried the following:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..." | .\7za.exe a -si <FILENAME>
This does not seem to be working even when I leave the system for a whole day. However, the following works:
> sqlcmd -S <DBNAME> -Q "SELECT TOP 100 * FROM ..." | .\7za.exe a -si <FILENAME>
and even this one:
> sqlcmd -S <DBNAME> -Q "SELECT * FROM ..."
When I remove the pipe symbol, I can see the results, and I can even redirect them to a file, which finishes in about 7 hours.
I am not sure what is going on when piping a large amount of output. What I can tell so far is that 7zip seems to wait to consume the whole input before it creates the archive (I don't see a file being created at all), so I am not sure it is actually compressing on the fly. So I tried gzip instead, and here's my experience:
> echo "Test" | .\gzip.exe > test.gz
> .\gzip.exe -d test.gz
gzip: test.gz: not in gzip format
I am not sure I am doing this the right way. Any suggestions?
Oh boy! It was PowerShell all along! Gzip kept complaining that the input was not in gzip format; I switched over to the normal command prompt and everything started working.
I have observed this before. | and > behave differently in PowerShell and the command prompt: cmd.exe passes raw bytes through pipes and redirections, whereas PowerShell decodes the stream into strings and re-encodes it (by default as UTF-16 for >), which corrupts binary data such as gzip or 7zip output.
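For reference, the same round trip works as expected in a shell whose pipes and redirections are byte-transparent (shown here with plain gzip under a Unix shell):

```shell
# compress from stdin, then decompress to stdout; the bytes survive unchanged
echo "Test" | gzip > test.gz
gzip -d -c test.gz
# -> Test
```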

wget - specify directory and rename the file

I'm trying to download multiple files and need to rename them as I download, and also to specify the directory I want them downloaded to. I know I need to be using -P and -O for this, but it does not seem to be working for me.
OK, it's late to post my answer here, but I'll correct @Bill's answer.
If you read "man wget" you will see the following:
...
wget [option]... [URL]...
...
That is, common sense leads to realizing that
wget -O /directory_path/filename.file_format https://example.com
is the form that aligns with the wget documentation: options first, then the URL.
Remember: just because it works doesn't mean it's right!
I ran into a similar situation and came across your question. I was able to get what I needed by writing a little bash script that parses a file with URLs in the first column and the desired name in the second.
This is the script I used for my particular requirement. Maybe it will give you some guidance if you still need help.
#!/bin/bash
FILE=URLhtmlPageWImagesWids.txt
while read -r line
do
F1=$(echo "$line" | cut -d " " -f1)
F2=$(echo "$line" | cut -d " " -f2)
wget -r -l1 --no-parent -A.jpg -O "$F2.jpg" "$F1"
done < "$FILE"
Actually, this won't work, because with recursive retrieval -O concatenates all the results into a single file.
You could try using the --no-directories or --cut-dirs switch and in the loop process the files in the folder how you want to rename them.
wget your_url -O your_specify_dir/your_name
Like Bill was saying
wget http://example.com/original-filename -O /home/new_filename
worked for me!
Thanks
This may work for everyone:
mkdir Download1
wget -O "Download1/test 10mb.zip" "http://www.speedtest.com.sg/test_random_10mb.zip"
You need to use quotes (" ") around a name containing spaces.
I'm a little late to the party, but I just wrote a script to do this. You can check it out here: bulkGetter