Change Git Log Plus/Minus Signs to Anything Custom?

Git CMD line noob here: how do I change the default plus/minus (+/-) signs to something more unique, such as (>>>/<<<) or (|/~), or any other symbol not as common as (+/-)?
Reason: I am trying to automate a report that collects all the changes to our schema.sql files. I have the line below that does an adequate job:
git log -p --since="14 days ago" -- *Schema*.sql
My only real issue with the output is the plus/minus (+/-) signs which are used to show what has been added or removed:
+ This line was added
- This line was removed
Comments in SQL (T-SQL) begin with two minus signs (--), so when a comment is removed, I end up with this:
--- This comment was removed
If I can substitute the (+/-) with a unique value I can format the results and make a nice, pretty report for the people that want to see things like that. Thanks in advance!

--output-indicator-new=<char>
--output-indicator-old=<char>
--output-indicator-context=<char>
Specify the character used to indicate new, old or context lines in the generated patch; normally they are '+', '-' and ' ' respectively.
https://git-scm.com/docs/git-log#_common_diff_options
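Applied to the command from the question, that looks like the following. Note that each indicator is a single character, so (>/<) works but (>>>/<<<) will not, and the options require a reasonably recent version of Git:
git log -p --since="14 days ago" \
    --output-indicator-new='>' \
    --output-indicator-old='<' \
    -- *Schema*.sql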

I don't know if git can do this natively, but you can certainly achieve what you want by piping the output of git log into sed. For example, to change the plus to '$' and the minus to '%' in your report, you could use the following command:
git log -p --since="14 days ago" -- *Schema*.sql | sed -e 's/^+/$/' -e 's/^-/%/'
Note that this also rewrites the leading character of the +++/--- file-header lines in each diff, so you may want to strip or restyle those separately.

Related

Awk and double quoting

I need some help with awk and double quoting.
I have a text file with tab-separated values (multiple lines).
Ex.
22-03-2016 11:25 25 Session reconnection succeeded user 10.10.10.10
Now I want to change the date notation.
I want above example to be
2016-03-22 11:25 (as in %Y-%m-%d %H:%M)
I am trying to use awk (on my Mac).
I manually can change the date with:
date -j -u -f "%d-%m-%Y %H:%M" "22-03-2016 11:25" "+%Y-%m-%d %H:%M"
result: 2016-03-22 11:25
I am struggling to do this with awk; I'm having problems with the quoting.
Any other ways of doing this are appreciated!
Regards,
Ronald
OK,
so I found a solution using sed (I had to switch to a real Linux environment, but OK).
Using this command:
sed -i.bak -r 's/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\2-\1/g' TextFile
Always funny how I find the answer right after I post a question....
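For anyone who still wants to stay in awk: GNU awk's gensub() handles the same capture-group rewrite without any shell-quoting gymnastics. This is gawk-specific, so it won't work with the stock BSD awk on a Mac:
# gawk only: gensub() understands \N backreferences, unlike sub()/gsub()
gawk '{ print gensub(/([0-9]{2})-([0-9]{2})-([0-9]{4})/, "\\3-\\2-\\1", 1) }' TextFile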

Mercurial getting file-specific log/history information

Is there a way to get file-specific information, similar to
hg log
I basically want committer, date/time, and the commit summary, but for just a single file.
You can filter the results of the hg log command by including a filename like so:
hg log file.txt
That will give you the standard log for every changeset where file.txt was changed. You can use
hg log file.txt -l 10 -r "not merge()"
to limit it to the last 10 entries and to exclude merge changesets, using revsets.
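Since the question asks specifically for committer, date/time, and summary, a template can trim the output to just those fields. A sketch using standard Mercurial template keywords:
hg log file.txt --template '{author}\t{date|isodate}\t{desc|firstline}\n'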

Linux: search and remove in file, new line when it is between two lines of digits

I have a big text file that has this format:
80708730272
598305807640 45097682220
598305807660 87992655320
598305807890
598305808720
598305809030
598305809280
598305809620 564999067
598305809980
33723830870
As you can see, there is a column of digits, and on some lines there is a second number.
In the text file (on Solaris) the second number is under the first one.
I don't know why they are shown here side by side.
I want to put a comma whenever there is a number in the second column.
598305809620
564999067
make it like:
598305809620, 564999067
And if I could also put a semicolon (';') at the end of each line, it would be perfect.
Could you please help?
What could I use and basically how could I do that?
My first instinct was sed rather than awk. They are both excellent tools to have.
I couldn't find an easy way to do it all in a single regex ("regular expression"), though. No doubt someone else will.
sed -i.bak -r "s/([0-9]+)(\s+[0-9]+)/\1,\2/g" filename.txt
sed -i -r "s/[0-9]+$/&;/g" filename.txt.bak
The first line takes care of the lines with two groups of digits, writing it out to a new file with an extra '.bak' file extension, just to be paranoid (aka 'good practice') and not risk overwriting your original file if you made a mistake.
The second line appends the semi-colon to all lines that contain at least one digit - so, skipping blank lines, for example. It overwrites the .bak file in place.
Once you have verified that the result is satisfactory, replace your original file with this one.
Let me know if you want a detailed explanation of exactly what's going on here.
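On the "single regex" point: it still takes two substitutions, but they can at least run in a single sed invocation, since each -e expression is applied to every line in turn. Leaving off -i previews the result on stdout:
sed -r -e 's/([0-9]+)(\s+[0-9]+)/\1,\2/' -e 's/[0-9]+$/&;/' filename.txt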
In this situation, awk is your friend. Give this a whirl:
awk '{if (NF==2) printf "%s, %s;\n", $1, $2; else if (NF==1) printf "%s;\n", $1}' big_text.txt > txt_file.txt
This should result in the following output:
80708730272;
598305807640, 45097682220;
598305807660, 87992655320;
598305807890;
598305808720;
598305809030;
598305809280;
598305809620, 564999067;
598305809980;
33723830870;
Hope that works for you!

Moving things in terminal based on their name

Edit: I think this has been answered successfully, but I can't check 'til later. I've reformatted it as suggested though.
The question: I have a series of files, each with a name of the form XXXXNAME, where XXXX is some number. I want to move them all to separate folders called XXXX and have them called NAME. I can do this manually, but I was hoping that by naming them XXXXNAME there'd be some way I could tell Terminal (I think that's the right name, but not really sure) to move them there. Something like
mv *NAME */NAME
but where it takes whatever * was in the first case and regurgitates it to the path.
This is on some form of Linux, with a bash shell.
In the real life case, the files are 0000GNUmakefile, with sequential numbering. I'm having to make lots of similar-but-slightly-altered versions of a program to compile and run on a cluster as part of my research. It would probably have been quicker to write a program to edit all the files and put in the right place in the first place, but I didn't.
This is probably extremely simple, and I should be able to find an answer myself, if I knew the right words. Thing is, I have no formal training in programming, so I don't know what to call things to search for them. So hopefully this will result in me getting an answer, and maybe knowing how to find out the answer for similar things myself next time. With the basic programming I've picked up, I'm sure I could write a program to do this for me, but I'm hoping there's a simple way to do it just using functionality already in Terminal. I probably shouldn't be allowed to play with these things.
Thanks for any help! I can actually program in C and Python a fair amount, but that's through trial and error largely, and I still don't know what I can do and can't do in Terminal.
SO many ways to achieve this.
I find that the old standbys sed and awk are often the most powerful.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
If you're satisfied that the commands look right, pipe the command line through a shell:
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p' | sh
I put NAME in parentheses and used \2 so that, if it varies more than your example indicates, you can come up with a regular expression that handles your filenames better.
To do the same thing in gawk (GNU awk, the variant found in most GNU/Linux distros):
ls | gawk '/^[0-9]{4}NAME$/ {printf("mv -iv %s %s/%s\n", $1, substr($0,1,4), substr($0,5))}'
As with the first sample, this produces commands which, if they make sense to you, can be piped through a shell by appending | sh to the end of the line.
Note that with all these mv commands, I've added the -i and -v options. This is for your protection. Read the man page for mv (by typing man mv in your Linux terminal) to see if you should be comfortable leaving them out.
Also, I'm assuming with these lines that all your directories already exist. You didn't mention if they do. If they don't, here's a one-liner to create the directories.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mkdir -p \1:p' | sort -u
As with the others, append | sh to run the commands.
I should mention that it is generally recommended to use constructs like for (in Tim's answer) or find instead of parsing the output of ls. That said, when your filename format is as simple as /[0-9]{4}word/, I find the quick sed one-liner to be the way to go.
Lastly, if by NAME you actually mean "any string of characters" rather than the literal string "NAME", then in all my examples above, replace NAME with .*.
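To illustrate the find route mentioned above, here is a sketch (GNU find assumed, for -printf; replace the literal NAME as before):
find . -maxdepth 1 -name '[0-9][0-9][0-9][0-9]NAME' -printf '%f\n' |
while read -r f; do
    d=${f:0:4}                                 # first four digits -> directory
    mkdir -p "$d" && mv -iv "$f" "$d/${f:4}"   # rest of the name -> NAME
done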
The following script will do this for you. Copy the script into a file on the remote machine (we'll call it sortfiles.sh).
#!/bin/bash
# Get all files in the current directory named XXXXsomename, where each X is a digit
files=$(find . -maxdepth 1 -type f -name '[0-9][0-9][0-9][0-9]*')
# Build a list of the XXXX prefixes found in the list of files
dirs=
for name in ${files}; do
    # characters 3-6 skip the leading './' that find prints
    dirs="${dirs} $(echo ${name} | cut -c 3-6)"
done
# Remove redundant entries from the list of XXXX prefixes
dirs=$(printf '%s\n' ${dirs} | sort -u)
# Create any XXXX directories that are not already present
for name in ${dirs}; do
    if [[ ! -d ${name} ]]; then
        mkdir ${name}
    fi
done
# Move each XXXXsomename file into its directory, dropping the XXXX prefix
for name in ${files}; do
    mv ${name} "$(echo ${name} | cut -c 3-6)/$(echo ${name} | cut -c 7-)"
done
# Return from script with normal status
exit 0
From the command line, do chmod +x sortfiles.sh
Execute the script with ./sortfiles.sh
Just open the Terminal application, cd into the directory that contains the files you want moved/renamed, and copy and paste these commands into the command line.
shopt -s extglob   # the *( ) patterns below are bash extended globs
for file in [0-9][0-9][0-9][0-9]*; do
    dirName="${file%%*([^0-9])}"              # strip trailing non-digits -> XXXX
    mkdir -p "$dirName"
    mv "$file" "$dirName/${file##*([0-9])}"   # strip leading digits -> NAME
done
This assumes all the files that you want to rename and move are in the same directory. The file globbing also assumes that there are at least four digits at the start of the filename. If there are more than four digits, the file is still caught, but not if there are fewer than four; in that case, take the appropriate number of [0-9]s off the for line.
It does not handle the case where "NAME" (i.e. the name of the new file you want) starts with a number.
See this site for more information about string manipulation in bash.
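A quick illustration of the two expansions used above (extglob must be enabled, as in the loop):
f=0023GNUmakefile
echo "${f%%*([^0-9])}"   # prints 0023 (longest trailing run of non-digits removed)
echo "${f##*([0-9])}"    # prints GNUmakefile (longest leading run of digits removed)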

Using wildcard character in bash scripting

I'm creating a bash script that will load CSV files using SQL*Loader. Please refer to the code below:
#!/bin/bash
FILENAME = '/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv'
# LOAD CSV FILE USING SQL*LOADER
sqlldr username/password#localhost control=control.ctl data=$FILENAME
However, when I tried to run this script, I received the following error: SQL*Loader-500: Unable to open file (/u02/logs/*2011-11-06*.csv). I figured out that the problem is my * wildcard, which is being treated as a literal string instead of a wildcard in bash.
Is there a way to tell the bash that my asterisk (*) is a wildcard and not a string?
Thank you for your support.
I don't know about sqlldr, but I think you can try:
#!/bin/bash
FILENAME="/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv"
# LOAD EACH CSV FILE USING SQL*LOADER
for fname in $(ls $FILENAME); do
    sqlldr username/password#localhost control=control.ctl data=$fname
done
hope it helps
Your use of single quotes is the problem. Also, bash won't accept spaces around the equals sign in an assignment; I'm not used to seeing them there anyway, but then again, I'm old skool. So this is what I'd do:
FILENAME=/u02/logs/"$(date -d '2 days ago' +%Y-%m-%d)*.csv"
That should do it. You don't need quotes around the first part since it's literal. Only use double quotes when you need the interpreter to do some interpolation. Use single quotes when you don't want the interpreter to touch it.
Just use
FILE=/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv
I also note additional * after the logs/ in your error message, but not in your code. Adjust accordingly.
The * will stay in the filename if no file matches the wildcard pattern. Also, be careful when more than one matching file exists.
Maybe you can try:
sqlldr username/password#localhost control=control.ctl data="$FILENAME"
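A belt-and-braces sketch combining the fixes above: keep the glob out of quotes so the shell expands it at use time, and loop in case more than one file matches (the connect string is copied from the question):
#!/bin/bash
prefix="/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)"
for f in "${prefix}"*.csv; do
    [ -e "$f" ] || continue   # nothing matched: skip the literal pattern
    sqlldr username/password#localhost control=control.ctl data="$f"
done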
