I need some help with awk and double quoting.
I have a text file with tab-separated values (multiple lines).
Ex.
22-03-2016 11:25 25 Session reconnection succeeded user 10.10.10.10
Now I want to change the date notation.
I want the above example to be
2016-03-22 11:25 (as in %Y-%m-%d %H:%M)
I am trying to use awk (on my mac)
I manually can change the date with:
date -j -u -f "%d-%m-%Y %H:%M" "22-03-2016 11:25" "+%Y-%m-%d %H:%M"
result: 2016-03-22 11:25
I am struggling to do this with awk; I'm having problems with the quoting.
Any other ways of doing this are appreciated!
Regards,
Ronald
Ok,
So I found a solution using sed (I had to switch to a real Linux environment, but OK).
Using this command:
sed -i.bak -r 's/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\2-\1/g' TextFile
Always funny how I find the answer right after I post a question....
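For anyone who hits the same wall: one way to sidestep the quoting entirely is to let awk rearrange the date fields itself instead of shelling out to date. A minimal sketch, assuming the date is the first tab-separated field:

awk 'BEGIN { FS = OFS = "\t" } { split($1, d, "-"); $1 = d[3] "-" d[2] "-" d[1]; print }' TextFile

split() breaks 22-03-2016 into day, month, and year, and reassembling the field inside awk avoids nesting shell quotes around a date invocation.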
Git CMD-line noob here: how do I change the default plus/minus (+/-) signs to something more unique, such as (>>>/<<<) or (|/~)? Or any other symbol not as common as (+/-)!
Reason: I am trying to automate a report that collects all the changes to our schema.sql files. I have the line below that does an adequate job:
git log -p --since="14 days ago" -- *Schema*.sql
My only real issue with the output is the plus/minus (+/-) signs which are used to show what has been added or removed:
+ This line was added
- This line was removed
Comments in SQL (T-SQL) are two minus signs (--), so when a comment is removed, I end up with this:
--- This comment was removed
If I can substitute the (+/-) with a unique value I can format the results and make a nice, pretty report for the people that want to see things like that. Thanks in advance!
--output-indicator-new=<char>
--output-indicator-old=<char>
--output-indicator-context=<char>
Specify the character used to indicate new, old, or context lines in the generated patch, respectively.
https://git-scm.com/docs/git-log#_common_diff_options
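Assuming your Git is new enough to have these options (I believe they arrived around Git 2.25), usage looks like:

git log -p --since="14 days ago" --output-indicator-new='>' --output-indicator-old='<' -- *Schema*.sql

Note that each option takes a single character, so multi-character markers like >>>/<<< aren't possible this way, but an uncommon single character avoids the collision all the same: a removed T-SQL comment shows up as <-- instead of ---.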
I don't know if git can do this natively, but you can certainly achieve what you want by piping the output of git log into sed. For example, to change the plus to '$' and the minus to '%' in your report, you could use the following command:
git log -p --since="14 days ago" -- *Schema*.sql | sed -e 's/^+/$/' -e 's/^-/%/'
The US Naval Observatory has an API that outputs a JSON file containing the sunrise and sunset times, among other things, as documented here.
Here is an example of the output JSON file:
{
"error":false,
"apiversion":"2.0.0",
"year":2017,
"month":6,
"day":10,
"dayofweek":"Saturday",
"datechanged":false,
"lon":130.000000,
"lat":30.000000,
"tz":0,
"sundata":[
{"phen":"U", "time":"03:19"},
{"phen":"S", "time":"10:21"},
{"phen":"EC", "time":"10:48"},
{"phen":"BC", "time":"19:51"},
{"phen":"R", "time":"20:18"}],
"moondata":[
{"phen":"R", "time":"10:49"},
{"phen":"U", "time":"16:13"},
{"phen":"S", "time":"21:36"}],
"prevsundata":[
{"phen":"BC","time":"19:51"},
{"phen":"R","time":"20:18"}],
"closestphase":{"phase":"Full Moon","date":"June 9, 2017","time":"13:09"},
"fracillum":"99%",
"curphase":"Waning Gibbous"
}
I'm relatively new to using JSON, but I understand that everything in square brackets after "sundata" is a JSON array (please correct me if I'm wrong). So I searched for instructions on how to get a value from a JSON array, without success.
I have downloaded the file to my system using:
wget -O usno.json "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130"
I need to extract the time (in HH:MM format) from this line:
{"phen":"S", "time":"10:21"},
...and then use it to create a variable (that I will later write to a separate file).
I would prefer to use Bash if possible, preferably using a JSON parser (such as jq) if it'll be easier to understand/implement. I'd rather not use Python (which was suggested by a lot of the articles I have read previously) if possible as I am trying to become more familiar with Bash specifically.
I have examined a lot of different webpages, including answers on Stack Overflow, but none of them have specifically covered an array line with two key/value pairs per line (they've only explained how to do it with one pair per line, which isn't what the above file structure has, sadly).
Specifically, I have read these articles, but they did not solve my particular problem:
https://unix.stackexchange.com/questions/177843/parse-one-field-from-an-json-array-into-bash-array
Parsing JSON with Unix tools
Parse json array in shell script
Parse JSON to array in a shell script
What is JSON and why would I use it?
https://developers.squarespace.com/what-is-json/
Read the json data in shell script
Thanks in advance for any thoughts.
Side note: I have managed to do this with a complex 150-odd line script made up of "sed"s, "grep"s, "awk"s, and whatnot, but obviously if there's a one-liner JSON-native solution that's more elegant, I'd prefer to use that as I need to minimise power usage wherever possible (it's being run on a battery-powered device).
(Side-note to the side-note: the script was so long because I need to do it for each line in the JSON file, not just the "S" value)
If you already have jq you can easily select your desired time with:
sun_time=$(jq '.sundata[] | select(.phen == "S").time' usno.json)
echo $sun_time
# "10:21"
If you must use "regular" bash commands (really, use jq):
wget -O - "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130" \
| sed -n '/^"sundata":/,/}],$/p' \
| sed -n -e '/"phen":"S"/{s/^.*"time":"//'\;s/...$//\;p}
Example:
$ wget -O - "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130" | sed -n '/^"sundata":/,/}],$/p' | sed -n '/"phen":"S"/{s/^.*"time":"//;s/...$//;p}'
--2017-06-10 08:02:46-- http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130
Resolving api.usno.navy.mil (api.usno.navy.mil)... 199.211.133.93
Connecting to api.usno.navy.mil (api.usno.navy.mil)|199.211.133.93|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: ‘STDOUT’
- [ <=> ] 753 --.-KB/s in 0s
2017-06-10 08:02:47 (42.6 MB/s) - written to stdout [753]
10:21
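As for the side-note about needing every line rather than just the "S" entry: jq can emit all the phen/time pairs in one pass, and a bash loop can consume them. A sketch using jq's @tsv filter (the echo is just a placeholder for whatever you write to your separate file):

while IFS=$'\t' read -r phen time; do
    echo "phen=$phen time=$time"
done < <(jq -r '.sundata[] | [.phen, .time] | @tsv' usno.json)

A single jq invocation like this should also be far cheaper on a battery-powered device than a 150-line sed/grep/awk pipeline.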
Edit: I think this has been answered successfully, but I can't check 'til later. I've reformatted it as suggested though.
The question: I have a series of files, each with a name of the form XXXXNAME, where XXXX is some number. I want to move them all into separate folders called XXXX, renaming each file to NAME. I can do this manually, but I was hoping that by naming them XXXXNAME there'd be some way I could tell Terminal (I think that's the right name, but I'm not really sure) to move them there. Something like
mv *NAME */NAME
but where it takes whatever * was in the first case and regurgitates it to the path.
This is on some form of Linux, with a bash shell.
In the real life case, the files are 0000GNUmakefile, with sequential numbering. I'm having to make lots of similar-but-slightly-altered versions of a program to compile and run on a cluster as part of my research. It would probably have been quicker to write a program to edit all the files and put in the right place in the first place, but I didn't.
This is probably extremely simple, and I should be able to find an answer myself, if I knew the right words. Thing is, I have no formal training in programming, so I don't know what to call things to search for them. So hopefully this will result in me getting an answer, and maybe knowing how to find out the answer for similar things myself next time. With the basic programming I've picked up, I'm sure I could write a program to do this for me, but I'm hoping there's a simple way to do it just using functionality already in Terminal. I probably shouldn't be allowed to play with these things.
Thanks for any help! I can actually program in C and Python a fair amount, but that's through trial and error largely, and I still don't know what I can do and can't do in Terminal.
SO many ways to achieve this.
I find that the old standbys sed and awk are often the most powerful.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
If you're satisfied that the commands look right, pipe the command line through a shell:
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p' | sh
I put NAME in brackets and used \2 so that if it varies more than your example indicates, you can come up with a regular expression to handle your filenames better.
To do the same thing in gawk (GNU awk, the variant found in most GNU/Linux distros):
ls | gawk '/^[0-9]{4}NAME$/ {printf("mv -iv %s %s/%s\n", $0, substr($0,1,4), substr($0,5))}'
As with the first sample, this produces commands which, if they make sense to you, can be piped through a shell by appending | sh to the end of the line.
Note that with all these mv commands, I've added the -i and -v options. This is for your protection. Read the man page for mv (by typing man mv in your Linux terminal) to see if you should be comfortable leaving them out.
Also, I'm assuming with these lines that all your directories already exist. You didn't mention if they do. If they don't, here's a one-liner to create the directories.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mkdir -p \1:p' | sort -u
As with the others, append | sh to run the commands.
I should mention that it is generally recommended to use constructs like for (in Tim's answer) or find instead of parsing the output of ls. That said, when your filename format is as simple as /[0-9]{4}word/, I find the quick sed one-liner to be the way to go.
Lastly, if by NAME you actually mean "any string of characters" rather than the literal string "NAME", then in all my examples above, replace NAME with .*.
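Along the same lines as the for suggestion above, here is a glob-based sketch that avoids parsing ls altogether and creates the directories as it goes (same assumptions: four leading digits and a literal NAME, which you can replace with a pattern as noted):

for f in [0-9][0-9][0-9][0-9]NAME; do
    d=${f:0:4}
    mkdir -p "$d" && mv -iv "$f" "$d/${f:4}"
done

${f:0:4} and ${f:4} are bash substring expansions (the first four characters, and everything after them). Add shopt -s nullglob first if the pattern might match nothing.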
The following script will do this for you. Copy the script into a file on the remote machine (we'll call it sortfiles.sh).
#!/bin/bash
# Get all files in the current directory having names XXXXsomename, where each X is a digit
files=$(find . -maxdepth 1 -name '[0-9][0-9][0-9][0-9]*')
# Build a list of the XXXX patterns found in the list of files
dirs=
for name in ${files}; do
    dirs="${dirs} $(echo ${name} | cut -c 3-6)"
done
# Remove redundant entries from the list of XXXX patterns
dirs=$(echo ${dirs} | tr ' ' '\n' | sort -u)
# Create any XXXX directories that are not already present
for name in ${dirs}; do
    if [[ ! -d ${name} ]]; then
        mkdir ${name}
    fi
done
# Move each XXXXsomename file into its directory, dropping the digit prefix
for name in ${files}; do
    mv ${name} "$(echo ${name} | cut -c 3-6)/$(echo ${name} | cut -c 7-)"
done
# Return from the script with normal status
exit 0
From the command line, do chmod +x sortfiles.sh
Execute the script with ./sortfiles.sh
Just open the Terminal application, cd into the directory that contains the files you want moved/renamed, and copy and paste these commands into the command line.
shopt -s extglob  # needed for the *( ) extended glob patterns below
for file in [0-9][0-9][0-9][0-9]*; do
    dirName="${file%%*([^0-9])}"
    mkdir -p "$dirName"
    mv "$file" "$dirName/${file##*([0-9])}"
done
This assumes all the files that you want to rename and move are in the same directory. The file globbing also assumes that there are at least four digits at the start of the filename. If there are more than four digits, the file will still be caught, but not if there are fewer than four. If there are fewer than four, take off the appropriate number of [0-9]s from the for line.
It does not handle the case where "NAME" (i.e. the name of the new file you want) starts with a number.
See this site for more information about string manipulation in bash.
I'm creating a bash script that will load CSV files using SQL*Loader. Please refer to the code below:
#!/bin/bash
FILENAME = '/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv'
# LOAD CSV FILE USING SQL*LOADER
sqlldr username/password@localhost control=control.ctl data=$FILENAME
However, when I tried to run this script, I received the following error: SQL*Loader-500: Unable to open file (/u02/logs/*2011-11-06*.csv). I figured out that the problem is my * wildcard, which is being interpreted as a string instead of a wildcard in bash.
Is there a way to tell bash that my asterisk (*) is a wildcard and not a string?
Thank you for your support.
I don't know about sqlldr, but I think you can try:
#!/bin/bash
FILENAME="/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv"
# LOAD CSV FILE USING SQL*LOADER
for fname in $(ls $FILENAME); do
sqlldr username/password@localhost control=control.ctl data=$fname
done
Hope it helps.
Your use of single ticks is the problem. Also, I'm not used to seeing bash code have spaces surrounding the equals sign. Then again, I'm old skool. So this is what I'd do:
FILENAME=/u02/logs/"$(date -d '2 days ago' +%Y-%m-%d)*.csv"
That should do it. You don't need quotes around the first part since it's literal. Only use double quotes when you need the interpreter to do some interpolation. Use single quotes when you don't want the interpreter to touch it.
Just use
FILE=/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv
I also note additional * after the logs/ in your error message, but not in your code. Adjust accordingly.
The * will stay in the filename if there is no file matching the wildcard pattern. Also, be careful when more than one matching file exists.
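To handle both of those cases explicitly, here is a minimal sketch using a bash array (keeping the connect string from the question; adjust it to your environment):

shopt -s nullglob
files=( /u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv )
for f in "${files[@]}"; do
    sqlldr username/password@localhost control=control.ctl data="$f"
done

With nullglob set, an unmatched pattern expands to an empty list instead of surviving as a literal *, so the loop simply runs zero times, and multiple matching files each get their own sqlldr run.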
Maybe you can try:
sqlldr username/password@localhost control=control.ctl data="$FILENAME"
I'm writing a program which requires knowledge of the current load on the system, and the activity of any users (it's a load balancer).
This is a university assignment, and I am required to use the w command. I'm having a hard time parsing this command's output because it is very verbose. Any suggestions on what I can do would be appreciated. This is a small part of the program, and I am free to use whatever method I like.
The most condensed version of w which still has the information I require is "w -u -s -f", which produces this:
10:13:43 up 9:57, 2 users, load average: 0.00, 0.00, 0.00
USER TTY IDLE WHAT
fsm tty7 22:44m x-session-manager
fsm pts/0 0.00s w -u -s -f
So out of that, I am interested in the first number after "load average" and the smallest idle time (so I will need to parse them all).
My background process will call w, so the fact that w itself has the lowest idle time will not matter (all I will see is the tty time).
Do you have any ideas?
Thanks
(I am allowed to use alternative unix commands, like grep, if that helps).
Are you allowed to use other Unix commands? You could use grep, sed or head/tail to get the lines you need, and cut to split them up as needed.
Check out: http://www.gnu.org/s/libc/manual/html_node/Regexp-Subexpressions.html#Regexp-Subexpressions
Use regular expressions to match [0-9]+\.[0-9]{2} on the first line. You may have to fiddle with which characters are escaped. That will give you 3 load averages.
The remaining output is column-based, so if you count off the string positions from w, you'll be able to just strncpy the interesting bits.
Another possible theory (which sounds like it goes against the assignment, but I'd keep it in mind) is to go grab the source code of w and hack it up to just tell you the information via function calls. If you're feeling really hardcore, you can learn all the library API calls and do it directly that way.
I found I can use a combination of commands like so:
w -u -s -f | grep load | cut -d " " -f 11
and
w -u -s -f | grep tty | cut -d " " -f 13
The first takes the output of w, uses grep to select only the line containing "load", and then cuts everything except the 11th chunk of data (delimiter is a space), which is the first load number, trailing comma included.
The second does something similar, only for the idle times; if there are multiple users, it's a list.
This is easy enough to parse, unless someone has an objection or a suggestion to improve it.
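One suggestion, since cut with a single-space delimiter is sensitive to the exact column spacing: awk splits on runs of whitespace, so the extraction stays stable even when the uptime portion changes width. A sketch, assuming the three load averages are always the last three fields of the first line and that IDLE is the third column of each user line:

w -u -s -f | awk 'NR == 1 { load = $(NF-2); sub(/,/, "", load); print load }'
w -u -s -f | awk 'NR > 2 { print $3 }'

Counting fields from the end of the first line also avoids the shift between, say, "up 9:57," and "up 2 days, 3:45,".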