I am trying to produce SSML like this: $ 11:20 AM
but Alexa speaks it as "dollar eleven colon twenty AM".
If I remove the $, Alexa speaks: "eleven twenty AM".
What is wrong with the $ symbol, and how can I fix it?
Git command-line noob here: how do I change the default plus/minus (+/-) signs to something more unique, such as (>>>/<<<) or (|/~), or any other symbol less common than (+/-)?
Reason: I am trying to automate a report that collects all the changes to our schema.sql files. The line below does an adequate job:
git log -p --since="14 days ago" -- *Schema*.sql
My only real issue with the output is the plus/minus (+/-) signs which are used to show what has been added or removed:
+ This line was added
- This line was removed
Comments in SQL (T-SQL) are two minus signs (--), so when a comment is removed, I end up with this:
--- This comment was removed
If I can substitute the (+/-) with unique values, I can format the results into a nice, pretty report for the people who want to see things like that. Thanks in advance!
--output-indicator-new=<char>
--output-indicator-old=<char>
--output-indicator-context=<char>
Specify the character used to indicate new, old, or context lines in the generated patch.
https://git-scm.com/docs/git-log#_common_diff_options
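As a hedged sketch of the native route: the --output-indicator-* options were added in Git 2.25, and each one takes a single character, so multi-character markers like >>> would still need post-processing with sed. Demonstrated here in a throwaway repo (the file name and contents are made up) so the diff is reproducible:

```shell
# Build a tiny repo with one changed schema file to diff against.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'CREATE TABLE t (a int);\n' > Schema.sql
git add Schema.sql && git commit -qm 'initial schema'
printf 'CREATE TABLE t (a int, b int);\n' > Schema.sql
git commit -aqm 'add column'

# Added lines now start with '>', removed lines with '<'.
git log -p --output-indicator-new='>' --output-indicator-old='<' -- Schema.sql
```

For the report in the question, the same two options would simply be added to the existing git log -p --since="14 days ago" command.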
I don't know if git can do this natively, but you can certainly achieve what you want by piping the output of git log into sed. For example, to change the plus to '$' and the minus to '%' in your report, you could use the following command:
git log -p --since="14 days ago" -- *Schema*.sql | sed -e 's/^+/$/' -e 's/^-/%/'
I have written a script and it prints multiline output, but Nagios shows only one line. Does anyone know how to print multiple lines in Nagios?
Multiline output is possible only with Nagios 3 and newer.
First of all, you can use the HTML tag <br/> for each new line of the desired output.
The next important thing is to disable HTML tag escaping in cgi.cfg on the Nagios server:
find escape_html_tags=1 and change it to escape_html_tags=0.
Then restart the Nagios server.
Some advice about Nagios plugins output can be found here: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/pluginapi.html
PS: By default Nagios will only read the first 4 KB of data that a plugin returns.
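The cgi.cfg step above can be sketched as a one-liner; the path used here is the common source-install default and is only an assumption, so adjust CFG for your system (package installs often use /etc/nagios/cgi.cfg):

```shell
# Hypothetical default path; adjust for your install.
CFG=${CFG:-/usr/local/nagios/etc/cgi.cfg}
if [ -f "$CFG" ]; then
    # Turn off HTML escaping so <br/> in plugin output is rendered.
    sed -i 's/^escape_html_tags=1/escape_html_tags=0/' "$CFG"
fi
```

After the change, restart Nagios as described above.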
Yes, I agree: multiline output is possible only with Nagios 3 and above. However, you do not need to send <br/> tags to achieve this; you can simply end each line with '\r\n'. Use the following format.
DISK OK - free space: / 3326 MB (56%); | /=2643MB;5948;5958;0;5968
/ 15272 MB (77%);
/boot 68 MB (69%);
/home 69357 MB (27%);
/var/log 819 MB (84%); | /boot=68MB;88;93;0;98
/home=69357MB;253404;253409;0;253414
/var/log=818MB;970;975;0;980
The performance data is optional; even the following format is acceptable. (Note that each line ends with \r\n.)
Weather OK - Temperature OK: 22 C
Humidity (77%) | Temp=22;35;39;40; Humidity=77;80;90;99;
For more details, check the official documentation on multiline output.
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/pluginapi.html
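As a minimal sketch of the two-line format above (the check name and values are the made-up ones from the example), a shell plugin could look like:

```shell
#!/bin/sh
# Hypothetical weather check: first line is the short status text,
# following lines are the long output, perfdata comes after the '|'.
printf 'Weather OK - Temperature OK: 22 C\n'
printf 'Humidity (77%%) | Temp=22;35;39;40; Humidity=77;80;90;99;\n'
# A real plugin would then exit 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN.
```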
I need some help with awk and double quoting.
I have a text file with tab-separated values (multiple lines).
Ex.
22-03-2016 11:25 25 Session reconnection succeeded user 10.10.10.10
Now I want to change the date notation.
I want above example to be
2016-03-22 11:25 (as in %Y-%m-%d %H:%M)
I am trying to use awk (on my mac)
I manually can change the date with:
date -j -u -f "%d-%m-%Y %H:%M" "22-03-2016 11:25" "+%Y-%m-%d %H:%M"
result: 2016-03-22 11:25
I am struggling to do this with awk; I'm having problems with the quoting.
Any other ways of doing this are appreciated!
Regards,
Ronald
Ok,
So I found a solution using sed (I had to switch to a real Linux environment, but OK).
Using this command:
sed -i.bak -r 's/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\2-\1/g' TextFile
Always funny how I find the answer right after I post a question....
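For the record, the rearrangement can also be done in awk alone, which sidesteps the shell-quoting problem by keeping the whole program inside one pair of single quotes. A minimal sketch that rewrites the first DD-MM-YYYY match on each line and leaves everything else (including the tabs) untouched; the demo input is inlined with printf, so for a real file you would replace that pipe with `< TextFile`:

```shell
# Rewrite the first DD-MM-YYYY on each line as YYYY-MM-DD.
printf '22-03-2016\t11:25\t25\tSession reconnection succeeded user 10.10.10.10\n' |
awk '{
    if (match($0, /[0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]/)) {
        dt = substr($0, RSTART, RLENGTH)   # e.g. "22-03-2016"
        split(dt, d, "-")                  # d[1]=DD d[2]=MM d[3]=YYYY
        $0 = substr($0, 1, RSTART - 1) d[3] "-" d[2] "-" d[1] substr($0, RSTART + RLENGTH)
    }
    print
}'
```

Because the substitution works on $0 via substr rather than reassigning a field, awk never rebuilds the line, so the original tab separators survive.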
I have a line I want to separate into sentences using awk. I've set my field separator to '.' with -F and used a loop to print the captured sentences. But, as expected, it also splits dotted abbreviations.
For example, I have this line:
I was born in 1990. Specifically Aug. 13, 1990. Etc etc etc.
What it does is it will output:
I was born in 1990
Specifically Aug
13, 1990
Etc etc etc
Even though what I wanted was:
I was born in 1990
Specifically Aug. 13, 1990
Etc etc etc
What is the simplest way to handle such abbreviations? Is a bare . for -F even enough?
EDIT
The abbreviated words were months.
$ awk -v RS='.' '{gsub(/^ +/,"")} /(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$/{printf "%s. ",$0; next} /[^[:space:]]/{print $0 "."}' input.txt
I was born in 1990.
Specifically Aug. 13, 1990.
Etc etc etc.
How it works
-v RS='.'
Use the period as a record separator.
gsub(/^ +/,"")
Remove any leading spaces from records.
/(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$/{printf "%s. ",$0; next}
If a record ends with a month abbreviation, print the record followed by a period and a space but no newline. Skip the remaining commands and jump to the next record.
/[^[:space:]]/{print $0 "."}
If the record contains any non-blanks, print it followed by a period.
I'm trying to feed random files from my iTunes Library into mpv, to shuffle my music without iTunes. Initially I tried to pass all the files into mpv using '**' and use mpv's '--shuffle' option, but mpv cannot take that many arguments. Instead, I want to generate the list of files in my own script and pass random elements from it into mpv.
Here is my code:
RANDOM=$$$(date +%s)
FILES=(~/Music/iTunes/iTunes\ Media/Music/**/**/*)
while [ 1 ]
do
# Get random song
nextsong=${FILES[$RANDOM % ${#FILES[@]}]}
# Play song
mpv $nextsong --audio-display=no
done
When I run this, something is wrong with the list of files: mpv tries to play incomplete bits of file paths.
Something along the lines of this might work for you:
while read i
do mpv "$i" --audio-display=no
done < <(find ~/Music/iTunes/iTunes\ Media/Music/ -type f| perl -MList::Util=shuffle -e 'print shuffle(<STDIN>);')
Note that you may need to add more filtering to the find command to make sure you only pick up the appropriate files.
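If GNU shuf is available, a similar sketch avoids the perl dependency; using find -print0 with shuf -z keeps the list NUL-delimited, so filenames containing spaces or even newlines survive intact. Wrapped in a bash function here (the function name is mine) so the player invocation is explicit:

```shell
# Shuffle every file under a directory and play each one with mpv.
# Requires GNU shuf (-z) and bash (read -d '').
play_shuffled() {
    find "$1" -type f -print0 | shuf -z |
    while IFS= read -r -d '' f; do
        mpv "$f" --audio-display=no
    done
}
```

Usage: play_shuffled ~/Music/iTunes/iTunes\ Media/Music/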