How can I send messages directly to a group? - adb

I need help sending text to a particular group using the adb shell.
Searching the forums here, I found the code below:
adb shell am start -n com.whatsapp/.Main
adb shell am start -a android.intent.action.SEND -t text/plain \
  -e jid 'xxxxxxxxxxx@s.whatsapp.net' \
  --eu android.intent.extra.STREAM file:///storage/emulated/0/DCIM/Camera/IMG_20181025_223214.jpg \
  -p com.whatsapp
My questions are:
1. How do I identify a group's JID?
2. How do I change this code to send text instead of a file?

You can get a group's JID from messages.db: look in the chat_list table by group name (the subject column), or in the messages table by message text. It will look like 77787987018-1602484814@g.us.
To send text you can use this command:
am start -a android.intent.action.SEND -c android.intent.category.DEFAULT -t text/plain -e jid {user jid} -e android.intent.extra.TEXT "{text}" -p {whatsapp package name}
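Putting both parts together, a full invocation from the host might look like this (the group JID follows the example format above; the message text is a placeholder):
adb shell am start \
  -a android.intent.action.SEND \
  -c android.intent.category.DEFAULT \
  -t text/plain \
  -e jid '77787987018-1602484814@g.us' \
  -e android.intent.extra.TEXT 'your message here' \
  -p com.whatsapp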

Related

Use curl to post file from pipe

How might I take the output from a pipe and use curl to post it as a file?
E.g. the following works:
curl -F 'file=@data/test.csv' -F 'filename=test.csv' https://mydomain#apikey=secret
I'd like to get the file contents from a pipe instead, but I can't quite figure out how to specify it as a file input. My first guess is -F 'file=@-', but that's not quite right:
cat data/test.csv | curl -F 'file=@-' -F 'filename=test.csv' https://mydomain#apikey=secret
(Here cat is just a substitute for a more complex sequence of events that would get the data)
Update
The following works:
cat test/data/test.csv | curl -XPOST -H 'Content-Type:multipart/form-data' --form 'file=@-;filename=test.csv' $url
If you add --trace-ascii - to the command line you'll see that curl already uses that Content-Type by default (and -XPOST doesn't help either). It was rather your fixed -F option that did the trick!
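In other words, the minimal pipe-friendly form only needs the corrected --form option, since curl picks the multipart Content-Type and the POST method on its own ($url stands in for the endpoint from the question):
cat data/test.csv | curl --form 'file=@-;filename=test.csv' "$url"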

In a Bash script, trying to pass local variables to SSH and then execute the other commands

#!/bin/bash
count2=1
declare -a input
input=( "$@" )
echo " "
echo " Hostname passed by user is " ${input[0]}
HOST="${input[0]}"
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@$HOST /bin/bash << ENDSSH
echo " Connected "
echo $count2
echo $input
pwd
echo $count2: ${input[$count2]}
nic=${input[$count2]}
echo $nic
echo $(ethtool "${nic}" |& grep 'Link' | awk '{print $3}')
ENDSSH
So actually I want to pass the variables 'count2' and 'input' to the remote SSH session and execute the other commands there.
But unfortunately they are not getting passed; nothing is echoed after the SSH connection.
I need help with this!
I have sshpass installed on the server.
code output:
[user#l07 ~]$ ./check.sh <hostname> eno6
Hostname passed by user is <hostname>
Connected
After SSH it only echoes "Connected"; I'm not sure why $count2 and $input are not echoed.
I tried with a backslash ('\$count2'), but that is also not working. I have tried all possible combinations, even quoting and unquoting ENDSSH.
Any help will be really appreciated!
You basically want to supply your remote bash with a HERE-document to be executed. This is tricky, since you need to "compose" the full text of this document before you can supply it to ssh. I would therefore separate the task into two parts:
Creating the HERE-document
Running it on ssh
This makes debugging easy: you can print the document between steps 1 and 2 and visually inspect its contents for correctness. Don't forget that once this code runs on the remote host, it can't access any of your variables anymore, unless you have "promoted" them to the remote side using the means provided by ssh.
Hence you could start like this:
# Create the parameters you want to use
nic=${input[$count2]}
# Create a variable holding the content of the remote script,
# which interpolates your parameters
read -r -d '' remote_script << ENDSSH
echo "Connected to host \$(hostname)"
echo "Running bash version: \$BASH_VERSION"
....
ethtool "$nic" |& grep Link | awk '{ print $3 }'
ENDSSH
# Print your script for verification
echo "$remote_script"
# Submit it to the host
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" /bin/bash <<<"$remote_script"
You have to add escapes (\) here:
...
echo \$nic
...
echo \$(ethtool "\${nic}" |& grep 'Link' | awk '{print \$3}')
...
But why echo this? Try it without echo:
...
ethtool "\${nic}" |& grep -i 'Link' | awk '{print \$3}'
...
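Applied to the original script, the heredoc mixes unescaped variables (expanded locally, which "promotes" their values to the remote side) with escaped ones (expanded remotely). A sketch of the corrected block:
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" /bin/bash << ENDSSH
echo " Connected "
# count2 and input[count2] below are expanded locally, before the text is sent
echo "$count2: ${input[$count2]}"
nic="${input[$count2]}"
# the escaped \$nic survives local expansion and is expanded by the remote shell
ethtool "\$nic" |& grep -i 'Link' | awk '{print \$3}'
ENDSSH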
#!/bin/bash
count2=1
declare -a input
input=( "$@" )
echo " Hostname passed by user is " "${input[0]}"
HOST="${input[0]}"
while [ $# -gt $count2 ]
do
sed -i 's/VALUE/'"${input[$count2]}"'/g' ./check.sh
sshpass -p '<pass>' scp ./check.sh user@"$HOST":/home/user/check.sh
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" "sh /home/user/check.sh && rm -rf /home/user/check.sh"
sed -i 's/'"${input[$count2]}"'/VALUE/g' ./check.sh
((count2++))
done
Found another solution to this issue; it is working for me now!
I wrote the entire logic that needs to be executed remotely in a check.sh file. The script above substitutes the user input into check.sh with sed, copies the file to the remote server via scp, executes it there, and removes it from the remote server after successful execution. After the ssh call, it uses sed again to change the user input back to its original placeholder value in the local copy.
This makes it a dynamic script that works for multiple servers.
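The check.sh referenced here is not shown in the post; a minimal sketch of what it might contain, assuming VALUE is the placeholder that the sed commands swap for the NIC name (it is run with sh, so the bash-only |& is avoided):
#!/bin/sh
# VALUE is replaced with the user-supplied NIC name by sed before upload
nic="VALUE"
echo " Connected to $(hostname) "
ethtool "$nic" 2>&1 | grep -i 'Link' | awk '{print $3}'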

Imap: Get multiple sections from email with one command

I crossposted this in the curl mailing list (curl-users).
I know how to get multiple parts of an email header like this:
curl --url "imaps://imap.ionos.de/INBOX;UID=216;SECTION=HEADER.FIELDS%20(DATE%20FROM%20TO%20SUBJECT)" -u "user:password"
But is it possible to get multiple sections (received date and text in my case) at once? I want to combine these two commands:
1: Fetch email receiving date
curl --url "imaps://imap.ionos.de/INBOX;UID=216;SECTION=HEADER.FIELDS%20(DATE)" -u "user:password"
2: Fetch email text
curl --url "imaps://imap.ionos.de/INBOX;UID=216;SECTION=TEXT" -u "user:password"
I tried something like this:
curl --url "imaps://imap.ionos.de/INBOX;UID=216;SECTION=HEADER.FIELDS%20(DATE);SECTION=TEXT" -u "user:password"
–––––––––––––––––––––
Thank you diciu, this works just fine. I have an additional question:
When I add --output output.txt, the file only contains the last FETCH (SECTION=TEXT in this case). Is it possible to write all FETCH results to one output file? I know I can use >> output.txt to append, but I already use that to write a log file within the same command.
Here is my full command:
curl --url "imaps://imap.ionos.de/INBOX;UID=295;SECTION=HEADER.FIELDS%20(DATE)" "imaps://imap.ionos.de/INBOX;UID=295;SECTION=TEXT" -u "user:password" --output output.txt --verbose >> logfile.log 2>&1
You have to add the two requests one after another, e.g.
curl --url "imaps://imap.ionos.de/INBOX;UID=216;SECTION=HEADER.FIELDS%20(DATE)" "imaps://imap.ionos.de/INBOX;UID=216;SECTION=TEXT" -u "user:password"
curl will issue the two commands on the same session; that is, after SELECT-ing the folder it will issue two FETCH commands.
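As for the follow-up about --output: one option (a sketch, not from the thread) is to let both FETCH results go to stdout and redirect stdout to the output file; the --verbose trace goes to stderr, so it can still be appended to the log separately:
curl "imaps://imap.ionos.de/INBOX;UID=295;SECTION=HEADER.FIELDS%20(DATE)" \
     "imaps://imap.ionos.de/INBOX;UID=295;SECTION=TEXT" \
     -u "user:password" --verbose > output.txt 2>> logfile.log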

BCP command in Linux doesn't return the proper error code

In Linux/Unix-based systems, whenever we execute a command in the shell and then echo $?, the return value is 0 on success and non-zero when the command fails.
Now suppose I am using BCP, the bulk copy utility for SQL Server, and the command fails because of an error in the source file. For example, I execute a bcp command like this:
/opt/bin/bcp <tablename> in <source_file> -S -U -P -D
and it says "0 Rows Copied", possibly due to errors in the source file. When I then echo $?, the value returned is still 0.
Is there a way to capture the return value as 1 when an error is encountered?
Thanks.
BCP doesn't document any return value. That's a shortcoming. What you can do is redirect the output to a file and look for error indications (probably the text "Error").
To add a bit more detail to TT.'s answer, I write the bcp output to another file and then grep through it. My code looks like...
/opt/mssql-tools/bin/bcp "$TABLENAME" in $f \
-S $DEST_IP \
-U $USER -P $PASSWORD \
-d $DEST_DB \
-c \
-t "\t" \
-e $EXPORT_STAGE/$TABLENAME/$TABLENAME.$(basename $f).bcperror.log \
| tee $EXPORT_STAGE/$TABLENAME/$TABLENAME.$(basename $f).bcprun.log
# Preserve the bcp output so we can search it later in the script for
# error messages (done manually, since bcp does not return error codes)
echo -e "\n Checking for error messages in bcp output\n"
if grep -q "Error" $EXPORT_STAGE/$TABLENAME/$TABLENAME.$(basename $f).bcprun.log
then
echo -e "\n\nError: error message detected in bcp process output, exiting..."
exit 255
fi
# Safe to delete, since tee already printed the output to stdout (where it can be logged)
rm -f $EXPORT_STAGE/$TABLENAME/$TABLENAME.$(basename $f).bcprun.log

Why is checking a URL failing when run through Icinga?

I created my own command to check a specific URL
define command{
        command_name    check_url
        command_line    /usr/lib/nagios/plugins/check_http -f follow -H '$HOSTNAME$' -I '$HOSTADDRESS$' -u '$ARG1$'
}
If I run my command from the command line, it works:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u http://www.example.com/server-status
HTTP OK: HTTP/1.1 200 OK - 4826 bytes in 0.011 second response time |time=0.010625s;;;0.000000 size=4826B;;;0
But when run through Icinga, I'm getting
HTTP WARNING: HTTP/1.1 404 NOT FOUND - 314 bytes in 0.011 second response time
My guess is that for the check_http plugin's -u option you should provide the URL path relative to the server name, not the whole URL.
For example:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u /server-status
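So in whatever service uses the check_url command, pass only the path as the argument. A sketch, assuming a standard Nagios/Icinga 1.x service definition:
define service{
        use                     generic-service
        host_name               www.example.com
        service_description     server-status
        check_command           check_url!/server-status
}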
Your manual test is not equivalent to your command definition.
The distinction with -H/-I is subtle, but very important.
When I have problems like this, where Icinga abstracts away exactly how it is executing the command, I find it helpful to find out precisely what Icinga is executing. I would accomplish this as follows:
Move check_http to a temporary location
# mv /usr/lib/nagios/plugins/check_http /usr/lib/nagios/plugins/check_http_actual
Make a bash script that Icinga will call instead of the actual check_http script
# vi /usr/lib/nagios/plugins/check_http
In that file, create this simple bash script, which simply echoes the command-line arguments it was called with, then exits:
#!/bin/bash
echo "$@"
Then of course, make that bash script executable:
# chmod +x /usr/lib/nagios/plugins/check_http
Now in Icinga, run the check_http command. At that point, the return status shown in the Icinga web interface will show exactly how Icinga is calling check_http. Seeing the raw command, it should be obvious what Icinga is doing wrong. Once you correct Icinga's mistake, you can simply move the original check_http script back into place:
# mv /usr/lib/nagios/plugins/{check_http_actual,check_http}
