How can I pull all Android archive (APK) files at the same time using the Android Debug Bridge?
This is what I use for a single APK file:
adb pull /data/app/com.imo.android.imoim-1/base.apk E:\APK
but I want to pull all the files at once.
To pull all the files from the directory, you can use the following command if you run adb with root permissions:
adb pull /data/app/.
If you are not using adb with root permissions, you need to copy all the files to a "non-root" location first and pull them from there:
c:\> adb shell su
root#device: / # mkdir /sdcard/data/copy_apks
root#device: / # cp -R /data/app/. /sdcard/data/copy_apks/
root#device: / # exit
c:\> adb pull /sdcard/data/copy_apks/. c:\to\your\location
c:\> adb shell rm -R /sdcard/data/copy_apks
Note: you need a rooted device, or you need to be in a custom recovery.
A Linux, non-root solution.
Create a shell script pm_apk.sh with content:
adb shell pm list packages -f | cut -d':' -f2- | rev | cut -d'=' -f2- | rev | awk '{print "adb pull "$0}' > adb_pm.list
Run bash pm_apk.sh to generate the adb_pm.list file (assuming your shell is bash). Example:
$ cat adb_pm.list
adb pull /data/app/~~FOOBAR==/com.azure.FOOBAR-FOOBAR==/base.apk
adb pull /data/app/~~FOOBAR==/com.adobe.FOOBAR_FOOBAR==/base.apk
....
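The cut/rev pair in pm_apk.sh strips the leading package: prefix and the trailing =package.name suffix that pm list packages -f prints. A self-contained demonstration on a fabricated sample line (the path is made up for illustration):

```shell
# Fabricated `pm list packages -f` output line (hypothetical path):
line='package:/data/app/~~AbC==/com.example.app-XyZ==/base.apk=com.example.app'
# Drop the "package:" prefix, then (via rev) drop everything after the
# last '=' without disturbing the '==' sequences inside the path itself:
echo "$line" | cut -d':' -f2- | rev | cut -d'=' -f2- | rev
# -> /data/app/~~AbC==/com.example.app-XyZ==/base.apk
```

Reversing the string first turns "strip after the last =" into a simple "strip before the first =", which cut can do directly.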
Then create this Python script, adb_pm_unique_output.py, with the following content (the commented-out parts are for debugging, to ensure there are no duplicate output filenames when pulling):
pkg_names = []
out_names = []
pkg_name = ''
open('adb_pull.sh', 'w').close()
with open('adb_pull.sh', 'a') as f_run:
    with open('adb_pm.list') as f:
        for lines in f.readlines():
            apks = lines.rstrip()
            apk_name = apks.split('/')[-1]
            pkg_name_raw = apks.split('/')[-2]
            if '-' in pkg_name_raw:
                pkg_name = pkg_name_raw.split('-')[0]
                out_name = pkg_name + '_' + apk_name
            else:  # overlay
                out_name = apk_name
                #print(apk_name)
                if pkg_name_raw != 'overlay':
                    #print('what is this no overlay and no - ?' + repr(pkg_name_raw)) # DMService ...etc
                    pkg_name = pkg_name_raw
            if apk_name != (pkg_name_raw + '.apk'):
                pass # print(apk_name, '##', pkg_name_raw)
            #if pkg_name in pkg_names:
            #    print(pkg_name)
            #    print('WARNING. Duplicated pkg name!')
            if out_name in out_names:
                #print(out_name)
                print('WARNING. Duplicated out name!')
            pkg_names.append(pkg_name)
            out_names.append(out_name)
            #print(lines.strip() + " '" + out_name + "'")
            f_run.write(lines.strip() + " '" + out_name + "'\n")
Run python3 adb_pm_unique_output.py to generate the adb_pull.sh script. Example:
$ cat adb_pull.sh
adb pull /data/app/~~FOOBAR==/com.azure.FOOBAR-FOOBAR==/base.apk 'com.azure.FOOBAR_base.apk'
adb pull /data/app/~~FOOBAR==/com.adobe.FOOBAR_FOOBAR==/base.apk 'com.adobe.FOOBAR_base.apk'
...
Then run that script with time parallel :::: adb_pull.sh to pull the APKs.
time reports the elapsed time; parallel cut the run from ~3 minutes (sequential) to ~2 minutes 34 seconds (parallel) in my case.
Note that a few APKs, such as those under /vendor/overlay/, fail with Permission denied because there is no root.
Adding weather to the status bar in i3 can be done in several ways, including:
py3status
piping i3status to a custom bash script
i3status does not allow including arbitrary shell commands in its configuration file. Setting up a Python environment on NixOS requires further configuration, and when I pipe i3status through a script, I lose the color formatting. How do I preserve the color formatting and add weather without adding additional i3 extensions?
Add a shell script at /etc/nixos/i3/weather.sh (modified from Reddit user olemartinorg):
#!/bin/sh
# weather.sh
# shell script to prepend i3status with weather
i3status -c /etc/nixos/i3/i3status.conf | while :
do
read line
weather=$(cat ~/.weather.cache)
weather_json='"name":"weather","color":"#FFFFFF", "full_text":'
weather_json+=$(echo -n "$weather" | python -c 'import json,sys; print(json.dumps(sys.stdin.read()))')
weather_json+='},{'
# Inject our JSON into $line after the first [{
line=${line/[{/[{$weather_json}
echo "$line" || exit 1
done
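The key trick is the ${line/[{/…} substitution, which splices the weather JSON object in right after the opening [{ of each i3status line. A minimal, self-contained illustration with a fabricated status line:

```shell
# Fabricated i3status output line and a simplified weather object:
line=',[{"name":"cpu","full_text":"42%"}]'
weather_json='"name":"weather","color":"#FFFFFF","full_text":"Sunny"},{'
# Replace the first "[{" with "[{<weather object>},{", pushing the
# original first status item one slot to the right:
line=${line/[{/[{$weather_json}
echo "$line"
# -> ,[{"name":"weather","color":"#FFFFFF","full_text":"Sunny"},{"name":"cpu","full_text":"42%"}]
```

Because the injected object is valid JSON inside the existing array, i3bar renders it like any other status block, colors included.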
Create a cron job in your NixOS configuration.nix:
services.cron = {
enable = true;
systemCronJobs = [
"*/5 * * * * USERNAME . /etc/profile; curl -s wttr.in/Osnabrueck?format=3 > ~/.weather.cache"
];
};
Replace "Osnabrueck" with your city name and USERNAME with your username. This creates a file ~/.weather.cache containing the local weather as a one-liner.
Finally, update i3.conf, replacing i3status with the path to your script:
bar {
status_command /etc/nixos/i3/weather.sh
tray_output primary
}
Run nixos-rebuild switch and restart i3 ($mod+Shift+R). You should now see the weather at the bottom (or wherever your i3 status bar is displayed).
I have a number of project folders that all got their modified dates set to the current date & time somehow, despite my not having touched anything in the folders. I'm looking for a batch applet or some other utility that will let me drop a folder (or folders) on it and have each folder's modified date set to the modified date of the most recently modified file in the folder. Can anyone please tell me how I can do this?
In case it matters, I'm on OS X Mavericks 10.9.5. Thanks!
If you open a Terminal and use stat, you can get the modification times of all the files and their corresponding names, separated by a colon, as follows:
stat -f "%m:%N" *
Sample Output
1476985161:1.png
1476985168:2.png
1476985178:3.png
1476985188:4.png
...
1476728459:Alpha.png
1476728459:AlphaEdges.png
You can now sort that, take the first line, and remove the timestamp so that you have the name of the newest file:
stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:
Sample Output
result.png
Now you can put that in a variable and use touch to set the modification times of all the other files to match it:
newest=$(stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:)
touch -r "$newest" *
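For comparison, the same newest-file lookup on Linux, since GNU stat uses different flags than the BSD stat shipped with OS X (a sketch; cut -f2- also keeps filenames containing colons intact):

```shell
# GNU/Linux equivalent of the BSD `stat -f "%m:%N"` pipeline above.
# %Y = modification time in seconds since the epoch, %n = file name.
newest=$(stat -c "%Y:%n" * | sort -rn | head -1 | cut -f2- -d:)
# Stamp every file with the newest file's modification time:
touch -r "$newest" *
```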
So, if you wanted to be able to do that for any given directory name, you could make a little script in your HOME directory called setMod like this:
#!/bin/bash
# Check that exactly one parameter has been specified - the directory
if [ $# -eq 1 ]; then
# Go to that directory or give up and die
cd "$1" || exit 1
# Get name of newest file
newest=$(stat -f "%m:%N" * | sort -rn | head -1 | cut -f2 -d:)
# Set modification times of all other files to match
touch -r "$newest" *
fi
Then make it executable (necessary only once) with:
chmod +x $HOME/setMod
Now, you can set the modification times of all files in /tmp/freddyFrog like this:
$HOME/setMod /tmp/freddyFrog
Or, if you prefer, you can call it from AppleScript with:
do shell script "$HOME/setMod " & nameOfDirectory
The nameOfDirectory will need to look Unix-y (like /Users/mark/tmp) rather than Apple-y (like Macintosh HD:Users:mark:tmp).
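The conversion itself can be sketched in shell; this naive version just drops the volume name and swaps colons for slashes (inside AppleScript, POSIX path of is the robust way to do it):

```shell
# Naive HFS-to-POSIX path conversion: drop the volume name ("Macintosh HD")
# and turn the remaining colons into slashes. Breaks on folder names that
# contain slashes or colons; AppleScript's `POSIX path of` handles those.
hfs='Macintosh HD:Users:mark:tmp'
posix="/$(echo "$hfs" | cut -d: -f2- | tr ':' '/')"
echo "$posix"
# -> /Users/mark/tmp
```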
I'm building two scripts which, combined, will fully uninstall a program (Microsoft Lync) on Mac OS X. I need to be able to swap from an account with root access (this account initially executes the first script) to the user who is currently logged in.
This is necessary because the second script needs to be executed not only by the logged-in user, but from that user's shell. The two scripts are named Uninstall1.sh and Uninstall2.sh in this example.
Uninstall1.sh (executed by root user):
#!/bin/bash
#commands ran by root user
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd
swapUser
}
function swapUser () {
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
cp /<directory>/Uninstall2.sh${currentUser}
su -l ${currentUser} -c "<directory>/${currentUser}/testScript.sh";
}
(<directory> is actually declared in the scripts, but for the sake of privacy I've excluded it.)
In the above script, I run some basic commands as the root user to remove the app, and kill cfprefsd to prevent having to reboot the machine. I then call the swapUser function, which dynamically identifies the currently signed-in user account and assigns it to the variable currentUser (in our environment it's safe to assume only one user is logged into the computer at a time). I'm not sure yet whether I'll need the cp <directory>/Uninstall2.sh portion, but it is intended to solve a different problem.
The main problem is getting the script to properly handle the su command. I use the -l flag to simulate a user login, which is necessary because it not only substitutes to the logged-in user account but also launches a new shell as that user. I need -l because OS X doesn't allow modifying another user's keychain from an admin account (the admin account in question has root access, but isn't root, nor does it switch to root). -c is intended to execute the copied script, which is as follows:
Uninstall2.sh (needs to be executed by the locally logged-in user):
#!/bin/bash
function rmFiles () {
# rm -rf commands
# rm -rf commands
certHandler1
}
function certHandler1 () {
myCert=($(security dump-keychain | grep <string> | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the number of items in the array; there are usually duplicates
for ((i = 0; i < cLen; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, and delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
In the above script, rmFiles kicks the script off by removing some files and directories from the user's ~/Library directory. This works without a problem, assuming the su from Uninstall1.sh properly executes this second script using the local user's shell.
I then use security dump-keychain to dump the local user's keychain, find a specific certificate, and assign all matches to the myCert array (because there may be duplicates of this item in a user's keychain). Each item in the array is then deleted, after which a few more keychain items are dynamically found and deleted.
What I've been finding is that the first script either properly sus to the logged-in user it finds, at which point the second script doesn't run at all, or the second script runs as the root user and thus doesn't properly delete the keychain items of the logged-in user it's supposed to su to.
Sorry for the long post, thanks for reading, and I look forward to some light shed on this situation!
Revision
I managed to find a way to achieve all that I am trying to do in a single bash script, rather than two. I did this by having the main script create another bash script in /tmp, then executing that as the local user. I'll provide it below to help anybody else who may need this functionality:
Credit to the following source for the code on how to create another bash script within a bash script:
http://tldp.org/LDP/abs/html/here-docs.html - Example 19.8
#!/bin/bash
# Declare the desired directory and file name of the script to be created. I chose /tmp because I want this file to be removed upon next start-up.
OUTFILE=/tmp/fileName.sh
(
cat <<'EOF'
#!/bin/bash
# Remove user-local Microsoft Lync files and/or directories
function rmFiles () {
rm -rf ~/Library/Caches/com.microsoft.Lync
rm -f ~/Library/Preferences/com.microsoft.Lync.plist
rm -rf ~/Library/Preferences/ByHost/MicrosoftLync*
rm -rf ~/Library/Logs/Microsoft-Lync*
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ Data
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ History
rm -f ~/Library/Keychains/OC_KeyContainer*
certHandler1
}
# Need to build in a loop that determines the count of the output to determine whether or not we need to build an array or use a simple variable.
# Some people have more than one 'PRIVATE_STRING' certificate item in their keychain - this will loop through and delete each one. This may or may not be necessary for other applications of this script.
function certHandler1 () {
# Replace 'PRIVATE_STRING' with whatever you're searching for in Keychain
myCert=($(security dump-keychain | grep PRIVATE_STRING | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the number of items in the array
for ((i = 0; i < cLen; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, then delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
exit 0
EOF
) > $OUTFILE
# -----------------------------------------------------------
# Commands to be ran as root
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd # killing cfprefsd mitigates the necessity to reboot the machine to clear cache.
chainScript
}
function chainScript () {
if [ -f "$OUTFILE" ]
then
# Make the file in /tmp executable. This is necessary for /tmp as a non-root user cannot access files in this directory.
chmod 755 $OUTFILE
# Dynamically identify the user currently logged in. This may need some tweaking if multiple User Accounts are logged into the same computer at once.
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1);
su -l ${currentUser} -c "bash ${OUTFILE}"
else
echo "Problem in creating file: \"$OUTFILE\""
fi
}
# This method also works for generating
#+ C programs, Perl programs, Python programs, Makefiles,
#+ and the like.
# Commence the domino effect.
rootCMDs
exit 0
# -----------------------------------------------------------
Cheers!
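A common pitfall in scripts like the above is how variables expand inside the su -c command string: with double quotes, the calling (root) shell expands $currentUser before su runs, which is usually what you want. A small sketch with a hypothetical path:

```shell
# With double quotes, $currentUser is expanded by the calling (root) shell,
# so su receives the finished command string. Single quotes would pass the
# literal text '$currentUser' through to the target user's shell instead.
# The script path below is hypothetical, for illustration only.
currentUser=alice
cmd="bash /Users/$currentUser/Uninstall2.sh"
echo "$cmd"
# -> bash /Users/alice/Uninstall2.sh
# Real invocation (requires root): su -l "$currentUser" -c "$cmd"
```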
I am trying to make a backup using crontab on a Linux machine.
I have a short script:
#!/bin/bash
export ORACLE_HOME=<oracle_home_directory>
DATE=`date +%F_%H-%M-%S`
echo $DATE
/u01/app/oracle/product/11.2.0/dbhome_1/bin/expdp system/oramanager full=Y parallel=4 directory=data_pump_dir dumpfile=prod1-ecmdb1-$DATE.dmp logfile=prod-ecmdb1-$DATE.log compression=all
I have placed this script in crontab as such:
02 17 * * * cd /u01/app/oracle/admin/ecmdb1/dpdump/ && /u01/app/oracle/admin/ecmdb1/dpdump/backup.sh > /tmp/test.out
But the script does not run; the logs say:
UDE-12162: operation generated ORACLE error 12162
ORA-12162: TNS:net service name is incorrectly specified
If I run the whole script manually, it works fine, but it doesn't work under cron. Do I need to set up variables?
Set ORACLE_HOME and ORACLE_SID:
export ORACLE_HOME=/u01/oracle/product/......
export ORACLE_SID=dbname
Add export ORACLE_SID=<...>
And make sure the cron job is set up under the same user, not root.
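Cron runs jobs with a minimal environment, so none of the variables your interactive shell picks up from /etc/profile or oraenv are present. The usual fix is to make the script self-contained by exporting everything the Oracle tools need at the top (a sketch with placeholder values):

```shell
#!/bin/bash
# Cron provides almost no environment, so export everything the Oracle
# tools need at the top of the script (placeholder values shown).
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ecmdb1
export PATH="$ORACLE_HOME/bin:$PATH"
# With ORACLE_SID set, expdp can resolve the local instance and the
# ORA-12162 "net service name is incorrectly specified" error goes away.
```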
Here's the working script, after using all the help from the community.
Create a Bash file (I used nano here):
nano DBBackUp.sh
Copy the code below and edit the contents in angle brackets:
#!/bin/bash
export ORACLE_HOME=<OracleHomeDirectory>
export ORACLE_SID=<SID>
DATE=`date +%d%m%Y`
DATETIME=`date +%F_%H%M%S`
echo $DATETIME | tee DBBackUp_$DATE.log
echo "Exporting..." | tee -a DBBackUp_$DATE.log
$ORACLE_HOME/bin/expdp <SCHEMA/PASSWORD> directory=DP dumpfile=BACKUP$DATE.dmp | echo export.log | cat export.log >> DBBackUp_$DATE.log
echo "Compressing..." | tee -a DBBackUp_$DATE.log
zip BACKUP$DATE.zip BACKUP$DATE.dmp >> DBBackUp_$DATE.log
echo "Deleting..." | tee -a DBBackUp_$DATE.log
rm BACKUP$DATE.dmp 2>&1 | tee -a DBBackUp_$DATE.log | cat DBBackUp_$DATE.log
Create a cron job:
00 13 28 04 * /home/oracle/DBBackUp/DBBackUp.sh
The above cron job runs at 01:00 PM on the 28th of April.
The system creates export.log in the DP directory.
Here, all the files are in the same location (the DP directory).
Make sure the oracle user has the necessary permissions and ownership for the shell script.
Has anyone installed OpenTSDB on Ubuntu 15.04? If so, please share the steps to follow. I have tried a number of times but have not been able to install it properly.
You need to write a tcollector, for example:
Step 1: create the metrics:
./tsdb mkmetric proc.loadavg.1m proc.loadavg.5m
Step 2: create a collector as a shell script or on the command line:
cat >loadavg-collector.sh <<\EOF
#!/bin/bash
set -e
while true; do
  awk -v now=`date +%s` -v host=`hostname` \
    '{ print "put proc.loadavg.1m " now " " $1 " host=" host;
       print "put proc.loadavg.5m " now " " $2 " host=" host }' /proc/loadavg
  sleep 15
done | nc -w 30 host.name.of.tsd PORT
EOF
Then:
chmod +x loadavg-collector.sh
nohup ./loadavg-collector.sh &
It will collect data every 15 seconds for the metrics proc.loadavg.1m and proc.loadavg.5m. You will then be able to see the graphs in the OpenTSDB web interface.
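Each line the collector emits uses OpenTSDB's telnet-style put format: put <metric> <unix-timestamp> <value> <tag>=<value>. A quick standalone sketch of building one such line (fabricated sample values; the real collector reads them from /proc/loadavg):

```shell
# Build one telnet-style OpenTSDB "put" line from fabricated values.
now=1476985161
host=myhost
load1m=0.42
echo "put proc.loadavg.1m $now $load1m host=$host"
# -> put proc.loadavg.1m 1476985161 0.42 host=myhost
```

Piping lines in this shape to the TSD's listening port (as the nc command in the collector does) is all it takes to ingest data points.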
For details, please check the link below:
http://opentsdb.net/docs/build/html/user_guide/quickstart.html