If I have a username and password pair, how can I verify that they are actually correct on a Linux system? I know I can use passwd to do so, but I want to do it programmatically using C.
I should not require root privileges (so reading the shadow file is not an option).
Thank you.
If you are using PAM, you might be able to make use of checkpassword-pam.
The manual has an example command (with debugging) which should give you a good place to start.
echo -e "username\0password\0timestamp\0" \
| checkpassword-pam -s SERVICE \
--debug --stdout -- /usr/bin/id 3<&0
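checkpassword-pam follows the checkpassword interface, so it should exit 0 when the credentials are accepted and non-zero when they are rejected, which means a small wrapper can turn it into a yes/no check. A minimal, untested sketch (SERVICE is a placeholder for whatever PAM service name your system uses, and /bin/true is only there so a successful check exits 0):
#!/bin/sh
# Hypothetical wrapper, usage: ./checkpw.sh USER PASSWORD
user="$1"
pass="$2"
if printf '%s\0%s\0timestamp\0' "$user" "$pass" \
    | checkpassword-pam -s SERVICE -- /bin/true 3<&0
then
    echo "credentials accepted"
else
    echo "credentials rejected"
fi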
This is a simple example that checks the password with some Python code. No root privileges are needed.
#!/usr/bin/python3
# simple linux password checker using only
# the standard python library
import os, pty

def check_pass(user, passw):
    # returns: 0 if check ok
    #          1 check failed
    #          2 account locked
    if type(passw) is str:
        passw = passw.encode()
    pid, fd = pty.fork()
    # su with a fake shell (/bin/echo) which prints '-c OK' on success
    if not pid:
        # child
        argv = ('su', '-c', 'OK', '-s', '/bin/echo', user)
        os.execlp(argv[0], *argv)
        return  # should never be reached
    okflg = False
    locked = False
    while True:
        try:
            data = os.read(fd, 1024)
            ##print('data:', data, flush=True)
        except OSError:
            break
        if not data:
            break
        data = data.strip()
        if data == b'Password:':
            os.write(fd, passw + b'\r\n')
        elif data.endswith(b'OK'):
            okflg = True
            break
        elif data.find(b'locked') > -1:
            # show that the account is locked
            locked = True
            print(data, flush=True)
            break
    os.close(fd)
    # check result from su and okflg
    if (not os.waitpid(pid, 0)[1]) and okflg:
        return 0
    return 2 if locked else 1

if __name__ == '__main__':
    print(check_pass('xx', 'yy'))
Please forgive a bash newbie for any silly questions.
I am really stuck here and I would love to know how this works and what I am doing wrong.
I have written this script, which is supposed to extract the syslog servers based on protocol.
The input is as follows:
sys syslog {
include "destination remote_server {tcp(\"10.1.0.100\" port (514));tcp(\"192.168.1.5\" port (514));udp(\"192.168.1.60\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};"
remote-servers {
mysyslog {
host 192.168.1.1
}
remotesyslog1 {
host 192.168.1.2
}
remotesyslog2 {
host 192.168.1.3
local-ip 10.0.0.50
}
}
}
From this I would like to end up with something like:
tcp=10.1.0.100
tcp=192.168.1.50
udp=192.168.1.60
udp=192.168.1.1
udp=192.168.1.2
udp=192.168.1.3
So I started with a bash script to parse the output.
#!/bin/bash
#Save output to file
syslogoutput=$(< /home/patrik/input)
echo "Testing variable:"
echo $syslogoutput
echo ""
#Declare array
tcpservers=()
echo $syslogoutput | while read line ; do
    matches=($(echo $line | grep -Po '(tcp\("[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}")'))
    #If number of matches is greater than 0, save them to tcpservers
    if [ ${#matches[@]} -gt 0 ]; then
        tcpservers=("${matches[@]}")
        #Echoing matches
        echo "Testing matches in loop:"
        for i in "${matches[@]}"; do
            echo $i
        done
    fi
done
echo "Testing output:"
for i in "${tcpservers[@]}"; do
    echo $i
done
I expected something like this:
...input file separated by line breaks
Testing matches in loop:
tcp("10.1.0.100"
tcp("192.168.1.5"
Testing output:
tcp("10.1.0.100"
tcp("192.168.1.5"
But instead I get:
sys syslog { include "destination remote_server {tcp(\"10.1.0.100\" port (514));tcp(\"192.168.1.5\" port (514));udp(\"192.168.1.60\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};" remote-servers { mysyslog { host 192.168.1.1 } remotesyslog1 { host 192.168.1.2 } remotesyslog2 { host 192.168.1.3 local-ip 10.0.0.50 } } }
Testing matches in loop:
tcp("10.1.0.100"
tcp("192.168.1.5"
Testing output:
So on to my questions:
Why isn't tcpservers=("${matches[@]}") working?
Why isn't the output echoed with line breaks?
Why does bash scripting make me want to jump from a tall building every time I try it?
/Patrik
Don't pipe into the loop, as the pipe starts the loop in a subshell, and variables from a subshell don't propagate into the parent shell. Use a here-string instead:
while read line ; do
# ...
done <<< "$syslogoutput"
You also overwrite the tcpservers on each iteration. Change the assignment to
tcpservers+=("${matches[@]}")
# ^
# |
# add to an array, don't overwrite
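Putting both fixes together, a corrected version of the loop could look like this (an untested sketch that keeps your variable names and grep pattern):
tcpservers=()
while read -r line ; do
    matches=($(echo "$line" | grep -Po '(tcp\("[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}")'))
    if [ ${#matches[@]} -gt 0 ]; then
        tcpservers+=("${matches[@]}")   # append instead of overwrite
    fi
done <<< "$syslogoutput"                # here-string, so no subshell

echo "Testing output:"
for i in "${tcpservers[@]}"; do
    echo "$i"
done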
What I'm trying to achieve is a Capistrano 3 task that greps a log file on all servers - this would save a lot of time, as we have a lot of servers and doing it manually, or even scripted but sequentially, takes ages.
I have a rough-around-the-edges task that actually works, except when one of the servers returns nothing for the grep; in that case the whole command falls over.
Hence I'm wondering if there is a way to make capture accept empty returns.
namespace :admin do
  task :log_grep, :command, :file do |t,args|
    command = args[:command] || 'ask for a command'
    file = args[:file] || 'log_grep_results'
    outs = {}
    on roles(:app), in: :parallel do
      outs[host.hostname] = capture(:zgrep, "#{command}")
    end
    File.open(file, 'w') do |fh|
      outs.each do |host,out|
        fh.write(out)
      end
    end
  end
end
Should anyone else come to this question, here's the solution: raise_on_non_zero_exit: false (the same option can be passed to the capture(:zgrep, ...) call in the task above).
What I wanted:
resp = capture %([ -f /var/run/xxx/xxx.pid ] && echo "ok")
error:
SSHKit::Command::Failed: [ -f /var/run/xxx/xxx.pid ] && echo "ok" exit status: 1
[ -f /var/run/xxx/xxx.pid ] && echo "ok" stdout: Nothing written
[ -f /var/run/xxx/xxx.pid ] && echo "ok" stderr: Nothing written
solution:
resp = capture %([ -f /var/run/xxx/xxx.pid ] && echo "ok"), raise_on_non_zero_exit: false
# resp => ""
So the workaround I did was to start adding what I'm calling Capistrano utility scripts to the repo, which Capistrano then runs. Each script is just a wrapper around a grep plus some logic to output something if the result is empty.
Capistrano code:
namespace :utils do
  task :log_grep, :str, :file, :save_to do |t,args|
    command_args = "#{args[:str]} #{args[:file]}"
    outs = {}
    on roles(:app), in: :parallel do
      outs[host.hostname] = capture(:ruby, "#{fetch(:deploy_to)}/current/bin/log_grep.rb #{args[:str]} #{args[:file]}")
    end
    file = args[:save_to]
    file ||= 'log_grep_output'
    File.open(file, 'w') do |fh|
      outs.each do |host,out|
        s = "#{host} -- #{out}\n"
        fh.write(s)
      end
    end
  end
end
Ruby script log_grep.rb:
a = `zgrep #{ARGV[0]} #{ARGV[1]}`
if a.empty?
  puts 'Nothing Found'
else
  puts a
end
I have a number of Raspberry Pis attached to various networks distributed over a large area, so this will have to be a remote process. I need to expand the file system so it fills the full 8 GB SD card (currently 2 GB). We use Puppet to distribute updates, but I am not sure what the sequence of commands is.
I know this can be done locally using raspi-config, but I will need to create a script or send a command to do this over the network.
raspi-config is a shell script. The section on memory expansion is listed below. Here are the basic steps:
1. Verify that the filesystem to be expanded is on an SD card, not an external device, and that there are no more than the two normal partitions.
2. Determine the exact partition and desired partition size. (parted)
3. Change the size of the partition in the partition table. This usually requires a reboot to take effect. (fdisk)
4. Expand the filesystem to the complete size of the partition (which was resized in step 3). This is set up as a shell script to run after reboot. (resize2fs)
Because there are minor differences in the size of SD cards, even different models from the same manufacturer, it would be extremely difficult to give a more specific set of commands.
#!/bin/sh
# Part of raspi-config http://github.com/asb/raspi-config
# ...
if ! [ -h /dev/root ]; then
whiptail --msgbox "/dev/root does not exist or is not a symlink. Don't know how to expand" 20 60 2
return 0
fi
ROOT_PART=$(readlink /dev/root)
PART_NUM=${ROOT_PART#mmcblk0p}
if [ "$PART_NUM" = "$ROOT_PART" ]; then
whiptail --msgbox "/dev/root is not an SD card. Don't know how to expand" 20 60 2
return 0
fi
# NOTE: the NOOBS partition layout confuses parted. For now, let's only
# agree to work with a sufficiently simple partition layout
if [ "$PART_NUM" -ne 2 ]; then
whiptail --msgbox "Your partition layout is not currently supported by this tool. You are probably using NOOBS, in which case your root filesystem is already expanded anyway." 20 60 2
return 0
fi
LAST_PART_NUM=$(parted /dev/mmcblk0 -ms unit s p | tail -n 1 | cut -f 1 -d:)
if [ "$LAST_PART_NUM" != "$PART_NUM" ]; then
whiptail --msgbox "/dev/root is not the last partition. Don't know how to expand" 20 60 2
return 0
fi
# Get the starting offset of the root partition
PART_START=$(parted /dev/mmcblk0 -ms unit s p | grep "^${PART_NUM}" | cut -f 2 -d:)
[ "$PART_START" ] || return 1
# Return value will likely be error for fdisk as it fails to reload the
# partition table because the root fs is mounted
fdisk /dev/mmcblk0 <<EOF
p
d
$PART_NUM
n
p
$PART_NUM
$PART_START

p
w
EOF
ASK_TO_REBOOT=1
# now set up an init.d script
cat <<\EOF > /etc/init.d/resize2fs_once &&
#!/bin/sh
### BEGIN INIT INFO
# Provides: resize2fs_once
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5 S
# Default-Stop:
# Short-Description: Resize the root filesystem to fill partition
### END INIT INFO
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting resize2fs_once" &&
resize2fs /dev/root &&
rm /etc/init.d/resize2fs_once &&
update-rc.d resize2fs_once remove &&
log_end_msg $?
;;
*)
echo "Usage $0 start" >&2
exit 3
;;
esac
EOF
chmod +x /etc/init.d/resize2fs_once &&
update-rc.d resize2fs_once defaults &&
if [ "$INTERACTIVE" = True ]; then
whiptail --msgbox "Root partition has been resized.\nThe filesystem will be enlarged upon the next reboot" 20 60 2
fi
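If you want the same effect without the whiptail prompts (for example something you can push out and run via Puppet), a stripped-down, non-interactive sketch of the steps above might look like this. It is untested, assumes the stock layout where the root filesystem is the second and last partition on /dev/mmcblk0, and still needs a reboot before the filesystem itself can be grown:
#!/bin/sh
# Non-interactive sketch of the expand-rootfs steps (assumes /dev/mmcblk0p2).
PART_NUM=2
PART_START=$(parted /dev/mmcblk0 -ms unit s p | grep "^${PART_NUM}" | cut -f 2 -d:)
[ -n "$PART_START" ] || exit 1

# Recreate the partition with the same start sector and the default (maximum)
# end sector. fdisk will usually complain that it cannot re-read the table
# while the root filesystem is mounted; that is expected and is why a reboot
# is needed.
fdisk /dev/mmcblk0 <<EOF
d
$PART_NUM
n
p
$PART_NUM
$PART_START

w
EOF

# After the reboot, grow the filesystem to fill the partition, either via the
# resize2fs_once init script shown above or directly with:
#   resize2fs /dev/mmcblk0p2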
I'm currently writing a shell script that reads a Vagrantfile and bootstraps it (in a nutshell ;) )
But I'm hitting a wall with the following piece of code:
TEST=()
while read result; do
TEST+=(`echo ${result}`)
done <<< `awk '/config.vm.define[ \s]\"[a-z]*\"[ \s]do[ \s]\|[a-zA-Z_]*\|/, /end/ { print }' Vagrantfile`
echo "${TEST[1]}"
When I run this awk pattern against a Vagrantfile with two machines defined (config.vm.define), both blocks are found.
The output:
config.vm.define "web" do |web|
web.vm.box = "CentOs"
web.vm.box_url = "http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130731.box"
web.vm.hostname = 'dev.local'
web.vm.network :forwarded_port, guest: 90, host: 9090
web.vm.network :private_network, ip: "22.22.22.11"
web.vm.provision :puppet do |puppet|
puppet.manifests_path = "puppet/manifests"
puppet.manifest_file = "web.pp"
puppet.module_path = "puppet/modules"
puppet.options = ["--verbose", "--hiera_config /vagrant/hiera.yaml", "--parser future"]
end
config.vm.define "db" do |db_mysql|
db_mysql.vm.box = "CentOs"
db_mysql.vm.box_url = "http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130731.box"
db_mysql.vm.hostname = 'db.mysql.local'
db_mysql.vm.network :private_network, ip: "22.22.22.22"
db_mysql.vm.network :forwarded_port, guest: 3306, host: 3306
db_mysql.vm.provision :puppet do |puppet|
puppet.manifests_path = "puppet/manifests"
puppet.manifest_file = "db.pp"
puppet.module_path = "puppet/modules"
puppet.options = ["--verbose", "--hiera_config /vagrant/hiera.yaml", "--parser future"]
end
But I can't seem to get them into an array nicely. What I want is for the TEST array to contain two elements, each holding one config.vm.define block as its value.
E.g.
TEST[0] = 'config.vm.define "web" do |web|
.... [REST OF THE BLOCK CONTENT] ...
end'
TEST[1] = 'config.vm.define "db" do |db_mysql|
.... [REST OF THE BLOCK CONTENT] ...
end'
The output of echo "${TEST[1]}" is nothing; echo "${TEST[0]}" returns the whole block shown above.
I played with IFS / RS / FS but I can't seem to get the output I want.
A solution might be to write the two blocks to two separate files (blk1 and blk2) as:
awk '
/config.vm.define[[:space:]]\"[a-z]*\"[[:space:]]do[[:space:]]\|[a-zA-Z_]*\|/{f=1; i++}
f{print $0 > "blk"i}
/end/ {f=0}' Vagrantfile
and then later read these two files into the bash array as
IFS= TEST=( $(cat <"blk1") $(cat <"blk2") )
Note:
The regex \s only works with recent versions of gawk (it works with version 4.1, but not with version 3.1.8).
For gawk version 3.1.8, use [[:space:]] instead.
For gawk version 4.1, the regex \s does not work inside brackets ([\s]). Use either config.vm.define[[:space:]] or config.vm.define\s.
Update
An alternative could be to insert an artificial separator between the blocks, for instance the string ###. Then you could do
IFS= TEST=()
while IFS= read -r -d '#' line ; do
TEST+=($line)
done < <(awk '
/config.vm.define[[:space:]]\"[a-z]*\"[[:space:]]do[[:space:]]\|[a-zA-Z_]*\|/{f=1; i++}
f{print }
/end/ {f=0; print "###"}' Vagrantfile)
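To check that each block really landed in its own element you can print the element count and the blocks afterwards; for the Vagrantfile above this should report two blocks:
echo "captured ${#TEST[@]} block(s)"
for i in "${!TEST[@]}"; do
    echo "=== block $i ==="
    printf '%s\n' "${TEST[$i]}"
done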
I'm writing a script using multiple dialog screens in bash. One of the functions of the script should be killing specific processes. I load all my PID values in one array called pid:
pid = ('1234' '1233' '1232' '1231' '1230')
Then I present the user with a dialog checklist that contains a list of the processes.
After they select some of them dialog returns the checklist entry number, for example 0,2,4.
My initial plan was to store the selected entries in a second array and then use them to pull the specific PIDs out of the first array (in this case 1234, 1232 and 1230) so I can kill those processes, but so far nothing I've tried has worked.
Does anyone have a better solution? I want the easiest way of killing processes based on selections made by the user at the dialog checklist.
Here is the function in question:
stop_tunnel() {
    local tunnels
    local pid
    declare -a tunnels
    declare -a pid
    #this is executed on a remote system in the real script
    ps aux | grep -w ssh > $_temp
    awk -F "ssh" '{print "ssh" $2}' $_temp > $_temp1
    awk '{print $2}' $_temp > $_temp2
    IFS='
'
    tunnels=( $( < $_temp1 ) )
    pid=( $( < $_temp2 ) )
    dialog --checklist "Select tunnel to stop:" 10 72 0 \
        0 "${tunnels[0]}" off \
        1 "${tunnels[1]}" off \
        2 "${tunnels[2]}" off \
        3 "${tunnels[3]}" off \
        4 "${tunnels[4]}" off \
        2>$_temp
    nr=$( < $_temp )
    dialog --title " Tunnel stop " --msgbox "\nYou stopped these tunnels: ${nr[@]}" 6 44
}
The nr array holds the users selection. And I wanted to use that to pull specific members out of the pid array.
Select might be what you need:
select p in ${pid[@]} ; do echo "kill" $p && break; done
Note that blanks around the assignment won't work:
# wrong:
pid = ('1234' '1233' '1232' '1231' '1230')
# right:
pid=('1234' '1233' '1232' '1231' '1230')
To allow killing multiple processes in sequence:
select p in ${pid[@]} ; do
    if [[ -n $p ]]
    then echo "kill" $p
    else break
    fi
done
1) 1234
2) 1233
3) 1232
4) 1231
5) 1230
#? 3
kill 1232
#? 4
kill 1231
#? 6
The echo is of course just for testing.
A hint for the user that an invalid index will terminate the process killing seems appropriate. A second approach could be an explicit termination entry:
pid=('1234' '1233' '1232' '1231' '1230' 'terminate')
which you would handle with the break.
If you just want to iterate over the selections being made:
sel=(0 2 4)
for n in ${sel[@]} ; do echo kill ${pid[$n]}; done
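Tying that back to the dialog call in your function (untested: $_temp is assumed to hold dialog's space-separated selection, as described in your question):
IFS=' ' read -r -a nr < "$_temp"    # e.g. nr=(0 2 4)
for n in "${nr[@]}"; do
    echo kill "${pid[$n]}"          # drop the echo once the mapping looks right
done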
Update regarding your comment:
I don't have dialog installed, but I guess zenity is similar. There you capture the output of a list selection:
selection=($(zenity --list --text "kill something" --multiple --column "kill" --separator " " --checklist --column ps TRUE foo FALSE bar FALSE baz TRUE fozboa))
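The captured selection array can then be looped over the same way (foo/bar/baz are just the placeholder entries from the zenity call above):
for p in "${selection[@]}"; do
    echo kill "$p"    # again, echo only while testing
done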