I am using Ansible 2.4.2.0 and want to set ansible_ssh_user to user1 and then run the commands on the remote box as user2. How can we achieve this? I have tried using:
become: yes
become_user: user2
But this is not working; it fails saying user1 does not have privileges to execute commands on the remote machine as user2.
Can someone please help?
I have had continuing problems with this, and it always ends up being a struggle because of permissions issues.
1) First of all refer to this specific bit:
https://docs.ansible.com/ansible/2.5/user_guide/become.html#becoming-an-unprivileged-user
Usually adding this line to ansible.cfg (under the [defaults] section):
allow_world_readable_tmpfiles = true
should get you around your problem.
2) The next option is a bit of a hack: simply run the command directly as that user through the shell (see the sketch after this list):
"sudo -u USER bash -c 'COMMAND TO RUN HERE'"
3) Finally, the last option: do you actually need to run that command as that user? Can it simply be done with become: yes (formerly sudo: true) plus module parameters like owner: otheruser?
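As a rough sketch of option 2, here is how that shell workaround might look as an ad-hoc command (the host pattern web1 and the user names are placeholders, not from any real inventory):

# connect over SSH as user1, then switch to user2 inside the command itself instead of relying on become
ansible web1 -u user1 -m shell -a "sudo -u user2 bash -c 'id -un'"

If that prints user2, the sudoers configuration allows the hop and the same sudo -u wrapping can be used inside a shell task.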
Hope this helps
I'm passing a variable to my .dacpac but the text received is not what I passed. Example command:
sqlpackage /v:TextTest="abc]123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
My variable $(TextTest) comes out as "abc]]123" instead of the original "abc]123".
Is there anything I can do to prevent SqlPackage from corrupting my input variables before they are passed to the .dacpac scripts?
Unfortunately, I don't think there is a good answer. This appears to be a very old bug. I'm seeing references to this issue going back 10 years.
Example A: https://web.archive.org/web/20220831180208/https://social.msdn.microsoft.com/forums/azure/en-US/f1d153c2-8f42-4148-b313-3449075c612f/sql-server-database-project-sqlcmd-variables-with-closing-square-brackets
They mention a "workaround" in the post, but they link to a Microsoft Connect issue which no longer exists and is not available on archive.org.
My best guess is that the "workaround" is to generate the deploy script rather than publishing, and then manually modify the variable value in the script...which is not really a workaround if you are working on a build/release pipeline or any sort of automation.
I tried testing this to see if it would make any difference using Microsoft.SqlServer.Dac.DacServices.Publish() directly (via dbatools PowerShell module), but unfortunately the problem exists there as well.
I also tested every keyboard-accessible symbol, and the closing square bracket is the only character it seems to have a problem with.
Another option, though still not great, is to generate the deployment script, then execute it using SQLCMD.EXE.
So for example this would work:
sqlpackage /Action:Script `
    /DeployScriptPath:script.sql `
    /SourceFile:foobar.dacpac `
    /TargetConnectionString:'Server=localhost;Database=foobar;Uid=sa;Password=yourStrong(!)Password' `
    /p:CommentOutSetVarDeclarations=True

SQLCMD -S 'localhost' -d 'foobar' -U 'sa' -P 'yourStrong(!)Password' `
    -i .\script.sql `
    -v TextTest = "abc]123" `
    -v DatabaseName = "foobar"
/p:CommentOutSetVarDeclarations=True - This setting is key; without it, the values you pass to SQLCMD with -v are overridden by the :setvar declarations baked into the file. Just make sure you specify ALL variables, not just the one you need, so open the generated script to see what is commented out and make sure you are supplying everything it expects.
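A quick way to see which variables the generated script expects is to search it for the commented-out declarations (file name taken from the example above):

# list the :setvar declarations that CommentOutSetVarDeclarations=True commented out
grep -i ':setvar' script.sql        # or in PowerShell: Select-String ':setvar' .\script.sql

Each line it prints corresponds to a -v value you should supply on the SQLCMD call.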
It's not a great option...but it's at least scriptable and doesn't require manual intervention.
We're trying to make our deployment scripts as generic as possible. Is it possible to have Capistrano 3 prompt for the server address rather than setting it in the config files?
So far I have a Capistrano task that does:
namespace :config do
  task :setup do
    ask(:db_user, 'db_user')
    ask(:db_pass, 'db_pass')
    ask(:db_name, 'db_name')
    ask(:db_host, 'db_host')
    ask(:application, 'application')
    ask(:web_server, 'server')
    setup_config = <<-EOF
#{fetch(:rails_env)}:
  adapter: postgresql
  database: #{fetch(:db_name)}
  username: #{fetch(:db_user)}
  password: #{fetch(:db_pass)}
  host: #{fetch(:db_host)}
    EOF
    on roles(:app) do
      execute "mkdir -p #{shared_path}/config"
      upload! StringIO.new(setup_config), "#{shared_path}/config/database.yml"
    end
  end
end
and in my production.rb file I have:
set :application, "#{fetch(:application)}"
set :server_name, "#{fetch(:application)}.#{fetch(:server)}"
set :app_port, "80"
But when I do cap production config:setup to run the config task, I get an error asking me for a password. If I hard-code the server address in the production.rb file it works fine... how can I resolve this?
Thanks
I hope that someone else offers a more elegant solution, but if not:
I've done this in some cases with environment variables. If you want, you can also use a Makefile to simplify some of the env combinations.
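For example, production.rb could read the address from the environment with something like set :server_name, ENV.fetch("SERVER_ADDRESS") (the variable name here is just an illustration, not anything Capistrano defines), and the task is then invoked with the values on the command line:

# hypothetical variable names; production.rb reads them via ENV.fetch
SERVER_ADDRESS=203.0.113.10 APPLICATION=myapp cap production config:setup

A small Makefile target can wrap that invocation so the usual combinations of variables don't have to be retyped every time.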
I have written the following code:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
And I am getting this error:
array.sh: 3: array.sh: Syntax error: "(" unexpected
From what I found on Google, this might be because Ubuntu no longer uses "#!/bin/bash" by default... but then again, I added that line and the error still appears.
I have also tried executing bash array.sh, but no luck! It prints blank.
My Ubuntu version is: Ubuntu 14.04
Given that script:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
and assuming:
It's in a file in your current directory named array.sh;
You've done chmod +x array.sh;
You have a sufficiently new version of bash installed in /bin/bash (you report that you have 4.3.8, which is certainly new enough); and
You execute it correctly
then that should work without any problem.
If you execute the script by typing
./array.sh
the system will pay attention to the #!/bin/bash line and execute the script using /bin/bash.
If you execute it by typing something like:
sh ./array.sh
then it will execute it using /bin/sh. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you exactly the error message that you report.
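If you want to confirm what /bin/sh points to on your system, a quick check (on a stock Ubuntu you should see it resolving to dash):

ls -l /bin/sh          # typically prints something like: /bin/sh -> dash
readlink -f /bin/sh    # prints /bin/dash if dash is the system shell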
The shell used to execute a script is not affected by which shell you're currently using or by which shell is configured as your login shell in /etc/passwd or equivalent (unless you use the source or . command).
In your own answer, you say you fixed the problem by using chsh to change your default login shell to /bin/bash. That by itself should not have any effect. (And /bin/bash is the default login shell on Ubuntu anyway; had you changed it to something else previously?)
What must have happened is that you changed the command you use from sh ./array.sh to ./array.sh without realizing it.
Try running sh ./array.sh and see if you get the same error.
Instead of using sh to run the script, try the following command:
bash ./array.sh
I solved the problem, somewhat miraculously. I found a link describing how to make the issue go away using the following commands, and after executing them the issue was resolved.
chsh -s /bin/bash adhikarisubir
grep ^adhikarisubir /etc/passwd
FYI, "adhikarisubir" is my username.
After executing these commands, bash array.sh produced the desired result.
This is the process we perform manually.
$ sudo su - gvr
[gvr/DB:DEV3FXCU]/home/gvr>
$ ai_dev.env
Gateway DEV3 $
$ gw_report integrations long
report is ******
Now I am attempting to automate this process using a shell script:
#!/bin/ksh
sudo su - gvr
. ai_dev3.env
gw_report integrations long
but this is not working. It gets stuck after entering the environment, at this prompt: (Gateway DEV3 $)
You're not running the same commands in the two examples - gw_report long != gw_report integrations long. Maybe the latter takes much longer (or hangs).
Also, in the original code you run ai_dev.env and in the second you source it. Any variables set when running a script are gone when returning from that script, so I suspect this accounts for the different behavior.
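One more gotcha worth checking: when sudo su - gvr sits on its own line inside a script, it starts an interactive shell, and the lines that follow only run after that shell exits, and not as gvr. A minimal sketch that keeps everything in a single shell started as gvr (file and command names taken from the question):

#!/bin/ksh
# source the environment and run the report inside ONE shell owned by gvr,
# so anything ai_dev3.env sets is still in effect when gw_report runs
sudo su - gvr -c '. ai_dev3.env && gw_report integrations long'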
My organization is using Nagios with the check_mk plugin to monitor our nodes. My question is: is it possible run a manual check from the command line? It is important, process-wise, to be able to test a configuration change before deploying it.
For example, I've prepared a configuration change which uses the ps.perf check type to check the number of httpd processes on our web servers. The check looks like this:
checks = [
( ["web"], ALL_HOSTS, "ps.perf", "Number of httpd processes", ( "/usr/sbin/httpd", 1, 2, 80, 100 ) )
]
I would like to test this configuration change before committing and deploying it.
Is it possible to run this check via the command line, without first adding it to main.mk? I'm envisioning something like:
useful_program -H my.web.node -c ps.perf -A /usr/sbin/httpd,1,2,80,100
I don't see any way to do this in the check_mk documentation, but I'm hoping there is a way to achieve it.
Thanks!
That is easy to check.
Just make your config changes and then run:
cmk -nv HOSTNAME
That will run all the checks without acting on the results (-n) and show their output (-v),
so you can see the same results you would later see in the GUI.
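For the example host in the question, that might look like this (the grep is optional, just to narrow the output to the new httpd service):

cmk -nv my.web.node | grep -i httpd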
List the check:
$ check_mk -L | grep ps.perf
If ps.perf shows up in that list, then run the following command:
$ check_mk --checks=ps.perf -I Hostname