We're trying to make our deployment scripts as generic as possible. Is it possible to have Capistrano 3 prompt for the server address rather than setting it in the config files?
So far I have a Capistrano task that does:
namespace :config do
  task :setup do
    ask(:db_user, 'db_user')
    ask(:db_pass, 'db_pass')
    ask(:db_name, 'db_name')
    ask(:db_host, 'db_host')
    ask(:application, 'application')
    ask(:web_server, 'server')
    setup_config = <<-EOF
#{fetch(:rails_env)}:
  adapter: postgresql
  database: #{fetch(:db_name)}
  username: #{fetch(:db_user)}
  password: #{fetch(:db_pass)}
  host: #{fetch(:db_host)}
    EOF
    on roles(:app) do
      execute "mkdir -p #{shared_path}/config"
      upload! StringIO.new(setup_config), "#{shared_path}/config/database.yml"
    end
  end
end
and in my production.rb file I have:
set :application, "#{fetch(:application)}"
set :server_name, "#{fetch(:application)}.#{fetch(:server)}"
set :app_port, "80"
But when I run cap production config:setup to execute the config task, I get an error asking me for a password. If I hard-code the server address in the production.rb file it works fine... how can I resolve this?
Thanks
I hope that someone else offers a more elegant solution, but if not:
I've done this in some cases with environment variables. If you want, you can also use a Makefile to simplify some of the env combinations.
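For instance, a rough sketch of that approach (the variable names and host value below are hypothetical; the Capistrano config would read them with something like ENV.fetch):
# Hypothetical names: config/deploy/production.rb could contain
#   server ENV.fetch('DEPLOY_HOST'), roles: %w{app web db}
export DEPLOY_HOST=203.0.113.10
export DB_HOST=db.internal.example
export DB_USER=app
bundle exec cap production config:setup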
I'm passing a variable to my .dacpac but the text received is not what I passed. Example command:
sqlpackage /v:TextTest="abc]123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
My variable $(TextTest) comes out as "abc]]123" instead of the original "abc]123".
Is there anything I can do to prevent SqlPackage from corrupting my input variables before they are passed to the .dacpac scripts?
Unfortunately, I don't think there is a good answer. This appears to be a very old bug. I'm seeing references to this issue going back 10 years.
Example A: https://web.archive.org/web/20220831180208/https://social.msdn.microsoft.com/forums/azure/en-US/f1d153c2-8f42-4148-b313-3449075c612f/sql-server-database-project-sqlcmd-variables-with-closing-square-brackets
They mention a "workaround" in the post, but they link to a Microsoft Connect issue which no longer exists and is not available on archive.org.
My best guess is that the "workaround" is to generate the deploy script rather than publishing, and then manually modify the variable value in the script...which is not really a workaround if you are working on a build/release pipeline or any sort of automation.
I also tested whether it would make any difference to call Microsoft.SqlServer.Dac.DacServices.Publish() directly (via the dbatools PowerShell module), but unfortunately the problem exists there as well.
I also tested every keyboard-accessible symbol, and the closing square bracket is the only character it seems to have a problem with.
Another option, though still not great, is to generate the deployment script, then execute it using SQLCMD.EXE.
So for example this would work:
sqlpackage /Action:Script `
    /DeployScriptPath:script.sql `
    /SourceFile:foobar.dacpac `
    /TargetConnectionString:'Server=localhost;Database=foobar;Uid=sa;Password=yourStrong(!)Password' `
    /p:CommentOutSetVarDeclarations=True

SQLCMD -S 'localhost' -d 'foobar' -U 'sa' -P 'yourStrong(!)Password' `
    -i .\script.sql `
    -v TextTest = "abc]123" `
    -v DatabaseName = "foobar"
/p:CommentOutSetVarDeclarations=True - This setting is key; otherwise the values you pass to SQLCMD with -v will be overridden by the :setvar declarations in the generated script. Just make sure you supply ALL of the variables, not only the one you need, so open the generated script, look at what was commented out, and make sure you are providing everything that is required.
It's not a great option...but it's at least scriptable and doesn't require manual intervention.
I am trying to do the following tutorial:
https://itnext.io/docker-mongodb-authentication-kubernetes-node-js-75ff995151b6
However, in that tutorial they use hard-coded values in the mongo init.js file that is placed within the docker-entrypoint-initdb.d folder.
I would like to use environment variables that come from my CI/CD system (GitLab). Does anyone know how to pass environment variables to the init.js file? I have tried several things, for example using an init.sh shell script instead, but without any success.
If I run the shell version manually I can get it working, because I call mongo with --eval and pass the values; however, the docker-entrypoint script is called automatically, so I do not control how it is invoked and I do not know what I could do to achieve what I want.
Thank you in advance and regards.
You can make use of a shell script to read the environment variables and create the user.
initdb.d/init-mongo.sh
set -e

mongo <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({
  user: '$MONGO_INITDB_USER',
  pwd: '$MONGO_INITDB_PWD',
  roles: [{
    role: 'readWrite',
    db: '$MONGO_INITDB_DATABASE'
  }]
})
EOF
docker-compose.yml
version: "3.7"
services:
  mongodb:
    container_name: "mongodb"
    image: mongo:4.4
    hostname: mongodb
    restart: always
    volumes:
      - ./data/mongodb/mongod.conf:/etc/mongod.conf
      - ./data/mongodb/initdb.d/:/docker-entrypoint-initdb.d/
      - ./data/mongodb/data/db/:/data/db/
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=development
      - MONGO_INITDB_USER=mongodb
      - MONGO_INITDB_PWD=mongodb
    ports:
      - 27017:27017
    command: [ "-f", "/etc/mongod.conf" ]
Now you can connect to the development database using mongodb as both the username and the password.
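As a quick sanity check from the host (assuming the port mapping above and the legacy mongo shell installed locally), something like this should authenticate as that user:
mongo "mongodb://mongodb:mongodb@localhost:27017/development" --eval 'db.runCommand({ connectionStatus: 1 })'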
Use a shell script (e.g. mongo-init.sh) to access the variables. You can still run JavaScript code inside it, as below.
set -e

mongo <<EOF
use admin
db.createUser({
  user: '$MONGO_ADMIN_USER',
  pwd: '$MONGO_ADMIN_PASSWORD',
  roles: [{
    role: 'readWrite',
    db: 'dummydb'
  }]
})
EOF
A shebang line is not necessary at the beginning, as this file will be sourced.
Until recently, I simply used a .sh shell script in the docker-entrypoint-initdb.d directory to access ENV variables, much like @Lazaro's answer.
It is now possible to access environment variables from JavaScript files using process.env, provided the file is run with the newer mongosh instead of mongo, which is now deprecated.
However, according to the docs (see 'Initializing a fresh instance'), mongosh is only used for .js files in docker-entrypoint-initdb.d if you are using version 6 or greater. I can confirm this is working using the mongo:6 image tag.
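As a rough check of that behaviour (the variable name here is made up), you can confirm that process.env is visible under mongosh in the mongo:6 image:
# Hypothetical variable; an init .js file in docker-entrypoint-initdb.d could
# read the same value with process.env.MONGO_APP_PASSWORD
docker run --rm -e MONGO_APP_PASSWORD=secret mongo:6 \
  mongosh --nodb --eval 'print(process.env.MONGO_APP_PASSWORD)'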
You can use envsubst.
If the command is not found, install it on your runner's host if you use shell runners; otherwise install it within the Docker image used by the runner, or directly in your script.
(NB: your link isn't freely accessible, so I can't adapt this to your situation :p)
Example:
init.js.template:
console.log('$GREET $PEOPLE $PUNCTUATION')
console.log('Pipeline from $CI_COMMIT_BRANCH')
.gitlab-ci.yml:
variables:
  GREET: "hello"
  PEOPLE: "world"
  PUNCTUATION: "!"
# ...
script:
  - (envsubst < path/to/init.js.template) > path/to/init.js
  - cat path/to/init.js
Output:
$ (envsubst < init.js.template) > init.js
$ cat init.js
console.log('hello world !')
console.log('Pipeline from master')
In the end, the answer is that you can use a .sh file instead of a .js file within the docker-entrypoint-initdb.d folder. Within the shell script you can use environment variables directly. I could not get it working at first because I had a typo and the environment variables were not created properly.
I prefer the following method because it allows you to keep a normal .js file which you can lint, instead of embedding the JavaScript in a string.
Create a dockerfile like so:
FROM mongo:5.0.9
USER mongodb
WORKDIR /docker-entrypoint-initdb.d
COPY env_init_mongo.sh env_init_mongo.sh
WORKDIR /writing
COPY mongo_init.js mongo_init.js
WORKDIR /db/data
At the top of your mongo_init.js file, you can just define the variables you need:
db_name = DB_NAME
schema_version = SCHEMA_VERSION
and then in your env_init_mongo.sh file, you can just replace the strings you need with environment variables or add lines to the top of the file:
mongo_init="/writing/mongo_init.js"
sed "s/SCHEMA_VERSION/$SCHEMA_VERSION/g" -i $mongo_init
sed "s/DB_NAME/${MONGO_INITDB_DATABASE}/g" -i $mongo_init
sed "1s/^/use ${MONGO_INITDB_DATABASE}\n/" -i $mongo_init # add to top of file
mongo < $mongo_init
I am using Ansible 2.4.2.0 and want to connect with ansible_ssh_user as user1 and then run the commands on the remote box as user2. How can we achieve this? I have tried using:
become: yes
become_user: user2
But this is not working; it says 'user1 does not have privileges to execute commands on the remote machine as user2'.
Can someone please help?
I have continuing problems with this and it always ends up being a struggle due to nonsense with permissions.
1) First of all refer to this specific bit:
https://docs.ansible.com/ansible/2.5/user_guide/become.html#becoming-an-unprivileged-user
And usually this line in ansible.cfg:
allow_world_readable_tmpfiles = true
should get you around your problem
2) The next option is a bit of a hack and is to simply run it directly as a user:
"sudo -u USER bash -c 'COMMAND TO RUN HERE'"
3) Finally, the last option: do you actually need to run that command as that user? Could it simply be done using sudo: true together with things like owner: otheruser?
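As a quick ad-hoc sanity check of the escalation path itself (the inventory file, host pattern, and user names below are placeholders), something like the following should print user2 when become is working:
# Placeholders throughout; -K prompts for the become (sudo) password
ansible webservers -i inventory.ini -u user1 \
  --become --become-user=user2 -K \
  -m command -a 'whoami'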
Hope this helps
I have a Database project in Visual Studio that I am attempting to deploy automatically to a test environment nightly. To accomplish this I am using TFS which leverages a PowerShell script to run "SqlPackage.exe" to deploy any changes that have occurred during the day.
Some of my procs contain logic that is run inside a script that is part of an agent job step and contains the following code (in dynamic SQL):
$(ESCAPE_SQUOTE(JOBID))
When deploying changes that affect this proc, I get the following issue:
SQL Execution error: A fatal error occurred. Incorrect syntax was
encountered while $(ESCAPE_SQUOTE( was being parsed.
This is a known issue; it appears that this is simply not supported. It seems to be a result of the "SQLCmd" command misinterpreting the $( characters as a variable:
"override the value of a SQL command (sqlcmd) variable used during a
publish action."
So how do I get around this? It seems to be a major limitation of "sqlcmd" that you can't disable variable substitution; I don't see a parameter that supports that...
Update 1
Seems as though you can disable variable substitution in "sqlcmd" by feeding it the "-x" argument (Source, Documentation):
-x (disable variable substitution)
Still don't see a way to do this from "SqlPackage.exe" though.
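(For illustration, a sketch of that flag with made-up server, database, and file names: with -x, sqlcmd passes the $(ESCAPE_SQUOTE(JOBID)) text through to the server untouched instead of treating it as a scripting variable.)
sqlcmd -S localhost -d MyDb -i create_agent_proc.sql -x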
It seems that sqlcmd looks for the $( as a token, so separating those two characters is good enough. You can do this with a dynamic query that does nothing more than break the query into two strings:
DECLARE @query nvarchar(256) = N'... $' + N'(ESCAPE_SQUOTE(JOBID)) ...';
EXEC sp_executesql @query;
One way to get around this is to refactor the "$(ESCAPE_SQUOTE(JOBID))" string into a scalar function, then set up a PowerShell script to directly invoke the "Sqlcmd" command with the "-x" parameter to "Create/Alter" said function before running "SqlPackage.exe".
Looks something like this in PowerShell:
# NOTE: the $( below is backtick-escaped so PowerShell does not expand it as a
# subexpression; $database is still interpolated into the here-string.
$sql = @"
USE $database
GO
CREATE FUNCTION [dbo].[GetJobID] ()
RETURNS NVARCHAR(MAX)
AS
BEGIN
    RETURN '`$(ESCAPE_SQUOTE(JOBID))'
END
GO
"@;
Sqlcmd -S $servername -U $username -P $password -Q $sql -x;
This is a pretty poor workaround, but it does accomplish the task. Really hoping for something better.
I propose another workaround.
My job has a step running: DTEXEC.exe /SERVER "$(ESCAPE_NONE(SRVR))"
I just have to add a SQLCMD variable before:
:setvar SRVR "$(ESCAPE_NONE(SRVR))"
This way the token is treated as the SQLCMD variable $(SRVR) and is replaced by the requested value.
I'm trying to copy my entire production database (which is in Mongo) to my staging environment, so I'm building a rake task. First I need to connect to the production environment so that I can access all my models in production (Model.all.each...), but I don't know how to reproduce the production environment. I know that in the console I can do 'export RAILS_ENV=heroku_production', but I don't know how to do it inside a rake task. This is what I'm trying for now, but it does not work: when I print Rails.env it prints "development", so I'm a bit lost.
namespace :db do
  namespace :sync_production_staging do
    desc "Copy production database to staging"
    task :staging => :environment do
      system "export RAILS_ENV=heroku_production"
      ap Rails.env
      ap User.all
    end
  end
end
I have a script that copies my database from Heroku to my local machine, and it's a really straightforward process. I am sorry that this is PG and not Mongo, but I am sure this should help.
# lib/tasks/db.rake
namespace :db do
  desc "Import most recent database dump"
  task :import_from_prod => :environment do
    puts 'heroku run pg:backups capture --app sushi-prod'
    restore_backup 'sushi-prod'
  end

  def path_to_heroku
    ['/usr/local/heroku/bin/heroku', '/usr/local/bin/heroku'].detect { |path| File.exists?(path) }
  end

  def heroku(command, site)
    `GEM_HOME='' BUNDLE_GEMFILE='' GEM_PATH='' RUBYOPT='' #{path_to_heroku} #{command} -a #{site}`
  end

  def restore_backup(site = 'sushi-prod')
    dump_file = "#{Rails.root}/tmp/postgres.dump"
    unless File.exists?(dump_file)
      pgbackups_url = heroku('pg:backups public-url -q', site).chomp
      puts "curl -o #{dump_file} #{pgbackups_url}"
      system "curl -o #{dump_file} '#{pgbackups_url}'"
    end
    database_config = YAML.load(File.open("#{Rails.root}/config/database.yml")).with_indifferent_access
    dev_db = database_config[Rails.env]
    system "pg_restore -d #{dev_db[:database]} -c #{dump_file}".gsub(/\s+/, ' ')
    puts
    puts "'rm #{dump_file}' to redownload postgres dump."
    puts "Done!"
  end
end
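A minimal way to invoke it, assuming the rake file above lives in lib/tasks and the Heroku CLI is already authenticated:
bundle exec rake db:import_from_prod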