Capistrano 3: way to handle different deploy flows

I am new to Capistrano, but I have succeeded in getting it working for a custom deploy. I have to deploy Moodle to a cluster with an auto-scaling group in AWS and one or more static servers.
It works great! I have managed to alter the flow with custom tasks to put my site into maintenance mode and clear caches without problems:
namespace :moodle do
  desc 'Save config.php from current release directory'
  task :'save-config' do
    on roles(:web) do
      execute :sudo, :cp, shared_path.join('config.php'), release_path
      execute :sudo, :chown, 'www-data-aulatp:www-data', release_path.join('config.php')
    end
  end

  desc 'Copy config.php to release directory'
  task :'restore-config' do
    on roles(:web) do
      execute :sudo, :cp, shared_path.join('config.php'), release_path
      execute :sudo, :chown, 'www-data-aulatp:www-data', release_path.join('config.php')
    end
  end

  desc 'Enable maintenance mode on Moodle site'
  task :'enable-maintenance' do
    on roles(:admin) do
      execute :sudo, '-u', 'www-data-aulatp', '/usr/bin/php7.0', current_path.join('admin', 'cli', 'maintenance.php'), '--enable'
    end
  end

  desc 'Disable maintenance mode on Moodle site'
  task :'disable-maintenance' do
    on roles(:admin) do
      execute :sudo, '-u', 'www-data-aulatp', '/usr/bin/php7.0', current_path.join('admin', 'cli', 'maintenance.php'), '--disable'
    end
  end

  desc 'Purge all internal Moodle caches'
  task :'purge-caches' do
    on roles(:admin) do
      execute :sudo, '-u', 'www-data-aulatp', '/usr/bin/php7.0', current_path.join('admin', 'cli', 'purge_caches.php')
    end
  end
end
before 'deploy:starting', 'moodle:save-config'
before 'deploy:updated', 'moodle:enable-maintenance'
after 'deploy:updated', 'moodle:restore-config'
after 'deploy:finished', 'moodle:disable-maintenance'
after 'deploy:finished', 'moodle:purge-caches'
The problem is, sometimes I need to make quick deploys: a small patch that does not need enabling and disabling maintenance mode or purging caches.
Would that be possible with Capistrano?
As a possible alternative, I have also been looking at capistrano-patch, a way to deploy a simple patch to every server without running the full deploy process. It looks like it has not been updated for some years, and I suppose it will not work with Capistrano 3. Any similar ideas for doing hotfixes with Capistrano 3?

You could use environment variables to state that something should be bypassed. E.g., you could run your deploy with NO_CACHE_PURGE=true cap production deploy and then wrap the relevant code with something like:
desc 'Purge all internal Moodle caches'
task :'purge-caches' do
  if ENV['NO_CACHE_PURGE'].nil?
    on roles(:admin) do
      execute :sudo, '-u', 'www-data-aulatp', '/usr/bin/php7.0', current_path.join('admin', 'cli', 'purge_caches.php')
    end
  else
    puts 'Skipping purge-caches due to env variable'
  end
end
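A quick patch deploy would then look something like this (NO_MAINTENANCE is a hypothetical second flag, assuming you guard the maintenance tasks the same way):
# Full deploy, with maintenance mode and cache purging
cap production deploy
# Quick patch deploy, skipping the guarded tasks
NO_CACHE_PURGE=true NO_MAINTENANCE=true cap production deploy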
If you know Ruby well, and you want to automate this, you could probably write a helper method to simplify some of this (the following code is untested; it defines a no-op task when the variable is set, so that any before/after hooks referring to the task still work):
def disableable_task(name, env_name:, &block)
  if ENV[env_name].nil?
    task(name, &block)
  else
    task(name) do
      puts "Skipping #{name} due to #{env_name} environment variable being set"
    end
  end
end

desc 'Purge all internal Moodle caches'
disableable_task :'purge-caches', env_name: 'NO_CACHE_PURGE' do
  on roles(:admin) do
    execute :sudo, '-u', 'www-data-aulatp', '/usr/bin/php7.0', current_path.join('admin', 'cli', 'purge_caches.php')
  end
end

Related

How to create a SQL-language anonymous block in Snowflake with dynamic DDL statements

I have over two dozen tasks in our Snowflake database, all named in a similar pattern ending with a number (example: TSK_x, where x = 1, 2, ..., 27).
I am trying to write a procedure or anonymous block in Snowflake (without using a JavaScript stored procedure) to generate ALTER TASK statements in descending task-number order and execute them from inside the procedure, like:
ALTER TASK TSK_27 RESUME;
ALTER TASK TSK_26 RESUME;
...
ALTER TASK TSK_1 RESUME;
The task (TSK_1) is the parent task and needs to be enabled last.
As background, that script will be included in Jenkins as part of our build. Our Jenkins setup does not allow multiple SQL statements in one file, so I am thinking of a stored procedure like the one mentioned above.
Any help/suggestions will be much appreciated. I am new to Snowflake.
Query to "generate a descending order task number statements"
First execute -
show tasks;
created_on                      name      state
2022-06-02 12:53:23.662 -0700   T1        started
2022-06-13 20:11:11.032 -0700   TASK_1    started
2022-06-13 20:24:20.211 -0700   TASK_10   started
2022-06-13 20:11:17.883 -0700   TASK_2    started
2022-06-13 20:24:10.871 -0700   TASK_2A   suspended
2022-06-13 20:11:22.769 -0700   TASK_3    started
2022-06-13 20:11:26.497 -0700   TASK_4    started
2022-06-13 20:11:30.725 -0700   TASK_5    started
2022-06-13 20:11:34.765 -0700   TASK_6    started
2022-06-13 20:11:38.313 -0700   TASK_7    started
Query (change the ORDER BY clause as needed; add DESC at the end for descending order):
select "name" as name,"state" from table(result_scan(LAST_QUERY_ID()))
where regexp_like("name",'TASK_[[:digit:]]+$')
order by substr("name",1,4), to_number(substr("name",6));
NAME      state
TASK_1    started
TASK_2    started
TASK_3    started
TASK_4    started
TASK_5    started
TASK_6    started
TASK_7    started
TASK_10   started
Anonymous block to resume the tasks in descending order (note that show tasks; must run immediately before the block, since the cursor reads its output via RESULT_SCAN(LAST_QUERY_ID())):
show tasks;
EXECUTE IMMEDIATE $$
DECLARE
  p_tsk string;
  c1 CURSOR FOR select "name" as name from table(result_scan(LAST_QUERY_ID()))
    where regexp_like("name",'TASK_[[:digit:]]+$')
    order by substr("name",1,4), to_number(substr("name",6)) desc;
BEGIN
  for record in c1 do
    p_tsk := record.name;
    execute immediate 'alter task '||:p_tsk||' resume';
  end for;
  RETURN p_tsk;
END;
$$
;
To recursively resume all dependent tasks tied to a root task in a simple tree of tasks, call the SYSTEM$TASK_DEPENDENTS_ENABLE function rather than enabling each task individually (using ALTER TASK … RESUME).
Example:
select system$task_dependents_enable('mydb.myschema.mytask');

Validating the SSIS config files configured on the SQL Server Agent Jobs

Recently I faced an issue with a SQL Server Agent job. The error was "Login Time Out Expired".
I analyzed it, and it turned out the server name mentioned in the SSIS config file was wrong. I corrected the server name and now the job runs fine.
Our job design: a SQL Server Agent job invokes an SSIS package along with its config file.
The actual problem is that we have a lot of SQL Server Agent jobs (200+), all running on their own schedules. Currently we fix these issues as soon as we see an error in the job history, which is a purely manual approach. And this is just one environment; we have more than ten environments with the same set of jobs.
I am looking for an approach to pre-validate all the config files configured on the SQL Server Agent jobs and report the files that have incorrect server names or incorrect file paths. As you know, doing this task manually is a headache, and along the way we may miss some jobs or create other issues.
Is there any way we can validate the config files prior to running the SQL jobs?
You can get the SQL Agent job step information by querying the msdb tables. From that, you can find out which configuration file is being used in each job step.
Refer to the SQL Agent jobs documentation.
SELECT
[sJOB].[job_id] AS [JobID]
, [sJOB].[name] AS [JobName]
, [sJSTP].[step_uid] AS [StepID]
, [sJSTP].[step_id] AS [StepNo]
, [sJSTP].[step_name] AS [StepName]
, CASE [sJSTP].[subsystem]
WHEN 'ActiveScripting' THEN 'ActiveX Script'
WHEN 'CmdExec' THEN 'Operating system (CmdExec)'
WHEN 'PowerShell' THEN 'PowerShell'
WHEN 'Distribution' THEN 'Replication Distributor'
WHEN 'Merge' THEN 'Replication Merge'
WHEN 'QueueReader' THEN 'Replication Queue Reader'
WHEN 'Snapshot' THEN 'Replication Snapshot'
WHEN 'LogReader' THEN 'Replication Transaction-Log Reader'
WHEN 'ANALYSISCOMMAND' THEN 'SQL Server Analysis Services Command'
WHEN 'ANALYSISQUERY' THEN 'SQL Server Analysis Services Query'
WHEN 'SSIS' THEN 'SQL Server Integration Services Package'
WHEN 'TSQL' THEN 'Transact-SQL script (T-SQL)'
ELSE sJSTP.subsystem
END AS [StepType]
, [sPROX].[name] AS [RunAs]
, [sJSTP].[database_name] AS [Database]
, [sJSTP].[command] AS [ExecutableCommand]
, CASE [sJSTP].[on_success_action]
WHEN 1 THEN 'Quit the job reporting success'
WHEN 2 THEN 'Quit the job reporting failure'
WHEN 3 THEN 'Go to the next step'
WHEN 4 THEN 'Go to Step: '
+ QUOTENAME(CAST([sJSTP].[on_success_step_id] AS VARCHAR(3)))
+ ' '
+ [sOSSTP].[step_name]
END AS [OnSuccessAction]
, [sJSTP].[retry_attempts] AS [RetryAttempts]
, [sJSTP].[retry_interval] AS [RetryInterval (Minutes)]
, CASE [sJSTP].[on_fail_action]
WHEN 1 THEN 'Quit the job reporting success'
WHEN 2 THEN 'Quit the job reporting failure'
WHEN 3 THEN 'Go to the next step'
WHEN 4 THEN 'Go to Step: '
+ QUOTENAME(CAST([sJSTP].[on_fail_step_id] AS VARCHAR(3)))
+ ' '
+ [sOFSTP].[step_name]
END AS [OnFailureAction]
FROM
[msdb].[dbo].[sysjobsteps] AS [sJSTP]
INNER JOIN [msdb].[dbo].[sysjobs] AS [sJOB]
ON [sJSTP].[job_id] = [sJOB].[job_id]
LEFT JOIN [msdb].[dbo].[sysjobsteps] AS [sOSSTP]
ON [sJSTP].[job_id] = [sOSSTP].[job_id]
AND [sJSTP].[on_success_step_id] = [sOSSTP].[step_id]
LEFT JOIN [msdb].[dbo].[sysjobsteps] AS [sOFSTP]
ON [sJSTP].[job_id] = [sOFSTP].[job_id]
AND [sJSTP].[on_fail_step_id] = [sOFSTP].[step_id]
LEFT JOIN [msdb].[dbo].[sysproxies] AS [sPROX]
ON [sJSTP].[proxy_id] = [sPROX].[proxy_id]
ORDER BY [JobName], [StepNo]
Now you need to look at the executable command to see the exact configuration file being used in the SSIS execution, and take action accordingly.
[ExecutableCommand]: the actual command which will be executed by the subsystem.
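As a starting point for the pre-validation itself, here is a minimal PowerShell sketch (untested; the server name, the regex, and the assumption that job steps call dtexec with a /CONFIGFILE switch are all assumptions to adapt to your environment):
# Hypothetical sketch: pull SSIS job steps from msdb, extract each /CONFIGFILE path,
# and report config files that do not exist on disk.
$query = @"
SELECT j.name AS JobName, s.step_name AS StepName, s.command AS Command
FROM msdb.dbo.sysjobsteps AS s
JOIN msdb.dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE s.subsystem = 'SSIS'
"@
Invoke-Sqlcmd -ServerInstance 'MYSERVER' -Database 'msdb' -Query $query | ForEach-Object {
    # Adjust the pattern if your steps reference config files differently
    if ($_.Command -match '/CONFIGFILE\s+"([^"]+)"') {
        $configPath = $Matches[1]
        if (-not (Test-Path -Path $configPath)) {
            Write-Warning "$($_.JobName) / $($_.StepName): missing config file $configPath"
        }
    }
}
You could extend the same loop to open each config file and compare the server name inside it against the expected value for that environment.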

SQL Trigger Fires Every 24 Hrs

I want to make an MSSQL trigger that fires every day when the date changes.
For SQL Server Express editions, create an MS Windows task that starts sqlcmd (see https://technet.microsoft.com/en-us/library/ms165702(v=sql.105).aspx), which will run a SQL script. Note that when sqlcmd is run from the command line, it uses the OLE DB provider.
How to create a sqlcmd job by using Windows Task Scheduler: https://support.microsoft.com/en-us/kb/2019698 . This article deals with a DB backup task; replace the SQL script at step A with the one you need and adjust the following steps accordingly.
You have to schedule a job in SQL Server Agent that fires at the defined time and put your query in the job:
Expand the SQL Server Agent node and right click the Jobs node in SQL Server Agent and select 'New Job'.
In the 'New Job' window enter the name of the job and a description on the 'General' tab.
Select 'Steps' on the left hand side of the window and click 'New' at the bottom.
In the 'Steps' window enter a step name and select the database you want the query to run against.
Paste in the T-SQL command you want to run into the Command window and click 'OK'.
Click on the 'Schedule' menu on the left of the New Job window and enter the schedule information (e.g. daily and a time).
Click 'OK' - and that should be it.
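If you prefer to script the same thing rather than click through the GUI, here is a minimal sketch using msdb's job stored procedures (the job, database, and procedure names are hypothetical placeholders):
USE msdb;
GO
-- Create the job
EXEC dbo.sp_add_job @job_name = N'DailyMidnightJob';
-- Add a T-SQL step that runs your query
EXEC dbo.sp_add_jobstep
    @job_name = N'DailyMidnightJob',
    @step_name = N'Run daily query',
    @subsystem = N'TSQL',
    @database_name = N'MyDB',
    @command = N'EXEC dbo.MyDailyProc;';
-- Schedule it daily at midnight (freq_type 4 = daily, freq_interval 1 = every day)
EXEC dbo.sp_add_jobschedule
    @job_name = N'DailyMidnightJob',
    @name = N'Daily at midnight',
    @freq_type = 4,
    @freq_interval = 1,
    @active_start_time = 000000;
-- Target the local server
EXEC dbo.sp_add_jobserver @job_name = N'DailyMidnightJob';
GO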
For that purpose you can use PowerShell and Task Scheduler. All the actions below must be done on the machine where SQL Server is running.
First, create a .sql file with the batch to run. I call it my_batch.sql. For example, with this inside:
USE [MyDB]
INSERT INTO [dbo].[test]
    ([id]
    ,[somevalue]
    ,[New Column]
    ,[NewColumn])
VALUES
    (NEWID()
    ,'testing'
    ,'test'
    ,'just a test')
Do not use GO in this script!
Then create a .ps1 script to run that batch file (my_batch.ps1):
# Open a connection to the local SQL Server Express instance
$conn = New-Object System.Data.SqlClient.SQLConnection
$ConnectionString = "Server=(local)\SQLEXPRESS;Database=MyDB;Integrated Security=True;Connect Timeout=0"
$conn.ConnectionString = $ConnectionString
$conn.Open()
# Read the whole .sql file as a single string (-Raw), not an array of lines
$fileToGetContent = 'D:\my_batch.sql'
$commandText = Get-Content -Path $fileToGetContent -Raw
# Run the batch
$command = $conn.CreateCommand()
$command.CommandText = $commandText
$command.ExecuteNonQuery()
$conn.Close()
Then create a schedule task. You can make it manually (here is a good sample) or via PowerShell (I prefer this way):
#Create a new trigger that fires daily at 00:01
$STTrigger = New-ScheduledTaskTrigger -Daily -At 00:01
#Name for the scheduled task
$STName = "Run SQL batch"
#Action to run as
$STAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "D:\my_batch.ps1"
#Configure when to stop the task and how long it can run for. In this example it does not stop on idle and uses the maximum possible duration by setting a timelimit of 0
$STSettings = New-ScheduledTaskSettingsSet -DontStopOnIdleEnd -ExecutionTimeLimit ([TimeSpan]::Zero)
#Configure the principal to use for the scheduled task and the level to run as
$STPrincipal = New-ScheduledTaskPrincipal -User "DOMAIN\user" -RunLevel "Highest"
#Register the new scheduled task
Register-ScheduledTask $STName -Action $STAction -Trigger $STTrigger -Principal $STPrincipal -Settings $STSettings
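To verify the setup without waiting for the schedule, you can start the task manually (using the task name registered above):
Start-ScheduledTask -TaskName "Run SQL batch"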

Retrieving only PRINT command from SQL Server procedure in VB.NET

I apologize if I am missing a few things here; I am still learning .NET.
I have a project where I am restoring a database as well as transaction logs. After each transaction log, I have a PRINT statement to output that the transaction log has been processed.
I have been using the OnInfoMessage handler to get the messages out of what is done in SQL Server, but I am looking to extract ONLY the PRINT output and display it in a TextBox / Label / RichTextBox as a sort of status box that updates as the restore progresses.
My procedure in SQL Server is as follows:
....
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'RESTORE LOG FCS FROM DISK = ''C:\Databases\' + @fileName + ''' WITH NORECOVERY';
    EXEC (@sql);
    PRINT '===' + @sql;
    FETCH NEXT FROM cFile INTO @fileName
END
CLOSE cFile
DEALLOCATE cFile
In VB.net I am able to get ALL messages to display in a log file.
Private Shared Sub OnInfoMessage(ByVal sender As Object, ByVal e As System.Data.SqlClient.SqlInfoMessageEventArgs)
    Using LogFile As IO.StreamWriter = New IO.StreamWriter("C:\SDBT\worklog.ft", True)
        LogFile.WriteLine(e.Message)
    End Using
End Sub
This in turn gives me the following output...
Processed 10392 pages for database 'FCS', file 'FCS' on file 1.
Processed 2 pages for database 'FCS', file 'FCS_log' on file 1.
RESTORE DATABASE successfully processed 10394 pages in 2.652 seconds (30.619 MB/sec).
Processed 0 pages for database 'FCS', file 'FCS' on file 1.
Processed 6 pages for database 'FCS', file 'FCS_log' on file 1.
RESTORE LOG successfully processed 6 pages in 0.061 seconds (0.720 MB/sec).
===RESTORE LOG FCS FROM DISK = 'C:\SDBT_TestDB\log_00001.trn' WITH NORECOVERY
Processed 0 pages for database 'FCS', file 'FCS' on file 1.
Processed 2 pages for database 'FCS', file 'FCS_log' on file 1.
RESTORE LOG successfully processed 2 pages in 0.058 seconds (0.252 MB/sec).
===RESTORE LOG FCS FROM DISK = 'C:\SDBT_TestDB\log_00002.trn' WITH NORECOVERY
Ideally I would like to retrieve only the lines prefixed with "===" and output those to the user.
Is it possible to get the @sql variable value from SQL Server into VB.NET, or is there something built into .NET that I am missing?
Thanks in advance!
In your OnInfoMessage sub, check each line of text in e.Message to see if it starts with === and only write those lines. Note that e.Message may contain CRLF or bare LF line breaks, so split on both:
For Each line In e.Message.Split({vbCrLf, vbLf}, StringSplitOptions.None)
    If line.StartsWith("===") Then
        LogFile.WriteLine(line)
    End If
Next
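If you want those lines in a status control instead of (or in addition to) the log file, here is a minimal sketch (untested; txtStatus is a hypothetical TextBox on your form, and InfoMessage can fire off the UI thread, hence the BeginInvoke):
Private Sub OnInfoMessage(ByVal sender As Object, ByVal e As System.Data.SqlClient.SqlInfoMessageEventArgs)
    For Each line In e.Message.Split({vbCrLf, vbLf}, StringSplitOptions.RemoveEmptyEntries)
        If line.StartsWith("===") Then
            Dim statusLine = line
            ' Marshal back onto the UI thread before touching the control
            txtStatus.BeginInvoke(Sub() txtStatus.AppendText(statusLine & Environment.NewLine))
        End If
    Next
End Sub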

Optionally including scripts in SQL Server Projects 2012

I am building a SQL publish script that will be used to generate a database on our internal servers and will then be used externally by our client.
The problem I have is that our internal script automates quite a few things for us, whereas in the actual production environment those steps must be completed manually.
For example, internally we would use the following script:
-- Global variables
:setvar EnvironmentName 'Local'
-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql
IF ($(EnvironmentName) = 'Test')
BEGIN
:r .\GivePermissionsToDevelopmentTeam.sql
:r .\PopulateTestData.sql
:r .\RunETL.sql
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
:r .\GivePermissionsToDevWebsite.sql
END
This would generate a script like this:
-- (Ignore syntax correctness, it's just the process I'm after)
IF($(EnvironmentName) = 'Test')
BEGIN
CREATE LOGIN [Developer1] AS USER [MyDomain\Developer1] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer2] AS USER [MyDomain\Developer2] WITH DEFAULT SCHEMA=[dbo];
CREATE LOGIN [Developer3] AS USER [MyDomain\Developer3] WITH DEFAULT SCHEMA=[dbo];
-- Populate entire database (10000's of rows over 100 tables)
INSERT INTO Products ( Name, Description, Price ) VALUES
( 'Cheese Balls', 'Cheesy Balls ... mm mm mmmm', 1.00),
( 'Cheese Balls +', 'Cheesy Balls with a caffeine kick', 2.00),
( 'Cheese Squares', 'Cheesy squares with a hint of ginger', 2.50);
EXEC spRunETL 'AUTO-DEPLOY';
END
ELSE IF($(EnvironmentName) = 'Client_Dev')
BEGIN
CREATE LOGIN [WebLogin] AS USER [FABRIKAM\AppPoolUser];
END
END IF
This works fine for us. But when this script is taken on site, it fails because it cannot authenticate the users of our internal environment.
One thought on the permissions issue was to just give our internal team sysadmin privileges, but the test data still fills the script up. When going on site, all of this test data just bloats the published script and isn't used anyway.
Is there any way to exclude a section entirely from a published file, so that all of the test data and extraneous inserts are removed, without any manual intervention on the published file?
Unfortunately, there is currently no way to omit the contents of a referenced script from the generated file entirely.
The only way to achieve this is to post-process the generated script (PowerShell/Ruby/scripting language of choice) to find and remove the parts you care about using some form of string and file manipulation.
Based on: my experience doing this exact same thing to remove a development-environment-only script which was sizable and bloated the production deployment script with a lot of 'noise', making it harder for DBAs to review the script sensibly.
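For example, here is a minimal PowerShell sketch of that post-processing (untested; the sentinel comment markers and file names are hypothetical conventions you would have to adopt in your post-deployment scripts):
# Strip everything between -- BEGIN INTERNAL-ONLY and -- END INTERNAL-ONLY markers
$lines = Get-Content -Path 'Publish.sql'
$skip = $false
$filtered = foreach ($line in $lines) {
    if ($line -match '--\s*BEGIN INTERNAL-ONLY') { $skip = $true; continue }
    if ($line -match '--\s*END INTERNAL-ONLY') { $skip = $false; continue }
    if (-not $skip) { $line }
}
Set-Content -Path 'Publish.Client.sql' -Value $filtered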