SCM setup for copying a project from Windows to Ubuntu using Capistrano 3

I want to deploy my sample project from Windows to Ubuntu. I have successfully created an SSH key on Ubuntu. Now I want to do this by only copying files from the Windows machine to the Ubuntu server remotely. When I execute 'cap production deploy', I get this error:
'fatal: No remote configured to list refs from.'
I guess this comes from Git.
My question is: how can I set up the SCM if I have only copied and pasted the project from Windows to the server?
By the way, here are the contents of my .rb files:
deploy.rb
lock '3.5.0'

# set :application, 'my_app_name'
# set :repo_url, 'git@example.com:me/my_repo.git'
set :application, "zemsoft"
# set :deploy_to, "/var/www/my-app.com"
set :deploy_to, "/var/www/e"
set :domain, "zemsofterp2.com"
set :scm, "git"
# set :repository, "file:///Users/deployer/sites/my-app"
set :repository, "C:/xampp/htdocs/vendor"
set :deploy_via, :copy
set :use_sudo, false
set :keep_releases, 3

# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp

# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, '/var/www/my_app_name'

# Default value for :scm is :git
# set :scm, :git

# Default value for :format is :airbrussh.
# set :format, :airbrussh

# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: 'log/capistrano.log', color: :auto, truncate: :auto

# Default value for :pty is false
# set :pty, true

# Default value for :linked_files is []
# set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')

# Default value for linked_dirs is []
# set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'public/system')

# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }

# Default value for keep_releases is 5
# set :keep_releases, 5

namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
production.rb
role :app, %w{ely029@192.168.1.241} # EDIT your ssh username and server ip address
set :ssh_options, {
  auth_methods: %w(password),
  password: "embuscado29" # EDIT your ssh password
}

set :deploy_via, :copy does nothing; this is not a valid Capistrano 3 setting.
Capistrano 3 has no built-in mechanism for deploying by way of copying files from one machine to another. You need a central source code repository, such as a remote Git repository that the server can access.
There are third-party Capistrano plugins that may provide the copying behavior you need (search GitHub for capistrano copy), but I cannot vouch for their quality or effectiveness. My recommendation would be to use a remote Git repository.
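For example, one workable setup is to push the project from the Windows machine to a Git remote that both machines can reach, then point Capistrano at that URL and drop the :repository / :deploy_via lines. A minimal sketch (the repository URL is a placeholder, not a real remote):

# On the Windows machine, from the project root:
#   git remote add origin git@example.com:me/zemsoft.git
#   git push -u origin master

# config/deploy.rb
set :application, "zemsoft"
set :repo_url, "git@example.com:me/zemsoft.git"
set :deploy_to, "/var/www/e"
set :keep_releases, 3

Once :repo_url points at a reachable remote, the 'No remote configured to list refs from' error goes away, because the server can list and fetch refs from that repository.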

Related

Postgres extension AGE not getting loaded

After starting the Postgres server process for a cluster:
bin/pg_ctl -D demo -l logfile start
Then starting a client session for the database 'demo':
bin/psql demo
When I try to load the AGE extension with
LOAD 'age';
it shows an error that access to 'age' is denied.
Do I need to change some security/credential information for the user?
I expected the extension to load so that I could execute Cypher queries.
Run installcheck to verify that PostgreSQL and Apache AGE were installed successfully, using this command in the age folder:
make PG_CONFIG=/home/path/to/age/bin/pg_config installcheck
If that passes, create the age extension and then load it as follows:
CREATE EXTENSION age;
LOAD 'age';
Now set the search path and run a simple Cypher query:
SET search_path = ag_catalog, "$user", public;
SELECT create_graph('demo_graph');
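For instance, a minimal Cypher query against the new graph could look like this (the vertex label and property here are purely illustrative):

SELECT * FROM cypher('demo_graph', $$
    CREATE (p:Person {name: 'Alice'})
    RETURN p
$$) AS (p agtype);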
To load the APACHE AGE extension, run the following commands after successful installation (verify using installcheck):
CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
Create a graph using:
SELECT create_graph('graph_name');
To avoid running the LOAD command each time, set the required parameters in the postgresql.conf file:
Locate the file in the cluster's data directory (in your case, demo/postgresql.conf).
Add the following lines to the file:
shared_preload_libraries = 'age'
search_path = 'ag_catalog, "$user", public'
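Note that changes to shared_preload_libraries only take effect after a server restart, for example:

bin/pg_ctl -D demo restart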
You might need superuser privileges as described here in order to execute the CREATE EXTENSION statement.
Here's a possibly relevant issue with a solution in GitHub issues.

How to remove comments and settings from pg_dump output?

I am trying to dump only data from a PostgreSQL database using pg_dump and then to restore that data into another one. But generating the SQL script with this tool also adds some comments and settings to the output file.
Running this command:
pg_dump --column-inserts --data-only my_db > my_dump.sql
I get something like:
--
-- PostgreSQL database dump
--
-- Dumped from database version 8.4.22
-- Dumped by pg_dump version 10.8 (Ubuntu 10.8-0ubuntu0.18.04.1)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET row_security = off;
--
-- Data for Name: sf_guard_user; Type: TABLE DATA; Schema: public; Owner: admin
--
INSERT INTO public.....
Is there any way to stop pg_dump from generating those comments and settings?
I could write a small script to remove every line before the first INSERT, but pg_dump also scatters comments throughout the file. I am sure there is a cleaner way to proceed, but I have found nothing.
I don't think there is. I'd simply pipe through grep to filter out lines that start with the comment delimiter:
pg_dump --column-inserts --data-only my_db | grep -v "^--" > my_dump.sql
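If you want to drop the SET lines, the search_path call, and blank lines as well, the filter can be extended along these lines (a sketch; adjust the patterns to taste):

pg_dump --column-inserts --data-only my_db \
  | grep -vE "^(--|SET |SELECT pg_catalog.set_config)" \
  | grep -v "^$" > my_dump.sql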

Bash script to send a notification when new data is added

I need to write a bash script that monitors the user table and sends a notification email to the sales team containing the newly created username.
I am new to scripting and have little idea of how to do that.
I would appreciate any help or instructions.
You tagged the question as DB2; in that RDBMS you can create a trigger that sends a message by email. You do not need bash in this case.
Let's suppose you have a table called users, and each time a new row is inserted, an email message will be sent.
CREATE OR REPLACE TRIGGER t1
AFTER INSERT ON users
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
  DECLARE v_sender VARCHAR(30);
  DECLARE v_recipients VARCHAR(60);
  DECLARE v_subj VARCHAR(20);
  DECLARE v_msg VARCHAR(200);
  SET v_sender = 'kkent@mycorp.com';
  SET v_recipients = 'bwayne@mycorp.com,pparker@mycorp.com';
  SET v_subj = 'New user';
  SET v_msg = 'There is a new user: ' || n.username;
  CALL UTL_MAIL.SEND(v_sender, v_recipients, NULL, NULL, v_subj, v_msg);
END#
You have to configure DB2 with your SMTP server and other parameters: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0055176.html
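As a sketch, on recent DB2 LUW versions the SMTP host is a database configuration parameter, so while connected to the database the setup might look like this (the host name is a placeholder):

db2 UPDATE DB CFG USING SMTP_SERVER smtp.mycorp.com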
Maybe you can use the inotify tool to detect file modifications and the mail tool to send the email from your script; for details on how to use these commands, please refer to their documentation.
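A minimal sketch of that idea follows; it assumes inotify-tools and a configured mail command are installed, the watched path and recipient address are placeholders, and it only detects that something changed (extracting the new username would still require querying the table):

#!/bin/bash
# Sketch only: watch the table's storage file and mail the sales team on change.
inotifywait -m -e modify /path/to/users.data |
while read -r path events; do
    echo "The users table changed ($events on $path)" \
        | mail -s "New user added?" sales@example.com
done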
To send an email you could use Gmail plus mutt. Just follow this tutorial to see how to configure mutt.
Once mutt is configured, you can send email from a script using a command like this one:
echo "$BODY" | mutt -s "$SUBJECT" $EMAIL_ADDRESS
An example would be:
echo "User Bob has been added." | mutt -s "New User" example#example.com
You could also consider using Prowl to send push notifications to an iOS device. Prowl has a Perl script that can be executed from your bash script. This would provide near-instant notification. Prowl can also prioritize notifications.

Optionally including scripts in SQL Server Projects 2012

I am building a SQL publish script that will be used to generate a database on our internal servers and will then be used externally by our client.
The problem I have is that our internal script automates quite a few things for us, whereas in the actual production environment those steps must be completed manually.
For example, internally we would use the following script
-- Global variables
:setvar EnvironmentName 'Local'

-- Script.PostDeployment.sql
:r .\PopulateDefaultValues.sql

IF ($(EnvironmentName) = 'Test')
BEGIN
    :r .\GivePermissionsToDevelopmentTeam.sql
    :r .\PopulateTestData.sql
    :r .\RunETL.sql
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
    :r .\GivePermissionsToDevWebsite.sql
END
This would generate a script like this:
-- (Ignore syntax correctness, it's just the process I'm after)
IF ($(EnvironmentName) = 'Test')
BEGIN
    CREATE LOGIN [Developer1] AS USER [MyDomain\Developer1] WITH DEFAULT SCHEMA=[dbo];
    CREATE LOGIN [Developer2] AS USER [MyDomain\Developer2] WITH DEFAULT SCHEMA=[dbo];
    CREATE LOGIN [Developer3] AS USER [MyDomain\Developer3] WITH DEFAULT SCHEMA=[dbo];

    -- Populate entire database (10000's of rows over 100 tables)
    INSERT INTO Products ( Name, Description, Price ) VALUES
    ( 'Cheese Balls', 'Cheesy Balls ... mm mm mmmm', 1.00),
    ( 'Cheese Balls +', 'Cheesy Balls with a caffeine kick', 2.00),
    ( 'Cheese Squares', 'Cheesy squares with a hint of ginger', 2.50);

    EXEC spRunETL 'AUTO-DEPLOY';
END
ELSE IF ($(EnvironmentName) = 'Client_Dev')
BEGIN
    CREATE LOGIN [WebLogin] AS USER [FABRIKAM\AppPoolUser];
END
This works fine for us. When this script is taken on site, it fails because it cannot authenticate the users from our internal environment.
One thought regarding permissions was to just give our internal team sysadmin privileges, but the test data still fills the script up. When going on site, all of this test data just bloats the published script and isn't used anyway.
Is there any way to exclude a section entirely from the published file, so that all of the test data and extraneous inserts are removed, without any manual intervention on the published file?
Unfortunately, there is currently no way to remove the contents of a referenced script from the generated file entirely.
The only way to achieve this is to post-process the generated script (Powershell/Ruby/scripting language of choice) to find and remove the parts you care about using some form of string and file manipulation.
This is based on my experience doing exactly the same thing to remove a development-environment-only script which was sizable and bloated the production deployment script with a lot of 'noise', making it harder for DBAs to review the script sensibly.
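As an illustration, a PowerShell post-processing step could strip a marked region from the generated file (a sketch only; the marker comments are an assumption you would add around the test-only section yourself, and the file names are placeholders):

# Remove everything between the marker comments from the published script.
$keep = $true
Get-Content 'Deploy.sql' | ForEach-Object {
    if ($_ -match '^-- BEGIN INTERNAL-ONLY') { $keep = $false; return }
    if ($_ -match '^-- END INTERNAL-ONLY')   { $keep = $true;  return }
    if ($keep) { $_ }
} | Set-Content 'Deploy.Client.sql'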

utl_file.fopen without 'create directory ... as ...'

Hi, everybody.
I am new to PL/SQL and Oracle databases.
I need to read/write a file that exists on the server, so I am using utl_file.fopen('/home/tmp/','text.txt','R'), but Oracle shows the error 'invalid directory path'.
The main problem is that I have only user privileges, so I can't use commands like create directory user_dir as '/home/temp/' or view utl_file_dir with just show parameter utl_file_dir;
I used this code to view utl_file_dir:
set serveroutput on;
declare
  intval number;
  strval varchar2(500);
begin
  if (dbms_utility.get_parameter_value('utl_file_dir', intval, strval) = 0)
  then dbms_output.put_line('value = ' || intval);
  else dbms_output.put_line('value = ' || strval);
  end if;
end;
/
and the output was 'value = 0'.
I googled a lot but didn't find any solution to this problem, so I'm asking for help here.
To read the file I used this code:
declare
  f utl_file.file_type;
  s varchar2(200);
begin
  f := utl_file.fopen('/home/tmp/', 'text.txt', 'R');
  loop
    utl_file.get_line(f, s);
    dbms_output.put_line(s);
  end loop;
exception
  when NO_DATA_FOUND then
    utl_file.fclose(f);
end;
If you do not have permission to create the directory object (and assuming that the directory object does not already exist), you'll need to send a request to your DBA (or someone else that has the appropriate privileges) in order to create a directory for you and to grant you access to that directory.
utl_file_dir is an obsolete parameter that is much less flexible than directory objects and requires a restart of the database to change. Unless you're using Oracle 8.1.x, or you are dealing with a legacy process that was written back in the 8.1.x days and hasn't been updated to use directories, you ought to ignore it.
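For reference, the DBA-side setup would look something like this (the directory name, path, and grantee are illustrative):

-- Run by a DBA or another suitably privileged user
CREATE DIRECTORY user_dir AS '/home/tmp';
GRANT READ, WRITE ON DIRECTORY user_dir TO your_user;

Your code would then open the file through the directory object rather than the literal path:

f := utl_file.fopen('USER_DIR', 'text.txt', 'R');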
