Salesforce CLI destructive changes (sfdx sgd:source:delta) not showing difference in branches using GitHub Actions

I am using the Salesforce destructive changes approach mentioned here. However, the sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output . --generate-delta command is not detecting any destructive changes: it generates a destructiveChanges.xml without any deleted metadata.
--- destructiveChanges.xml generated with deleted metadata ---
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>52.0</version>
</Package>
Here is the content of the yml file I am using:
name: sf-destructivechanges
on:
  push:
    branches:
      - "test"
jobs:
  sf-destructivechanges:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Salesforce CLI
        run: |
          wget https://developer.salesforce.com/media/salesforce-cli/sfdx-linux-amd64.tar.xz
          mkdir sfdx-cli
          tar xJf sfdx-linux-amd64.tar.xz -C sfdx-cli --strip-components 1
          ./sfdx-cli/install
          sfdx update
      - name: Install plugin
        run: |
          echo 'y' | sfdx plugins:install sfdx-git-delta
      - name: Get delta files with SGD
        run: sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output . --generate-delta
      - name: Deploy changes
        run: |
          echo "--- package.xml generated with added and modified metadata ---"
          cat package/package.xml
          echo
          echo "---- Deploying added and modified metadata ----"
          sfdx force:source:deploy -x package/package.xml -u ${{ secrets.USERNAME }}
      - name: Destructive Changes
        run: |
          echo "--- destructiveChanges.xml generated with deleted metadata ---"
          cat destructiveChanges/destructiveChanges.xml
          echo
          echo "--- Deleting removed metadata ---"
          sfdx force:mdapi:deploy -d destructiveChanges -u ${{ secrets.USERNAME }} -w -1
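
One thing worth checking that is not stated in the original post: actions/checkout@v2 performs a shallow clone with fetch-depth: 1 by default, so HEAD^ does not exist in the runner's copy of the repository and the plugin has nothing to diff against. A minimal sketch of the fix is to fetch the full history in the checkout step:

      - uses: actions/checkout@v2
        with:
          # fetch all history so HEAD^ exists for the delta comparison
          fetch-depth: 0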

Related

Update Local SQL Server with a Bash Script

Question:
What is the correct format to use in my bash script to be able to run the -Q option?
Case: Update local database from S3 every night to run reports on our on-premise server
Code:
#!/bin/bash
#get latest file from S3
BACKUP_MARKETING=`aws s3 ls [some_folder]/[some_subfolder]/ --recursive | sort | tail -n 1 | awk '{print $4}'`
#download the file locally
aws s3 cp s3://[some_folder]/$BACKUP_MARKETING /var/opt/mssql/backup/marketing
#get the file name
BAK_MARKETING=`find [folder]/ -type f -name "*.bak"`
#drop the database to avoid conflicts from not backing it up
/opt/mssql-tools/bin/sqlcmd -S localhost -U [username] -P '[password]' -Q 'DROP DATABASE [db_name]'
#restore the database
/opt/mssql-tools/bin/sqlcmd -S localhost -U [username] -P '[password]' -Q RESTORE DATABASE "[db_name]" FROM DISK = "/var/opt/mssql/backup/$BAK_MARKETING" WITH MOVE "[db_name]" TO "/var/opt/mssql/data/[db_name].MDF", MOVE "[db_name]_log" TO "/var/opt/mssql/data/[db_name].LDF"
Error:
Sqlcmd: 'DATABASE" "[db_name]" "FROM" "DISK" "=" "/var/opt/mssql/backup/marketing/[db_name].bak" "WITH" "MOVE" "[db_name]" "TO" "/var/opt/mssql/data/[db_name].MDF," "MOVE" "[db_name]_log" "TO" "/var/opt/mssql/data/[db_name].LDF': Unexpected argument. Enter '-?' for help.
Apparently I had to concatenate my variables into the SQL command. Here is the working version; I also added the REPLACE option to it:
/opt/mssql-tools/bin/sqlcmd -S localhost -U [username] -P '[password]' -Q 'RESTORE DATABASE [db_name] FROM DISK = "/var/opt/mssql/backup/'$BAK_FILE'" WITH REPLACE, MOVE "[db_name]" TO "/var/opt/mssql/data/[db_name].MDF", MOVE "[db_name]_Log" TO "/var/opt/mssql/data/[db_name].LDF"'
Could you not use the -i option instead?
I had some problems as well using -Q, so I replaced it with -i and placed the code within a .sql file instead.
I ended up with:
SET SQLusername=sa
SET SQLpassword=password
SET SQLserver=dnsnameorIp
SET SQLdatabase=databasename
sqlcmd -U %SQLusername% -P %SQLpassword% -S %SQLserver% -d %SQLdatabase% -i mycode.sql -o outputResult.txt
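
For illustration, mycode.sql could carry the RESTORE statement from the question above, with the backup file name handed in as a sqlcmd scripting variable. This is only a sketch: BakFile is a hypothetical variable name, and the bracketed names are placeholders as in the question.

-- mycode.sql: restore the database from the downloaded backup.
-- $(BakFile) is a sqlcmd scripting variable supplied on the command line,
-- e.g. sqlcmd ... -i mycode.sql -v BakFile="marketing.bak"
RESTORE DATABASE [db_name]
FROM DISK = N'/var/opt/mssql/backup/$(BakFile)'
WITH REPLACE,
     MOVE '[db_name]' TO '/var/opt/mssql/data/[db_name].MDF',
     MOVE '[db_name]_log' TO '/var/opt/mssql/data/[db_name].LDF';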

Retrieve rows in DB corresponding to particular ID using kubectl

I am trying to fetch the number of rows for a particular ID using kubectl, but I am getting some extra data instead.
Command:
kubectl exec abc-db-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "psql -U postgres -d db -f /tmp/queryInstanceId.sql -v v1=full_test | grep [0-9]"
Actual Output of above command:
Defaulting container name to abc-db.
Use 'kubectl describe pod/abc-db-0 -n cicd' to see all of the containers in this pod.
(0 rows)
Expected Output:
(0 rows)
Could anyone please let me know what I am doing wrong here?
Note:
The first two lines always appear when we log in to the DB manually, but in the output I only want (0 rows).
The first two lines are output by kubectl exec because the Pod has multiple containers. They are a warning that it picked the first one, which might not be the one you wanted to use.
You can specify the target container in your command (-c containername):
kubectl exec abc-db-0 -n cicd --kubeconfig /root/admin.conf -c abc-db -- bash -c "psql -U postgres -d db -f /tmp/queryInstanceId.sql -v v1=full_test | grep [0-9]"
Or you can redirect the standard error with kubectl ... 2>/dev/null (OS-specific):
kubectl exec abc-db-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "psql -U postgres -d db -f /tmp/queryInstanceId.sql -v v1=full_test | grep [0-9]" 2>/dev/null
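The two can also be combined, pinning the container with -c abc-db and discarding anything else written to stderr:
kubectl exec abc-db-0 -n cicd --kubeconfig /root/admin.conf -c abc-db -- bash -c "psql -U postgres -d db -f /tmp/queryInstanceId.sql -v v1=full_test | grep [0-9]" 2>/dev/null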

SQLCipher executable not working on macOS

I am trying to compile and run SQLCipher on macOS, but it's not working.
What did I do wrong? And are any of these steps unnecessary?
1) OpenSSL build
Download and extract the archive of the latest OpenSSL 1.0.2 sources
(https://www.openssl.org/source)
Create folder "/usr/local/lib/openssl-1.0.2"
Build with commands:
$ ./Configure darwin64-x86_64-cc shared --openssldir=/usr/local/lib/openssl-1.0.2
$ make depend
$ sudo make install
Files are in folder "/usr/local/lib/openssl-1.0.2/lib"
2) SQLite build
Download and extract the archive of the latest SQLite sources with the
"configure" script and the TEA makefiles
(https://www.sqlite.org/download.html)
Create folder "/usr/local/lib/sqlite-3.22.0"
Build with commands:
$ ./configure --prefix=/usr/local/lib/sqlite-3.22.0
$ make
$ make install
Files are in folder "/usr/local/lib/sqlite-3.22.0/lib"
3) SQLCipher build
Download and extract the archive of the latest SQLCipher sources from the
GitHub project releases
(https://github.com/sqlcipher/sqlcipher/releases)
Build with commands:
$ ./configure --prefix=/usr/local/lib/sqlcipher-3.4.2 --enable-tempstore=yes CFLAGS="-DSQLITE_HAS_CODEC" LDFLAGS="/usr/local/lib/openssl-1.0.2/lib/libcrypto.a"
$ make clean
$ make
$ make install
4) Run test
Copy the built binary "/usr/local/lib/sqlcipher-3.4.2/bin/sqlcipher" into the
test folder "/Users/user/Documents/sqlcipher-test"
Copy the files "/usr/local/lib/sqlcipher-3.4.2/lib/libsqlcipher*",
"/usr/local/lib/sqlite-3.22.0/lib/libsqlite3*" and
"/usr/local/lib/openssl-1.0.2/lib/libcrypto*" into the test folder "/Users/user/Documents/sqlcipher-test"
Test folder "/Users/user/Documents/sqlcipher-test" contains: 'sqlcipher' (exec), 'libcrypto.1.0.0.dylib', 'libcrypto.a', 'libcrypto.dylib', 'libsqlcipher.0.dylib', 'libsqlcipher.a', 'libsqlcipher.dylib', 'libsqlcipher.la', 'libsqlite3.0.dylib', 'libsqlite3.a', 'libsqlite3.dylib' and 'libsqlite3.la'
Change the command-line folder to the test folder (cd "/Users/user/Documents/sqlcipher-test")
Create new SQLite plaintext database
$ ./sqlcipher plaintext.db
sqlite> create table testtable (id integer, name text);
sqlite> insert into testtable (id,name) values(1,'Bob');
sqlite> insert into testtable (id,name) values(2,'Charlie');
sqlite> insert into testtable (id,name) values(3,'Daphne');
sqlite> select * from testtable;
sqlite> .exit
Open the plaintext.db file with a standard text editor: the database schema
and test data can be read in plaintext
Create new SQLCipher encrypted database
$ ./sqlcipher plaintext.db
sqlite> ATTACH DATABASE 'encrypted.db' AS encrypted KEY 'testkey';
"Error: unable to open database: encrypted.db"
"encrypted.db" file is empty
See https://www.zetetic.net/sqlcipher/sqlcipher-api/ for more information.
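
For reference, the sequence that page documents for turning a plaintext database into an encrypted copy is the ATTACH above followed by sqlcipher_export(); shown here as a sketch, since it presumes the ATTACH succeeds, which is exactly the step failing here:

sqlite> ATTACH DATABASE 'encrypted.db' AS encrypted KEY 'testkey';
sqlite> SELECT sqlcipher_export('encrypted');
sqlite> DETACH DATABASE encrypted;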

WildFly 10 rpmbuild

I am building a custom RPM for WildFly 10. I am stuck on deploying the systemd service. The spec file is able to deploy the code as well as create a user; however, no matter what avenue I try, I cannot get the RPM to create the service. I have tried install -m 644, but rpmbuild fails to find the file, even if I specify the full path:
e.g. install -m 644 %{buildroot}/opt/%{name}/docs/contrib/scripts/systemd/%{name}.service %{buildroot}/usr/lib/systemd/system/%{name}.service
I have also tried a series of systemd scriptlets as noted in https://fedoraproject.org/wiki/Packaging:Scriptlets, but that does nothing (the RPM builds with exit status 0, but the service is never created). Any assistance would be appreciated.
$ cat SPECS/wildfly.spec
Name: wildfly
Version: 10.0.0.Final
Release: 1%{?dist}
Summary: WildFly (JBoss) Application Server
Group: System Environment/Daemons
License: LGPL 2.1
URL: http://wildfly.org
Source0: http://download.jboss.org/wildfly/%{version}/%{name}-%{version}.tar.gz
ExclusiveArch: x86_64 i686
ExclusiveOS: Linux
%{?systemd_requires}
Requires: systemd
Requires: shadow-utils
Requires: java >= 1.8.0
Requires: /etc/init.d/functions
Provides: %{name}
%description
WildFly Application Server packaged from the binary distribution.
%prep
%setup -q -n %{name}-%{version}
%install
mkdir -p %{buildroot}/opt/%{name}
mkdir -p %{buildroot}/var/log/%{name}
mkdir -p %{buildroot}/var/run/%{name}
cp -R . %{buildroot}/opt/%{name}
%pre
getent group %{name} >/dev/null || groupadd -r %{name}
getent passwd %{name} >/dev/null || \
    useradd -r -g %{name} -d /opt/%{name} -s /sbin/nologin %{name}
%post
alternatives --install /etc/alternatives/%{name} %{name} /opt/%{name} 100
%systemd_post %{name}.service
%postun
alternatives --remove %{name} /opt/%{name}
%systemd_postun %{name}.service
userdel %{name}
%files
%defattr(-,root,root,0755)
%dir /opt/%{name}
/opt/%{name}/appclient
/opt/%{name}/bin
/opt/%{name}/docs
/opt/%{name}/domain
/opt/%{name}/jboss-modules.jar
/opt/%{name}/modules
%attr(-,%{name},%{name}) /opt/%{name}/standalone
/opt/%{name}/welcome-content
%dir /var/log/%{name}
%dir /var/run/%{name}
%doc /opt/%{name}/copyright.txt
%doc /opt/%{name}/LICENSE.txt
%doc /opt/%{name}/README.txt
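
For what it's worth, a hedged sketch of the usual pattern, assuming the unit file ships in the tarball under docs/contrib/scripts/systemd/ and that the %{_unitdir} macro is available: in %install, the install source path is relative to the unpacked source tree, and only the destination lives under %{buildroot}:

# added to the existing %install section
mkdir -p %{buildroot}%{_unitdir}
# the install source is relative to the unpacked source tree (the build
# directory); only the destination path lives under %{buildroot}
install -m 644 docs/contrib/scripts/systemd/%{name}.service \
    %{buildroot}%{_unitdir}/%{name}.service

# and a matching entry in %files, so %systemd_post has a unit to act on:
%{_unitdir}/%{name}.service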

Use GitLab CI to deploy app with ftp

I'm currently working on a little Angular web project, and I found this great tool named GitLab CI.
I read the docs and set up a Node Docker image to build the web app. Then I want to upload the built app over FTP to my server, and this is where my trouble starts.
First, here is my .gitlab-ci.yml:
image: node:7.5.0
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - dist/
stages:
  - build
  # - test
  - deploy
  - cleanup
  # - deployProd
runBuild:
  before_script:
    - npm install -g angular-cli
    - npm install
  stage: build
  script:
    - ng build --target=production --environment=test
  except:
    - tags
runProdBuild:
  before_script:
    - npm install -g angular-cli
    - npm install
  stage: build
  script:
    - ng build --target=production --environment=prod
  only:
    - tags
runDeployTest:
  before_script:
    - apt-get install ftp
  variables:
    DATABASE: ""
    URL: "http://test.domain.de"
  stage: deploy
  environment:
    name: Entwicklungssystem
    url: https://test.domain.de
  artifacts:
    name: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    paths:
      - dist/
    expire_in: 2d
  except:
    - tags
  script:
    - echo '<?php ini_set("max_execution_time", 300); function rrmdir($dir) { if (is_dir($dir)) { $objects = scandir($dir); foreach ($objects as $object) { if ($object != "." && $object != "..") { if (is_dir($dir."/".$object)) { rrmdir($dir."/".$object); } else { echo "unlink :".$dir."/".$object; unlink($dir."/".$object); } } } rmdir($dir); } } rrmdir(__DIR__."."); ?>' > delete.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . delete.php"
    - wget "$URL/delete.php"
    - cd ./dist
    - zip -r install.zip .
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . install.zip"
    - echo "<?php \$dateiname = __DIR__.'/install.zip'; \$ofolder = str_replace('/public','',__DIR__); exec('unzip '.\$dateiname.' -d '.\$ofolder.' 2>&1', \$out); print(implode('<br>', \$out)); unlink(\$dateiname); unlink('entpacker.php'); unlink(__DIR__.'/../delete.php'); unlink(__DIR__.'/../delete.php.1'); ?>" > entpacker.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . entpacker.php"
    # Install
    - wget $URL/entpacker.php
runDeployProd:
  before_script:
    - apt-get install ftp
  variables:
    DATABASE: ""
    URL: "http://test.domain.de"
  stage: deploy
  environment:
    name: Produktivsystem
    url: https://prod.domain.de
  artifacts:
    name: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    paths:
      - dist/
    expire_in: 2d
  script:
    - echo '<?php ini_set("max_execution_time", 300); function rrmdir($dir) { if (is_dir($dir)) { $objects = scandir($dir); foreach ($objects as $object) { if ($object != "." && $object != "..") { if (is_dir($dir."/".$object)) { rrmdir($dir."/".$object); } else { echo "unlink :".$dir."/".$object; unlink($dir."/".$object); } } } rmdir($dir); } } rrmdir(__DIR__."."); ?>' > delete.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . delete.php"
    - wget "$URL/delete.php"
    - cd ./dist
    - zip -r install.zip .
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . install.zip"
    - echo "<?php \$dateiname = __DIR__.'/install.zip'; \$ofolder = str_replace('/public','',__DIR__); exec('unzip '.\$dateiname.' -d '.\$ofolder.' 2>&1', \$out); print(implode('<br>', \$out)); unlink(\$dateiname); unlink('entpacker.php'); unlink(__DIR__.'/../delete.php'); unlink(__DIR__.'/../delete.php.1'); ?>" > entpacker.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . entpacker.php"
    # Install
    - wget $URL/entpacker.php
  only:
    - tags
cleanup:
  stage: cleanup
  script:
    - rm -rf ./dist
    - rm -rf ./node_modules
  when: manual
So it works fine until I try to install ftp into the Docker image.
My question is now: is it possible to install ftp into the image?
Or is there another way to handle things like this? I can't use SSH because there is no SSH access to the webspace.
I got a solution. As suggested, I tried to create my own Docker image, and there I noticed that I couldn't install lftp either: when creating a Docker image you have to run apt-get update first.
So I tried that inside my script, and it worked.
So you need to run apt-get update first, then you can install any package you want.
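In the .gitlab-ci.yml that boils down to something like this (a minimal sketch; -y only suppresses the confirmation prompt):

  before_script:
    - apt-get update
    - apt-get install -y lftp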
Use lftp instead of ftp
runDeployProd:
  before_script:
    - apt-get install lftp
https://forum.gitlab.com/t/deploy-via-ftp-via-ci/2631/2
