tslint precommit hook shows all the linting errors but still allows the code to be committed - githooks

I am using angular-seed & husky to add a precommit hook for git. My package.json has
"scripts": {
  "precommit": "npm test && npm run lint"
}
When I commit the code, husky runs "npm test" & "npm run lint" fine. When npm test fails, it shows me the errors on the console and doesn't allow me to commit. But when "npm run lint" fails, the console displays all the error messages yet still allows the commit. How can I prevent the commit when there are linting errors? Any help is appreciated. Thank you in advance!
This is how my .git\hooks\pre-commit looks:
#!/bin/sh
#husky 0.14.3

command_exists () {
  command -v "$1" >/dev/null 2>&1
}

has_hook_script () {
  [ -f package.json ] && cat package.json | grep -q "\"$1\"[[:space:]]*:"
}

cd "."

# Check if precommit script is defined, skip if not
has_hook_script precommit || exit 0

# Node standard installation
export PATH="$PATH:/c/Program Files/nodejs"

# Check that npm exists
command_exists npm || {
  echo >&2 "husky > can't find npm in PATH, skipping precommit script in package.json"
  exit 0
}

# Export Git hook params
export GIT_PARAMS="$*"

# Run npm script
echo "husky > npm run -s precommit (node `node -v`)"
echo

npm run -s precommit || {
  echo
  echo "husky > pre-commit hook failed (add --no-verify to bypass)"
  exit 1
}
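The hook itself is fine: the `npm run -s precommit || { ... exit 1; }` part only blocks the commit when the precommit script exits non-zero. A linter that prints errors but still returns exit code 0 will never trip it. A minimal sketch of the failure mode (the `fake_lint_*` functions are hypothetical stand-ins for `npm run lint`):

```shell
# `|| exit 1` in the husky hook only fires on a non-zero exit code.
fake_lint_passing() { echo "ERROR: semicolon missing"; return 0; }  # prints errors, exits 0
fake_lint_failing() { echo "ERROR: semicolon missing"; return 2; }  # prints errors, exits 2

fake_lint_passing || echo "commit blocked"   # not printed: exit code was 0
fake_lint_failing || echo "commit blocked"   # printed: non-zero exit code
```

So the thing to check is whether `npm run lint` itself propagates a non-zero exit code when tslint finds problems (you can verify with `npm run lint; echo $?`).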

In your seed.config.ts there is a boolean called FORCE_TSLINT_EMIT_ERROR. Explicitly override it to true in your project.config.ts.
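For example (a sketch assuming angular-seed's usual project.config.ts layout; the property name is taken from seed.config.ts, so verify it against your seed version):

```ts
// project.config.ts
import { SeedConfig } from './seed.config';

export class ProjectConfig extends SeedConfig {
  constructor() {
    super();
    // Make the tslint task exit non-zero on lint errors,
    // so husky's precommit hook aborts the commit.
    this.FORCE_TSLINT_EMIT_ERROR = true;
  }
}
```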


Getting permission denied when using makemigrations in django

I am using a Dockerfile to handle my Django app. I added a user to the Dockerfile as follows:
FROM python:3.9-alpine3.13
LABEL maintainer="H.Bazai"
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/
COPY requirements.dev.txt /tmp/
COPY app /app/
WORKDIR /app
EXPOSE 8000
ARG DEV=true
RUN rm -rf /var/cache/apk/*
RUN apk update && \
    apk add --no-cache --virtual .build-deps \
        build-base postgresql-dev musl-dev zlib-dev jpeg-dev && \
    apk add --no-cache postgresql-client postgresql-dev jpeg && \
    python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /tmp/requirements.txt && \
    if [ $DEV = "true" ]; then \
        /py/bin/pip install -r /tmp/requirements.dev.txt ; \
    fi && \
    rm -rf /tmp/* && \
    apk --purge del .build-deps && \
    adduser \
        --disabled-password \
        --no-create-home \
        hbazai && \
    mkdir -p /vol/web/media && \
    mkdir -p /vol/web/static && \
    chown -R hbazai:users /vol && \
    chmod -R 755 /vol
ENV PATH="/py/bin:$PATH"
USER hbazai
Then I build it with "docker-compose build".
Up to here everything is OK.
But when I use the command below to run makemigrations, I get a 'permission denied' error.
The command:
docker-compose run --rm app sh -c "python manage.py makemigrations"
The Error:
Creating recepie-api-django_app_run ... done
Migrations for 'core':
core/migrations/0005_recipe_image.py
- Add field image to recipe
Traceback (most recent call last):
  File "/app/manage.py", line 22, in <module>
    main()
  File "/app/manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/py/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/py/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/py/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/py/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/py/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
    res = handle_func(*args, **kwargs)
  File "/py/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py", line 190, in handle
    self.write_migration_files(changes)
  File "/py/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py", line 228, in write_migration_files
    with open(writer.path, "w", encoding='utf-8') as fh:
PermissionError: [Errno 13] Permission denied: '/app/core/migrations/0005_recipe_image.py'
ERROR: 1
I would appreciate it if somebody could help me out.
I tried many things to give permissions to my Ubuntu user (chown and chmod),
but I still get the error.
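A plausible cause (worth verifying in your setup): `COPY app /app/` runs as root and the `chown` in the Dockerfile only covers /vol, so /app/core/migrations stays root-owned while Django runs as hbazai. A sketch of the in-image fix (the `hbazai:hbazai` group name is an assumption; BusyBox adduser on Alpine may pick a different default group):

```dockerfile
# After creating the hbazai user, also hand over the app directory so
# makemigrations can write new migration files there:
RUN chown -R hbazai:hbazai /app
USER hbazai
```

Note that if docker-compose bind-mounts ./app into the container, the host directory's ownership wins over anything done in the image; in that case chown the directory on the host instead, or create the container user with your host UID (e.g. `adduser -u 1000 ...`) so the UIDs match.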

Salesforce CLI destructive changes (sfdx sgd:source:delta) not showing difference in branches using GitHub Actions

I am using the Salesforce destructive changes mentioned here. However, the sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output . --generate-delta command is not showing any destructive changes: it generates a destructiveChanges.xml without any deleted metadata.
--- destructiveChanges.xml generated with deleted metadata ---
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>52.0</version>
</Package>
Here is the content of the yml file I am using:
name: sf-destructivechanges
on:
  push:
    branches:
      - "test"
jobs:
  sf-destructivechanges:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Salesforce CLI
        run: |
          wget https://developer.salesforce.com/media/salesforce-cli/sfdx-linux-amd64.tar.xz
          mkdir sfdx-cli
          tar xJf sfdx-linux-amd64.tar.xz -C sfdx-cli --strip-components 1
          ./sfdx-cli/install
          sfdx update
      - name: Install plugin
        run: |
          echo 'y' | sfdx plugins:install sfdx-git-delta
      - name: Get delta files with SGD
        run: sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output . --generate-delta
      - name: Deploy changes
        run: |
          echo "--- package.xml generated with added and modified metadata ---"
          cat package/package.xml
          echo
          echo "---- Deploying added and modified metadata ----"
          sfdx force:source:deploy -x package/package.xml -u ${{ secrets.USERNAME }}
      - name: Destructive Changes
        run: |
          echo "--- destructiveChanges.xml generated with deleted metadata ---"
          cat destructiveChanges/destructiveChanges.xml
          echo
          echo "--- Deleting removed metadata ---"
          sfdx force:mdapi:deploy -d destructiveChanges -u ${{ secrets.USERNAME }} -w -1
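One likely cause (an assumption worth checking against the sfdx-git-delta docs): actions/checkout@v2 makes a shallow clone with fetch-depth: 1 by default, so HEAD^ does not exist on the runner and the HEAD^..HEAD delta comes out empty. Setting `fetch-depth: 0` on the checkout step fetches the full history. The shallow-clone effect can be sketched locally, without GitHub:

```shell
# Demonstrate that HEAD^ is unreachable in a depth-1 clone: a delta tool
# cannot diff against a commit the runner never fetched.
set -e
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git -C "$src" -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"

shallow=$(mktemp -d)/clone
git clone -q --depth 1 "file://$src" "$shallow"   # like checkout@v2's default

# HEAD exists, but its parent was never fetched:
git -C "$shallow" rev-parse HEAD^ 2>/dev/null || echo "HEAD^ missing in shallow clone"
```

In the workflow that would mean adding `with: fetch-depth: 0` under the `actions/checkout@v2` step.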

Error at node_modules/@types/react-dom/.... Subsequent variable declarations must have the same type. Variable 'a'

I have installed @types/react-dom along with typescript, @types/react, and @types/meteor, but when I try to run the type checker from the command line I get the error below.
You can reproduce the error and see all my configuration here: https://github.com/Falieson/react15-meteor1.5
Thanks for your help!
$ meteor npm run type:client

> react-meteor-example@0.1.0 type:client /Users/sjcfmett/Private/ReactMeteorExample
> tslint -p ./tsconfig.json --type-check './client/**/*.{ts,tsx}'

Error at node_modules/@types/react-dom/node_modules/@types/react/index.d.ts:3422:13: Subsequent variable declarations must have the same type. Variable 'a' must be of type 'DetailedHTMLProps<AnchorHTMLAttributes<HTMLAnchorElement>, HTMLAnchorElement>', but here has type 'DetailedHTMLProps<AnchorHTMLAttributes<HTMLAnchorElement>, HTMLAnchorElement>'.
Error at node_modules/@types/react-dom/node_modules/@types/react/index.d.ts:3423:13: Subsequent variable declarations must have the same type. Variable 'abbr' must be of type 'DetailedHTMLProps<HTMLAttributes<HTMLElement>, HTMLElement>', but here has type 'DetailedHTMLProps<HTMLAttributes<HTMLElement>, HTMLElement>'.
Error at node_modules/@types/react-dom/node_modules/@types/react/index.d.ts:3424:13: Subsequent variable declarations must have the same type. Variable 'address' must be of type 'DetailedHTMLProps<HTMLAttributes<HTMLElement>, HTMLElement>', but here has type 'DetailedHTMLProps<HTMLAttributes<HTMLElement>, HTMLElement>'.
Error at node_modules/@types/react-dom/node_modules/@types/react/index.d.ts:3425:13: Subsequent variable declarations must have the same type. Variable 'area' must be of type 'DetailedHTMLProps<AreaHTMLAttributes<HTMLAreaElement>, HTMLAreaElement>', but here has type 'DetailedHTMLProps<AreaHTMLAttributes<HTMLAreaElement>, HTMLAreaElement>'.
... (shortened)
package.json (for reference)
{
  "name": "react-meteor-example",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "start": "meteor run",
    "lint:client": "tslint --fix -c ./tslint.json -p ./tsconfig.json './client/**/*.{ts,tsx}'",
    "lint:imports": "tslint --fix -c ./tslint.json -p ./tsconfig.json './imports/**/*.{ts,tsx}'",
    "lint:server": "tslint --fix -c ./tslint.json -p ./tsconfig.json './server/**/*.ts'",
    "lint": "npm run lint:client && npm run lint:server && npm run lint:imports",
    "type:imports": "tslint -p ./tsconfig.json --type-check './imports/**/*.{ts,tsx}'",
    "type:client": "tslint -p ./tsconfig.json --type-check './client/**/*.{ts,tsx}'",
    "type:server": "tslint -p ./tsconfig.json --type-check './server/**/*.ts'",
    "type": "npm run type:client && npm run type:server && npm run type:imports",
    "precommit": "npm run lint && npm run type"
  },
  "dependencies": {
    "babel-runtime": "^6.20.0",
    "meteor-node-stubs": "~0.2.4",
    "react": "^15.6.1",
    "react-dom": "^15.6.1"
  },
  "devDependencies": {
    "@types/meteor": "^1.4.2",
    "@types/react": "^15.6.0",
    "@types/react-dom": "^15.5.1",
    "babel-preset-react": "^6.24.1",
    "babel-preset-stage-1": "^6.24.1",
    "husky": "^0.14.3",
    "tslint": "^5.5.0",
    "tslint-react": "^3.1.0",
    "typescript": "^2.4.2"
  }
}
The types for the React 16 beta have been published as the 'latest' React types.
The new version removes the definitions for the parts of React that were removed in React 16 (like React.DOM), which is expected.
Unfortunately, these React 16 beta types were published to the @latest (default) npm tag instead of @next (as React itself did).
I have an open issue (#18708) with DefinitelyTyped here: https://github.com/DefinitelyTyped/DefinitelyTyped/issues/18708
You can try targeting a specific release (npm install --save @types/react@15.6.0), but the @types/react-dom dependency on @types/react is set to "*", which seems to cause @types/react@latest to still be downloaded, leaving you with multiple versions in various places of your node_modules directory.
We are having to do some manual work to sort this out. Hopefully the folks maintaining @types/react will fix this soon.
I am using yarn, and fixed this by running rm -rf node_modules && rm yarn.lock && yarn install
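If you are on Yarn, another hedged option (assumes Yarn 1.0+ selective dependency resolutions) is to pin every nested copy of @types/react in package.json, so that @types/react-dom's "*" range cannot pull in the React 16 typings underneath it:

```json
{
  "resolutions": {
    "@types/react": "15.6.0"
  }
}
```

After adding this, rerun yarn install and check the tree (e.g. with `yarn why @types/react`) to confirm only one copy remains.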

Use GitLab CI to deploy app with ftp

I'm currently working on a little Angular web project, and I found this great tool named GitLab CI.
I read the docs and set up a Node Docker image to build the web app. Then I want to upload the built app to my server via FTP, and this is where my trouble starts.
First, here is my gitlab-ci.yml:
image: node:7.5.0

cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - dist/

stages:
  - build
  # - test
  - deploy
  - cleanup
  # - deployProd

runBuild:
  before_script:
    - npm install -g angular-cli
    - npm install
  stage: build
  script:
    - ng build --target=production --environment=test
  except:
    - tags

runProdBuild:
  before_script:
    - npm install -g angular-cli
    - npm install
  stage: build
  script:
    - ng build --target=production --environment=prod
  only:
    - tags

runDeployTest:
  before_script:
    - apt-get install ftp
  variables:
    DATABASE: ""
    URL: "http://test.domain.de"
  stage: deploy
  environment:
    name: Entwicklungssystem
    url: https://test.domain.de
  artifacts:
    name: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    paths:
      - dist/
    expire_in: 2d
  except:
    - tags
  script:
    - echo '<?php ini_set("max_execution_time", 300); function rrmdir($dir) { if (is_dir($dir)) { $objects = scandir($dir); foreach ($objects as $object) { if ($object != "." && $object != "..") { if (is_dir($dir."/".$object)) { rrmdir($dir."/".$object); } else { echo "unlink :".$dir."/".$object; unlink($dir."/".$object); } } } rmdir($dir); } } rrmdir(__DIR__."."); ?>' > delete.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . delete.php"
    - wget "$URL/delete.php"
    - cd ./dist
    - zip -r install.zip .
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . install.zip"
    - echo "<?php \$dateiname = __DIR__.'/install.zip'; \$ofolder = str_replace('/public','',__DIR__); exec('unzip '.\$dateiname.' -d '.\$ofolder.' 2>&1', \$out); print(implode('<br>', \$out)); unlink(\$dateiname); unlink('entpacker.php'); unlink(__DIR__.'/../delete.php'); unlink(__DIR__.'/../delete.php.1'); ?>" > entpacker.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . entpacker.php"
    # Install
    - wget $URL/entpacker.php

runDeployProd:
  before_script:
    - apt-get install ftp
  variables:
    DATABASE: ""
    URL: "http://test.domain.de"
  stage: deploy
  environment:
    name: Produktivsystem
    url: https://prod.domain.de
  artifacts:
    name: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    paths:
      - dist/
    expire_in: 2d
  script:
    - echo '<?php ini_set("max_execution_time", 300); function rrmdir($dir) { if (is_dir($dir)) { $objects = scandir($dir); foreach ($objects as $object) { if ($object != "." && $object != "..") { if (is_dir($dir."/".$object)) { rrmdir($dir."/".$object); } else { echo "unlink :".$dir."/".$object; unlink($dir."/".$object); } } } rmdir($dir); } } rrmdir(__DIR__."."); ?>' > delete.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . delete.php"
    - wget "$URL/delete.php"
    - cd ./dist
    - zip -r install.zip .
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . install.zip"
    - echo "<?php \$dateiname = __DIR__.'/install.zip'; \$ofolder = str_replace('/public','',__DIR__); exec('unzip '.\$dateiname.' -d '.\$ofolder.' 2>&1', \$out); print(implode('<br>', \$out)); unlink(\$dateiname); unlink('entpacker.php'); unlink(__DIR__.'/../delete.php'); unlink(__DIR__.'/../delete.php.1'); ?>" > entpacker.php
    - lftp -d -c "set ftp:ssl-allow no; open -u $ftp_user,$ftp_password $ftp_server; cd $ftp_path; put -O . entpacker.php"
    # Install
    - wget $URL/entpacker.php
  only:
    - tags

cleanup:
  stage: cleanup
  script:
    - rm -rf ./dist
    - rm -rf ./node_modules
  when: manual
It works fine until I try to install ftp into the Docker image.
My question is now: Is it possible to install ftp into the image?
Or is there another way to handle things like this? I can't use SSH because there is no SSH access to the webspace.
I found a solution. As suggested, I tried to create my own Docker image, and noticed that I couldn't install lftp there either: when building a Docker image you have to run apt-get update first.
I then tried the same inside my CI script, and it worked.
So run apt-get update first, then install any package you want, and use lftp instead of ftp:
runDeployProd:
  before_script:
    - apt-get update
    - apt-get install -y lftp
https://forum.gitlab.com/t/deploy-via-ftp-via-ci/2631/2

"no test history available" error when running cal

vogar --benchmark --stream --verbose --mode jvm ArraySortBenchmark.java
But it doesn't execute the benchmark, reporting "no test history available":
executing mkdir -p /tmp/vogar/573bd257-1b6e-4b91-92dd-1cdc6bdc491b
Actions: 1
skipped Users.louischiffre.projects.testing.ArraySortBenchmark.java
Task 0: prepare target
Task 1: rm /tmp/vogar/573bd257-1b6e-4b91-92dd-1cdc6bdc491b
  depends on completed task: prepare target
Task 2: rm /tmp/vogar/run
  depends on completed task: prepare target
running prepare target
executing rm -rf /tmp/vogar/run
executing mkdir -p /tmp/vogar/run
executing mkdir -p /tmp/vogar/run/tmp
executing mkdir -p /tmp/vogar/dalvik-cache
executing mkdir -p /tmp/vogar/run/user.home
running rm /tmp/vogar/run
running rm /tmp/vogar/573bd257-1b6e-4b91-92dd-1cdc6bdc491b
executing rm -rf /tmp/vogar/573bd257-1b6e-4b91-92dd-1cdc6bdc491b
executing rm -rf /tmp/vogar/run
parsing outcomes from 0 files
Skips summary:
Users.louischiffre.projects.testing.ArraySortBenchmark.java (no test history available)
Outcomes: 1. Passed: 0, Failed: 0, Skipped: 1. Took 129ms.
Interestingly, the caliper example
EnumSetContainsBenchmark.java
does not have this problem.
