Zap export report is not working in Jenkins - jenkins-plugins

I am trying to export a report using ZAP in Jenkins and getting the following errors:
[ZAP Jenkins Plugin] INITIALIZATION [ SUCCESSFUL ]
REQUIRED PLUGIN(S) ARE MISSING
[ZAP Jenkins Plugin] SHUTDOWN [ START ]
and in local OWASP ZAP/zap.log:-
2018-11-18 09:52:48,551 [main ] INFO Options ParamCertificate - Unsafe SSL renegotiation disabled.
2018-11-18 09:52:49,684 [main ] INFO ENGINE - open start - state not modified
2018-11-18 09:52:50,085 [main ] INFO ENGINE - dataFileCache open start
2018-11-18 09:52:50,134 [main ] INFO ENGINE - dataFileCache open end
2018-11-18 09:52:50,498 [ZAP-daemon] INFO ExtensionFactory - Loading extensions
2018-11-18 09:52:50,746 [ZAP-daemon] ERROR ExtensionAutoUpdate - Unable to load the configuration org.apache.commons.configuration.ConfigurationException: Unable to load the configuration

I resolved this issue by adding the exportreport-alpha-5 add-on to the plugin directory under the ZAP home directory.
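A minimal sketch of that fix, assuming a Linux-style ZAP home at ~/.ZAP and that the add-on file has already been downloaded from the ZAP marketplace (both paths and the file name are illustrative, not confirmed by the post):

```shell
# Copy the downloaded add-on into ZAP's plugin directory (paths are assumptions).
ZAP_HOME="${ZAP_HOME:-$HOME/.ZAP}"
ADDON="exportreport-alpha-5.zap"   # hypothetical name of the downloaded add-on file
mkdir -p "$ZAP_HOME/plugin"
# Copy only if the add-on file is present in the current directory.
if [ -f "$ADDON" ]; then
  cp "$ADDON" "$ZAP_HOME/plugin/"
fi
echo "plugin dir ready: $ZAP_HOME/plugin"
```

Restart ZAP (and re-run the Jenkins job) after copying so the add-on is picked up.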

Related

Docker Desktop 2.3.0.2 for Windows - Host Volumes Stopped Working

I have a Compose (yml) file that I use to bring servers up and down as I need them. I have just updated Docker Desktop on Windows to version 2.3.0.2, and my SQL Server instance will not stay running now.
My yaml file is simply:
version: '3.2'
services:
  dev-mssql-server:
    image: mcr.microsoft.com/mssql/server
    container_name: dev-mssql-server
    volumes:
      - type: bind
        source: C:\Data\sqlserver-linux-container
        target: /var/opt/mssql/data
    networks:
      dvlpnet:
        ipv4_address: 172.22.0.20
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=ReallyBitchin#1Password
      - MSSQL_PID=Developer
    ports:
      - "15785:1433"
networks:
  dvlpnet:
    ipam:
      config:
        - subnet: 172.22.0.0/24
volumes:
  sqlserver-linux-container:
And the message from the Docker Dashboard is:
Attaching to dev-mssql-server
dev-mssql-server | SQL Server 2019 will run as non-root by default.
dev-mssql-server | This container is running as user mssql.
dev-mssql-server | Your master database file is owned by root.
dev-mssql-server | To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
dev-mssql-server | 2020-05-18 18:25:20.88 Server Setup step is FORCE copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf'.
2020-05-18 18:25:21.13 Server ERROR: Setup FAILED copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf': 1(Incorrect function.)
dev-mssql-server | ERROR: BootstrapSystemDataDirectories() failure (HRESULT 0x80070001)
dev-mssql-server exited with code 1
And some of the Docker log file is here:
[13:26:48.754][GoBackendProcess ][Info ] stopping accepting connections on docker-proxy-port-approver/approver-6129484611666145821 tcp forward from 0.0.0.0:15785 to 0.0.0.0:0
[13:26:48.754][GoBackendProcess ][Error ] wsl unexpose error:Post http://unix/forwards/unexpose/port: open \\\\.\\pipe\\dockerWsl2BootstrapExposePorts: The system cannot find the file specified.
[13:26:48.754][GoBackendProcess ][Info ] external: POST /forwards/unexpose/port 200 \"Go-http-client/1.1\" \"\
[13:26:49.255][ApiProxy ][Info ] proxy << POST /v1.25/containers/ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6/start (515.4339ms)\n
[13:26:49.257][GoBackendProcess ][Info ] vpnkitExposer.Add(ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6, [TCP 0.0.0.0:15785 -> 127.0.0.1:15785])
[13:26:49.257][GoBackendProcess ][Info ] adding docker-containers/ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6 tcp forward from 0.0.0.0:15785 to 127.0.0.1:15785
[13:26:49.258][GoBackendProcess ][Error ] wsl expose error:Post http://unix/forwards/expose/port: open \\\\.\\pipe\\dockerWsl2BootstrapExposePorts: The system cannot find the file specified.
[13:26:49.260][GoBackendProcess ][Info ] grpcfuseSharer.Add(ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6, [src=C:\\Data\\sqlserver-linux-container,dst=/var/opt/mssql/data,option=rw])
[13:26:49.260][GoBackendProcess ][Info ] lazily recomputing file watches
[13:26:49.260][GoBackendProcess ][Info ] watching path C:\\Data\\sqlserver-linux-container\\
[13:26:49.260][GoBackendProcess ][Info ] invalidating caches for C:\\Data\\sqlserver-linux-container\\
[13:26:49.261][ApiProxy ][Info ] proxy >> GET /containers/ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6/json\n
[13:26:49.266][ApiProxy ][Info ] proxy << GET /containers/ab6315ce510c8143da6760f54425ba625a2acffb5e6fb387ca3120b992b7aef6/json (5.0047ms)\n
[13:26:49.269][GoBackendProcess ][Info ] caches invalidated for C:\\Data\\sqlserver-linux-container\\
I made sure that my C drive was still shared properly in the new version of Docker Desktop. I even redundantly added c:\data and c:\data\sqlserver-linux-container directories as shared after I started seeing these errors.
I was not able to find much out on the interwebbies on how this new version of Docker Desktop handles Windows host volumes.
I removed version 2.3.0.2 and reinstalled the older version 2.2.0.5 and everything worked happily again.
Has anyone run into this? Is there a solution yet that I was not able to find?
We have the same problem. Our investigation showed that:
Docker Desktop 2.3.0.2 (changelog)
Docker Desktop now allows sharing individual folders, rather than whole drives, giving more control to users over what is being shared.
Microsoft suggests mapping subdirectories separately:
https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker?view=sql-server-ver15#mount-a-host-directory-as-data-volume
mssql:
  container_name: $PROJECTNAME-mssql
  restart: ${DOCKER_RESTART}
  image: "mcr.microsoft.com/mssql/server:2019-CU4-ubuntu-16.04"
  volumes:
    # configuration for the local MSSQL database on Docker
    - ./var/data/mssql/data:/var/opt/mssql/data
    - ./var/data/mssql/log:/var/opt/mssql/log
    - ./var/data/mssql/secrets:/var/opt/mssql/secrets
Still doesn't work.
Edit:
There is an active issue on GitHub related to this one:
https://github.com/docker/for-win/issues/6646
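One workaround worth trying while that issue is open (not confirmed in this thread; the volume name is illustrative) is to replace the Windows bind mount with a Docker-managed named volume, which avoids the host-filesystem semantics that the setup's FORCE-copy step trips over:

```yaml
# Sketch: named volume instead of a C:\ bind mount for the data directory.
services:
  dev-mssql-server:
    image: mcr.microsoft.com/mssql/server
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=ReallyBitchin#1Password
    volumes:
      - mssql-data:/var/opt/mssql/data
volumes:
  mssql-data:
```

The trade-off is that the data then lives inside Docker's storage rather than at a browsable Windows path.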

Symfony3 on Google App Engine Flexible can't connect to Google Cloud SQL MySQL

I can't get my Symfony 3 project on Google App Engine Flexible to connect to Google Cloud SQL (MySQL, 2nd generation).
Starting off with these examples
https://github.com/GoogleCloudPlatform/php-docs-samples/tree/master/appengine/flexible/symfony
https://github.com/GoogleCloudPlatform/php-docs-samples/tree/master/appengine/flexible/cloudsql-mysql
I created a Google App Engine app and a Google Cloud SQL instance (MySQL, 2nd generation), both in europe-west3.
I tested out these examples on my GAE and both run fine.
Then I went on and created a new, very simple Symfony 3.3 project from scratch.
After creation (with only the welcome screen and "Your application is now ready. You can start working on it at: /app/") I deployed that to my GAEflex and it works fine.
After that I added one simple entity and an extra controller for that.
This works fine, as it should, when starting Google's cloud_sql_proxy and the Symfony built-in web server with "php bin/console server:run".
It reads and writes from and to the Cloud SQL, all easy peasy.
But no matter what I tried, I always got a "Connection refused" on one (or another, depending on my state of experimentation) of the commands from "symfony-scripts" / "post-install-cmd" in my composer.json.
"$ gcloud beta app deploy" output (excerpt)
...
Step #1: Install PHP extensions...
Step #1: Running composer...
Step #1: Loading composer repositories with package information
...
Step #1: [Doctrine\DBAL\Exception\ConnectionException]
Step #1: An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
Step #1:
Step #1:
Step #1: [Doctrine\DBAL\Driver\PDOException]
Step #1: SQLSTATE[HY000] [2002] Connection refused
Step #1:
Step #1:
Step #1: [PDOException]
Step #1: SQLSTATE[HY000] [2002] Connection refused
...
parameters.yml (excerpt)
parameters:
  database_host: 127.0.0.1
  database_port: 3306
  database_name: gae-test-project
  database_user: root
  database_password: secretpassword
config.yml (excerpt)
doctrine:
  dbal:
    driver: pdo_mysql
    host: '%database_host%'
    port: '%database_port%'
    unix_socket: '/cloudsql/restof:europe-west3:connectionname'
    dbname: '%database_name%'
    user: '%database_user%'
    password: '%database_password%'
    charset: utf8mb4
    default_table_options:
      charset: utf8mb4
      collate: utf8mb4_unicode_ci
composer.json (excerpt)
"require": {
    "php": "^7.1",
    "doctrine/doctrine-bundle": "^1.6",
    "doctrine/orm": "^2.5",
    "incenteev/composer-parameter-handler": "^2.0",
    "sensio/distribution-bundle": "^5.0.19",
    "sensio/framework-extra-bundle": "^3.0.2",
    "sensio/generator-bundle": "^3.0",
    "symfony/monolog-bundle": "^3.1.0",
    "symfony/polyfill-apcu": "^1.0",
    "symfony/swiftmailer-bundle": "^2.3.10",
    "symfony/symfony": "3.3.*",
    "twig/twig": "^1.0||^2.0"
},
"require-dev": {
    "symfony/phpunit-bridge": "^3.0"
},
"scripts": {
    "symfony-scripts": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "post-install-cmd": [
        "chmod -R ug+w $APP_DIR/var",
        "@symfony-scripts"
    ],
    "post-update-cmd": [
        "@symfony-scripts"
    ]
},
app.yml (complete)
service: default
runtime: php
env: flex

runtime_config:
  document_root: web

env_variables:
  SYMFONY_ENV: prod
  MYSQL_USER: root
  MYSQL_PASSWORD: secretpassword
  MYSQL_DSN: mysql:unix_socket=/cloudsql/restof:europe-west3:connectionname;dbname=gae-test-project
  WHITELIST_FUNCTIONS: libxml_disable_entity_loader

#[START cloudsql_settings]
# Use the connection name obtained when configuring your Cloud SQL instance.
beta_settings:
  cloud_sql_instances: "restof:europe-west3:connectionname"
#[END cloudsql_settings]
Keep in mind that I added or altered only the lines that the examples and documentation advised me to.
What am I doing wrong?
The examples both work fine, and a Symfony project connecting to the DB through a running cloud_sql_proxy works fine, but when I try to get Doctrine to connect to Cloud SQL from within GAE Flex, I get no connection.
Can anyone spot the rookie mistake I might have made?
Has anyone had this problem and maybe I forgot one or the other setting with the GAE or CloudSQL?
Can anyone point me to a repository of a working Symfony3/Doctrine project that uses GAEFlex and CloudSQL?
I am stuck, as I can't find any up-to-date documentation about how to get this rather obvious combination to run.
Any help is greatly appreciated!
Cheers /Carsten
I found it.
I read this post, Flexible env: post-deploy-cmd from composer.json is not being executed, which yielded two interesting pieces of information:
App Engine Flex for PHP doesn't run post-deploy-cmd any more;
just use post-install-cmd as a drop-in replacement, unless you need a DB connection.
So, during deployment, there is simply no database connection / socket available!
That's why Google introduced the (non-standard) post-deploy-cmd, which unfortunately seems to have caused some other trouble, so they removed it again. Bummer.
In effect, as of today, one can't use anything that touches the database within post-install-cmd.
In the case above, I just changed
composer.json (excerpt)
"scripts": {
    "symfony-scripts": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "pre-install-cmd": [
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installAssets"
    ],
    "post-install-cmd": [
        "chmod -R ug+w $APP_DIR/var",
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::installRequirementsFile",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::prepareDeploymentTarget"
    ],
    "post-update-cmd": [
        "@symfony-scripts"
    ]
},
And everything runs fine...
I'm not sure about the innards of installAssets, or whether I still achieve the originally intended effect when I move it to pre-install-cmd, but it works for a very simple example application.
Over the next few days I'm going to carry this over to my more complex application and see if I can get that to work.
Further discussion will most likely happen over at the Google Cloud Platform Community.
Cheers everyone!

IBM Watson Discovery + Conversation

I followed the demo shown here. Everything works fine in the demo, but when I tried to use my own Discovery collection, it gave the error “Service seems to be down. Please try again after sometime or Please check the logs” and the local (wlp) server stopped. See the console log below. Where is the problem?
I used the same workspace as the demo and changed the “Out-of-scope” intent questions. It retrieves answers from Conversation but not from Discovery.
[AUDIT ] CWWKZ0058I: Monitoring dropins for applications.
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/
[AUDIT ] CWWKZ0001I: Application conversation-with-discovery-0.1-SNAPSHOT started in 10.257 seconds.
[AUDIT ] CWWKF0012I: The server installed the following features: [webProfile-7.0, jaxrs-2.0, json-1.0, appSecurity-2.0, jpa-2.1, cdi-1.2, jaxrsClient-2.0, distributedMap-1.0, websocket-1.1, el-3.0, beanValidation-1.1, ssl-1.0, jdbc-4.1, managedBeans-1.0, servlet-3.1, jsf-2.2, jsp-2.3, jndi-1.0, jsonp-1.0, ejbLite-3.2].
[AUDIT ] CWWKF0011I: The server defaultServer is ready to run a smarter planet.
2017-05-18 14:18:24,684 INFO Log4j appears to be running in a Servlet environment, but there's no log4j-web module available. If you want better web container support, please add the log4j-web JAR to your web archive or server lib directory.
14:18:24.817 [Default Executor-thread-13] INFO com.ibm.watson.apis.conversation_with_discovery.listener.AppServletContextListener - Deploying ServletContextListener
14:18:29.825 [Thread-12] INFO com.ibm.watson.apis.conversation_with_discovery.listener.SetupThread - Setup Complete
14:19:07.748 [Default Executor-thread-1] INFO com.ibm.watson.apis.conversation_with_discovery.discovery.DiscoveryClient - Creating Discovery Payload
[WARNING ] Application {http://rest.conversation_with_discovery.apis.watson.ibm.com/}ProxyResource has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault
[WARNING ] Exception in handleFault on interceptor org.apache.cxf.jaxrs.interceptor.JAXRSDefaultFaultOutInterceptor#54b2f9b0
org.apache.cxf.interceptor.Fault
[ERROR ] Error occurred during error handling, give up!
org.apache.cxf.interceptor.Fault
[ERROR ] SRVE0777E: Exception thrown by application class 'com.ibm.watson.apis.conversation_with_discovery.rest.ProxyResource.postMessage:192'
java.lang.NullPointerException
at com.ibm.watson.apis.conversation_with_discovery.rest.ProxyResource.postMessage(ProxyResource.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.ibm.ws.jaxrs20.server.LibertyJaxRsServerFactoryBean.performInvocation(LibertyJaxRsServerFactoryBean.java:674)
at [internal classes]

gcloud app deploy gives 400 / forbidden error / cannot push image to Google Container Registry

I am trying to deploy a .NET Core application from a Google Compute Engine VM to Google App Engine using gcloud app deploy. I get the following error:
> WARNING: We couldn't validate that your project is ready to deploy to App Engine Flexible Environment. If deployment fails, please check the following message and try again:
Server responded with code [400]:
Bad Request Unexpected HTTP status 400.
Failed Project Preparation (app_id='s~project-id'). Out of retries. Last error: Temporary error occurred while verifying project: TEMPORARY_ERROR: Unable to check API status
Beginning deployment of service [default]...
WARNING: Deployment of App Engine Flexible Environment apps is currently in Beta
Building and pushing image for service [default]
Some files were skipped. Pass `--verbosity=info` to see which ones.
ERROR: (gcloud.app.deploy) Could not copy [/tmp/tmpLwvVOb/src.tgz] to [us.gcr.io/project-id/appengine/default.20170118t043919:latest]: HttpError accessing <https://www.googleapis.com/resumable/upload/storage/v1/b/staging.project-id.appspot.com/o?uploadType=resumable&alt=json&name=us.gcr.io%2Fcasepro-v3%2Fappengine%2Fdefault.20170118t043919%3Alatest>: response: <{'status': '403', 'content-length': '166', 'vary': 'Origin, X-Origin', 'server': 'UploadServer', 'x-guploader-uploadid': 'AEnB2UqprxH-2tIhsSZdGxDOtS8UnWSI29YTo4kaptNK67SWJpLVqR0zEtCAHgFyE64wj1HfCyUL5sy9z4AZkTRFYuxXfdw5TA', 'date': 'Wed, 18 Jan 2017 04:40:00 GMT', 'alt-svc': 'quic=":443"; ma=2592000; v="35,34"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "forbidden",
        "message": "Forbidden"
      }
    ],
    "code": 403,
    "message": "Forbidden"
  }
}
>. Please retry.
I have already enabled the Billing API, App Engine Admin API, and Storage API. The service account being used has editor rights. The VM instance was created using Cloud Launcher for the Jenkins Bitnami package. I am trying to deploy the app from the command line on the VM before I configure Jenkins to do the same.
What can I do to resolve this?
The problem is that gcloud app deploy is trying to deploy to the project ID 'project-id', which cannot be your real project ID.
Try setting the project like this:
gcloud config set project MY-PROJECT-ID
Then, retry the gcloud app deploy command.
If this fails, please reply with your full gcloud command line, and the results of these two commands:
gcloud config list
gcloud version

How to create local copy of GAE datastore?

I want to make a client version of my GAE app that stores the exact data of the online version (myapp.appspot.com). If I can use the SDK instead, is there any library or tool to sync the online and SDK versions? I tried using the bulkloader, but I can't load the downloaded data into the local SDK. Please help.
As explained in this article (link updated, thanks to Zied Hamdi),
you simply need to enable the remote API:
builtins:
- remote_api: on
Update your application, then run the following commands:
appcfg.py download_data -A s~YOUR_APP_NAME --url=http://YOUR_APP_NAME.appspot.com/_ah/remote_api/ --filename=data.csv
appcfg.py --url=http://localhost:8080/_ah/remote_api/ --filename=data.csv upload_data .
Edit, for after April 12th 2016, on the latest App Engine SDK:
The above works for SDK version 1.9.0 and before. However, with the deprecation of ClientLogin, the above will cause an error of
03:13 PM Uploading data records.
[INFO ] Logging to bulkloader-log-20160909.151355
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20160909.151355.sql3
2016-09-09 15:13:55,175 INFO client.py:578 Refreshing due to a 401 (attempt 1/2)
2016-09-09 15:13:55,176 INFO client.py:804 Refreshing access_token
2016-09-09 15:13:55,312 INFO client.py:578 Refreshing due to a 401 (attempt 2/2)
As recommended by Anssi here, we can use the API server directly without running into this error. For a typical dev_appserver startup you get the following output:
INFO 2016-09-09 19:27:11,662 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2016-09-09 19:27:11,899 api_server.py:205] Starting API server at: http://localhost:52497
INFO 2016-09-09 19:27:11,905 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-09-09 19:27:11,918 admin_server.py:116] Starting admin server at: http://localhost:8000
Instead, for the upload, use the API server port, in this case:
appcfg.py --url=http://localhost:52497/_ah/remote_api/ --filename=data.csv upload_data .
See the docs for details on how to download and upload your entire datastore. Simply bulk download from production, then bulk upload to your local datastore.
Bear in mind, however, that the local datastore is not designed to handle large volumes of data - you may run into performance or memory issues.
