SonarQube server 7.9.1
SonarQube Scanner 3.2.0.1227
Java 1.8.0_121 Oracle Corporation (64-bit)
Linux 4.15.0-112-generic amd64
I'm using the sonar scanner to analyse my source code. I realised that for 2 working copies I get different results on the server and was wondering why. I compared the scanner logs for both runs and noticed that this line is missing from the 2nd working copy's log:
INFO: SCM provider for this project is: git
The following directory structure should explain the differences between the working copies, where the 2nd is a fork of the base repository:
└── Work1
    ├── .git
    ├── build
    │   └── config1
    ├── sonar-project.properties
    └── src
└── Work2
    ├── .git
    ├── build
    │   └── config2
    │       └── .git
    ├── sonar-project.properties
    └── src
I start the analysis of my 1st working copy from within the build folder (Work1/build/config1 $> make sonar -> cd Work1; sonar-scanner-3.2.0.1227-linux/bin/sonar-scanner...) where the scanner finds the sonar-project.properties. The analysis is executed without any issues and the report shows perfect results.
When I start the analysis from the fork - also from within its build folder - (Work2/build/config2 $> make sonar -> cd Work2; sonar-scanner-3.2.0.1227-linux/bin/sonar-scanner...), the analysis gives no indication that anything went wrong. The results are stored on the server, but the report contains suspicious results.
As an example, the following image shows the differences for one source file:
The left side shows the file from the base repository (Work1/src/...), the right side the same file from the fork (Work2/src/...).
My impression is that, since the 2nd analysis run's log lacks the line INFO: SCM provider for this project is: git, the scanner cannot associate the Work2/build/config2/.git repository with the sources taken from Work2/src.
Is my assumption correct?
I tried setting the options -Dsonar.scm.provider=git -Dsonar.projectBaseDir=Work2/ explicitly, according to the online documentation, but with no luck.
How can I change the base folder for the SCM provider?
In the sonar-project.properties file, add:
sonar.scm.disabled=true
or pass
-Dsonar.scm.disabled=true
on the command line.
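For reference, a minimal sketch of a scanner invocation that combines this with an explicit base directory (the paths come from the question; adjust the scanner location to your installation):

cd Work2
sonar-scanner-3.2.0.1227-linux/bin/sonar-scanner \
  -Dsonar.projectBaseDir=. \
  -Dsonar.scm.disabled=true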
I am trying to create a custom model/image/container for Amazon SageMaker.
I have read all the basic tutorials on how to create an image with your requirements. I actually have a properly set up image which runs TensorFlow, trains, deploys and serves the model locally.
The problems come when I try to run the container using the SageMaker Python SDK, more precisely when trying to use the Framework module and class to create my own custom estimator to run the custom image/container.
Here is the minimum code to explain my case:
File Structure:
.
├── Dockerfile
├── variables.env
├── requirements.txt
├── test_sagemaker.ipynb
├── src
│   ├── train
│   ├── serve
│   ├── predict.py
│   └── custom_code/my_model_functions
│
└── local_test
    ├── train_local.sh
    ├── serve_local.sh
    ├── predict.sh
    └── test_dir
        ├── model/model.pkl
        ├── output/output.txt
        └── input
            ├── data/data.pkl
            └── config
                ├── hyperparameters.json
                ├── inputdataconfig.json
                └── resourceconfig.json
Dockerfile:
FROM ubuntu:16.04
MAINTAINER Amazon AI <sage-learner@amazon.com>

# Install python and other runtime dependencies
RUN apt-get update && \
    apt-get -y install build-essential libatlas-dev git wget curl nginx jq && \
    apt-get -y install python3-dev python3-setuptools

# Install pip
RUN cd /tmp && \
    curl -O https://bootstrap.pypa.io/get-pip.py && \
    python3 get-pip.py && \
    rm get-pip.py

# Installing Requirements
COPY requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt

# Set SageMaker training environment variables
ENV SM_ENV_VARIABLES env_variables

COPY local_test/test_dir /opt/ml

# Set up the program in the image
COPY src /opt/program
WORKDIR /opt/program
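For context, the local testing that already works typically builds the image and mounts the test directory over /opt/ml, roughly like this (a sketch based on the local_test layout above; the exact contents of train_local.sh are not shown, and it assumes the train script is executable):

docker build -t sagemaker-custom-image:latest .
# mount the local test directory over /opt/ml and invoke the train script
docker run --rm -v $(pwd)/local_test/test_dir:/opt/ml sagemaker-custom-image:latest ./train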
Train
from __future__ import absolute_import
import json, sys, logging, os, subprocess, time, traceback
from pprint import pprint

# Custom Code Functions
from custom_code.custom_estimator import CustomEstimator
from custom_code.custom_dataset import create_dataset

# Important Sagemaker Modules
import sagemaker_containers.beta.framework as framework
from sagemaker_containers import _env

logger = logging.getLogger(__name__)


def run_algorithm_mode():
    """Run training in algorithm mode, which does not require a user entry point."""
    train_config = os.environ.get('training_env_variables')
    model_path = os.environ.get("model_path")

    print("Downloading Dataset")
    train_dataset, test_dataset = create_dataset(None)

    print("Creating Model")
    clf = CustomEstimator.create_model(train_config)

    print("Starting Training")
    clf = clf.train_model(train_dataset, test_dataset)

    print("Saving Model")
    module_name = 'classifier.pkl'
    CustomEstimator.save_model(clf, model_path)


def train(training_environment):
    """Run Custom Model training in either 'algorithm mode' or using a user-supplied module in the local SageMaker environment.

    The user-supplied module and its dependencies are downloaded from S3.
    Training is invoked by calling a "train" function in the user-supplied module.

    Args:
        training_environment: training environment object containing environment variables,
            training arguments and hyperparameters
    """
    if training_environment.user_entry_point is not None:
        print("Entry Point Received")
        framework.modules.run_module(training_environment.module_dir,
                                     training_environment.to_cmd_args(),
                                     training_environment.to_env_vars(),
                                     training_environment.module_name,
                                     capture_error=False)
        print_directories()
    else:
        logger.info("Running Custom Model Sagemaker in 'algorithm mode'")
        try:
            _env.write_env_vars(training_environment.to_env_vars())
        except Exception as error:
            print(error)
        run_algorithm_mode()


def main():
    train(framework.training_env())
    sys.exit(0)


if __name__ == '__main__':
    main()
test_sagemaker.ipynb
I created this custom SageMaker estimator using the Framework class of the SageMaker SDK.
import boto3
from sagemaker.estimator import Framework
from sagemaker.tensorflow import TensorFlow  # missing in the original snippet; needed for create_model below


class ScriptModeTensorFlow(Framework):
    """This class is temporary until the final version of Script Mode is released."""

    __framework_name__ = "tensorflow-scriptmode"

    create_model = TensorFlow.create_model

    def __init__(
        self,
        entry_point,
        source_dir=None,
        hyperparameters=None,
        py_version="py3",
        image_name=None,
        **kwargs
    ):
        super(ScriptModeTensorFlow, self).__init__(
            entry_point, source_dir, hyperparameters, image_name=image_name, **kwargs
        )
        self.py_version = py_version
        self.image_name = None
        self.framework_version = '2.0.0'
        self.user_entry_point = entry_point
        print(self.user_entry_point)
Then I create the estimator, passing the entry_point and the image (plus all the other parameters the class needs to run):
estimator = ScriptModeTensorFlow(entry_point='training_script_path/train_model.py',
                                 image_name='sagemaker-custom-image:latest',
                                 source_dir='source_dir_path/input/config',
                                 train_instance_type='local',  # Run in local mode
                                 train_instance_count=1,
                                 hyperparameters=hyperparameters,
                                 py_version='py3',
                                 role=role)
Finally, hitting training...
estimator.fit({"train": "s3://s3-bucket-path/training_data"})
but I get the following error:
Creating tmpm3ft7ijm_algo-1-mjqkd_1 ...
algo-1-mjqkd_1 | Attaching to tmpm3ft7ijm_algo-1-mjqkd_1
algo-1-mjqkd_1 | Reporting training FAILURE
algo-1-mjqkd_1 | framework error:
algo-1-mjqkd_1 | Traceback (most recent call last):
algo-1-mjqkd_1 | File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/_trainer.py", line 65, in train
algo-1-mjqkd_1 | env = sagemaker_containers.training_env()
algo-1-mjqkd_1 | File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/__init__.py", line 27, in training_env
algo-1-mjqkd_1 | resource_config=_env.read_resource_config(),
algo-1-mjqkd_1 | File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/_env.py", line 240, in read_resource_config
algo-1-mjqkd_1 | return _read_json(resource_config_file_dir)
algo-1-mjqkd_1 | File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/_env.py", line 192, in _read_json
algo-1-mjqkd_1 | with open(path, "r") as f:
algo-1-mjqkd_1 | FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/input/config/resourceconfig.json'
algo-1-mjqkd_1 |
algo-1-mjqkd_1 | [Errno 2] No such file or directory: '/opt/ml/input/config/resourceconfig.json'
algo-1-mjqkd_1 | Traceback (most recent call last):
algo-1-mjqkd_1 | File "/usr/local/bin/dockerd-entrypoint.py", line 24, in <module>
algo-1-mjqkd_1 | subprocess.check_call(shlex.split(' '.join(sys.argv[1:])))
algo-1-mjqkd_1 | File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
algo-1-mjqkd_1 | raise CalledProcessError(retcode, cmd)
algo-1-mjqkd_1 | subprocess.CalledProcessError: Command '['train']' returned non-zero exit status 2.
tmpm3ft7ijm_algo-1-mjqkd_1 exited with code 1
Aborting on container exit...
At first glance the error seems obvious: the file '/opt/ml/input/config/resourceconfig.json' is missing. The thing is, I have no way of creating this file so that the sagemaker framework can get the hosts for multiprocessing (which I don't need yet).
When I create the image 'sagemaker-custom-image:latest' following the folder structure shown below, I already copy a 'resourceconfig.json' into the '/opt/ml/input/config/' folder inside the image.
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   ├── inputdataconfig.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure
Reading the AWS documentation on using the SageMaker SDK to run your image, it says that any files placed in the container under '/opt/ml' may no longer be visible during training:
"/opt/ml and all sub-directories are reserved by Amazon SageMaker training. When building your algorithm's docker image, please ensure you don't place any data required by your algorithm under them as the data may no longer be visible during training." (How Amazon SageMaker Runs Your Training Image)
This basically sums up my problem.
Yes, I know I can make use of the prebuilt estimators and images from SageMaker.
Yes, I know I can bypass the framework library and run the image's train script via docker run.
But I need to implement a fully custom SageMaker SDK/image/container/model that works with an entry point. I know it is a bit ambitious.
So, to reformulate my question: how do I get the SageMaker Framework or SDK to create the required resourceconfig.json file inside the container?
Apparently, running the image remotely solved the problem.
I am using a remote AWS instance type, 'ml.m5.large'.
Somewhere in the SageMaker SDK code, the files needed by the image are created and passed in, but only when running on a remote machine, not locally.
It also seems that this file has been renamed from "resourceConfig.json" to "resourceconfig.json".
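If local mode is still needed, a possible workaround (my assumption, not something confirmed by the SageMaker documentation or the answer above) is to write a minimal single-host resource config before the training entry point reads it, for example:

import json
import os

# Minimal single-host resource config; the field values are assumptions
# matching what sagemaker_containers expects for a one-machine local run.
resource_config = {
    "current_host": "algo-1",
    "hosts": ["algo-1"],
    "network_interface_name": "eth0",
}

os.makedirs("/opt/ml/input/config", exist_ok=True)
with open("/opt/ml/input/config/resourceconfig.json", "w") as f:
    json.dump(resource_config, f)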
Building with Stack, I have a lib.hs in the src/ and a main.c in the app/. When building, the lib_stub.h is generated under .stack-work/dist/x86_64-linux/Cabal-2.2.0.1/build.
To include this file in main.c, I either have to write a complete absolute path in the #include directive or manually copy the lib_stub.h file to app/ before a second build pass, which is kind of stupid.
Is there a better way?
More information:
My package.yaml looks like
name: mylib
version: 0.1.0.0
github: "gituser/mylib"
license: BSD3
author: "Author name here"
maintainer: "example@example.com"
copyright: "2018 Author name here"

extra-source-files:
- README.md
- ChangeLog.md

# Metadata used when publishing your package
# synopsis: Short description of your package
# category: Web

# To avoid duplicated efforts in documentation and dealing with the
# complications of embedding Haddock markup inside cabal files, it is
# common to point users to the README.md file.
description: Please see the README on GitHub at <https://github.com/gituser/mylib#readme>

dependencies:
- base >= 4.7 && < 5

library:
  source-dirs: src
  dependencies:
  - free
  - mtl

executables:
  cont-demo:
    main: main.c
    source-dirs: app
    ghc-options:
    - -threaded
    # - -rtsopts
    # - -with-rtsopts=-N
    dependencies:
    - mylib

tests:
  mylib-test:
    main: Spec.hs
    source-dirs: test
    ghc-options:
    - -threaded
    - -rtsopts
    - -with-rtsopts=-N
    dependencies:
    - mylib
And my path structure looks like
.
├── app
│   ├── MyLib_stub.h
│   └── main.c
├── ChangeLog.md
├── mylib.cabal
├── LICENSE
├── package.yaml
├── README.md
├── Setup.hs
├── src
│   └── MyLib.hs
├── stack.yaml
└── test
    └── Spec.hs
where app/MyLib_stub.h is manually copied, not automatically placed there.
I'd really like it if Stack had some proper way of doing this, but AFAIK it doesn't.
What I currently do in a project with similar requirements is, instead of copying the _stub.h file to a more convenient location, symlink it. This only needs to be done once, the symlink can be put under version control, and then updates to the LONG_PATH/....h file will automatically show up in the convenient location.
$ ln -s dist/build/bla/bla/long/path/MyLib_stub.h app/MyLib_stub.h
$ git add app/MyLib_stub.h
I'm afraid this will not work on Windows, but there's probably a similar alternative for that.
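Adapted to the Stack layout from the question, that would look something like the following (the Cabal-2.2.0.1 segment inside .stack-work depends on your snapshot, so treat the exact path as an assumption):

$ ln -s ../.stack-work/dist/x86_64-linux/Cabal-2.2.0.1/build/MyLib_stub.h app/MyLib_stub.h
$ git add app/MyLib_stub.h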
I know this topic has been referenced a few times already. Unfortunately I still wasn't able to find a working solution for my use case.
I can't seem to get vendoring working for my Go application on App Engine Standard. I'm using dep for vendoring.
I'm building a GraphQL API and here is my folder structure:
/GOPATH
└──/src
   └──/acme
      ├──/app
      │   ├── app.yaml
      │   └── app.go
      ├──/src
      │   ├── /mutations/
      │   ├── /queries/
      │   └── /types/
      └──/vendor/
Running goapp serve app/app.yaml on Cloud Shell fails with
INFO 2018-05-14 15:42:08,386 devappserver2.py:764] Skipping SDK update check.
INFO 2018-05-14 15:42:08,471 api_server.py:268] Starting API server at: http://0.0.0.0:47213
INFO 2018-05-14 15:42:08,600 dispatcher.py:199] Starting module "default" running at: http://0.0.0.0:8080
INFO 2018-05-14 15:42:08,601 admin_server.py:116] Starting admin server at: http://0.0.0.0:8000
ERROR 2018-05-14 15:42:13,983 go_runtime.py:181] Failed to build Go application: (Executed command: /google/go_appengine/goroot/bin/go-app-builder -app_base /home/xxx/gopath/src/acme/app -arch 6 -dynamic -goroot /google/go_appengine/goroot -gopath /home/xxx/gopath:/google/gopath -nobuild_files ^^$ -incremental_rebuild -unsafe -binary_name _go_app -extra_imports appengine_internal/init -work_dir /tmp/tmpbt8DA2appengine-go-bin -gcflags -I,/google/go_appengine/goroot/pkg/linux_amd64_appengine -ldflags -L,/google/go_appengine/goroot/pkg/linux_amd64_appengine app.go)
/home/xxx/gopath/src/acme/vendor/github.com/graphql-go/graphql/definition.go:4: can't find import: "context"
2018/05/14 15:42:09 Can't find package "context" in $GOPATH: cannot find package "context" in any of:
/home/xxx/gopath/src/acme/vendor/context (vendor tree)
/google/go_appengine/goroot/src/context (from $GOROOT)
/home/xxx/gopath/src/context (from $GOPATH)
/google/gopath/src/context
It looks like the problem might be that one vendored dependency is not using a full import path for "context".
(EDIT: probably not the case though, since I'm using 1.8.)
Has anyone ever managed to successfully deploy on App Engine Standard using vendoring? I've been pulling my hair out all day on this.
Just in case anyone else struggles with this, this is the approach I've taken that seems to work for me.
Directory structure looks like this:
/GOPATH
├──/appengine
│  ├──/.git/
│  ├──/project1
│  │  ├── app.yaml
│  │  └── app.go
│  └──/project2
│     ├── app.yaml
│     └── app.go
└──/src
   ├──/project1
   │  ├──/.git/
   │  ├──/mutations/
   │  ├──/queries/
   │  ├──/types/
   │  ├──/vendor/
   │  └──/main.go
   └──/project2
      ├──/.git/
      ├──/foo/
      ├──/bar/
      ├──/vendor/
      └──/main.go
Each app.go file below the appengine folder contains:
package projectX

import "projectX"

func init() {
    projectX.Run()
}
Each main.go file below src/projectX contains:
package projectX

import (
    // Import whatever you need
    "google.golang.org/appengine"
)

func Run() {
    // Do whatever you need
    appengine.Main()
}
It seems that having the folder that contains app.yaml outside of $GOPATH/src is indeed necessary.
This is also not ideal for version control if you need each project versioned under its own git repo as opposed to one monolithic repo. I solved this by versioning each project AND versioning the appengine folder separately as well.
I was having the same issue and spent ages looking around trying to figure out why it wasn't working. This answer is a little late but hopefully will be useful for anyone else who has this issue.
I updated to Go 1.11, which I thought wasn't supported (I found one of the GCP examples on GitHub using it).
Set runtime: go111 in app.yaml and it will support vendoring and give you a link to a proper build log.
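For reference, the app.yaml for that runtime can be as minimal as this (a sketch; anything beyond the runtime line is optional):

# app.yaml for the Go 1.11 runtime
runtime: go111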
Now my directory structure is as follows.
/GOPATH
└──/src
   ├──/project1
   │  ├──/.git/
   │  ├──/whateverCode/
   │  ├──/vendor/
   │  ├──/main.go
   │  └──/app.yaml
I assume if it supports Go 1.11 we could also use Modules for versioning but I haven't looked into that yet.
The context package will be inside $GOROOT (not in the vendor directory). Probably your Go App Engine SDK is old and does not support Go 1.8.
Update your SDK to the latest version. The $GOROOT should look like /path/to/somewhere/goroot-1.8.
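Assuming the SDK components were installed through the Google Cloud SDK, updating could look roughly like this (a sketch; your installation method may differ):

gcloud components update
gcloud components install app-engine-go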
I'm trying the following tutorial.
Automatic serverless deployments with Cloud Source Repositories and Container Builder
But I got the error below.
$ gcloud container builds submit --config deploy.yaml .
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory .: ZIP does not support timestamps before 1980
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
I'm now trying to solve it. Do you have any idea? My gcloud is the latest version.
$ gcloud -v
Google Cloud SDK 193.0.0
app-engine-go
app-engine-python 1.9.67
beta 2017.09.15
bq 2.0.30
core 2018.03.09
gsutil 4.28
Sample Google Cloud Function code from the tutorial:
#index.js
exports.f = function(req, res) {
  res.send("hello, gcf!");
};
#deploy.yaml
steps:
- name: gcr.io/cloud-builders/gcloud
  args:
  - beta
  - functions
  - deploy
  - --trigger-http
  - --source=.
  - --entry-point=f
  - hello-gcf # Function name
#deploying without Cloud Container Builder is fine.
gcloud beta functions deploy --trigger-http --source=. --entry-point=f hello-gcf
Container Builder tars your source folder. Maybe something in your . directory has corrupted dates? That would be why moving the code into a src folder fixes it.
While I don't know the reason, I found a workaround.
(1) Make a src directory and move index.js into it.
├── deploy.yaml
└── src
    └── index.js
(2) deploy via Cloud Container Builder.
$ gcloud container builds submit --config deploy.yaml ./src
I ran into the same issue just now. I could not solve it, but at least I found out where it comes from.
When you submit your build locally, a tar is created and uploaded to a bucket. In this tar the folders are dated 01.01.1970:
16777221 8683238 drwxr-xr-x 8 user staff 0 256 "Jan 1 01:00:00 1970" "Jan 1 01:00:00 1970" "May 15 12:42:04 2019" "Jan 1 01:00:00 1970" 4096 0 0 test
This issue only occurs locally. If you use a GitHub build trigger, it works.
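Given that diagnosis, one possible local workaround (my assumption, not something verified in this thread) is to refresh the timestamps before submitting:

# reset every file/folder timestamp under the current directory to "now"
find . -exec touch {} +
gcloud container builds submit --config deploy.yaml .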
I recently came across the same issue using Cloud Build (the successor to Container Builder).
What helped was adding a step to list all the files/folders in the Cloud Build environment (default directory is /workspace) to identify the problematic file/folder. You can do this by overriding the gcloud container's entrypoint to execute the ls command.
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "ls"
  args: ["-la", "/workspace"]
I am working on a server application using BSD sockets; it's a C project and has been built on Heroku using a custom buildpack.
I can't figure out how to execute the binary afterwards.
The buildpack contains:
bin/
  detect.sh
  compile.sh
  release.sh
release.sh
#!/usr/bin/env bash
# bin/release <build-dir>

cat <<EOF
---
config_vars:
  PATH: /app/bin:/usr/local/bin:/usr/bin:/bin
EOF
The binary builds fine using make as reported in the activity feed of the dashboard.
I need to run the server so I can connect to it using the client I have developed from my local machine.
EDIT: I have added a Procfile; to run the binary, its contents are:
spinup: bin/serverUDP 1071
serverUDP is the name of the binary file inside the bin/ folder of the application.
EDIT:
Build log:
-----> Fetching set buildpack https://github.com/damorton/heroku-buildpack-c.git... done
-----> C app detected
-----> Compiling with Make
make: `vendor/bin/busltee' is up to date.
-----> Discovering process types
Procfile declares types -> spinup
-----> Compressing... done, 4K
-----> Launching... done, v20
https://hangman-udp.herokuapp.com/ deployed to Heroku
Logs:
2015-12-04T10:45:25.977074+00:00 heroku[spinup.1]: Process exited with status 0
2015-12-04T10:45:25.992332+00:00 heroku[spinup.1]: State changed from up to crashed
2015-12-04T10:51:53.697297+00:00 heroku[api]: Deploy ebe93d8 by damorton#xmail.com
2015-12-04T10:51:53.697370+00:00 heroku[api]: Release v21 created by damorton#xmail.com
2015-12-04T10:51:55.209687+00:00 heroku[spinup.1]: Starting process with command `bin/serverUDP 1071`
2015-12-04T10:51:55.814271+00:00 heroku[spinup.1]: State changed from starting to up
2015-12-04T10:51:57.750368+00:00 heroku[spinup.1]: State changed from up to crashed
Command after deploy:
heroku ps:scale spinup=1
I found out that the Procfile is used to execute the binary after the build. The problem I was having wasn't related to the binary being executed; it was that the binary wasn't being built. So I used a cmake buildpack to install cmake, then used cmake to build my project. Everything worked out fine on the build side except for linking to a relative directory for the shared libs.
For anyone with the same problem:
Use buildpacks for cmake and then C (see the sketch below)
Use the Procfile to execute the binary with arguments after the build
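To make that concrete, the buildpack setup could look roughly like this (the cmake buildpack URL is a hypothetical placeholder; the C buildpack URL comes from the build log above):

heroku buildpacks:add --index 1 https://github.com/<some-cmake-buildpack>.git   # hypothetical cmake buildpack
heroku buildpacks:add --index 2 https://github.com/damorton/heroku-buildpack-c.git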
Assuming it built and deployed successfully, a single dyno should load the slug and execute the command. To scale and/or change the dyno configuration, you'll need to issue a command with your chosen option. For example:
$ heroku ps:scale web=2 queue=1
This would start three dynos: two for web and a single one for queue processes. You can also scale the individual power of the dynos by increasing the RAM and CPU share using a similar command:
$ heroku ps:scale web=2:standard-2x queue=1