I have a parametrized Jenkins pipeline job. It takes a string input and passes it to a bat script.
Input: https://jazz-test3.com/web/projects/ABC
I want to include a config file. Where should it be placed (locally on the Jenkins machine, or is there a better option)?
Sample config file: server_config.txt
test1 : https://jazz-test1.com
test2 : https://jazz-test2.com
test3 : https://jazz-test3.com
Once the user input is received, the job should read this file and check whether the server in the input link is present in the config file.
If it is present, call the bat script; if not, throw an error that the server is not supported.
Expected result:
Input 1: https://jazz-test3.com/web/projects/ABC
Job should run.
Input 2: https://jazz-test4.com/web/projects/ABC
Job should fail with an error message.
What is the best way of achieving this?
Can it be done directly from the pipeline, or is a separate script required?
Thank you for the help!
Jenkins supports configuration files stored on the controller via the config-file-provider plugin. See Manage Jenkins | Managed Files for the collection of managed configuration files.
You can add a JSON config file with id "test-servers" that stores your test server configuration:
{
    "test1": "https://jazz-test1.com",
    "test2": "https://jazz-test2.com",
    "test3": "https://jazz-test3.com"
}
Then the job pipeline would do something like:
def servers = [:]

pipeline {
    agent any

    stages {
        stage('Setup') {
            steps {
                configFileProvider([configFile(fileId: 'test-servers', variable: 'test_servers_json')]) {
                    script {
                        // Read the JSON data from the managed file.
                        echo "Loading configured servers from file '${env.test_servers_json}' ..."
                        servers = readJSON(file: env.test_servers_json)
                    }
                }
            }
        }

        stage('Test') {
            steps {
                script {
                    // Check whether the input URL starts with one of the
                    // configured server URLs (assumes a string parameter
                    // named 'server' holding the full project URL).
                    if (!servers.values().any { params.server.startsWith(it) }) {
                        error "Unsupported server: ${params.server}"
                    }
                    build job: '...', parameters: [...]
                }
            }
        }
    }
}
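Note that a plain startsWith check will also accept a host that merely begins with a configured URL (e.g. https://jazz-test3.com.evil.com). If you want an exact host match instead, here is a sketch (java.net.URI may require script approval in a sandboxed pipeline):

script {
    // Reduce the full input URL to scheme + host, e.g. "https://jazz-test3.com"
    def uri = new java.net.URI(params.server)
    def serverBase = "${uri.scheme}://${uri.host}".toString()
    if (!servers.values().contains(serverBase)) {
        error "Unsupported server: ${serverBase}"
    }
}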
I am trying to set up Xdebug for shopware-docker, without success.
VHOST_[FOLDER_NAME_UPPER_CASE]_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php74-xdebug
After replacing your folder name and running swdc up, Xdebug should be activated.
Which folder name should I use?
Using myname, the same name as in /var/www/html/myname, returns an error on swdc up myname:
swdc up myname
[+] Running 2/0
⠿ Network shopware-docker_default Created 0.0s
⠿ Container shopware-docker-mysql-1 Created 0.0s
[+] Running 1/1
⠿ Container shopware-docker-mysql-1 Started 0.3s
.database ready!
[+] Running 0/1
⠿ app_myname Error 1.7s
Error response from daemon: manifest unknown
EDIT #1
With this setup, VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug (a versioned Xdebug image), the app started:
# $HOME/.config/swdc/env
...
VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug
But when I set a debug breakpoint (e.g. in index.php), nothing happens.
EDIT #2
As @Alex recommended, I placed xdebug_break() inside my code, and it works.
When stopping on the breakpoint, the debugger log answers with hints/warnings as described in the manual:
...
Cannot find a local copy of the file on server /var/www/html/%my_path%
Local path is //var/www/html/%my_path%
...
Clicking on Click to set up path mapping opens the modal.
Clicking inside the modal's select input Use path mapping (...), the field File path in project responds with undefined.
But I have already set up the mapping as described in the manual, under File | Settings | PHP | Servers:
Why does my mapping not work? Where did my setup fail?
The path mapping needs to be between your local project path on your workstation and the path inside the Docker container. Without it, Xdebug has a hard time mapping the breakpoints from PhpStorm to the actual code inside the container. For example (paths here are hypothetical): if the project lives at /home/me/projects/myname on the host and is mounted at /var/www/html/myname in the container, PhpStorm must map the former to the latter.
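The mapping mirrors the container's volume mount. A minimal sketch (the service name and host path are assumptions; check the compose file swdc actually generates):

services:
  app_myname:
    volumes:
      # PhpStorm: map the host path (left) to the container path (right)
      - /home/me/projects/myname:/var/www/html/myname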
If mapping the path correctly does not work, and if it's a possibility for you, I can highly recommend switching to http://devenv.sh for your development environment. Shopware itself promotes this new environment in its documentation: https://developer.shopware.com/docs/guides/installation/devenv and provides an example of how to enable Xdebug:
# devenv.local.nix file
{ pkgs, config, lib, ... }:

{
  languages.php.package = pkgs.php.buildEnv {
    extensions = { all, enabled }: with all; enabled ++ [ amqp redis blackfire grpc xdebug ];
    extraConfig = ''
      # Copy the config from devenv.nix and append the Xdebug config
      # [...]
      xdebug.mode=debug
      xdebug.discover_client_host=1
      xdebug.client_host=127.0.0.1
    '';
  };
}
A correct path mapping should not be needed here, as the file location is the same for Xdebug and for PhpStorm.
I'm building a log analysis environment for analyzing Linux logs such as /var/log/auth.log, /var/log/cron, and /var/log/syslog. The goal is to be able to upload such a log file and analyze it properly with Kibana/Elasticsearch. To do so, I created the .conf file below, which includes the patterns needed to parse auth.log and the required input and output sections. Unfortunately, when connecting to Kibana I cannot see any data in the "Discover" panel and cannot find the related "index pattern". I tested the grok patterns and they work well.
input {
  file {
    type => "linux-auth"
    path => [ "/home/ubuntu/logs/auth.log" ]
  }
}

filter {
  if [type] == "linux-auth" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\: %{DATA:message} for %{DATA:user} from %{IPORHOST:IP_address} port %{POSINT:port}" }
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:time} %{WORD:method}\[%{POSINT:auth_pid}\]\:%{DATA:message} for %{GREEDYDATA:username}" }
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Example of auth.log:
2018-12-02T14:01:00Z sshd[0000001]: Accepted keyboard-interactive/pam for root from 185.118.167.241 port 64965 ssh2
2018-12-02T14:02:00Z sshd[0000002]: Failed keyboard-interactive/pam for invalid user ubuntu from 36.104.140.175 port 57512 ssh2
2018-12-02T14:03:00Z sshd[0000003]: pam_unix(sshd:session): session closed for user root
Here are a few recommendations:
You can run Logstash in debug mode as below to check what the exact error is:
bin/logstash --debug -f file_path.conf
Add a stdout section to the output, which will print the incoming data to the console, so you can be sure that Logstash is reading the file correctly; see the sketch below.
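For example, a debugging output section might look like this (it keeps your existing elasticsearch output; note also that Logstash's file input only tails new lines by default, so to re-read an existing file you may need start_position => "beginning" and a throwaway sincedb_path in the input):

output {
  # Print every event to the console so you can verify what Logstash reads
  stdout { codec => rubydebug }

  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}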
Most importantly, since you want to read system logs and visualize the data, I would recommend using Filebeat with its system module. Filebeat is built especially for such use cases as reading from files.
It is a simple setup: under Filebeat's system module you just specify which system log files to read, point Filebeat at the Elasticsearch endpoint, and run it.
It will start reading and pushing the data to Elasticsearch.
You also don't need to build a custom dashboard in Kibana (as you would in the Logstash case); Filebeat comes with preconfigured dashboards for system logs. A sketch of the setup follows.
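As a rough sketch (assuming a package install; paths and the modules.d location differ per platform), you would enable the module, point it at your log files, and load the bundled dashboards:

# Enable the system module and load the sample Kibana dashboards
filebeat modules enable system
filebeat setup --dashboards

# modules.d/system.yml
- module: system
  auth:
    enabled: true
    var.paths: ["/home/ubuntu/logs/auth.log"]
  syslog:
    enabled: true
    var.paths: ["/home/ubuntu/logs/syslog"]

# filebeat.yml (output section)
output.elasticsearch:
  hosts: ["elasticsearch:9200"]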
You can find more details in the official Filebeat documentation.
I have a metadata.json file which includes some values:
{"build":{"major":0,"minor":88}}
In my create-react-app project, I need to run the script to upload sentry map files:
"sentry" : "sentry-cli releases files 0.88 upload-sourcemaps --validate ./build"
where the 0.88 should be pulled from the metadata.json file. I can then run it with:
npm run sentry
How can I pull the value 0.88 (build major/minor) from the metadata.json file and insert it into the sentry command?
I am not sure if there is a solution to do this in package.json itself.
This is how I would solve this problem:
Create a new JS file, say run-command.js.
Add the line "sentry": "node ./run-command.js" inside the scripts object in package.json.
Import the metadata.json file in this newly created file and extract the necessary data.
Execute your command.
Example:
package.json
"scripts": {
  "sentry": "node ./run-command.js"
}
run-command.js
const metadata = require('./metadata.json');
const { exec } = require('child_process');

// Build "0.88" from {"build":{"major":0,"minor":88}}
const version = `${metadata.build.major}.${metadata.build.minor}`;

exec(`echo ${version}`, (err, stdout, stderr) => {
  if (err) {
    // node couldn't execute the command
    console.error(err);
    return;
  }
  // the *entire* stdout and stderr (buffered)
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
});
Replace echo with your command; it would look something like ./node_modules/.bin/sentry-cli ...
You can use a bash script like ./sentry.sh if you are comfortable with shell scripts.
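For instance, a minimal sentry.sh sketch (it reuses node to read the JSON, so no extra tooling is needed; the sentry-cli arguments are taken from your package.json script):

#!/usr/bin/env bash
set -e

# Build "0.88" from metadata.json
VERSION=$(node -p "const m = require('./metadata.json'); m.build.major + '.' + m.build.minor")

./node_modules/.bin/sentry-cli releases files "$VERSION" upload-sourcemaps --validate ./build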
I would like to store my logs in files.
Here is how I declare my appenders in Config.groovy:
log4j = {
    appenders {
        // console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceDetailedLogger",
             file: "target/scraperServiceDetailed.log"
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
        file name: "filterLogger",
             file: "target/filter.log"
    }

    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'

    error scraperServiceDetailedLogger: "grails.app.service.personalcreditcomparator.ScraperService"
    info scraperServiceLogger: "grails.app.jobs.personalcreditcomparator.ScraperJob"
    info filterLogger: "grails.app.conf.personalcreditcomparator.AdministratorInterfaceProtectorFilters"
}
The three files are created properly, but only scraperServiceDetailedLogger stores the logs properly; the other two files remain empty.
The logging level is respected when calling the log.
What am I missing?
Thank you for any help provided.
For Quartz jobs, try the logger prefix 'grails.app.task':
info scraperServiceLogger: "grails.app.task.personalcreditcomparator.ScraperJob"
And for filters, try the logger prefix 'grails.app.filters':
info filterLogger: "grails.app.filters.personalcreditcomparator.AdministratorInterfaceProtectorFilters"
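Applied to the config from the question, the logger section then reads (package names unchanged):

error scraperServiceDetailedLogger: "grails.app.service.personalcreditcomparator.ScraperService"
info scraperServiceLogger: "grails.app.task.personalcreditcomparator.ScraperJob"
info filterLogger: "grails.app.filters.personalcreditcomparator.AdministratorInterfaceProtectorFilters"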
Hi everyone,
Thank you in advance for any help provided.
I have encountered a problem trying to keep track of my logs in a file, and I do not understand what is wrong in the syntax I am using.
Here are my settings for logging in Config.groovy:
[SNIP]
log4j = {
    // Example of changing the log pattern for the default console
    // appender:
    appenders {
        // console name: 'stdout', layout: pattern(conversionPattern: '%c{2} %m%n')
        file name: "scraperServiceLogger",
             file: "target/scraperService.log"
    }

    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate',
          'grails.app.'

    error scraperServiceLogger: "grails.app.service.ScraperService"

    warn 'org.mortbay.log'
    debug 'grails.test.*'
}
[/SNIP]
Here is how I am trying to use it within my ScraperService.groovy:
[SNIP]
log.error "test"
[/SNIP]
The file I want to write to is created correctly, but the logging is only displayed on the console.
Any help greatly appreciated :)
All the best.
Try using the full package name of your service:
error scraperServiceLogger: "grails.app.service.com.whatever.ScraperService"