I'm using Jenkins v2.5.0 with the Credentials plugin v2.1.16 and the Credentials Binding plugin v1.13 (both the latest available), and while it appears to work as intended, it exhibits an odd repeating behavior as I continue to re-run my pipeline.
The following pipeline syntax is in use:
withCredentials([file(credentialsId: '<credID>', variable: 'KEY_FILE')]) {
...steps here create ${workspace}/ssh script using KEY_FILE...
sh(script: "docker exec ${containerName} /bin/bash -c 'cd ${entryPoint} && GIT_SSH='${workspace}/ssh' git fetch --tags --progress git@gitserver.com:${group}/${project}.git +refs/heads/${branch}:refs/remotes/origin/${branch}'")
} //credentials
It evaluates as expected and is functional, as shown here:
[Build] Running shell script
+ docker exec <container> /bin/bash -c 'cd /<buildRoot>/build && GIT_SSH=/<workspace>/ssh git fetch --tags --progress git@gitserver.com:<group>/<project>.git +refs/heads/staging:refs/remotes/origin/staging'
Warning: Identity file /<workspace>/<job>#tmp/secretFiles/a36b7edb-2914-419a-8be0-478603d1b031/keyfile.txt not accessible: No such file or directory.
Permission denied, please try again.
Permission denied, please try again.
Received disconnect from gitserver.com port 22:2: Too many authentication failures for git
Connection to gitserver.com closed by remote host.
Warning: Identity file /<workspace>/<job>#tmp/secretFiles/ccb1e11c-18f5-4697-b5c1-e4514c1ab1c7/keyfile.txt not accessible: No such file or directory.
The part where it states that it can't find the file during the git operation (which uses SSH underneath) is what continues to repeat, each time with a different secretFiles GUID in the path (two of the repeats are shown above). The underlying implementation appears to loop over the 'git fetch' command, trying a new credentials path each time.
Both how and why Jenkins:
1) Creates these new paths each time
2) Knows to keep looping over the single failed git command until it finally delivers the secret file that enables the authentication and git succeeds
are mysteries to me.
Any insight would be appreciated.
PS> I'm already aware that newer versions of git not (yet) available in my environment have different methods for providing SSH options. I'd like this question to focus on the odd withCredentials behavior.
PPS> I've also already tried higher-level constructs for the pipeline, including at least the 'git' SCM plugin specialization and the 'docker' node type with its "inside()" functionality, but many iterations of those constructs always left me with some oddity that, again, is not the focus of this question.
Turns out that this is not a problem with Jenkins or any of the plugins at all.
The temporary script written to the Jenkins workspace was continuously being appended to, which made it the source of all the invalid secret-file paths and the "loop": the last command in that script was always the one with the correct path, and it would succeed.
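For anyone hitting something similar, here is a minimal sketch of the difference (the wrapper contents and the use of writeFile are illustrative, not my exact steps; KEY_FILE is the withCredentials variable from the question):

// Buggy pattern: '>>' appends another line on every build, so the wrapper
// accumulates stale secret-file paths from earlier runs; each stale line
// runs (and fails) before the current build's line finally succeeds.
sh 'echo "exec ssh -i $KEY_FILE \\"\\$@\\"" >> "$WORKSPACE/ssh"'

// Fixed pattern: overwrite the wrapper each run so it only ever references
// the current build's secret file.
writeFile file: "${env.WORKSPACE}/ssh", text: "exec ssh -i ${env.KEY_FILE} \"\$@\"\n"
sh 'chmod +x "$WORKSPACE/ssh"'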
I'm migrating from App Engine Standard to Flexible and trying to get logs to show up the same way they did under Standard. Example:
That is a single log entry, expanded. It shows the URL at the top, along with the HTTP method, status, and latency of the request. When expanded like this, it shows all the logs created during the request, their level, and where in the code each was emitted. This makes it very easy to see everything that happened during a request.
Under Flexible, none of this seems to happen. Calling logging.info() creates its own distinct log entry in Logging, with no information about what request/route it was triggered under:
As you can see, each log line (and, in the case of a fatal error, each line of the traceback) gets its own individual log entry. After some digging in the API and documentation, I was able to get to a point where I can at least group them together somewhat, but it's still not where it used to be.
I don't get a severity level at the "group" level of the log, only when expanded (which means filtering by severity isn't possible), nor do I get the line where the logging call was made. This also means many more individual log entries, and I don't know how this will affect log exports.
To group the logs, I'm passing Pyramid a custom logging handler, which is just Google's AppEngineHandler with get_gae_labels overridden to provide the correct trace ID header (out of the box, it only supports Django, Flask, and webapp2):
def get_gae_labels(self):
    """Return the labels for GAE app.

    If the trace ID can be detected, it will be included as a label.
    Currently, no other labels are included.

    :rtype: dict
    :returns: Labels for GAE app.
    """
    gae_labels = {}

    request = pyramid.threadlocal.get_current_request()
    header = request.headers.get('X-Cloud-Trace-Context')

    if header:
        gae_labels[_TRACE_ID_LABEL] = header.split("/", 1)[0]

    return gae_labels
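Roughly, the wiring looks like this (a trimmed sketch; the subclass name is mine, and _TRACE_ID_LABEL mirrors the constant google-cloud-logging uses internally):

import logging

import pyramid.threadlocal
from google.cloud.logging import Client
from google.cloud.logging.handlers import AppEngineHandler

# Mirrors google.cloud.logging.handlers.app_engine._TRACE_ID_LABEL
_TRACE_ID_LABEL = 'appengine.googleapis.com/trace_id'

class PyramidAppEngineHandler(AppEngineHandler):
    """AppEngineHandler that reads the trace ID from Pyramid's threadlocal request."""

    def get_gae_labels(self):
        # Same body as shown above.
        gae_labels = {}
        request = pyramid.threadlocal.get_current_request()
        if request is not None:
            header = request.headers.get('X-Cloud-Trace-Context')
            if header:
                gae_labels[_TRACE_ID_LABEL] = header.split("/", 1)[0]
        return gae_labels

# Route stdlib logging through the handler so entries pick up the trace label.
client = Client()
logging.getLogger().addHandler(PyramidAppEngineHandler(client))
logging.getLogger().setLevel(logging.INFO)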
From what I can gather, App Engine Flexible runs nginx in front of my application, which passes stderr logs to Logging along with its own nginx_request logs. Then, when my application calls logging.info(), Logging matches up trace IDs to group the entries together. Because of this, a few things seem to be happening.
A. It doesn't show the highest severity level of related log entries
B. When you expand the log entry, the related log entries don't appear instantly like they do under App Engine Standard; they take a second to load in, presumably because Logging is looking up related logs via trace ID. Under Standard, App Engine provides Logging with a line entry that has metadata like the log message, line number, source code location, etc., so it doesn't need to go looking for related log entries; it's all there from the beginning. See below.
I'm not sure of a solution here (hence the post), and I wonder whether this would ultimately be solved by expanding Google's Logging API. It seems to me that the real solution is to stop nginx from logging anything and let Pyramid handle logging exclusively, as well as letting me send the data inline so Logging doesn't have to try to group requests by trace ID.
I'm using a custom runtime under Flexible, adding this in the YAML file:
runtime_config:
python_version: 3.7
And the Dockerfile:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
RUN pip install -e .
CMD gunicorn -b :$PORT main:app
and requirements.txt:
pyramid
gunicorn
redis
google-cloud-tasks
googleapis-common-protos
google-cloud-ndb
google-cloud-logging
I have searched high and low for what I thought would be a common question but can only find answers regarding user confirmation, not system confirmation.
I would like the following commands to run in sequential order, waiting for a response before moving onto the next command:
npm config set https-proxy http://example.com:8080
npm config set proxy http://example.com:8080
npm config set sslVerify false
npm config set strict-ssl false
set HTTP_PROXY=http://example.com:8080
set HTTPS_PROXY=http://example.com:8080
I have added the commands to the batch file in sequential order on new lines, but when executing, the script does not pause on each command to wait for a response. How do I force the script to wait on each command until it's confirmed by the system?
Unqualified names like npm or doSomething may map to scripts written in various languages, including batch files. Use the call command to invoke these, and cmd.exe will always wait for whatever child process is started.
It's not uncommon for .exe files to be scattered across multiple directories that would bloat the PATH environment variable, so many installations lay down alias scripts in a single directory added to the PATH; when you invoke the command, the script figures out which executables to run and launches them.
It's also common to use wrapper scripts to simplify executable invocations, add some logging, or temporarily map the command to a different version (upgrades/testing).
In the case of npm, I believe it's mostly written in JavaScript, so an appropriate scripting engine has to be launched to run the npm commands. This may be bootstrapped from a batch script, or it could be invoked automagically by the OS based on whatever file extension it has. The details from one version or installation to the next may vary and usually don't matter to the casual user invoking them from the command line, but script behavior can vary noticeably.
Unless you use a fully qualified path/filename to launch something from a command script, and generally even if you do, you should simply default to using the call command to invoke it. Then all of the above circumstances are covered and your script will always behave as expected.
call npm config set https-proxy http://example.com:8080
call npm config set proxy http://example.com:8080
call npm config set sslVerify false
call npm config set strict-ssl false
set HTTP_PROXY=http://example.com:8080
set HTTPS_PROXY=http://example.com:8080
Note that it is still possible that a script or program could pass work along to another process and then return immediately, but that kind of behavior will normally be the same, whether launched interactively or from a script.
I am using custom Nagios plugins for the first time and am running into this error when I create a service for the plugin.
(No output on stdout) stderr: execvp(/usr/local/nagios/libexec/check_load.py, ...) failed. errno is 2: No such file or directory
The plugin works when I run it on the command line; however, it does not work when it runs within Nagios.
I followed these steps to get the plugin into Nagios
https://assets.nagios.com/downloads/nagiosxi/docs/Managing-Plugins-in-Nagios-XI.pdf
Here is what it looks like in the Nagios UI:
The plugin is in the correct path: /usr/local/nagios/libexec and the resource.cfg file has the same path within it.
I tried two separate plugins, both of which work on the command line, and the result is the same error.
The error indicates the file location is incorrect; however, the plugin is in the specified directory and runs with no errors within that directory.
I am totally stumped and appreciate any help.
For anyone reading this, I solved the problem.
The first time I added the plugin, I forgot to include the .py extension. When I updated the already-created plugin, Nagios still threw the error.
Once I completely deleted the plugin and re-created it, the 'file not found' error went away.
I faced a similar issue when I was trying to add a custom plugin (I had custom plugins in Ruby and Python).
The issue was a missing shebang line at the start of the script (the shebang is what lets the script be executed like a standalone executable).
For example, if you have a Python plugin custom-plugin.py, make sure the script has a shebang at the start: #!/usr/bin/env python3. Likewise, if you have scripts in other languages (Ruby, Bash, etc.), add the appropriate interpreter path at the top.
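As a minimal sketch (the check logic here is purely illustrative), a Python plugin with the shebang and the standard Nagios exit codes looks like this:

#!/usr/bin/env python3
"""check_example.py - minimal Nagios plugin skeleton."""
import sys

# Standard Nagios plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def main():
    load = 0.42  # illustrative value; a real plugin would measure something
    if load > 0.90:
        print("CRITICAL - load is %.2f" % load)
        return CRITICAL
    if load > 0.75:
        print("WARNING - load is %.2f" % load)
        return WARNING
    print("OK - load is %.2f" % load)
    return OK

if __name__ == "__main__":
    sys.exit(main())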
Also, check the plugin path for your Nagios version. For my setup the path was /usr/local/nagios/libexec/. Make sure your custom plugin is executable and has the correct ownership and permissions.
Sample custom template I used:
define command {
    command_name    check_switch_health
    command_line    /usr/local/nagios/libexec/check_snmp.rb --host $HOSTADDRESS$ --model "$ARG1$" --community "$ARG2$"
}
The above workaround worked for me.
I'm trying to use the CakePHP 3 upgrade tool.
I installed Composer, then ran this:
cd /path/to/upgrade
bin/cake upgrade all /home/mark/Sites/my-app
Then I get tons of errors like this in the Windows command shell:
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\a2d4223f62e3499a84b6ca30be24bfdb4cb6de40''
Update C:\mydir\myapp\lib\Cake\View\Helper\CacheHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\7fbe7651712387f351b1eb670b14c18e1161fcb8''
Update C:\mydir\myapp\lib\Cake\View\Helper\HtmlHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\2301f9bed1167ddb29ca4e06706d0d21bd015766''
Update C:\mydir\myapp\lib\Cake\View\Helper\NumberHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\e71af0cbc7df7ff76e801c5fb06ec07ee7f45233''
Update C:\mydir\myapp\lib\Cake\View\Helper\PaginatorHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\1b04b5a763ca4e798d1e176111e49008b7486724''
Update C:\mydir\myapp\lib\Cake\View\Helper\TextHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\f8ead667c131610c1f70f38d10c7122b34d9a7fc''
Update C:\mydir\myapp\lib\Cake\View\Helper\TimeHelper.php
error: Could not access ''C:\mydir\upgrade\tmp\upgrade\4a2c2e7f3f7e9faf744d10e6e1f3ff24bead7f08''
Update C:\mydir\myapp\lib\Cake\View\HelperCollection.php
I run the command shell as administrator. What could the problem be? When I check the folder during the operation, I can see that the temporary file exists in the folder.
I had the very same issue. Here's a description of it:
This is a Git error
The failing Git command is called by /upgradeTool/src/Shell/Task/StageTask.php on line 176 via the exec() PHP function (this seems different for you)
The command to run Git looks like this: git diff --no-index 'T:\Logiciels\CakePHP_3_upgradeTool\tmp\upgrade\c5d0aaadb3484d4bfe56bdfc4553b444f6789e66' 'T:\Logiciels\CakePHP_3_upgradeTool\tmp\upgrade\4a6662f82cd03d46b515c28f7d77ef8a64c08cfd'
EDIT (2015-07-12)
As ndm noted, "the source of the problem is the single quote usage in the arguments for the git command, the Windows CLI will pass them as if they belong to the file path"
I changed this line (/upgradeTool/src/Shell/Task/StageTask.php on line 176) from:
exec("git diff --no-index '$oPath' '$uPath'", $output);
to:
exec('git diff --no-index "'.$oPath.'" "'.$uPath.'"', $output);
Now the process completes. Many, many thanks!
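As an aside, a variant worth considering (just a sketch) is to let PHP quote the paths for the host platform, since escapeshellarg() uses double quotes on Windows:

// Quote each path in a platform-appropriate way instead of hard-coding quotes.
exec('git diff --no-index ' . escapeshellarg($oPath) . ' ' . escapeshellarg($uPath), $output);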
After some trial and error I found the solution. As DarXnake noted, the problem is Git.
When you install Git, it asks whether you want to use Git from the command line or not. The default option is "Use Git Bash only", and that is what I had selected.
I updated Git, and when the setup asked for the install type, I selected "Run Git from the Windows Command Prompt". Then I retried the CakePHP upgrade and didn't get any access errors.
I have a CakePHP application setup on Heroku using the Heroku PHP buildpack (https://github.com/heroku/heroku-buildpack-php).
With Debug set to 1, the application uses the file cache and reduces the lifespan of the cache. In addition, the DebugKit toolbar appears.
With Debug set to 0, the application uses APC.
When I have Debug set to 1, it works properly, but the DebugKit toolbar shows up and the caching is essentially shut off. When I set Debug = 0, I get the standard "Internal Error" message. Running "heroku logs" only shows me errors about PHP not being able to write to the tmp directory (specifically for error logs). I attempted to have CakePHP write to stdout, but that didn't help.
To see what exactly was causing the problem, I removed DebugKit from the installation and made the caching for Debug=1 match Debug=0. I thought this would cause the app to error again, but it's still working. Is there anything else that happens when Debug is turned off that could be causing this, or did I miss something with the error logs?
I managed to get this working eventually. The answer was to make sure the app/tmp directory and all of its child directories were being created by the buildpack. I was under the impression that CakePHP wouldn't worry about them if it didn't need them, but I was incorrect.
I wanted to keep them out of the repo, so in the buildpack compile file I added:
CAKEPHP_APP_TMP_PATH="www/app/tmp"
# make tmp dir
echo "-----> Creating CakePHP tmp directories"
mkdir -p $CAKEPHP_APP_TMP_PATH/cache/models
mkdir -p $CAKEPHP_APP_TMP_PATH/cache/persistent
mkdir -p $CAKEPHP_APP_TMP_PATH/cache/views
mkdir -p $CAKEPHP_APP_TMP_PATH/logs
mkdir -p $CAKEPHP_APP_TMP_PATH/sessions
mkdir -p $CAKEPHP_APP_TMP_PATH/tests
chmod -R 777 $CAKEPHP_APP_TMP_PATH
With that, the directories were in place, but they never appear to be used. The app is now properly running with Debug set to 0.
It would be ideal if you could get write access to the tmp folder so that you can see the logs.
These Internal Errors with Cake are usually related to the caching of the models. So in APC you may have an old cache that does not really match up with your database.
Try clearing the APC cache and see if that helps.
PS: The Cake app has a couple of caches, so you'll have to work out what uses what... you have default, _cake_core_, and _cake_model_ at least! The last two could be the source of your problems.
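For example, clearing those two cache configs with the CakePHP 2.x API might look like this (a sketch to run from a console shell or a temporary debug action):

// Clear the core and model caches that commonly cause these errors.
Cache::clear(false, '_cake_core_');
Cache::clear(false, '_cake_model_');

// Or flush the whole APC user cache directly, if that's the engine in use.
// (Under APCu's compatibility layer, apc_clear_cache() takes no argument.)
if (function_exists('apc_clear_cache')) {
    apc_clear_cache('user');
}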