Jenkins Artifactory plug-in: Error occurred while requesting version information: Connection refused - jenkins-plugins

I get the error "Error occurred while requesting version information: Connection refused" when I test the connection in Jenkins configuration for Artifactory plug-in. I have tried it with Anonymous access enabled in Artifactory, with Anonymous access disabled, and tried all three options (Supported, Unsupported, Required) for Password Encryption in Artifactory. I have Default Deployer Credentials in my Jenkins Artifactory configuration, and I have tested the connection with 'Use Different Resolver Credentials' and without. I consistently get this error.
Any help or ideas would be greatly appreciated.

I also ran into a similar problem yesterday.
Problem:
I was running Jenkins and Artifactory in two different Docker containers on my local machine. I had exposed port 8086 for Artifactory and could access it at http://localhost:8086/artifactory in my browser, but entering the same URL for Artifactory in Jenkins produced the error reported in the question.
Solution:
For reasons I couldn't pin down at the time, the Jenkins Artifactory plugin couldn't reach http://localhost:8086/artifactory even though the Docker port mappings were correct and it was possible to connect to the Artifactory web console with the same URL. (Most likely, from inside the Jenkins container, localhost refers to the Jenkins container itself rather than the Docker host.)
Replacing "localhost" with the Docker container IP did the trick.
The name of the container in which Artifactory was running was docker-plgr_artifactory_1:
Admins-MacBook-Pro-2:~ prakash.tiwari$ docker exec -it docker-plgr_artifactory_1 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.2 08038bc9449b
The IP of the container was 172.18.0.2, so I replaced http://localhost:8086/artifactory with http://172.18.0.2:8081/artifactory and Jenkins was then able to connect to Artifactory. (8081 is the port inside the Docker container at which Artifactory was running. You would have specified it when you ran the container; alternatively, you can find it by running docker ps and checking the value in the PORTS column.)
Credit: https://www.arvinep.com/2016/04/jenkins-docker-container-problem.html
Note: I know this solution doesn't fully explain the cause, but I hope it at least helps some people and saves them time.
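If the container IP changes across restarts (which it can), a more stable variant of the same idea is to put both containers on a user-defined Docker network and address Artifactory by its container name. A minimal sketch, assuming the Artifactory container name from above; the network name ci-net and the Jenkins container name are illustrative:
docker network create ci-net
docker network connect ci-net docker-plgr_artifactory_1
docker network connect ci-net <your-jenkins-container>
Containers on the same user-defined network can resolve each other by name, so the plugin URL becomes http://docker-plgr_artifactory_1:8081/artifactory.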

I see that you asked this question a while ago. I just had to deal with a very similar situation. I had loaded the root and intermediate certificates into the cacerts files found under the four versions of Java on the build server. The problem was that Jenkins uses its own cacerts file, found in the Jenkins install folder. Once I loaded the certs there, I was able to test the connection to Artifactory and upload the build artifacts. I hope this helps.
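For reference, importing a certificate into a cacerts keystore is done with keytool. A hedged sketch, where the alias, certificate file, and keystore path are illustrative and changeit is the stock default keystore password:
keytool -importcert -trustcacerts \
  -alias artifactory-root \
  -file root-ca.crt \
  -keystore <jenkins-install-folder>/jre/lib/security/cacerts \
  -storepass changeit
Restart Jenkins afterwards so the updated trust store is picked up.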

Related

Node.js application won't start on my public IP

I'm SSHing into a Linux shell for a school project. Right now, we're trying to set up a React app for a web frontend. We were able to run the app on localhost easily enough, and all of the functionality seems good, but I can't figure out how to get it hosted on the public IP of the computer. We've been using yarn to do all of this (though I've tried other things), so here's some CLI output.
path/to/thing# yarn start
yarn run v1.22.5
$ react-scripts start
Attempting to bind to HOST environment variable: public.facing.ip.address //This is a website name
If this was unintentional, check that you haven't mistakenly set it in your shell.
Learn more here: //There was a link here but SO formatting wouldn't let me keep it.
Could not find an open port at public.facing.ip.address.
Network error message: listen EADDRNOTAVAIL: address not available public.facing.ip.address //numeric
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
root@computer:path/to/thing#
When I run hostname -I, public.facing.ip.address does not appear at all, so that seems like the obvious issue. The catch here is that we are also running Jenkins on a separate port of public.facing.ip.address from this same computer. That was much easier to set up: it just came as something I could start as a service using 'systemctl start jenkins', and up it went onto the public-facing IP. I set all that up and I can access it just fine. The best I can do here is modify the HOST variable, either in the terminal or in the .env file; yarn then starts a development server on localhost, which I can't access since I'm on a different network SSHing into this computer.
How do I make yarn host our webapp on the public facing IP?
Open your router's admin page; there should be a DMZ host option somewhere. Point it at your local IP address.
My networking inexperience was the culprit. Instead of using HOST=path.to.public.ip, the solution was to use HOST=0.0.0.0.
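In other words, bind the dev server to all interfaces rather than to a specific address. A minimal sketch, assuming react-scripts (create-react-app); the PORT value shown is just its default:
HOST=0.0.0.0 PORT=3000 yarn start
The app is then reachable at http://public.facing.ip.address:3000 from outside.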

Mesosphere installation PermissionError: /genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
then output:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root), and then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any ideas, please let me know. Thank you very much!
Disable SELinux!
Configure SELINUX=disabled in the /etc/selinux/config file and then reboot.
Make sure SELinux is disabled by running the getenforce command:
$ getenforce
Disabled
zhe.
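If you want to confirm SELinux is the culprit before committing to a reboot, you can switch it to permissive mode for the current session (this only works while SELinux is enabled, not when it is already disabled at boot) and retry the installer:
$ sudo setenforce 0
$ getenforce
Permissive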
Correctly installing the enterprise edition depends on the correct system prerequisites. I assume you're still on the bootstrap node, so here are some pointers to get you through your current task.
Run the script as root, or as a user via sudo bash dcos_generate_config.ee.sh.
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put the file inside before running the script (see the sketch below). You should change the values inside <> to match your specific configuration. If you need more help with your specific case, send me an email at infofs2 at gmail.com.
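Putting those two points together, a minimal sketch of the suggested flow (the config file contents are the template from the question):
mkdir -p genconf
cp config.yaml genconf/config.yaml
sudo bash dcos_generate_config.ee.sh --web -v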

Managed VM Deployment hangs on "Copying certificates for secure access..."

I'm running the following command to deploy my Managed VMs app (on Windows 10):
gcloud preview app deploy app.yaml --project=<PROJECT> --promote
The deployment starts but hangs on the following line:
Copying certificates for secure access. You may be prompted to create an SSH keypair.
And after some time I get the error:
ERROR: (gcloud.preview.app.deploy) Unable to copy certificates.
I've already:
Made sure that there are SSH keys in ~\.ssh\google_compute_engine
Tried to run with --quiet - same results
Renamed ssh-term.exe to ssh.exe - same results
Run the command as an administrator.
Run the command with --verbosity debug, which prints the following line multiple times: DEBUG: File [f] does not exist locally.
Any help will be much appreciated!
Found the cause! It was the project's firewall that blocked SSH by default. Fixed that and it worked.
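For anyone hitting the same wall, re-allowing inbound SSH looks roughly like this; a sketch, assuming the default network, and the rule name is illustrative:
gcloud compute firewall-rules create default-allow-ssh \
    --allow tcp:22 \
    --description "Allow SSH so the deployment can copy certificates"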
Glad you fixed it. I had the same problem and will use your fix. I did happen across a workaround, though: using the Container Build API to perform the build.
Enter the command
gcloud config set app/use_cloud_build true
before you run
gcloud preview app deploy
Cite: https://github.com/isusanin/google-cloud-sdk/issues/533

Timed out error on starting Webdriver server when connected to network via VPN

In my Protractor config file I had this line: seleniumAddress: 'http://localhost:4444/wd/hub'. On running Protractor I was getting the error "ECONNREFUSED connect ECONNREFUSED". After going through a lot of other existing issues and solutions, I removed the "seleniumAddress" property, and that resolved the issue. The Selenium standalone server now gets started: "Selenium standalone server started at http://192.168.1.156:64477/wd/hub"
But when I turn on the VPN, I get the error "Error: Timed out waiting for the WebDriver server at http://192.168.1.156:63199/wd/hub", which I have not been able to resolve.
I am on an HP laptop running Windows 7 Professional, and I am using Cisco VPN.
(Hi, so I can't comment yet (low rep)...)
Could you try running webdriver-manager start before running Protractor? It will run at the address http://localhost:4444/wd/hub, which is the seleniumAddress referred to in the Protractor config. Does that change anything?
This might be related (VPN-workaround): protractor stand alone selenium fails: Error: Timed out waiting for the WebDriver server at
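For completeness, the usual sequence is (a sketch; webdriver-manager is bundled with Protractor):
webdriver-manager update
webdriver-manager start
The first command downloads the driver binaries; the second serves them at http://localhost:4444/wd/hub, matching the seleniumAddress above.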
Check the settings of the firewall that sits between the Selenium standalone server (which might also run on your local host) and your workstation (usually your localhost).
In my case (running on a local Linux station) I had very restrictive iptables firewall rules, such that the WebDriver process launched on localhost could not access the Selenium standalone server, which also ran on localhost, at any TCP port.
Try turning the firewall off to check whether that is the cause; then adjust your firewall settings so that the respective connection passes your firewall rules.
If you want your scripts to communicate directly with the Firefox or Chrome driver (bypassing the Selenium server entirely), try adding directConnect: true to your protractor.conf.js.
Git and other tools often use the git: protocol for accessing files in remote repositories. Some firewall configurations block git:// URLs, which leads to errors when trying to clone repositories or download dependencies. (Corporate firewalls, for example, are notorious for blocking git:.)
If you run into this issue, you can force the use of https: instead by running the following command:
git config --global url."https://".insteadOf git://
(see Common Issues in the Angular tutorial)
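You can confirm the rewrite is registered afterwards; a hedged check, using the same key the command above sets:
git config --global --get-all url.https://.insteadof
This should print git://.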

Possible? How to setup VNC in a Google Managed VM Environment

I'm using Java but this isn't necessarily a Java question. Google's "java-compat" image is Debian (3.16.7-ckt20-1+deb8u3~bpo70+1 (2016-01-19)).
Here is my Dockerfile:
FROM gcr.io/google_appengine/java-compat
RUN apt-get -qqy update && apt-get -qqy install curl xvfb x11vnc
RUN mkdir -p ~/.vnc
RUN x11vnc -storepasswd xxxxxxxx ~/.vnc/passwd
EXPOSE 5900
ADD . /app
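As a sanity check of the image itself, separate from Google's routing, one could build and run it locally and point the VNC client at the mapped port. A rough sketch with an illustrative tag, assuming the base image can be pulled and its entrypoint overridden:
docker build -t vnc-test .
docker run -p 5900:5900 --entrypoint x11vnc vnc-test -forever -usepw -create
If TightVNC connects to localhost::5900 here, the image is fine and the problem lies in the platform routing.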
In the Admin Console I created a firewall rule to open up 5900. Lastly, I start the VNC server itself in the "_ah/start" startup hook with this command:
x11vnc -forever -usepw -create
Everything seems to be set up correctly, but I'm unable to connect with TightVNC. I use the public (ephemeral) IP address for the instance that I find in the Admin Console, followed by ::5900 (TightVNC requires two colons for some reason). I get a message that the server refused the connection, and indeed, when I try to telnet to port 5900, it's blocked.
Next I SSH into the container's host machine, and when I test the port on the container with wget xxx.xxx.xxx.xxx:5900, I get a connection. So it seems the container accepts connections on port 5900 locally, but they aren't being routed in from outside. Am I getting this right? Is it possible to open up ports and route my VNC client into the Docker container? Any help appreciated.
Why I can't use Compute Engine: just to preempt some comments about using Google's Compute Engine environment instead of Managed VMs, I make heavy use of the Datastore and Task Queues in my code. I don't think those can run (or run natively/efficiently) on Compute Engine, but I may pose that as a separate question.
Update: Per Paul in the comments... having learned some of the docker terminology: Can I publish a port on the container in Google's environment?
Out of curiosity - why are you trying to VNC into your instances? If it's just for management purposes, you can SSH into Managed VM instances.
That having been said - you can use the network/forwarded_ports config to route traffic from the VM to the application container:
network:
  forwarded_ports:
  - 5900
  instance_tag: vnc
Put that in your app.yaml, and re-deploy your app. You'll also need to open the port in your firewall (if you intend on accessing this from the public internet):
gcloud compute firewall-rules create default-allow-vnc \
--allow tcp:5900 \
--target-tags vnc \
--description "Allow vnc traffic on port 5900"
Hope this helps!
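Once both the forwarded port and the firewall rule are in place, a quick reachability check from your own machine (substitute the instance's ephemeral IP):
nc -vz <instance-ip> 5900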
