As the title says, I have some issues with Cntlm. I'm working with version 0.92.3, built and launched from source. What I am trying to do is start Cntlm as a standalone proxy with a localhost configuration, to browse the internet and launch applications (e.g. Skype).
I am working on Mint and the command uname -a gives:
Linux Jarvis 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
These are the steps I took before asking:
./configure
make
sudo make install
Everything goes fine. I also export the http, https and ftp proxy variables with:
export http_proxy=http://127.0.0.1:3128
export https_proxy=http://127.0.0.1:3128
export ftp_proxy=http://127.0.0.1:3128
That also goes fine. What remains is to launch cntlm, which I do with:
sudo cntlm -v -f
It picks up the settings from cntlm.conf correctly and reports that it is staying in the foreground.
I open my browser (Firefox) and configure it to use the proxy, setting 127.0.0.1 as the HTTP proxy and 3128 as the port.
When I open a browser tab and do a test search through the proxy, the terminal starts processing data, but after a few seconds it keeps printing:
cntlm[11605]: Serious error during accept: Too many open files
until I press Ctrl+C.
This is the cntlm.conf I have:
#
# Cntlm Authentication Proxy Configuration
#
# NOTE: all values are parsed literally, do NOT escape spaces,
# do not quote. Use 0600 perms if you use plaintext password.
#
Username myUsername
Domain localhost
Password password
# NOTE: Use plaintext password only at your own risk
# Use hashes instead. You can use a "cntlm -M" and "cntlm -H"
# command sequence to get the right config for your environment.
# See cntlm man page
# Example secure config shown below.
# PassLM 1AD35398BE6565DDB5C4EF70C0593492
# PassNT 77B9081511704EE852F94227CF48A793
### Only for user 'testuser', domain 'corp-uk'
# PassNTLMv2 D5826E9C665C37C80B53397D5C07BBCB
# Specify the netbios hostname cntlm will send to the parent
# proxies. Normally the value is auto-guessed.
#
# Workstation netbios_hostname
# List of parent proxies to use. More proxies can be defined
# one per line in format <proxy_ip>:<proxy_port>
#
Listen 127.0.0.1:3128
#Listen 192.168.0.1:3128
#Proxy 10.0.0.41:8080
#Proxy 10.0.0.42:8080
Proxy 127.0.0.1:3128
# List addresses you do not want to pass to parent proxies
# * and ? wildcards can be used
#
NoProxy localhost, 127.0.0.*, 10.*, 192.168.*
# Specify the port cntlm will listen on
# You can bind cntlm to specific interface by specifying
# the appropriate IP address also in format <local_ip>:<local_port>
# Cntlm listens on 127.0.0.1:3128 by default
#
# If you wish to use the SOCKS5 proxy feature as well, uncomment
# the following option. It can be used several times
# to have SOCKS5 on more than one port or on different network
# interfaces (specify explicit source address for that).
#
# WARNING: The service accepts all requests, unless you use
# SOCKS5User and make authentication mandatory. SOCKS5User
# can be used repeatedly for a whole bunch of individual accounts.
#
SOCKS5Proxy 5000
#SOCKS5User username:password
# Use -M first to detect the best NTLM settings for your proxy.
# Default is to use the only secure hash, NTLMv2, but it is not
# as available as the older stuff.
#
# This example is the most universal setup known to man, but it
# uses the weakest hash ever. I won't have its usage on my
# conscience. :) Really, try -M first.
#
#Auth LM
#Flags 0x06820000
# Enable to allow access from other computers
#
#Gateway yes
# Useful in Gateway mode to allow/restrict certain IPs
# Specify individual IPs or subnets, one rule per line.
#
Allow 127.0.0.1
Deny 0/0
# GFI WebMonitor-handling plugin parameters, disabled by default
#
#ISAScannerSize 1024
#ISAScannerAgent Wget/
#ISAScannerAgent APT-HTTP/
#ISAScannerAgent Yum/
# Tunnels mapping local port to a machine behind the proxy.
# The format is <local_port>:<remote_host>:<remote_port>
#
#Tunnel 11443:remote.com:443
I tried many times to change the configuration, but the result doesn't change. If I put 127.0.0.1:3128 as the default Proxy (what I am trying to do), it starts well but ends in a loop.
What should I do to make it work, and where is the problem? Thanks in advance.
Please refer to the configuration below.
It is obvious you will get a loop with this configuration: you kept your Listen address and your Proxy address exactly the same, so whatever cntlm receives it proxies right back to itself, and the connections pile up (hence "Too many open files").
Enter your upstream (corporate) proxy server's name and port in Proxy, not your localhost!
Username Enter-your-username-here
Domain Enter-your-domain-here
Password Enter-your-password-here
Proxy proxyhost:proxyport
Proxy proxyhost:proxyport
NoProxy localhost,127.0.0.1
Listen 3128
For example, if you want Maven to use cntlm, put localhost:3128 in Maven's settings.xml so that it proxies through to your proxyhost:proxyport with the defined domain, username and password.
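A minimal sketch of the relevant settings.xml fragment (the id value is arbitrary; no credentials are needed here, since cntlm handles the NTLM authentication):
<settings>
  <proxies>
    <!-- route Maven's HTTP traffic through the local cntlm instance -->
    <proxy>
      <id>cntlm</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>localhost</host>
      <port>3128</port>
    </proxy>
  </proxies>
</settings>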
I am following a tutorial that adds the following line to the hosts file:
127.0.0.1 posts.com
The tutorial then uses this address to open the React application instead of localhost:3000. But on my Windows machine it doesn't work at posts.com, though it does work at posts.com:3000. Why does this happen and how can I fix it?
The following is my hosts file content:
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
127.0.0.1 posts.com
# Added by Docker Desktop
10.24.153.39 host.docker.internal
10.24.153.39 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
By default, your dev server, whatever that is (I can help better if you post that info), is serving HTTP on port 3000. Hosts files can only fake the domain, not the port.
The default port a web browser uses when you hit a domain is 80 (443 if you are using HTTPS), and if the server is hosting on a port other than 80, it has to be defined manually in the URL.
If you need it to not be manually defined in the URL in dev, you must reconfigure your dev server so that it serves on port 80 (or 443 if using HTTPS).
How to do this is dev-tool specific. For example, if you're using create-react-app you would change the start script in package.json to this, then restart it:
"start": "set PORT=80 && react-scripts start"
But if it's something like webpack dev server or Vite, you'd change the port option in the webpack/Vite config, as sketched below.
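For Vite, a minimal sketch (hypothetical config; note that binding to port 80 may require elevated privileges):
// vite.config.js
export default {
  // serve the dev server on the default HTTP port
  server: {
    port: 80,
  },
};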
So I'm still in the process of updating a Drupal 7 site to Drupal 8 using Drush and DDEV.
After running the import, I get an error with upgrade_d7_file.
I've tried to install a certificate using this article:
https://www.ddev.com/ddev-local/ddev-local-trusted-https-certificates/
However, I still get the error. Any ideas?
ddev exec drush migrate-import --all
ddev exec drush mmsg upgrade_d7_file
cURL error 60: SSL: no alternative certificate subject name matches target host name 'drupal7migration2.ddev.site'
(see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
(https://drupal7migration2.ddev.site//sites/default/files/Virtual%20Challenges%20%28Results%20and%20PBs%29%2020200709.xlsx)
When you want one DDEV-Local project to talk to another using https, curl on the client side has to trust the server side that you're talking to. There are two ways to do this:
(built-in, no changes needed): Use ddev-<projectname>-web (the container name) as the target hostname in the URL. For example in your case, use curl https://ddev-drupal7migration2-web. This hostname is already trusted among various ddev projects.
(requires docker-compose.*.yaml): If you want to use the real full FQDN of the target project (https://drupal7migration2.ddev.site in your case) then you'll need to add that as an external_link in the client project's .ddev. So add a file named .ddev/docker-compose.external_links.yaml in the client side (migration1?) project, with these contents:
version: '3.6'
services:
  web:
    external_links:
      - "ddev-router:drupal7migration2.ddev.site"
That will tell Docker to route requests to "drupal7migration2.ddev.site" to the ddev-router, and your container and curl trust it (it has that name in its cert list).
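After restarting the client project, you can verify the route, for example (hypothetical check; run from the client project's directory):
ddev exec curl -I https://drupal7migration2.ddev.site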
Is there any way to achieve the scenario below in Nagios using NRPE?
The Nagios box will first check whether NRPE on the client box is up, and if so it will check the other services configured for that client. If NRPE is down on the client, it will send a notification for NRPE and stop checking the rest of the services configured for that client box until NRPE comes back up.
This setting is what you are looking for. Look at your nagios.cfg:
# DISABLE SERVICE CHECKS WHEN HOST DOWN
# This option will disable all service checks if the host is not in an UP state
#
# While desirable in some environments, enabling this value can distort report
# values as the expected quantity of checks will not have been performed
host_down_disable_service_checks=1
Check your host's status via check_nrpe. Create a new command in your config if you don't have one already:
define command{
command_name check-host-alive-nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$
}
Now use this command in your host definition, something like this:
define host {
host_name your_server
address your_server
use generic-host
check_command check-host-alive-nrpe
}
When NRPE on the remote host stops responding due to some problem, the host will go into a CRITICAL/DOWN state and the remote service checks will be temporarily disabled.
After you configure this, don't forget to restart your Nagios service.
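For example (the service name may differ per distribution):
sudo systemctl restart nagios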
PS: This setting works only with Nagios 4+.
I achieved this via a service dependency, where all NRPE checks depend on an NRPE availability check.
define servicedependency{
hostgroup linux-servers
# host_name xyz.example.com
service_description check_nrpe_alive
dependent_service_description check_disk,check_mem,check_load,check_time
execution_failure_criteria w,c,u
notification_failure_criteria u,w,c,o
}
Below is the check_nrpe_alive check command definition.
define command{
command_name check_nrpe_alive
command_line $USER1$/check_nrpe -H $HOSTADDRESS$
}
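The dependency also needs a check_nrpe_alive service defined for the same hostgroup; a minimal sketch, assuming a stock generic-service template:
define service{
use generic-service
hostgroup_name linux-servers
service_description check_nrpe_alive
check_command check_nrpe_alive
}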
Also, you need to set soft_state_dependencies=1 in nagios.cfg:
# SOFT STATE DEPENDENCIES
# This option determines whether or not Nagios will use soft state
# information when checking host and service dependencies. Normally
# Nagios will only use the latest hard host or service state when
# checking dependencies. If you want it to use the latest state (regardless
# of whether its a soft or hard state type), enable this option.
# Values:
# 0 = Don't use soft state dependencies (default)
# 1 = Use soft state dependencies
# Changed for the service dependency above
#soft_state_dependencies=0
soft_state_dependencies=1
When the NRPE service on a client is in a CRITICAL state, Nagios will only send out a notification for check_nrpe_alive, not for any of the dependent service checks. This was tested on Nagios Core 4.4.6.
Apparently my initial question was too vague or was interpreted as a bad question.
I'll try again.
There is a file called volttron located at volttron/scripts/admin/; its contents indicate it is (or was) meant to start a VOLTTRON daemon from init. I notice that it refers to paths outside of the venv (/var/lib/volttron). Why is this file there? Are there plans to revise it? Have people modified this file to make VOLTTRON start from init? Is there documentation on this subject?
Auto initialization is an extremely important feature for any program that provides a service on a computer system.
I have provided a snippet of the code.
#! /bin/sh
### BEGIN INIT INFO
# Provides: volttron
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $network $named
# Should-Stop: $network $named
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: VOLTTRON (TM) Daemon
# Description: VOLTTRON (TM) agent execution platform.
### END INIT INFO
# Author: Brandon Carpenter <brandon.carpenter@pnnl.gov>
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="VOLTTRON (TM) agent execution platform"
NAME=volttron
USER=volttron
VLHOME=/var/lib/volttron
DAEMON_ARGS="-v -l $VLHOME/volttron.log"
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
This script (scripts/admin/volttron) was set up assuming you had installed VOLTTRON in /var/lib. To use it for your environment, edit VLHOME to point to where you installed it, for example: /home/volttronuser/git/volttron
Make the script executable with chmod +x scripts/admin/volttron, then copy it over to /etc/init.d/:
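For example (destination name matching the SCRIPTNAME variable in the script):
chmod +x scripts/admin/volttron
sudo cp scripts/admin/volttron /etc/init.d/volttron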
To make it autostart with the OS:
sudo update-rc.d volttron defaults
To start and stop it manually:
sudo service volttron start
sudo service volttron stop
See the status with:
sudo service volttron status
If this is going to be used in a deployed situation, it's recommended that you edit the script to use a rotating log configuration (or use logrotate: http://www.linuxcommand.org/man_pages/logrotate8.html). Edit the arguments in the script to use the -L option when starting VOLTTRON; this will use the rotating-log configuration:
DAEMON_ARGS="-v -L $VLHOME/examples/rotatinglog.py"
You will also need to edit examples/rotatinglog.py to change the location of the log file. Edit "filename" to point to a location your user has permission to write to.
'handlers': {
    'rotating': {
        'class': 'logging.handlers.TimedRotatingFileHandler',
        'level': 'DEBUG',
        'formatter': 'agent',
        'filename': '/home/myuser/git/volttron/volttron.log',
        # ... (remaining handler options elided)
Note:
The cgroups portion of the script supports a VOLTTRON plugin for resource management and isn't needed without that. This is why it's commented out in the start method of the script.
I have a problem after installing vsftpd using MacPorts. When I want to start the vsftpd service with
sudo /opt/local/sbin/vsftpd
I get an error like this:
500 OOPS: vsftpd: not configured for standalone, must be started from inetd.
Can anyone suggest what I should do?
This is my vsftpd.config:
# Example config file /opt/local/etc/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
anonymous_enable=YES
#
# Uncomment this to allow local users to log in.
#local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
#write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
#local_umask=022
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# Activate logging of uploads/downloads.
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# You may override where the log file goes if you like. The default is shown
# below.
#xferlog_file=/opt/local/var/log/vsftpd.log
#
# If you want, you can have your log file in standard ftpd xferlog format.
# Note that the default log file location is /opt/local/var/log/xferlog in this case.
#xferlog_std_format=YES
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
#ascii_upload_enable=YES
#ascii_download_enable=YES
#
# You may fully customise the login banner string:
#ftpd_banner=Welcome to blah FTP service.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/opt/local/etc/vsftpd.banned_emails
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
chroot_local_user=YES
#chroot_list_enable=YES
# (default follows)
#chroot_list_file=/opt/local/etc/vsftpd.chroot_list
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
#ls_recurse_enable=YES
#
# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=YES
#
# This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
# sockets, you must run two copies of vsftpd with two configuration files.
# Make sure, that one of the listen options is commented !!
#listen_ipv6=YES
#
# Name of pam module
pam_service_name=ftpd
Change
listen=NO
to
listen=YES
in vsftpd.conf.
I believe this build of vsftpd was not configured to be started the way you are trying to start it. You'll either have to install a build that is configured to start standalone from the command line, or start it from inetd instead, the super-server that launches network services on demand. Look for a startup script in /etc/init.d. In some distros there is a script that manages which services start during boot. I don't have a Mac, so I can't give you more guidance, sorry.
I found the solution to my problem: just rename vsftpd.config to vsftpd.conf. vsftpd does not read vsftpd.config; when I renamed the file to vsftpd.conf it worked on my local Mac.
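In other words (path taken from the comment at the top of the config above):
sudo mv /opt/local/etc/vsftpd.config /opt/local/etc/vsftpd.conf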
In my case I had vsftpd well configured and it was working; however, after installing CloudPanel it stopped working. After reinstalling vsftpd and rebooting, it still didn't work.
The problem was a process that was still listening on port 21.
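To see which process is holding the port (standard lsof usage):
sudo lsof -i :21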
sudo kill -9 `sudo lsof -t -i:21`
sudo systemctl restart vsftpd
Killing that process worked for me, and after running the next command I can see it working again:
sudo systemctl status vsftpd