Error: Unable to establish IPMI v2 / RMCP+ session - ipmi

I installed ipmitool 1.8.18 on a CentOS 7.2 dedicated server, and I can use it to check the server's own IPMI data:
# ipmitool -I open power status
Chassis Power is on
but when I try to check another IPMI address's status, I get this error:
# ipmitool -H 172.16.22.237 -U root -P mypassword -I lanplus chassis status -v
Get Auth Capabilities error
Error issuing Get Channel Authentication Capabilities request
Error: Unable to establish IPMI v2 / RMCP+ session
On the server being controlled, I used ipmitool to check the LAN configuration:
[root@localhost ~]# ipmitool -I open lan print 1
Set in Progress : Set Complete
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5 PASSWORD
: User : MD2 MD5 PASSWORD
: Operator : MD2 MD5 PASSWORD
: Admin : MD2 MD5 PASSWORD
: OEM : MD2 MD5 PASSWORD
IP Address Source : Static Address
IP Address : 172.16.22.237
Subnet Mask : 255.255.255.0
MAC Address : 00:25:90:a9:42:4a
SNMP Community String : public
IP Header : TTL=0x00 Flags=0x00 Precedence=0x00 TOS=0x00
BMC ARP Control : ARP Responses Enabled, Gratuitous ARP Disabled
Default Gateway IP : 0.0.0.0
Default Gateway MAC : 00:00:00:00:00:00
Backup Gateway IP : 0.0.0.0
Backup Gateway MAC : 00:00:00:00:00:00
802.1q VLAN ID : Disabled
802.1q VLAN Priority : 0
RMCP+ Cipher Suites : 1,2,3,6,7,8,11,12
Cipher Suite Priv Max : aaaaXXaaaXXaaXX
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
Bad Password Threshold : Not Available
EDIT-1: Using nmap, I get the following information:
# nmap -p 623 -sU -P0 172.16.22.237
Starting Nmap 6.40 ( http://nmap.org ) at 2018-08-22 08:01 CST
Nmap scan report for 172.16.22.237
Host is up.
PORT STATE SERVICE
623/udp open|filtered asf-rmcp
Nmap done: 1 IP address (1 host up) scanned in 2.11 seconds
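A local check that often narrows this down (a sketch; LAN channel 1 and user ID 2 are assumptions, verify them for your board): confirm on the target server that IPMI messaging over LAN is actually enabled for the user.
# ipmitool -I open channel getaccess 1 2
# ipmitool -I open user list 1
If getaccess shows IPMI messaging disabled for the user, it can be enabled with ADMIN (level 4) privileges:
# ipmitool -I open channel setaccess 1 2 callin=on ipmi=on link=on privilege=4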

(SOLVED for a Dell machine)
I had exactly the same issue on a Dell PowerEdge R430 after the system motherboard was replaced:
although my credentials were restored on the new iDRAC board from the chassis flash backup, and although those credentials still let me log in to the iDRAC web interface, I could no longer interact with the iDRAC board through the IPMIv2/lanplus/SOL interface, hitting the same Error: Unable to establish IPMI v2 / RMCP+ session.
For me the solution was, as suggested by Rupeshrams in https://stackoverflow.com/a/55615668/13646401, to "reset the IPMI default password to the same old one" ("same" because all my system tools had the old credentials hard-coded) using the iDRAC web interface, like this:
In your browser, enter the (static) IP address of the iDRAC: this should open an HTTPS web site.
Then:
Menu Overview -> IDRAC SETTINGS -> User Authentication
-> Click on the userID of your admin account -> Next
-> check the "change your password" checkbox and enter the same (or a new) password
-> Apply
Why: I understood that passwords were hashed/encrypted on my previous motherboard with a key specific to that old motherboard. By changing the motherboard and restoring a user database hashed with a now-unknown key, my credentials became invalid, at least for ipmitool and the IPMIv2 interface. What confused me, but finally helped me solve the problem, was that the old credentials were still valid for the iDRAC web interface.
TIP: any advice to check IPMIv2/UDP 623 service availability with nmap, or to "activate SOL (Serial over LAN)", is helpful: "SOL activate" can easily be performed in the iDRAC web interface:
Menu Overview -> IDRAC SETTINGS -> Network -> SerialOverLAN
Any advice to "unlock" the credentials through various ipmitool commands simply cannot work here.
But if the web interface is not accessible, then you need to access the iDRAC through anything you can except ipmitool, at least on a Dell machine: first try the BIOS ("/iDRAC Settings/User configuration"), then ssh, telnet, or anything else such as RACADM, or even the real serial interface with a DB9 cable and a TTY terminal (e.g. a PC with HyperTerminal or any other soft TTY emulator).
Yours sincerely,
Pierre

To resolve this IPMI issue, you need to change the IPMI over LAN setting from Disabled to **Enabled** in the iDRAC/iLO.
Once IPMI over LAN has been enabled, the command below reports the power status:
# ipmitool -H <x.x.x.x> -U <username> -I lanplus power status

Resetting the IPMI default password to the same or a different one using the racadm command will resolve the issue.
To install racadm you need a few packages locally:
sudo apt install srvadmin-base srvadmin-storageservices srvadmin-idrac7 srvadmin-all*
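Once racadm is installed, the reset itself might look like this (a sketch using the iDRAC7+ attribute syntax; user ID 2 is the usual slot for root, but verify it on your system):
racadm get iDRAC.Users.2.UserName
racadm set iDRAC.Users.2.Password <newpassword>
On older firmware the legacy form is racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 2 <newpassword>.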

As stated above, the issue is likely that IPMI over LAN is off even though the DRAC is enabled.
You can fix this by rebooting and going into the DRAC settings, or you can run the following command on the server OS using OpenManage Server Administrator (OMSA):
omconfig chassis remoteaccess config=nic enableipmi=true
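To confirm the change took effect, the matching omreport command should now show IPMI over LAN enabled (assuming your OMSA version exposes this report):
omreport chassis remoteaccess config=nic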

For us, on a Dell R740, we had to enable IPMI over LAN via iDRAC --> iDRAC Settings --> IPMI Settings.
What was frustrating is that racadm commands were working the whole time. Once this is enabled, running ipmitool sel info should show:
Version : 1.5 (v1.5, v2 compliant)
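Since racadm was already working, the same setting can likely be flipped without the web UI (a sketch using the iDRAC8/9 attribute name; verify it exists on your firmware):
racadm get iDRAC.IPMILan.Enable
racadm set iDRAC.IPMILan.Enable 1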

Related

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT on remote but works fine on localhost

I have a React with ASP.NET Core website. It worked fine on localhost, but when published to a remote IIS server the timeout error occurs.
The front end (React client) and back end (ASP.NET Core Web API) work independently.
Before uploading, I changed the following in Program.cs in the Web API:
UseUrls("https://localhost:4000")
to UseUrls("https://www.virtualcollege.pk:4000")
I also changed the front-end base URL similarly.
Moreover, the connection strings in appsettings.json are correct for both databases.
I added migrations and updated the databases successfully.
The website is live but the timeout error occurs:
virtualcollege.pk
I also tried the URL with "https://my-ip-address:4000".
Thanks in advance for help.
If I remove the port number from the URL, publish to a local folder, and then upload to the remote server, the webapi.exe on the local machine runs as follows:
You have to open incoming requests on port 4000. Try one of the methods below.
Windows Server
Please check this link or this one
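If those links are unavailable, a rule can be added from an elevated command prompt (a sketch; the rule name is arbitrary):
netsh advfirewall firewall add rule name="WebAPI 4000" dir=in action=allow protocol=TCP localport=4000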
Ubuntu/Debian
sudo ufw allow 4000/tcp
sudo ufw status  # check status
CentOS
First, you should disable SELinux. Edit the file /etc/sysconfig/selinux so it looks like this:
SELINUX=disabled
SELINUXTYPE=targeted
Save the file and restart the system.
Then you can add the new rule to iptables:
iptables -A INPUT -m state --state NEW -p tcp --dport 4000 -j ACCEPT
and restart iptables with /etc/init.d/iptables restart
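Note that on CentOS 7 and later the default firewall is firewalld rather than raw iptables, so the equivalent there would be (a sketch):
sudo firewall-cmd --permanent --add-port=4000/tcp
sudo firewall-cmd --reload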

Amazon DocumentDB fails to connect with error "SSL peer certificate validation failed"

I am trying to connect to our AWS DocumentDB, but it fails with the following error:
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
MongoDB shell version v4.2.1
connecting to: mongodb://insights-db-2019-08-12-18-32-13.cih94xwdmniv.us-west-2.docdb.amazonaws.com:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-12-04T17:46:52.684-0800 E NETWORK [js] SSL peer certificate validation failed: Certificate trust failure: CSSMERR_CSP_UNSUPPORTED_KEY_SIZE; connection rejected
2019-12-04T17:46:52.685-0800 E QUERY [js] Error: couldn't connect to server insights-db-2019-08-12-18-32-13.cih94xwdmniv.us-west-2.docdb.amazonaws.com:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: Certificate trust failure: CSSMERR_CSP_UNSUPPORTED_KEY_SIZE; connection rejected :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-12-04T17:46:52.687-0800 F - [main] exception: connect failed
2019-12-04T17:46:52.687-0800 E - [main] exiting with code 1
The command I use:
mongo --ssl --host MY_DOCUMENT_DB_HOST_AND_PORT --sslCAFile MY_KEY_PATH --username MY_USERNAME --password MY_PASSWORD
A couple of troubleshooting steps I already tried:
Sent the exact same command and key to another Mac OS X machine on the same network --> worked fine
Uninstalled and reinstalled my mongo app mongodb-community@4.2
Try adding the rds-combined-ca-bundle.pem certificate to your Mac's trust store. I had a very similar error when trying to connect to DocumentDB using localhost through a forwarded port. The command I ran is:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rds-combined-ca-bundle.pem
I got this command from this answer
For those hitting this issue post 2020, see the last reply in this thread: https://forums.aws.amazon.com/message.jspa?messageID=936916
Mac OS X Catalina has updated the requirements for trusted certificates. Trusted certificates must now be valid for 825 days or fewer (see https://support.apple.com/en-us/HT210176). Amazon DocumentDB instance certificates are valid for over four years, longer than the Mac OS X maximum. In order to connect directly to an Amazon DocumentDB cluster from a computer running Mac OS X Catalina, you must allow invalid certificates when creating the TLS connection. In this case, invalid certificates mean that the validity period is longer than 825 days. You should understand the risks before allowing invalid certificates when connecting to your Amazon DocumentDB cluster.
To connect to an Amazon DocumentDB cluster from OS X Catalina using the mongo shell, use the tlsAllowInvalidCertificates parameter.
mongo --tls --host <hostname> --username <username> --password <password> --port 27017 --tlsAllowInvalidCertificates
Basically, just ignore invalid certificates.
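To confirm that the validity period is really what Catalina objects to, you can inspect the server certificate's dates directly (a sketch; substitute your cluster endpoint):
openssl s_client -connect <cluster-endpoint>:27017 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -dates
If notAfter minus notBefore exceeds 825 days, Catalina will reject the certificate unless you pass --tlsAllowInvalidCertificates.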

SQL Server service breaks after adding SSL certificates in Linux

I have set up a SQL Server database server on my Ubuntu 16 machine. To make it secure over the host network, I am working on adding an SSL encryption certificate to it.
I tried following the steps mentioned in this link: ssl-encryption-mssql
But after restarting the SQL Server service, it breaks with the exit status below:
code=exited, status=1/FAILURE
I even tried to check the logs using journalctl -u mssql-server.service -b, but it is not helpful at all. For reference, I am adding a screenshot of the journalctl command below:
My /var/opt/mssql/mssql.conf looks like this after following the steps from the official doc:
[sqlagent]
enabled = false
[EULA]
accepteula = Y
[network]
tlscert = /etc/ssl/certs/cert.pem
tlskey = /etc/ssl/private/privkey.pem
tlsprotocols = 1.2
forceencryption = 1
EDIT-1: I further checked the logs in /var/log/syslog, which showed the following:
Error: 49940, Severity: 16, State: 1. Unable to open one or more of the user-specified certificate file(s). Verify that the certificate file(s) exist with read permissions for the user and group running SQL Server.
I also found this question, which seems similar; I tried the approach Charles described, but it doesn't seem to work. I am also using Let's Encrypt certificates.
EDIT-2: It is not a licensed version; could this be the reason?
How to resolve this error?
I just faced the same problem even though I followed the steps in the Microsoft documentation. The actual problem seems to be the permissions on the folder paths where the certificate files are located.
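If permissions are the culprit, something along these lines (a sketch; adjust the paths to your own files, and note that the containing directories must also be traversable by the mssql user) gives SQL Server read access:
sudo chown mssql:mssql /etc/ssl/certs/cert.pem /etc/ssl/private/privkey.pem
sudo chmod 400 /etc/ssl/certs/cert.pem /etc/ssl/private/privkey.pem
sudo systemctl restart mssql-server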
You can verify whether the mssql user is able to read the certificates using the openssl commands below.
This command does a basic check of whether the certificates are valid:
sudo su - mssql -c "openssl verify -verbose -CAfile /etc/ssl/certs/mssql_ca.pem /etc/ssl/certs/cert.pem"
If you want to see whether the combination of certificates actually works (with the key), you can start an openssl server and then connect to it with a separate openssl client connection:
sudo su - mssql -c "openssl s_server -accept 8443 -cert /etc/ssl/certs/cert.pem -key /etc/ssl/private/privkeyrsa.pem -CAfile /etc/ssl/certs/mssql_ca.pem"
openssl s_client -connect localhost:8443
Another small correction to the documentation (I am using a CA-provided certificate): I had to convert the key file format (this might not be required for you):
openssl rsa -in /etc/ssl/private/key.pem -out /etc/ssl/private/privkeyrsa.pem

Reverse engineering a PostgreSQL database with an SSL connection in SchemaSpy

When running SchemaSpy, I get this error:
Connection failed because of the following error: "no pg_hba.conf entry for host "xxx.xxx.xxx.xxx", user "xxxx", database "xxx", SSL off"
The error occurs because the database does require an SSL connection.
Is there a way to turn on the SSL flag for a connection in SchemaSpy? I opened up the jar file but couldn't find anything. I know the PostgreSQL JDBC driver supports SSL, so this should theoretically be possible.
Otherwise, if anyone knows any open-source/freeware tools for reverse engineering a PostgreSQL database over an SSL connection, that would help a lot.
Thanks.
Do it like this:
java -jar schemaSpy_5.0.0.jar -t pgsql -host your-host-url -db your-database-name -s your-database-schema -u your-username -p your-password -connprops "ssl\=true;sslfactory\=org.postgresql.ssl.NonValidatingFactory" -o path-to-your-output-directory -dp path-to-your-jdbc-driver-jar-file
The trick is adding additional parameters using the -connprops option: we set SSL to true (the ssl parameter) and ask the client (i.e., the driver) to accept the SSL connection unconditionally (the sslfactory parameter).
Per the PgJDBC documentation, use the ssl=true option in your URL's parameters, e.g.
jdbc:postgresql://myhost/mydb?ssl=true
If the host doesn't have a valid certificate, or the certificate doesn't match its hostname, you can disable SSL validation too.
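Combining the two, a URL that also skips certificate validation (assuming the NonValidatingFactory bundled with PgJDBC) would look like:
jdbc:postgresql://myhost/mydb?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory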
SchemaSpy accepts a JDBC URL for the connection, so this will work fine.

HBase Error - assignment of -ROOT- failure

I've just installed Hadoop and HBase from Cloudera (CDH3), but when I try to go to http://localhost:60010 it just sits there, continually loading.
I can get to the region server fine at http://localhost:60030. Looking at the HBase master server logs, I can see the following.
Looks like a problem with the root region.
All of this is installed on an ext4 1 TB partition running Ubuntu 11.04 (Natty). No cluster/other boxes.
Any help would be great!
11/05/15 19:58:27 WARN master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to serverName=localhost,60020,1305452402149, load=(requests=0, regions=0, usedHeap=24, maxHeap=995), trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to /127.0.0.1:60020 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:355)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:957)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:606)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:541)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:901)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:730)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:710)
at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1605)
at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
at $Proxy6.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:419)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
... 8 more
11/05/15 19:58:27 WARN master.AssignmentManager: Unable to find a viable location to assign region -ROOT-,,0.70236052
Fixed this issue, for anyone else who finds this. It was a problem with the hosts file (/etc/hosts). You need to remove the entry relating to 127.0.1.1 COMPNAME: just put a hash (#) in front of that line and then restart all Hadoop and HBase services.
More on the solution here: http://blog.nemccarthy.me/?p=110
As per @Manav:
If you find yourself in a situation wherein you can't edit /etc/hosts, the following workaround will also work:
in conf/hadoop-env.sh add the following line:
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
I'm using Ubuntu 11.10 (Oneiric) and HBase 0.92.1. These steps fixed the issue for my single-server install:
Edit the /etc/hosts: change the IP address associated with the hostname so that it uses your LAN IP instead of 127.0.0.1
Open <HBASE_DIR>/conf/hbase-env.sh
Edit HBASE_OPTS and append -Djava.net.preferIPv4Stack=true. The line should look like this:
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -Djava.net.preferIPv4Stack=true"
Restart HBase
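To confirm the fix, check what the hostname now resolves to; the master and region servers should agree on a single non-loopback address (a quick check, not part of the original steps):
hostname -f
getent hosts "$(hostname -f)"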
If you find yourself in a situation wherein you can't edit /etc/hosts, the following
workaround will also work:
in conf/hadoop-env.sh add the following line:
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Your hosts file should look like this:
#127.0.0.1 localhost
#127.0.1.1 ubuntu.ubuntu-domain ubuntu
192.168.2.100 ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
This file can be found in /etc/hosts
Regards
Shuja
The trick with a subinterface worked for me, but I used the loopback interface rather than eth0, because eth0 is not always available on my machine (external adapter) and I want it managed by NetworkManager (which refuses to manage eth0 if eth0.1 is defined in /etc/network/interfaces on Ubuntu 13.04).
Relevant snippet:
auto lo:0
iface lo:0 inet static
address 127.0.1.1
netmask 255.255.255.0
in addition to the regular
auto lo
iface lo inet loopback
of course
Here's another workaround that works for me, if you're unwilling to alter /etc/hosts (since Ubuntu put that entry there for a reason).
As this post explains, the core problem is that the loopback interface has multiple IPs bound to it, while HBase assumes there will be only one. The resulting mismatch causes the master to think a region server has one IP (127.0.0.1) when it's really listening on another (127.0.1.1, the IP bound to the host's declared FQDN).
Removing the /etc/hosts entry is one way to restore the one-interface-one-IP assumption. Replacing 127.0.1.1 in /etc/hosts with a "real" permanent IP is another. Finally, another is to create a new interface: drop this at the bottom of /etc/network/interfaces:
# Bind an interface solely for the default host FQDN IP, to fix reverse dns
auto eth0.1
iface eth0.1 inet static
pre-up ip link add eth0.1 name eth0.1 type bridge
address 127.0.1.1
netmask 255.255.255.0
You should then be able to run sudo ifup eth0.1 and see it in ifconfig. Restart HBase and you should be good to go.
If you happen to be using eth0.1 already, then pick another slot (e.g. eth0.2); it shouldn't matter.
EDIT: @bcolyn's use of lo:0 also works for me, and is superior since loopback will always be available. In that case the pre-up line also appears unnecessary.
In your hosts file, change the host address from 127.0.1.1 to 127.0.0.1.
