I have been using FlexLM's lmstat utility to collect license statistics every 5 minutes, and I have observed incorrect lmstat numbers for both installed license counts and reservation counts. These events occur very intermittently. We tried upgrading lmstat and other components such as the vendor daemons, but nothing really helped.
Has anyone encountered a similar situation and found a good solution?
It's hard to give you a definitive 'you must do that' answer, because there is little technical information to go on.
Let me propose some ideas.
The lmutil lmstat command gives standard information. The problem is that the interpretation of the result depends on the vendor's license file, not on FlexNet itself.
For MATLAB, you can have Network Named User (NNU) and Concurrent (CN) licenses. For NNU, a login is attached to each token. For CN, it is 'first come, first served'. If on the same server you have 10 NNU tokens and 10 CN tokens, lmstat -c <port>@<server> -a will report 30 tokens available.
This is purely a MathWorks choice: with 1 NNU token, you can use MATLAB from 2 different hosts. So 10 NNU tokens give 2 * 10 = 20 tokens, and together with the 10 CN tokens it looks as if you have 30 tokens. Very confusing for the users.
When you make a reservation, the token is consumed as soon as the license service starts, even if no one uses it. The number of available tokens is reduced accordingly.
[Update]
About the version of lmgrd/lmutil: each vendor defines a version to use, but often you can use a higher version.
I've checked Cadence, Comsol and other license services. The counts are good.
You must verify the counts for lines like:
Users of <an increment>: (Total of 5 licenses issued; Total of 4 licenses in use)
After that, you have the used tokens ('reserved' tokens are reported as 'used'):
1 RESERVATIONs for GROUP Better_Group (server/2700)
jason abc057 abc057 (v2015.0623) (shoe/28512 3886), start Fri 11/20 14:41
simon abc057 abc057 (v2014.1110) (shoe/28512 4166), start Fri 11/20 15:37, 2 licenses
When you manually check the count and it is good, your license server is good. In the example: 2 real users, but 3 tokens + 1 reservation = 4 tokens used. Be careful in your parsing not to miss the trailing ', 2 licenses'; I have an awk script that missed it.
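To make that concrete, here is a minimal Python sketch of such a parser; the server address is a placeholder and the regular expressions are assumptions based on the sample lines above, so adapt them to your vendor's output:

import re
import subprocess

# Placeholder address; use your own port@host.
out = subprocess.run(["lmutil", "lmstat", "-c", "27000@licserver", "-a"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    line = line.strip()
    # "Users of <feature>: (Total of N licenses issued; Total of M licenses in use)"
    m = re.match(r"Users of (\S+):\s+\(Total of (\d+) licenses? issued;"
                 r"\s+Total of (\d+) licenses? in use\)", line)
    if m:
        print("%s: %s of %s tokens in use" % (m.group(1), m.group(3), m.group(2)))
        continue
    # "1 RESERVATIONs for GROUP Better_Group (server/2700)"
    m = re.match(r"(\d+) RESERVATIONs? for", line)
    if m:
        print("  reserved: %s" % m.group(1))
        continue
    # A checkout line counts as 1 token unless it ends with ", N licenses".
    m = re.search(r", start \w+ \d+/\d+ \d+:\d+(?:, (\d+) licenses)?$", line)
    if m:
        print("  checked out: %s token(s)" % (m.group(1) or "1"))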
At the same time, you must check the status of your license server, the log file, and the users' actions. To check the status, you can use:
lmutil lmstat -c <port>@<server> -a
When and for how long a token is used is a property of the software:
a token can be taken (OUT) when the software starts and released (IN) when the software is stopped;
a token can be taken (OUT) only when a feature is called and released when the feature has finished its work;
a token can be taken (OUT) and released (IN) immediately, just to check whether the software or the feature could be used.
So, if you check your licenses every five minutes, many 'OUT' and 'IN' actions can be missed. But that is not a problem: lmutil lmstat only gives information on the licenses at a specific instant.
If you want to follow all usage, you must work with the log files, as phpLicenseWatcher does: http://phplicensewatch.sourceforge.net/. That tool runs an 'scp' from a crontab to fetch the log file from the license server.
Depending on the vendor daemon, when you update the license file you can run lmutil lmreread -c <file>, but some (like MATLAB's) don't accept this and you must restart the service. This can introduce a difference between the number of increments/tokens on the server and the resources reported by lmutil lmstat -c <port>@<server> -i.
I'm trying to use dbus/tools/GetAllMatchRules.py to get diagnostic information. When I run it without parameters as my regular user I get "GetConnectionMatchRules failed: did you enable the Stats interface?"
I modified GetAllMatchRules to print the specific exception details. It now says
GetConnectionMatchRules failed: did you enable the Stats interface?: org.freedesktop.DBus.Error.AccessDenied: The caller does not have the necessary privileged to call this method
So then I'm wondering: does it work at all? I sudo su and run it again, and it gives me the kind of information I'd expect to see, just not for the right bus. Oddly, if I use the --system parameter, even root gets org.freedesktop.DBus.Error.AccessDenied.
The repository claims, in bus/example-session-disable-stats.conf.in, that
"If the Stats interface was enabled at compile-time, users can use it on
the session bus by default. Systems providing isolation of processes
with LSMs might want to restrict this. This can be achieved by copying
this file in #EXPANDED_SYSCONFDIR#/dbus-1/session.d/
"
But that's clearly not the case because my user can NOT access this information.
I even tried a brute force approach of disabling (commenting out) ALL deny statements in /usr/share/dbus-1/system.conf and reloading, and it still doesn't work. I also tried a full system restart in case I wasn't reloading correctly. I also did a system-wide search for system.conf in case it's actually using some other conf file that I'm not seeing, which would mean I'm modifying the wrong thing. I got a big hint that that's not the case when I had a typo (-- instead of --> when commenting out) and it failed to reload, but it did reload once I fixed the typo.
I'm OK with the possibility that I can only do this signed in as root, so I also tried modifying GetAllMatchRules to use dbus.bus.BusConnection(), force-feeding it the session address (unix:path=/run/user/1000/bus), which results in
"org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken."
Incidentally, the same issue happens if I leave the code alone but use sudo -E su instead of just sudo su (the -E option in this case means that the $DBUS_SESSION_BUS_ADDRESS variable is retained).
I'm not sure what to try next...
It turns out there isn't currently a solution: the privilege error is simply the error code that was chosen to indicate that the method is an unimplemented stub.
I'm trying to retrieve a file from an instance using libssh2 scp.
Just to make sure that my username, password, and keys are correct, I did:
sudo scp -v -P <port> -i /home/username/.ssh/id_rsa username@XX.XX.XX.XX:/home/username/file .
Which asked me for the password, and then retrieved the file successfully.
In trying to accomplish the same thing with libssh2, I followed the example here:
http://www.libssh2.org/examples/scp.html
With superficial changes for the variable types, which seem to have changed since the example was written (not that it should matter, as those variables come after authentication).
However, on
libssh2_userauth_publickey_fromfile(session, username, "/home/username/.ssh/id_rsa.pub", "/home/username/.ssh/id_rsa", password)
The program always exits with a LIBSSH2_ERROR_PUBLICKEY_UNVERIFIED.
Checking using gdb, I'm certain that the username and passwords being applied are correct.
What might be causing this problem?
Edit:
Further delving with GDB reveals that somewhere in the depths of libssh2_userauth_publickey_fromfile(), in _libssh2_userauth_publickey(session, username, username_len, pubkeydata, pubkeydata_len, sign_callback, abstract), it receives a LIBSSH2_ERROR_SOCKET_RECV.
The code behind that, however, is much too enigmatic for my untrained eye to make sense of.
One obvious thing I've missed is the error message, which comes out to be "Waiting for USERAUTH response"
Potentially relevant:
https://github.com/nodegit/nodegit/issues/553
After following what little advice I could gather from the above link and removing a few keys from authorized_keys, the error remains the same but the message changed to "Callback returned error". Not sure if that is an improvement or not.
Checking server-side logs, I find the following:
Oct 20 06:53:51 testbed1 sshd[25837]: error: Could not load host key: /etc/ssh/keyname
Oct 20 06:53:52 testbed1 sshd[25837]: Connection closed by XX.XX.XX.XX [preauth]
Oct 20 06:54:48 testbed1 sshd[25839]: error: Could not load host key: /etc/ssh/keyname
Oct 20 06:54:51 testbed1 sshd[25839]: Accepted publickey for username from...
The first two lines are from a failed attempt via libssh2.
The next two lines are from a successful attempt with scp on the command line.
I'm still not absolutely sure what the cause is.
I can only speculate that I had fallen into the bug described here:
cURL sftp public key authentication fails "Callback Error"
The code ran fine with a key without a passphrase.
I still have a hard time saying this is the exact solution, because when I had just started using libssh2, it ran fine with passphrase-protected keys.
Still, it "works" now.
My program gets events from remote systems; every event contains a timestamp.
I want to log these events to syslog using the event timestamp instead of the system time.
Is there any way to send a custom header to the syslog daemon?
I'm using rsyslog on Debian.
EDIT:
The "events" are generated by some "bare-metal" devices.
My application is a gateway between a realtime-ethernet (EthernetPOWERLINK) and a normal network.
I want to save them in micro-second precision, because its important to know in wich sequence they are occoured.
So i need the exact timestamp created by the bare-metal devices.
I'like to put this events into syslog.
I did not found any lib (except syslog.h) to write into syslog).
I really need to build the packages myself and send them to rsyslog deamon ?
No, don't open that can of worms.
If you allow the sender to specify the timestamp, you allow an attacker to spoof the timestamps of events they wish to hide. That kind of defeats the entire purpose (security-wise) of using a separate machine for logging.
What you can do, however, is compare the current time and the timestamp, and include the difference at the start of every logged message, using something like
#include <time.h>
#include <syslog.h>

struct timespec now;
struct timespec timestamp; /* event time as reported by the remote system */
double delta;
int priority = facility | level; /* e.g. LOG_DAEMON | LOG_INFO */
const char *const message = "Original message";

clock_gettime(CLOCK_REALTIME, &now); /* current local time */
delta = difftime(timestamp.tv_sec, now.tv_sec)
      + ((double)timestamp.tv_nsec - now.tv_nsec) / 1000000000.0;
syslog(priority, "[%+.0fs] %s", delta, message);
On a typically configured Linux machine, that should produce something similar to
Jan 18 08:01:02 hostname service: [-1s] Original message
assuming the message took at least half a second to arrive. If the sender's clock is running fast relative to hostname, the delta would be positive. Normally, the delta is zero. In the case of a very slow network, the delta is negative, since the original event happened in the past relative to the timestamp shown.
If you already have infrastructure in place to monitor the logged messages, you can have a daemon or a cron script read the log files, and generate new log files (not via syslog(), but simply with string and file operations) with the timestamps adjusted by the specified delta. However, that must be done with extreme care, recognizing unacceptable or unexpectedly changing deltas, or maybe flagging them somehow.
If you write your own log file monitoring/display widgets, then you can very easily let the user switch between "actual" (syslog) and "derived" (syslog + delta) timestamps, as the delta is trivial to extract from the logged lines if it is always present; even then, you must be careful to let the user know if a delta is out of bounds or changes unexpectedly, as such a change is almost always informative to the user. (If it is not nefarious, it does mean there is something iffy with the machine's timekeeping; time should not just jump around. Even NTP adjustments should be quite smooth.)
If you insist on opening that can of worms, just produce your own log files. Many applications do. It's not like syslog() was a magic bullet or a strict requirement for reliable logging, after all.
If your log-receiving application runs as a specific user and group, you can create /var/log/yourlogs/ owned by root user and that group, and save your log files there. Set the directory mode to 02770 (drwxrws--- or u=rwx,g=rwxs,o=), and all files created in that directory will automatically be owned by the same group (that's what the setgid bit, s, does for directories). You just need to make sure your service sets umask to 002 (and uses 0666 or 0660 mode flags when creating log files), so that they stay group-readable and group-writable.
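To illustrate that last point, a service creating its log file might do the following (a Python sketch; the path and group setup are assumed to match the description above):

import os

# umask 002 keeps newly created files group-readable and group-writable;
# the setgid bit on /var/log/yourlogs/ makes them inherit the group.
os.umask(0o002)
fd = os.open("/var/log/yourlogs/service.log",
             os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o660)
os.write(fd, b"service started\n")
os.close(fd)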
Log rotation (archiving and/or deleting old log files, mailing logs) is usually a separate service, provided by the logrotate package, and configured by dropping a service-specific configuration file in /etc/logrotate.d/ at installation time. In other words, even if you write your own log files, do not rotate them; use the existing service for this. It makes life much easier for your users, us system administrators. (Note: Setting umask 002 at the start of the log rotate scripts is very useful in the above directory case; created files will then be group-writable. umask 022 will make them group-read-only.)
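For example, a minimal drop-in configuration such as /etc/logrotate.d/yourservice (hypothetical name and group; see the logrotate man page for the options) could look like this:

/var/log/yourlogs/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    create 0660 root yourgroup
}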
OK, I've solved this by enabling networking support (TCP) and microsecond-precision timestamps in the rsyslog configuration.
Following RFC 5424, my application builds raw syslog messages and sends them via TCP (port 514) to the daemon.
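For reference, here is a minimal Python sketch of that idea; the hostname and app-name fields and the trailing-newline framing are my assumptions, so adapt them to your rsyslog input configuration:

import socket
from datetime import datetime, timezone

def send_event(event_time, message, host="localhost", port=514):
    # RFC 5424: <PRI>1 TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
    pri = 16 * 8 + 6              # facility local0, severity info
    ts = event_time.isoformat()   # keeps the microsecond precision
    frame = "<%d>1 %s gateway epl-gw - - - %s\n" % (pri, ts, message)
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame.encode("utf-8"))

# Timestamp as received from the bare-metal device (hypothetical value):
send_event(datetime(2016, 1, 18, 8, 1, 2, 123456, tzinfo=timezone.utc),
           "Original message")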
Thanks to Nominal Animal, but I have no choice...
You can write a raw log message to the /dev/log file. This is a Unix domain socket from which the syslog server reads messages as they are written by the syslog() function.
I'm not sure about portability, since the message format written by syslog() does not seem to follow RFC 5424. I can only share my findings with BusyBox and its syslogd and nc utilities.
The syslog() function writes messages as datagrams in the form <PRI>Mon DD HH:MM:SS message, where PRI is the priority, i.e. a decimal number computed as facility | severity, followed by a timestamp and the message.
With nc -u local:/dev/log, you can write datagrams to the domain socket directly. For example, writing <84>Apr 3 07:27:20 hello world results in an Apr 3 07:27:20 hostname authpriv.warn hello world line in /var/log/messages.
Then you are free to extend the timestamp with microsecond precision. In any case, you need to make sure your syslog server implementation accepts such a form. In the case of BusyBox, I had to modify the source code.
Note: BusyBox needs to be configured with the CONFIG_NC_EXTRA, CONFIG_NC_110_COMPAT and CONFIG_FEATURE_UNIX_LOCAL options enabled to allow opening /dev/log with nc.
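The same datagram can also be written without nc; here is a minimal Python sketch of the same experiment, reusing the priority and text from the example above:

import socket

# Write a raw datagram to the syslog Unix socket, as syslog(3) would.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.connect("/dev/log")
sock.send(b"<84>Apr 3 07:27:20 hello world")  # <84> = authpriv (10) * 8 + warning (4)
sock.close()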
Currently I am trying to write a program to monitor Tuxedo. From the official documents, I found that the MIB is suitable for writing a monitoring program. I have read quite a lot of the documentation here: http://docs.oracle.com/cd/E13203_01/tuxedo/tux90/rf5/rf5.htm#998207. Although there are detailed descriptions of every class, there is no guide telling me how to use it from the beginning. I tried searching on GitHub, but unfortunately there is no code relating to the Tuxedo MIB. Does anyone have some good sample code?
Thanks a lot.
Here is a shell function that reads the block time from Tuxedo:
get_blocktime() {
    TmpErr=/tmp/ud32err_$$
    rtc=0
    # Send a GET request for the T_DOMAIN class to the .TMIB service;
    # field name and value are separated by a tab character.
    ud32 -Ctpsysadm <<EOF 2>"$TmpErr" | grep TA_BLOCKTIME | cut -f2
SRVCNM	.TMIB
TA_CLASS	T_DOMAIN
TA_OPERATION	GET
EOF
    # ud32 has no good error handling, so check stderr instead
    if [ -s "$TmpErr" ]; then
        echo "$PRG: Error calling ud32:" 1>&2
        cat "$TmpErr" 1>&2
        rtc=1
    fi
    rm -f "$TmpErr"
    return $rtc
}
There are several examples of accessing the MIB with Python at https://github.com/PacktPublishing/Modernizing-Oracle-Tuxedo-Applications-with-Python/tree/main/Chapter06. For example:
import tuxedo as t
t.tpinit(cltname="tpsysop")
machine = t.tpadmcall(
{
"TA_CLASS": "T_MACHINE",
"TA_OPERATION": "GET",
"TA_FLAGS": t.MIB_LOCAL,
}
).data
A couple of notes:
you will need TA_FLAGS set to MIB_LOCAL to return statistics (not done by default);
you might want to use the tpadmcall() function instead of calling the .TMIB service. The function is much lighter on the system and does not inflate the Tuxedo statistics (number of service calls). The main limitation of tpadmcall() is the limited size of the response, so you will need to call the .TMIB service for server and queue statistics if your application has tens of them; see the sketch below.
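For example, fetching server statistics through the .TMIB service could look like the following sketch. I'm assuming the same tuxedo module as above and that tpcall() exposes the reply buffer via .data the way tpadmcall() does:

import tuxedo as t

t.tpinit(cltname="tpsysop")
# Ask the .TMIB service for the local server statistics.
servers = t.tpcall(
    ".TMIB",
    {
        "TA_CLASS": "T_SERVER",
        "TA_OPERATION": "GET",
        "TA_FLAGS": t.MIB_LOCAL,
    },
).data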
If the code example is not enough, you can check chapter 6 of the book Modernizing Oracle Tuxedo Applications with Python.
I have some C code for calling .TMIB to monitor Tuxedo applications here: https://github.com/TuxSQL/tuxmon
That should get you started.
Summary: unable to run even the simplest “Hello World” FastCGI script; every request terminates with a time out. There seems to be no communication at all between the server and the FastCGI scripts (using dynamic FastCGI scripts).
The environment
Ubuntu Precise (12.04)
Package apache2.2-bin
Package apache2-mpm-prefork
Package libapache2-mod-fastcgi
Package libfcgi-perl
Package python-flup
Multiple sites configured as virtual hosts on 127.0.0.1
There exists a /var/lib/apache2/fastcgi directory, owned by www-data, readable by all (owner, group and others)
There exists a /var/lib/apache2/fastcgi/dynamic directory, owned by www-data, which is restricted to the owner (readable, writable and accessible by www-data only)
There exists an inode/socket file in the /var/lib/apache2/fastcgi/ directory
The FastCGI relevant configurations:
The directory /etc/apache2/mods-enabled/ holds a reference to fastcgi.conf and fastcgi.load (mod_fastcgi is enabled).
The file fastcgi.conf contains the following (left untouched, I did not edit it):
<IfModule mod_fastcgi.c>
AddHandler fastcgi-script .fcgi
#FastCgiWrapper /usr/lib/apache2/suexec
FastCgiIpcDir /var/lib/apache2/fastcgi
</IfModule>
The relevant configuration file in /etc/apache2/sites-enabled/ contains the following (there is nothing more anywhere else about FastCGI specific configuration):
<DirectoryMatch /fcgi-bin>
Options +ExecCGI
<FilesMatch "^[^\.]+$">
SetHandler fastcgi-script
</FilesMatch>
</DirectoryMatch>
The test materials on the test virtual host:
There exists a fcgi-bin/test-perl.fcgi with the following content (the file is executable by all, and readable by owner and group):
#!/usr/bin/perl
use CGI::Fast qw(:standard);
$COUNTER = 0;
while (new CGI::Fast) {
print header;
print start_html("Fast CGI Rocks");
print
h1("Fast CGI Rocks"),
"Invocation number ",b($COUNTER++),
" PID ",b($$),".",
hr;
print end_html;
}
There exists a fcgi-bin/test-python.fcgi with the following content (the file is executable by all, and readable by owner and group):
#!/usr/bin/python
def myapp(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain')])
return ['Hello World!\n']
try:
from flup.server.fcgi import WSGIServer
WSGIServer(myapp).run()
except:
import sys, traceback
traceback.print_exc(file=open("errlog.txt","a"))
The issue
Although both fcgi-bin/test-perl.fcgi and fcgi-bin/test-python.fcgi run normally when executed from the command‑line, neither seems to work when invoked over HTTP, e.g. as http://test.loc/fcgi-bin/test-perl.fcgi or http://test.loc/fcgi-bin/test-python.fcgi.
Nothing at all happens, and after some delay I get an Error 500, while the Apache error log contains multiple entries looking like:
[<date>] [error] [client <IP>] FastCGI: comm with (dynamic) server "/<…>/fcgi-bin/<script>.fcgi" aborted: (first read) idle timeout (30 sec), referer: <referrer>
[<date>] [error] [client <IP>] FastCGI: incomplete headers (0 bytes) received from server "<…>/fcgi-bin/<script>.fcgi", referer: <referrer>
I've spent hours and hours searching the web trying to understand why it does not work, and finally decided to give up and ask for help here.
Any pointers and checklists are welcome. Feel free to ask for any missing details you feel are relevant or worth checking.
Enjoy a nice day.
-- edit --
Issue update
In my own answer to my own question, I mentioned a weird case where things suddenly looked fine for no apparent reason. I later discovered this was only partly true.
In the same virtual host, so with the exact same server configuration, some scripts, which are exactly the same (and have the exact same access rights), fail depending on their location.
As a reminder, here is what's in the site configuration:
<DirectoryMatch /fcgi-bin>
Options +ExecCGI
<FilesMatch "^[^\.]+$">
SetHandler fastcgi-script
</FilesMatch>
</DirectoryMatch>
With the above, only scripts in /fcgi-bin are handled as FastCGI scripts. But I also have some elsewhere (still for testing): one in /cgi-bin and one in / (i.e. in the public_html directory). For this purpose, .htaccess contains this entry:
Options +ExecCGI
AddHandler fastcgi-script .fcgi
So the two other FastCGI scripts should work the same as the one in /fcgi-bin, but they don't; for the time being, they invariably terminate with a connection time‑out, just like the one in /fcgi-bin did at first.
This makes me feel something may be wrong with the mod_fastcgi module (a known bug? something else?). So far, this module seems to behave rather randomly.
-- edit 2 --
What I wrote in the first edit above was an error of mine: the group was wrong on the other scripts; it had to be www-data, but it was not. So if something is wrong, stick to the answer I gave, that is, try the FastCgiConfig directive and see whether it solves anything, or at least whether it honours the time‑out options.
I will answer my own question, as it seems to be working now. However, the epilogue still looks weird.
Although the default configuration should be OK, I still wanted to review the “Module mod_fastcgi” documentation again. As I only wanted dynamic FastCGI, I focused on the FastCgiConfig directive only, on purpose not going into the FastCgiServer and FastCgiExternalServer directives.
As there was no FastCgiServer at all in the default fastcgi.conf file, I started to set up my own configuration. For a first test, I wanted to use the -appConnTimeout option, at least to make the server return an Error 500 without waiting so long.
So I just added this to the site configuration (I did not touch fastcgi.conf), in the same file where the virtual hosts are configured:
FastCgiConfig -appConnTimeout 2
This was to tell the server to wait no more than 2 seconds instead of the 30 seconds it was waiting. I invoked a FastCGI script to see whether at least this configuration was working. I expected to get an error within 2 seconds, but instead the script ran without error.
What's weird is that I then tried to remove this option, to check whether that addition alone was what had been missing to make the FastCGI scripts work. But after I commented the option out, it was still working, and the same after a full reboot.
I can't tell you more; it looks weird, but this is the only thing I did, and I did not edit anything else. I can only suggest that people who encounter a similar issue try the above.
Sorry that I can't explain what it did exactly; I really would like to know. It's just working now, and I don't know why.
#############
fastcgi.conf
FastCgiWrapper Off
peng.rl's answer solved my problem.
My Ceph radosgw couldn't get Apache's input at all. After setting FastCgiWrapper Off, I can capture data in Wireshark.