FreeBSD: Understanding /var/db/dhclient.leases.<interface_name> DHCP lease files

FreeBSD: network interface address: DHCP or static
Follow-up question now:
I've decided to go with looking at the lease files: /var/db/dhclient.leases.<interface_name>. What do they tell me exactly? Does the existence of /var/db/dhclient.leases.em0 signify that em0 got its address by DHCP? The file does not seem to go away on reboot.

You should read the manual page for dhclient. This will answer most of your questions. And if that fails, you can browse the source in /usr/src/sbin/dhclient.
Another possibility might be to use devd(8). This is a daemon that can execute a script or program when a certain event occurs. It can, for example, notice when a network interface goes "up" or "down". From the default /etc/devd.conf (see also devd.conf(5)):
# Try to start dhclient on Ethernet-like interfaces when the link comes
# up. Only devices that are configured to support DHCP will actually
# run it. No link down rule exists because dhclient automatically exits
# when the link goes down.
#
notify 0 {
	match "system" "IFNET";
	match "type" "LINK_UP";
	media-type "ethernet";
	action "/etc/rc.d/dhclient quietstart $subsystem";
};

A DHCP client is supposed to remember its lease across reboots, and to remember past leases on a particular network when requesting an address. Therefore, the file should not go away across reboots.
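For reference, each lease recorded in such a file looks roughly like the following (an illustrative example with made-up addresses and dates; the exact set of option lines depends on what the DHCP server handed out, see dhclient.leases(5)):
lease {
  interface "em0";
  fixed-address 192.168.1.50;
  option subnet-mask 255.255.255.0;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.1;
  option dhcp-lease-time 86400;
  option dhcp-server-identifier 192.168.1.1;
  renew 4 2015/01/22 12:00:00;
  rebind 4 2015/01/22 21:00:00;
  expire 5 2015/01/23 00:00:00;
}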

Related

DBus : Can't get match rules for my user's session bus

I'm trying to use dbus/tools/GetAllMatchRules.py to get diagnostic information. When I run it without parameters as my regular user I get "GetConnectionMatchRules failed: did you enable the Stats interface?"
I modified GetAllMatchRules to print the specific exception details. It now says
GetConnectionMatchRules failed: did you enable the Stats interface?: org.freedesktop.DBus.Error.AccessDenied: The caller does not have the necessary privileged to call this method
So then I'm wondering, does it work at all? So I sudo su and run it again, and it gives me the kind of information I'd expect to see, just not for the right bus. Oddly, if I use the --system parameter, even root gets org.freedesktop.DBus.Error.AccessDenied.
The repository claims, in bus/example-session-disable-stats.conf.in, that
"If the Stats interface was enabled at compile-time, users can use it on
the session bus by default. Systems providing isolation of processes
with LSMs might want to restrict this. This can be achieved by copying
this file in #EXPANDED_SYSCONFDIR#/dbus-1/session.d/
"
But that's clearly not the case because my user can NOT access this information.
I even tried a brute force approach to disabling (commenting out) ALL deny statements at /usr/share/dbus-1/system.conf and reloading and it still doesn't work. I also tried a full system restart in case I wasn't reloading correctly. I also did a system-wide search for system.conf in case it's actually using some other conf file that I'm not seeing, which would mean I'm modifying the wrong thing. I got a big hint that that's not the case when I had a typo (-- instead of --> for commenting out) and it failed to reload, but did reload once I fixed the typo.
I'm ok with the possibility that I can only do this signed in as root, so I also tried modifying GetAllMatchRules to use dbus.bus.BusConnection(), and force-feeding it the session address (unix:path=/run/user/1000/bus) which results in
"org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken."
Incidentally, this is the same issue that happens if I leave the code alone but use sudo -E su instead of just sudo su (the -E option in this case means that the $DBUS_SESSION_BUS_ADDRESS variable is retained)
I'm not sure what to try next...
It turns out there isn't currently a solution: the privilege error is simply the code that was chosen to indicate that the method is an unimplemented stub.

Get signal level of the connected WiFi network

Using wpa_supplicant 2.4 on ARM Debian.
Is there a way to get signal level, in decibels or percents, of the wireless network I’m currently connected to?
STATUS command only returns the following set of values: bssid, freq, ssid, id, mode, pairwise_cipher, group_cipher, key_mgmt, wpa_state, ip_address, p2p_device_address, address, uuid
I can run SCAN afterwards, wait for results and search by SSID. But that’s slow and error-prone, I'd like to do better.
The driver should already know that information (it is connected, and it adjusts transmit levels for power saving); is there a way to just query it?
This question is not about general computing hardware and software. I'm using wpa_supplicant through a C API defined in wpa_ctrl.h header, interacting with the service through a pair of unix domain sockets (one for commands, another one for unsolicited events).
One reason I don't like my current SCAN + SCAN_RESULT solution is that it doesn't work for hidden-SSID networks. The scan doesn't find the network, so I'm not getting the signal level this way. Another issue is a minor visual glitch at application startup. My app is launched by systemd, After=multi-user.target. Unless it's the very first launch, Linux is already connected to Wi-Fi by then. In my app's GUI (the product will feature a touch screen), I render a phone-like status bar that includes a Wi-Fi signal strength icon. Currently, it initially shows the minimum level (I know it's connected because the STATUS command shows the SSID); only after ~1 second do I get the CTRL-EVENT-SCAN-RESULTS event from wpa_supplicant, run the SCAN_RESULT command, and update the signal strength to the correct value.
At the API level my code is straightforward. I have two threads for that: both call wpa_ctrl_open; the command thread calls wpa_ctrl_request; the event thread runs an endless loop that calls poll on the wpa_ctrl_get_fd() descriptor with a POLLIN event mask, followed by wpa_ctrl_pending and wpa_ctrl_recv.
And here's the list of files in /sys/class/net/wlan0:
./mtu
./type
./phys_port_name
./netdev_group
./flags
./power/control
./power/async
./power/runtime_enabled
./power/runtime_active_kids
./power/runtime_active_time
./power/autosuspend_delay_ms
./power/runtime_status
./power/runtime_usage
./power/runtime_suspended_time
./speed
./dormant
./name_assign_type
./proto_down
./addr_assign_type
./phys_switch_id
./dev_id
./duplex
./gro_flush_timeout
./iflink
./phys_port_id
./addr_len
./address
./operstate
./carrier_changes
./broadcast
./queues/rx-0/rps_flow_cnt
./queues/rx-0/rps_cpus
./queues/rx-1/rps_flow_cnt
./queues/rx-1/rps_cpus
./queues/rx-2/rps_flow_cnt
./queues/rx-2/rps_cpus
./queues/rx-3/rps_flow_cnt
./queues/rx-3/rps_cpus
./queues/tx-0/xps_cpus
./queues/tx-0/tx_maxrate
./queues/tx-0/tx_timeout
./queues/tx-0/byte_queue_limits/limit
./queues/tx-0/byte_queue_limits/limit_max
./queues/tx-0/byte_queue_limits/limit_min
./queues/tx-0/byte_queue_limits/hold_time
./queues/tx-0/byte_queue_limits/inflight
./queues/tx-1/xps_cpus
./queues/tx-1/tx_maxrate
./queues/tx-1/tx_timeout
./queues/tx-1/byte_queue_limits/limit
./queues/tx-1/byte_queue_limits/limit_max
./queues/tx-1/byte_queue_limits/limit_min
./queues/tx-1/byte_queue_limits/hold_time
./queues/tx-1/byte_queue_limits/inflight
./queues/tx-2/xps_cpus
./queues/tx-2/tx_maxrate
./queues/tx-2/tx_timeout
./queues/tx-2/byte_queue_limits/limit
./queues/tx-2/byte_queue_limits/limit_max
./queues/tx-2/byte_queue_limits/limit_min
./queues/tx-2/byte_queue_limits/hold_time
./queues/tx-2/byte_queue_limits/inflight
./queues/tx-3/xps_cpus
./queues/tx-3/tx_maxrate
./queues/tx-3/tx_timeout
./queues/tx-3/byte_queue_limits/limit
./queues/tx-3/byte_queue_limits/limit_max
./queues/tx-3/byte_queue_limits/limit_min
./queues/tx-3/byte_queue_limits/hold_time
./queues/tx-3/byte_queue_limits/inflight
./tx_queue_len
./uevent
./statistics/rx_fifo_errors
./statistics/collisions
./statistics/rx_errors
./statistics/rx_compressed
./statistics/rx_dropped
./statistics/tx_packets
./statistics/tx_errors
./statistics/rx_missed_errors
./statistics/rx_over_errors
./statistics/tx_carrier_errors
./statistics/tx_heartbeat_errors
./statistics/rx_crc_errors
./statistics/multicast
./statistics/tx_fifo_errors
./statistics/tx_aborted_errors
./statistics/rx_bytes
./statistics/tx_compressed
./statistics/tx_dropped
./statistics/rx_packets
./statistics/tx_bytes
./statistics/tx_window_errors
./statistics/rx_frame_errors
./statistics/rx_length_errors
./dev_port
./ifalias
./ifindex
./link_mode
./carrier
You can get the signal level of the connected Wi-Fi network with the wpa_supplicant command SIGNAL_POLL.
wpa_supplicant returns something like:
RSSI=-60
LINKSPEED=867
NOISE=9999
FREQUENCY=5745
The RSSI value is the signal level, in dBm.
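Since the question already drives wpa_supplicant through the wpa_ctrl.h C API, a minimal sketch of issuing SIGNAL_POLL over that API might look like this (the control-socket path /var/run/wpa_supplicant/wlan0 is an assumption; adjust it, and add real error handling, for your setup):
#include <stdio.h>
#include <string.h>
#include "wpa_ctrl.h"

int main(void)
{
    /* Path to the per-interface control socket; adjust for your system. */
    struct wpa_ctrl *ctrl = wpa_ctrl_open("/var/run/wpa_supplicant/wlan0");
    if (ctrl == NULL) {
        fprintf(stderr, "cannot open wpa_supplicant control interface\n");
        return 1;
    }

    char reply[4096];
    size_t reply_len = sizeof(reply) - 1;
    if (wpa_ctrl_request(ctrl, "SIGNAL_POLL", strlen("SIGNAL_POLL"),
                         reply, &reply_len, NULL) == 0) {
        reply[reply_len] = '\0';
        /* Reply is a set of KEY=value lines; RSSI is the level in dBm. */
        printf("%s", reply);
    }

    wpa_ctrl_close(ctrl);
    return 0;
}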
You can also get the signal level of the connected Wi-Fi network with the wpa_supplicant command BSS <bssid>.
You can get the BSSID of the connected network from the wpa_supplicant STATUS command.
https://android.googlesource.com/platform/external/wpa_supplicant_8/+/622b66d6efd0cccfeb8623184fadf2f76e7e8206/wpa_supplicant/ctrl_iface.c#1986
For iw-compatible devices:
The following command gives the signal strength of the current station (i.e. the AP):
iw dev wlp2s0 station dump -v
If you need a C API, dig into the source code of iw.
After a quick glance, the function you need is here.
For Broadcom devices, try searching for "broadcom wl". It is closed source; I don't know whether a C API is provided.

Can't find "syslog.conf" on a Linux 2.6.37.6 system created with BusyBox v1.19.3

I created a tiny OS for my controller with Linux kernel 2.6.37.6 with the help of BusyBox and a toolchain. I am writing a logging module (a C program) for it, and I want customized logs (a custom path for different logs), e.g. under /log/.
I have syslogd on my machine, and /etc/syslog.conf is supposed to be present, but it's not there. I created a new syslog.conf under /etc, but I still can't find my logs in the desired place.
But if I run the command syslogd -O /log/Controller.log, all logs start being redirected to the specified file. So I want to know where the configuration file for this syslogd is; I can't find it.
Is there any way I can write a module (program) for logs without requiring syslog.conf, other than the traditional printf way? The problem is that, for a customized log path, we need to pass LOG_LOCAL1 to openlog() as an argument, but it's not working.
I followed the procedure from this example: http://www.codealias.info/technotes/syslog_simple_example
If you are using BusyBox's syslogd, then there is no support for syslog.conf; all logs are written to /var/log/messages by default.
You can modify the code of syslogd in BusyBox, located in busybox/sysklogd/syslogd.c, to get your desired behaviour.
For example, you can change the default log path like this:
static const struct init_globals init_data = {
	.logFile = {
		.path = "/your/desired/path",   /* the default is /var/log/messages */
		.fd = -1,
	},
	/* ... the rest of the initializer stays unchanged ... */
};

Write custom timestamp into syslog using syslog.h

My program gets events from remote systems; every event contains a timestamp.
I want to log these events to syslog using the event timestamp instead of the system time.
Is there any way to send a custom header to the syslog daemon?
I'm using rsyslog on Debian.
EDIT:
The "events" are generated by some "bare-metal" devices.
My application is a gateway between a real-time Ethernet (Ethernet POWERLINK) and a normal network.
I want to save them with microsecond precision, because it's important to know in which sequence they occurred.
So I need the exact timestamps created by the bare-metal devices.
I'd like to put these events into syslog.
I have not found any lib (except syslog.h) to write into syslog.
Do I really need to build the messages myself and send them to the rsyslog daemon?
No, don't open that can of worms.
If you allow the sender to specify the timestamp, you allow an attacker to spoof the timestamps of events they wish to hide. That kind of defeats the entire purpose (security-wise) of using a separate machine for logging.
What you can do, however, is compare the current time and the event timestamp, and include the difference at the start of every logged message, using something like
#include <time.h>
#include <syslog.h>

/* ... inside your logging function: */
struct timespec now;
struct timespec timestamp;        /* event timestamp received from the remote device */
double delta;
int priority = facility | level;  /* facility and level as chosen by your application */
const char *message = "...";      /* the event text to log */

clock_gettime(CLOCK_REALTIME, &now);
delta = difftime(timestamp.tv_sec, now.tv_sec)
      + ((double)timestamp.tv_nsec - now.tv_nsec) / 1000000000.0;
syslog(priority, "[%+.0fs] %s\n", delta, message);
On a typically configured Linux machine, that should produce something similar to
Jan 18 08:01:02 hostname service: [-1s] Original message
assuming the message took at least half a second to arrive. If hostname has its clock running fast, the delta would be positive. Normally, the delta is zero. In the case of a very slow network, the delta is negative, since the original event happened in the past relative to the timestamp shown.
If you already have infrastructure in place to monitor the logged messages, you can have a daemon or a cron script read the log files, and generate new log files (not via syslog(), but simply with string and file operations) with the timestamps adjusted by the specified delta. However, that must be done with extreme care, recognizing unacceptable or unexpectedly changing deltas, or maybe flagging them somehow.
If you write your own log file monitoring/display widgets, then you can very easily let the user switch between "actual" (syslog) and "derived" (syslog + delta) timestamps, as the delta is trivial to extract from the logged lines if it is always present; even then, you must be careful to let the user know if a delta is out of bounds or changes unexpectedly, as such a change is almost always informative to the user. (If it is not nefarious, it does mean there is something iffy with the machine's timekeeping; time should not just jump around. Even NTP adjustments should be quite smooth.)
If you insist on opening that can of worms, just produce your own log files. Many applications do. It's not like syslog() was a magic bullet or a strict requirement for reliable logging, after all.
If your log-receiving application runs as a specific user and group, you can create /var/log/yourlogs/ owned by root user and that group, and save your log files there. Set the directory mode to 02770 (drwxrws--- or u=rwx,g=rwxs,o=), and all files created in that directory will automatically be owned by the same group (that's what the setgid bit, s, does for directories). You just need to make sure your service sets umask to 002 (and uses 0666 or 0660 mode flags when creating log files), so that they stay group-readable and group-writable.
Log rotation (archiving and/or deleting old log files, mailing logs) is usually a separate service, provided by the logrotate package, and configured by dropping a service-specific configuration file in /etc/logrotate.d/ at installation time. In other words, even if you write your own log files, do not rotate them; use the existing service for this. It makes life much easier for your users, us system administrators. (Note: Setting umask 002 at the start of the log rotate scripts is very useful in the above directory case; created files will then be group-writable. umask 022 will make them group-read-only.)
OK, I've solved this by enabling networking support (TCP) and the microsecond timer in the rsyslog configuration.
Following RFC 5424, my application builds raw syslog messages and sends them via TCP (port 514) to the daemon.
Thanks to Nominal Animal, but I have no choice...
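For illustration, a minimal C sketch of that approach could look like the following. The facility/severity (local0.info, PRI 134), the hostname "mygateway", the app name "myapp" and the plain newline framing over TCP are assumptions made for the example; adjust them to your rsyslog input (e.g. imtcp on port 514).
/*
 * Minimal sketch (not production code): build an RFC 5424 message whose
 * timestamp comes from the event, with microsecond precision, and push it
 * to the syslog daemon over TCP.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int send_event(const char *host, const char *port,
                      struct timespec event_ts, const char *text)
{
    char stamp[64], msg[1024];
    struct tm tm;

    /* RFC 3339 timestamp with a microsecond fraction (UTC for simplicity) */
    gmtime_r(&event_ts.tv_sec, &tm);
    snprintf(stamp, sizeof(stamp), "%04d-%02d-%02dT%02d:%02d:%02d.%06ldZ",
             tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
             tm.tm_hour, tm.tm_min, tm.tm_sec, event_ts.tv_nsec / 1000);

    /* PRI 134 = facility local0 (16) * 8 + severity info (6); VERSION is 1 */
    snprintf(msg, sizeof(msg), "<134>1 %s mygateway myapp - - - %s\n",
             stamp, text);

    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return -1;
    }
    write(fd, msg, strlen(msg));   /* newline-delimited framing assumed */
    close(fd);
    freeaddrinfo(res);
    return 0;
}
Because the timestamp field is built from the event's own struct timespec, the microsecond value reported by the bare-metal device is preserved in the logged message.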
You can write a raw log message to the /dev/log file. This is the Unix domain socket from which the syslog server reads messages, as they are written by the syslog() function.
I'm not sure about portability, since the message format written by syslog() does not seem to follow RFC 5424. I can only share my findings with BusyBox and its syslogd and nc utilities.
The syslog() function writes messages as datagrams of the form <PRI>Mon DD HH:MM:SS message, where PRI is the priority, i.e. a decimal number computed as facility | severity, followed by a timestamp and the message.
With nc -u local:/dev/log, you can write datagrams to the domain socket directly. For example, writing <84>Apr 3 07:27:20 hello world results in an Apr 3 07:27:20 hostname authpriv.warn hello world line in /var/log/messages.
Then you are free to extend the timestamp with microsecond precision. However, you need to make sure your syslog server implementation accepts such a form; in the case of BusyBox, I had to modify the source code.
Note: BusyBox needs to be configured with the CONFIG_NC_EXTRA, CONFIG_NC_110_COMPAT and CONFIG_FEATURE_UNIX_LOCAL options enabled to allow opening /dev/log with nc.
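If you would rather do this from C than with nc, a minimal sketch of writing such a datagram to /dev/log could look like this (the PRI value, message text and microsecond-extended timestamp are just examples; as noted above, whether the daemon accepts the extended timestamp depends on its implementation):
/*
 * Minimal sketch (not production code): write one BSD-style datagram
 * straight to /dev/log, bypassing syslog(3), so the timestamp field is
 * entirely under our control. PRI 134 here means local0.info.
 */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/dev/log", sizeof(addr.sun_path) - 1);

    /* Example message with a microsecond-extended timestamp. */
    const char *msg = "<134>Apr  3 07:27:20.123456 hello world";
    sendto(fd, msg, strlen(msg), 0,
           (const struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return 0;
}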

apache2 FastCGI comm with dynamic server aborted first read idle timeout

Summary: Unable to run even the simplest "Hello World" FastCGI script; every request terminates in a timeout. It seems there is no communication at all between the server and the FastCGI scripts (using dynamic FastCGI scripts).
The environment
Ubuntu Precise (12.04)
Package apache2.2-bin
Package apache2-mpm-prefork
Package libapache2-mod-fastcgi
Package libfcgi-perl
Package python-flup
Multiple sites configured as virtual hosts on 127.0.0.1
There exists a /var/lib/apache2/fastcgi directory, owned by www-data, readable by all (owner, group and others)
There exists a /var/lib/apache2/fastcgi/dynamic directory, owned by www-data, which is restricted to the owner (readable, writable and accessible by www-data only)
There exists an inode/socket file in the /var/lib/apache2/fastcgi/ directory
The FastCGI relevant configurations:
The directory /etc/apache2/mods-enabled/ holds a reference to fastcgi.conf and fastcgi.load (mod_fastcgi is enabled).
The file fastcgi.conf contains the following (left untouched, I did not edit it):
<IfModule mod_fastcgi.c>
  AddHandler fastcgi-script .fcgi
  #FastCgiWrapper /usr/lib/apache2/suexec
  FastCgiIpcDir /var/lib/apache2/fastcgi
</IfModule>
The relevant configuration file in /etc/apache2/sites-enabled/ contains the following (there is nothing more anywhere else about FastCGI specific configuration):
<DirectoryMatch /fcgi-bin>
  Options +ExecCGI
  <FilesMatch "^[^\.]+$">
    SetHandler fastcgi-script
  </FilesMatch>
</DirectoryMatch>
The test materials on the test virtual host:
There exists a fcgi-bin/test-perl.fcgi whose content is as follows (the file is executable by all, and readable by owner and group):
#!/usr/bin/perl
use CGI::Fast qw(:standard);
$COUNTER = 0;
while (new CGI::Fast) {
    print header;
    print start_html("Fast CGI Rocks");
    print
        h1("Fast CGI Rocks"),
        "Invocation number ", b($COUNTER++),
        " PID ", b($$), ".",
        hr;
    print end_html;
}
There exists a fcgi-bin/test-python.fcgi whose content is as follows (the file is executable by all, and readable by owner and group):
#!/usr/bin/python
def myapp(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello World!\n']
try:
    from flup.server.fcgi import WSGIServer
    WSGIServer(myapp).run()
except:
    import sys, traceback
    traceback.print_exc(file=open("errlog.txt","a"))
The issue
Although both fcgi-bin/test-perl.fcgi and fcgi-bin/test-python.fcgi run normally when executed from the command line, neither seems to work when invoked through the web server, e.g. as http://test.loc/fcgi-bin/test-perl.fcgi or http://test.loc/fcgi-bin/test-python.fcgi.
Nothing at all happens, and after some delay I get an Error 500, and the Apache error log contains multiple entries looking like:
[<date>] [error] [client <IP>] FastCGI: comm with (dynamic) server "/<…>/fcgi-bin/<script>.fcgi" aborted: (first read) idle timeout (30 sec), referer: <referrer>
[<date>] [error] [client <IP>] FastCGI: incomplete headers (0 bytes) received from server "<…>/fcgi-bin/<script>.fcgi", referer: <referrer>
I've spent hours and hours searching the web trying to understand why it does not work, and finally decided to give up and ask for some help here.
Any pointers and checklists are welcome. Feel free to ask for any missing details you may feel are relevant or worth checking.
Enjoy a nice day.
-- edit --
Issue update
In my answer to my own question, I mentioned a weird case where things suddenly looked fine for no apparent reason. I later discovered this was only partly true.
In the same virtual host, so with the exact same server configuration, some scripts, which are exactly the same (and have the exact same access rights), fail depending on their location.
As a reminder, here is what's in the site configuration:
<DirectoryMatch /fcgi-bin>
  Options +ExecCGI
  <FilesMatch "^[^\.]+$">
    SetHandler fastcgi-script
  </FilesMatch>
</DirectoryMatch>
With the above, only scripts in /fcgi-bin are handled as FastCGI scripts. But I also have some elsewhere (still for testing): one in /cgi-bin and one in / (i.e. in the public_html directory). For this purpose, .htaccess contains this entry:
Options +ExecCGI
AddHandler fastcgi-script .fcgi
So the two other FastCGI scripts should work the same as the one in /fcgi-bin, but they don't; for the time being, they invariably terminate with a connection timeout, just like the one in /fcgi-bin did at first.
This makes me feel something may be wrong with the mod_fastcgi module (a known bug? something else?). So far, this module seems to act rather randomly.
-- edit 2 --
The above, in the first edit, was an error of mine: the group was wrong on the other scripts; it had to be www-data, but it was not. So if something is wrong, stick to the answer I gave, that is, try looking at FastCgiConfig and see whether it solves anything, or at least whether it honours the timeout options.
I will answer my own question, as it seems to be working now. However, the epilogue still looks weird.
Although the default configuration should be OK, I still wanted to review the “Module mod_fastcgi” document again. As I only wanted a dynamic FastCGI, I focused on the FastCgiConfig directive only, thus on purpose not going into FastCgiServer and FastCgiExternalServer directives.
As there was no FastCgiServer at all in the default fastcgi.conf file, I started trying to set up my own. For a first test, I wanted to use the -appConnTimeout option, at least to ask the server not to wait so long before returning an Error 500.
So I just added this to the site configuration (I did not touch fastcgi.conf), in the same file where the virtual hosts are configured:
FastCgiConfig -appConnTimeout 2
This was to tell the server to wait no more than 2 seconds, instead of the 30 seconds it was waiting. I invoked a FastCGI script to see if at least this configuration was working. I expected to get an error after a 2-second delay, but instead the script ran without error.
What's weird is that I then tried removing this option, to check whether that addition alone had been what was missing to make the FastCGI scripts work. But after I commented out the option, it was still working, and the same after a full reboot.
I can't tell more; it looks weird, but this is the only thing I did, and I did not edit anything else. I can only suggest that people who encounter a similar issue try the above.
Sorry that I can't explain what it did exactly; I really would like to know. It just works now, but I don't know why.
#############
fastcgi.conf
FastCgiWrapper Off
peng.rl's answer solved my problem.
My Ceph radosgw couldn't get Apache's input at all. After setting FastCgiWrapper Off, I could capture data in Wireshark.
