How to log to /var/log/mail.log using rsyslogd?

I am currently playing around with logging under Linux. I have made the following very simple test application (in plain C):
#include <syslog.h>

int main(int argc, char **argv)
{
    openlog("mytest", (LOG_PID | LOG_NDELAY), LOG_MAIL);
    syslog(LOG_MAKEPRI(LOG_MAIL, LOG_ERR), "Test");
    return(0);
}
This "application" compiles, and when I execute it, it generates an entry in /var/log/syslog, but no entry in /var/log/mail.log and no entry in /var/log/mail.err.
Could somebody please explain why?
I am using rsyslogd on the test machine; this is the configuration from /etc/rsyslog.conf (please note that /etc/rsyslog.d is just empty, and that I have stripped out all comments and lines which clearly don't have anything to do with the problem):
:msg,startswith, " fetching user_deny" ~
:msg,startswith, " seen_db: user " ~
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
user.* -/var/log/user.log
mail.info -/var/log/mail.info
mail.warn -/var/log/mail.warn
mail.err /var/log/mail.err
*.=debug;\
auth,authpriv.none;\
news.none;mail.none -/var/log/debug
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
*.emerg :omusrmsg:*
daemon.*;mail.*;\
news.err;\
*.=debug;*.=info;\
*.=notice;*.=warn |/dev/xconsole
As far as I have understood from reading man rsyslog.conf, that configuration should make rsyslogd write messages for LOG_MAIL with priority LOG_ERR to /var/log/mail.err. I am somewhat mistrustful regarding the lines where the file path has a - prepended, though. I don't know what this means because I could not find any hint in the manual.
So what is going wrong?

I hate answering my own question, but I think I have found a bug either in the documentation or in the source of glibc, and I'd like to have it documented for future visitors of this question.
From https://www.gnu.org/software/libc/manual/html_node/syslog_003b-vsyslog.html#syslog_003b-vsyslog (as per the time of this writing):
syslog submits the message with the facility and priority indicated by
facility_priority. The macro LOG_MAKEPRI generates a facility/priority
from a facility and a priority, as in the following example:
LOG_MAKEPRI(LOG_USER, LOG_WARNING)
Now look at some code from syslog.h as it resides on my test machine (Debian wheezy, up-to-date, no custom patches, non-relevant parts stripped out):
/*
* priorities/facilities are encoded into a single 32-bit quantity, where the
* bottom 3 bits are the priority (0-7) and the top 28 bits are the facility
* (0-big number). Both the priorities and the facilities map roughly
* one-to-one to strings in the syslogd(8) source code. This mapping is
* included in this file.
*
* priorities (these are ordered)
*/
#define LOG_EMERG 0 /* system is unusable */
#define LOG_ALERT 1 /* action must be taken immediately */
#define LOG_CRIT 2 /* critical conditions */
#define LOG_ERR 3 /* error conditions */
#define LOG_WARNING 4 /* warning conditions */
#define LOG_NOTICE 5 /* normal but significant condition */
#define LOG_INFO 6 /* informational */
#define LOG_DEBUG 7 /* debug-level messages */
#define LOG_MAKEPRI(fac, pri) (((fac) << 3) | (pri))
/* facility codes */
#define LOG_KERN (0<<3) /* kernel messages */
#define LOG_USER (1<<3) /* random user-level messages */
#define LOG_MAIL (2<<3) /* mail system */
#define LOG_DAEMON (3<<3) /* system daemons */
#define LOG_AUTH (4<<3) /* security/authorization messages */
#define LOG_SYSLOG (5<<3) /* messages generated internally by syslogd */
#define LOG_LPR (6<<3) /* line printer subsystem */
#define LOG_NEWS (7<<3) /* network news subsystem */
#define LOG_UUCP (8<<3) /* UUCP subsystem */
#define LOG_CRON (9<<3) /* clock daemon */
#define LOG_AUTHPRIV (10<<3) /* security/authorization messages (private) */
#define LOG_FTP (11<<3) /* ftp daemon */
/* other codes through 15 reserved for system use */
#define LOG_LOCAL0 (16<<3) /* reserved for local use */
#define LOG_LOCAL1 (17<<3) /* reserved for local use */
#define LOG_LOCAL2 (18<<3) /* reserved for local use */
#define LOG_LOCAL3 (19<<3) /* reserved for local use */
#define LOG_LOCAL4 (20<<3) /* reserved for local use */
#define LOG_LOCAL5 (21<<3) /* reserved for local use */
#define LOG_LOCAL6 (22<<3) /* reserved for local use */
#define LOG_LOCAL7 (23<<3) /* reserved for local use */
We obviously have multiple problems here.
First, the comment at the top: if the bottom 3 bits hold the priority, then there are 29 top bits (and not 28). But this is a minor problem.
Second, the facility codes are already defined shifted left by three bits. Using the macro LOG_MAKEPRI (as recommended by the manual page linked above) shifts them left by three bits a second time, which is clearly wrong.
SOLUTION
The solution is simple: Don't use that macro; instead, just OR the facility code and the priority code. I have tried that, and it worked. Error messages from my test programs are now logged as expected, according to the configuration of rsyslogd in /etc/rsyslog.conf.
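For future visitors, this is a minimal sketch of the corrected test program (identical to the original, only the syslog() call changed):

#include <syslog.h>

int main(int argc, char **argv)
{
    openlog("mytest", (LOG_PID | LOG_NDELAY), LOG_MAIL);
    /* OR the already-shifted facility code with the priority
       instead of using LOG_MAKEPRI() */
    syslog(LOG_MAIL | LOG_ERR, "Test");
    return(0);
}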
I am quite surprised about that very obvious bug in syslog.h ...

Related

Result of bitwise OR operator in c

I have a question about the bitwise OR (|) operator, going from C to Rust.
I have a lot of these defines to convert from C to Rust, and I'm not sure about the flag SEFLG_ASTROMETRIC.
#define SEFLG_JPLEPH 1 /* use JPL ephemeris */
#define SEFLG_SWIEPH 2 /* use SWISSEPH ephemeris */
#define SEFLG_MOSEPH 4 /* use Moshier ephemeris */
#define SEFLG_HELCTR 8 /* heliocentric position */
#define SEFLG_TRUEPOS 16 /* true/geometric position, not apparent position */
#define SEFLG_J2000 32 /* no precession, i.e. give J2000 equinox */
#define SEFLG_NONUT 64 /* no nutation, i.e. mean equinox of date */
#define SEFLG_SPEED3 128 /* speed from 3 positions (do not use it,
* SEFLG_SPEED is faster and more precise.) */
#define SEFLG_SPEED 256 /* high precision speed */
#define SEFLG_NOGDEFL 512 /* turn off gravitational deflection */
#define SEFLG_NOABERR 1024 /* turn off 'annual' aberration of light */
#define SEFLG_ASTROMETRIC (SEFLG_NOABERR|SEFLG_NOGDEFL) /* astrometric position,
* i.e. with light-time, but without aberration and
* light deflection */
#define SEFLG_EQUATORIAL (2*1024) /* equatorial positions are wanted */
#define SEFLG_XYZ (4*1024) /* cartesian, not polar, coordinates */
#define SEFLG_RADIANS (8*1024) /* coordinates in radians, not degrees */
#define SEFLG_BARYCTR (16*1024) /* barycentric position */
#define SEFLG_TOPOCTR (32*1024) /* topocentric position */
#define SEFLG_ORBEL_AA SEFLG_TOPOCTR /* used for Astronomical Almanac mode in
* calculation of Kepler ellipses */
#define SEFLG_SIDEREAL (64*1024) /* sidereal position */
#define SEFLG_ICRS (128*1024) /* ICRS (DE406 reference frame) */
#define SEFLG_DPSIDEPS_1980 (256*1024) /* reproduce JPL Horizons
* 1962 - today to 0.002 arcsec. */
#define SEFLG_JPLHOR SEFLG_DPSIDEPS_1980
#define SEFLG_JPLHOR_APPROX (512*1024) /* approximate JPL Horizons 1962 - today */
1024 | 512
I tried 1024 OR 512 with the calculator on my Mac and the result is 1536. Is this result correct?
Yes, the result is correct: in fact, OR-ing 1024 and 512 yields the same number as adding them.
This is a convenient "shortcut" for OR-ing powers of two: since their digits never occupy the same position, addition is the same as "OR".
The same principle is at play as when adding decimal numbers whose non-zero digits do not overlap: adding, say, 6000+900+30+7 is the same as writing the individual digits down as 6937, because the place values do not interact with each other. OR-ing distinct powers of two in binary is likewise just writing a 1 in each position named by the numbers being OR-ed.
Note that the expression (SEFLG_NOABERR|SEFLG_NOGDEFL) is evaluated at compile time, so replacing it with 1536 is not going to yield any run-time improvement.
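For illustration, here is a minimal self-contained C11 snippet (the SEFLG_* values are copied from the question) that lets the compiler confirm the equivalence at build time:

#include <assert.h>

#define SEFLG_NOGDEFL 512   /* turn off gravitational deflection     */
#define SEFLG_NOABERR 1024  /* turn off 'annual' aberration of light */
#define SEFLG_ASTROMETRIC (SEFLG_NOABERR|SEFLG_NOGDEFL)

/* Both checks hold because 512 and 1024 have no overlapping bits. */
static_assert(SEFLG_ASTROMETRIC == 1536, "OR of the two flags is 1536");
static_assert((SEFLG_NOABERR | SEFLG_NOGDEFL) == (SEFLG_NOABERR + SEFLG_NOGDEFL),
              "OR equals + when no bits overlap");

int main(void) { return 0; }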
In a word - yes. Note that 1024 and 512 don't have "1" bits in the same positions, so the result of bitwise-oring them is the same as adding them.
Yes it is correct.
512 =01000000000
1024=10000000000
1536=11000000000
OR-Calculation:
10000000000
|01000000000
=11000000000

XErrorEvent structure field meaning

I'm currently having some issues with Xlib and CEF and I need to investigate the XErrorEvent that is sent to the function registered with XSetErrorHandler.
typedef struct {
    int type;
    Display *display;           /* Display the event was read from */
    XID resourceid;             /* resource id */
    unsigned long serial;       /* serial number of failed request */
    unsigned char error_code;   /* error code of failed request */
    unsigned char request_code; /* Major op-code of failed request */
    unsigned char minor_code;   /* Minor op-code of failed request */
} XErrorEvent;
I would like to know the meaning of the type, request_code, and minor_code fields. There is a book on the C language interface for the X Window System, but I couldn't find anything about these fields.
type is what identifies a typeless memory pointer as a pointer to an XErrorEvent; its value is always X_Error.
request_code is the protocol request code of the request that failed, as defined in X11/Xproto.h; basically, it tells you what kind of request generated the error (line 2020 and onward):
/* Request codes */
#define X_CreateWindow 1
#define X_ChangeWindowAttributes 2
#define X_GetWindowAttributes 3
#define X_DestroyWindow 4
#define X_DestroySubwindows 5
#define X_ChangeSaveSet 6
#define X_ReparentWindow 7
#define X_MapWindow 8
...
minor_code is similar to request_code, except that it is used by extensions. Each extension gets its own request_code in the range 128-255, and the minor_code identifies a particular request defined by that extension. So X11 supports up to 128 extensions, and each extension may define up to 255 requests. The exact paragraph:
Each extension is assigned a single opcode from that range, also known
as it's “major opcode.” For each operation provided by that extension,
typically a second byte is used as a “minor opcode.” Minor opcodes for
each extension are defined by the extension.
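To make those fields concrete, here is a minimal sketch of a custom error handler (the handler name and the deliberately invalid window ID are my own choices for illustration, not from the original post):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xproto.h>     /* X_Error and the X_* request codes */

static int my_error_handler(Display *display, XErrorEvent *ev)
{
    char text[256];

    /* Translate error_code into a human-readable string. */
    XGetErrorText(display, ev->error_code, text, sizeof text);

    fprintf(stderr,
            "X error: %s (type=%d, X_Error=%d, error_code=%d, "
            "request_code=%d, minor_code=%d, resourceid=0x%lx, serial=%lu)\n",
            text, ev->type, X_Error, ev->error_code,
            ev->request_code, ev->minor_code,
            (unsigned long)ev->resourceid, (unsigned long)ev->serial);

    return 0;   /* the return value is ignored by Xlib */
}

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    XSetErrorHandler(my_error_handler);

    /* Provoke an error on purpose: destroy a window ID that does not exist. */
    XDestroyWindow(dpy, (Window)0xDEADBEEF);
    XSync(dpy, False);      /* flush so the error is actually delivered */

    XCloseDisplay(dpy);
    return 0;
}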

DT_USED entry in .dynamic section of ELF file

I am curious about the DT_USED entry in the .dynamic section. However, I could only find two code examples that describe this entry.
1.
#define DT_USED 0x7ffffffe /* ignored - same as needed */
in https://github.com/switchbrew/switch-tools/blob/master/src/elf_common.h
2.
case DT_USED:
case DT_INIT_ARRAY:
case DT_FINI_ARRAY:
  if (do_dynamic)
    {
      if (entry->d_tag == DT_USED
          && VALID_DYNAMIC_NAME (entry->d_un.d_val))
        {
          char *name = GET_DYNAMIC_NAME (entry->d_un.d_val);

          if (*name)
            {
              printf (_("Not needed object: [%s]\n"), name);
              break;
            }
        }

      print_vma (entry->d_un.d_val, PREFIX_HEX);
      putchar ('\n');
    }
  break;
in http://web.mit.edu/freebsd/head/contrib/binutils/binutils/readelf.c
I want to know: what is the meaning of "Not needed object"? Does it mean that the file names listed there are not needed?
In general, when looking at Solaris dynamic linker features, it is possible to find more information in the public Illumos sources (which were once derived from OpenSolaris). In this case, it seems that DT_USED is always treated like DT_NEEDED, so they are essentially the same thing. One of the header files, usr/src/uts/common/sys/link.h, also contains this:
/*
* DT_* entries between DT_HIPROC and DT_LOPROC are reserved for processor
* specific semantics.
*
* DT_* encoding rules apply to all tag values larger than DT_LOPROC.
*/
#define DT_LOPROC 0x70000000 /* processor specific range */
#define DT_AUXILIARY 0x7ffffffd /* shared library auxiliary name */
#define DT_USED 0x7ffffffe /* ignored - same as needed */
#define DT_FILTER 0x7fffffff /* shared library filter name */
#define DT_HIPROC 0x7fffffff
Something may have been planned here, but it doesn't seem to be implemented (or it used to be and no longer is).
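For completeness, here is a minimal sketch (assuming glibc/Linux, where <link.h> declares _DYNAMIC) that walks the running program's own .dynamic array; if a DT_USED tag were present, its value would be a string-table offset exactly like DT_NEEDED's:

#include <elf.h>
#include <link.h>    /* ElfW() macro and the _DYNAMIC declaration (glibc) */
#include <stdio.h>

int main(void)
{
    /* The array is terminated by a DT_NULL entry. */
    for (ElfW(Dyn) *d = _DYNAMIC; d->d_tag != DT_NULL; d++) {
        if (d->d_tag == DT_NEEDED)
            printf("DT_NEEDED entry, string-table offset %lu\n",
                   (unsigned long)d->d_un.d_val);
#ifdef DT_USED  /* not defined by every libc's <elf.h> */
        else if (d->d_tag == DT_USED)   /* handled the same as DT_NEEDED */
            printf("DT_USED entry, string-table offset %lu\n",
                   (unsigned long)d->d_un.d_val);
#endif
    }
    return 0;
}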

uC/OS hardware register read

I am trying to use the low power timer on the Freescale FRDM-K64F, with IAR as the development environment and uC/OS as the operating system. I read the manual and found that the timer is controlled by means of a bunch of registers, but I am running into trouble with reading and writing those registers.
At first I am just trying to read a register with the code attached below. The code gets stuck as soon as the read line is reached: the line "after read" is never printed on the terminal. Stepping through with the debugger, I found out that the hardfault_handler exception is raised. I am not a software expert, so: does anyone have an idea what the problem is here? It should be something related to the OS, but I can't understand what. As I said, I am not a software expert, so if I have forgotten to mention something important, please let me know. Thanks in advance.
#include "fsl_interrupt_manager.h"
#include <math.h>
#include <lib_math.h>
#include <cpu_core.h>
#include <app_cfg.h>
#include <os.h>
#include <fsl_os_abstraction.h>
#include <system_MK64F12.h>
#include <board.h>
#include <bsp_ser.h>
/*
*********************************************************************************************************
* LOCAL DEFINES
*********************************************************************************************************
*/
/*
*********************************************************************************************************
* LOCAL GLOBAL VARIABLES
*********************************************************************************************************
*/
static OS_TCB AppTaskStartTCB;
static CPU_STK AppTaskStartStk[APP_CFG_TASK_START_STK_SIZE];
/*
*********************************************************************************************************
* LOCAL FUNCTION PROTOTYPES
*********************************************************************************************************
*/
static void AppTaskStart (void *p_arg);
int main (void)
{
    OS_ERR err;
#if (CPU_CFG_NAME_EN == DEF_ENABLED)
    CPU_ERR cpu_err;
#endif

    hardware_init();
    GPIO_DRV_Init(switchPins, ledPins);

#if (CPU_CFG_NAME_EN == DEF_ENABLED)
    CPU_NameSet((CPU_CHAR *)"MK64FN1M0VMD12",
                (CPU_ERR  *)&cpu_err);
#endif

    OSA_Init();                                   /* Init uC/OS-III. */

    OSTaskCreate(&AppTaskStartTCB,                /* Create the start task */
                 "App Task Start",
                 AppTaskStart,
                 0u,
                 APP_CFG_TASK_START_PRIO,
                 &AppTaskStartStk[0u],
                 (APP_CFG_TASK_START_STK_SIZE / 10u),
                 APP_CFG_TASK_START_STK_SIZE,
                 0u,
                 0u,
                 0u,
                 (OS_OPT_TASK_STK_CHK | OS_OPT_TASK_STK_CLR | OS_OPT_TASK_SAVE_FP),
                 &err);

    OSA_Start();                                  /* Start multitasking (i.e. give control to uC/OS-III). */

    while (DEF_ON) {                              /* Should Never Get Here */
        ;
    }
}

static void AppTaskStart (void *p_arg)
{
    OS_ERR os_err;
    (void)p_arg;                                  /* See Note #1 */
    char string[800];

    CPU_Init();                                   /* Initialize the uC/CPU Services. */
    Mem_Init();                                   /* Initialize the Memory Management Module */
    Math_Init();                                  /* Initialize the Mathematical Module */

    int *PSR = (int *) 0x42040004;

    BSP_Ser_Init(115200u);
    APP_TRACE_DBG(("Blinking RGB LED...\n\r"));

    int value;
    APP_TRACE_DBG(("Before read\n"));
    value = *PSR;
    APP_TRACE_DBG(("After read\n"));
    APP_TRACE_DBG(("Before sprintf\n"));
    sprintf(string, "Value is %d\n", value);
    APP_TRACE_DBG(("After sprintf\n"));
}

Using ENUMs as bitmaps, how to validate in C

I am developing firmware for an embedded application with memory constraints. I have a set of commands that need to be processed as they are received. Each command falls into one of several 'buckets', and each 'bucket' gets a range of valid command numbers. I created the two enums shown below to achieve this.
enum
{
    BUCKET_1 = 0x100,    // Range of 0x100 to 0x1FF
    BUCKET_2 = 0x200,    // Range of 0x200 to 0x2FF
    BUCKET_3 = 0x300,    // Range of 0x300 to 0x3FF
    ...
    ...
    BUCKET_N = 0xN00     // Range of 0xN00 to 0xNFF
} cmd_buckets;
enum
{
    //BUCKET_1 commands
    CMD_BUCKET_1_START = BUCKET_1,
    BUCKET_1_CMD_1,
    BUCKET_1_CMD_2,
    BUCKET_1_CMD_3,
    BUCKET_1_CMD_4,
    //Add new commands above this line
    BUCKET_1_CMD_MAX,

    //BUCKET_2 commands
    CMD_BUCKET_2_START = BUCKET_2,
    BUCKET_2_CMD_1,
    BUCKET_2_CMD_2,
    BUCKET_2_CMD_3,
    //Add new commands above this line
    BUCKET_2_CMD_MAX,

    //BUCKET_3 commands
    ...
    ...
    ...

    //BUCKET_N commands
    CMD_BUCKET_N_START = BUCKET_N,
    BUCKET_N_CMD_1,
    BUCKET_N_CMD_2,
    BUCKET_N_CMD_3,
    BUCKET_N_CMD_4,
    //Add new commands above this line
    BUCKET_N_CMD_MAX,
} cmd_codes;
When my command handler function receives a command code, it needs to check whether the command is enabled before processing it. I plan to use a bitmap for this; commands can be enabled or disabled at run-time. I can use an int for each group (giving me 32 commands per group; I realize that only 0xN00 to 0xN20 are valid command codes and that the other codes in the range are wasted). Even though command codes are wasted, this design choice has the benefit that the group of a command code is easy to tell when looking at raw data on a console.
Since many developers can add commands to the 'cmd_codes' enum (and new buckets may be added to the 'cmd_buckets' enum as needed), I want to make sure that the number of command codes in each bucket does not exceed 32 (the bitmap is an int). I want to catch this at compile time rather than at run time. Other than checking each BUCKET_N_CMD_MAX value as below and raising a compile-time error, is there a better solution?
#if (BUCKET_1_CMD_MAX > 0x20)
#error ("Number of commands in BUCKET_1 exceeded 32")
#endif
#if (BUCKET_2_CMD_MAX > 0x20)
#error ("Number of commands in BUCKET_2 exceeded 32")
#endif
#if (BUCKET_3_CMD_MAX > 0x20)
#error ("Number of commands in BUCKET_3 exceeded 32")
#endif
...
...
...
#if (BUCKET_N_CMD_MAX > 0x20)
#error ("Number of commands in BUCKET_N exceeded 32")
#endif
Please also suggest if there is a more elegant way to design this.
Thanks, I appreciate your time and patience.
First fix the bug in the code. As mentioned in the comments, you have a constant BUCKET_1 = 0x100 which you then assign with CMD_BUCKET_1_START = BUCKET_1. The enumerators that follow therefore get the values 0x101, 0x102, ..., and BUCKET_1_CMD_MAX ends up as 0x105. Since 0x105 is always larger than 0x20, your compile-time check would always trigger.
Fix that so that it actually checks the number of items in each bucket instead, like this:
#define BUCKET_1_CMD_N (BUCKET_1_CMD_MAX - CMD_BUCKET_1_START)
#define BUCKET_2_CMD_N (BUCKET_2_CMD_MAX - CMD_BUCKET_2_START)
...
Assuming the above is fixed, then you can replace the numerous checks with a single macro. Not a great improvement, but at least it reduces code repetition:
#define BUCKET_MAX 32  // use a defined constant instead of a magic number

// some helper macros:
#define CHECK(n) BUCKET_ ## n ## _CMD_N
#define STRINGIFY(n) #n

// the actual macro:
#define BUCKET_CHECK(n) \
    _Static_assert(CHECK(n) <= BUCKET_MAX, \
        "Number of commands in BUCKET_" STRINGIFY(n) "_CMD_N exceeds BUCKET_MAX.");

// usage:
int main (void)
{
    BUCKET_CHECK(1);
    BUCKET_CHECK(2);
}
Output from gcc in case one constant is too large:
error: static assertion failed: "Number of commands in BUCKET_1_CMD_N exceeds BUCKET_MAX."
note: in expansion of macro 'BUCKET_CHECK'
EDIT
If combining the bug fix with the check macro, you would get this:
#define BUCKET_MAX 32

#define CHECK(n) (BUCKET_##n##_CMD_MAX - CMD_BUCKET_##n##_START)
#define STRINGIFY(n) #n

#define BUCKET_CHECK(n) \
    _Static_assert(CHECK(n) <= BUCKET_MAX, \
        "Number of commands in BUCKET " STRINGIFY(n) " exceeds BUCKET_MAX.");

int main (void)
{
    BUCKET_CHECK(1);
    BUCKET_CHECK(2);
}
First of all, preprocessor conditionals do not work that way. The C preprocessor can only "see" names introduced by a #define statement or passed as compiler flags; it cannot see constants defined as part of an enum or with the const keyword (in a #if they simply evaluate to 0). You should use _Static_assert to validate the commands instead of the preprocessor.
As for the commands, I would suggest numbering all the commands within each bucket in the range 0..0x20:
enum {
    BUCKET_1_CMD_1,
    BUCKET_1_CMD_2,
    ...
    BUCKET_1_CMD_MAX,
};

enum {
    BUCKET_2_CMD_1,
    BUCKET_2_CMD_2,
    ...
    BUCKET_2_CMD_MAX,
};
Then you need only a single guard value per bucket to check that all of its commands are in the valid range:
#define MAX_COMMAND 0x20
_Static_assert(BUCKET_1_CMD_MAX <= MAX_COMMAND, "too many bucket 1 commands");
_Static_assert(BUCKET_2_CMD_MAX <= MAX_COMMAND, "too many bucket 2 commands");
To use the commands, bitwise-or them together with the bucket "offset":
enum {
    BUCKET_1 = 0x100,
    BUCKET_2 = 0x200,
};
...
int cmd = BUCKET_2 | BUCKET_2_CMD_1;
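To tie this back to the run-time enable/disable bitmap mentioned in the question, here is a minimal sketch of what the check could look like under this scheme (the names NUM_BUCKETS, cmd_enabled, cmd_is_enabled and cmd_set_enabled are my own, not from the question): one 32-bit mask per bucket, indexed by the high byte of the command code.

#include <stdbool.h>
#include <stdint.h>

#define NUM_BUCKETS     8    /* assumption: buckets 1..8 exist           */
#define CMDS_PER_BUCKET 32   /* one bit per command in a uint32_t mask   */

static uint32_t cmd_enabled[NUM_BUCKETS + 1];   /* index 0 unused */

static bool cmd_is_enabled(int cmd)
{
    unsigned bucket = (cmd >> 8) & 0xFFu;       /* BUCKET_N part   */
    unsigned bit    = cmd & 0xFFu;              /* command number  */

    if (bucket == 0 || bucket > NUM_BUCKETS || bit >= CMDS_PER_BUCKET)
        return false;                           /* unknown command */
    return (cmd_enabled[bucket] >> bit) & 1u;
}

static void cmd_set_enabled(int cmd, bool enable)
{
    unsigned bucket = (cmd >> 8) & 0xFFu;
    unsigned bit    = cmd & 0xFFu;

    if (bucket == 0 || bucket > NUM_BUCKETS || bit >= CMDS_PER_BUCKET)
        return;
    if (enable)
        cmd_enabled[bucket] |=  (uint32_t)1 << bit;
    else
        cmd_enabled[bucket] &= ~((uint32_t)1 << bit);
}

int main(void)
{
    int cmd = 0x200 | 1;                        /* e.g. BUCKET_2 | BUCKET_2_CMD_2 */

    cmd_set_enabled(cmd, true);
    return cmd_is_enabled(cmd) ? 0 : 1;
}

Because the _Static_assert above keeps every BUCKET_x_CMD_MAX at or below 32, the low byte of a valid command code always fits into the per-bucket mask.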
