I was looking at the ATmega2560 register-mapping header (iom2560.h), which contains all of the register definitions. For example:
#define PINA _SFR_IO8(0X00)
//Macro definition:
#define _SFR_IO8(io_addr) ((io_addr) + 0X20)
So PINA is an 8-bit hexadecimal value corresponding to the address of an 8-bit microcontroller register. When I'm writing code I can change the value inside the register just by typing the following:
PINA |= (1 << 3); // Setting bit 3.
Here is the question: why can I write the register value (pointed to by its address _SFR_IO8(0X00)) by just assigning a value to PINA? Isn't PINA the address that points to the register? How does the compiler work?
Thank you very much in advance
Short answer: hidden away in the headers included from Atmel is a collection of macros that create pointers to the register locations. Here's a brief overview of the process:
Your Makefile defines the device to be used, and then passes the definition to the compiler (with avr-gcc, -mmcu=$(DEVICE) also causes the matching __AVR_ATmega2560__ macro to be predefined):
DEVICE = atmega2560
...
-D__$(DEVICE)__
You then include io.h, which automatically includes the necessary headers based on your device:
// In main source file
#include <io.h>
// In io.h
#include <avr/sfr_defs.h>
// ...
#elif defined (__AVR_ATmega2560__)
# include <avr/iom2560.h>
// In sfr_defs.h
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define __SFR_OFFSET 0x20
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
// In iom2560.h
#include <avr/iomxx0_1.h>
// Other device specific definitions
// In iomxx0_1.h
#define PINA _SFR_IO8(0X00)
// Other device family shared definitions
So if you unroll all of that, what you get is a dereferenced volatile pointer to the register address. Whenever you use PINA in your code, the preprocessor replaces it with the fully expanded macros:
PINA
_SFR_IO8(0X00)
_MMIO_BYTE((0X00) + __SFR_OFFSET)
(*(volatile uint8_t *)((0X00) + 0x20))
The final expression dereferences a pointer to a volatile 8-bit value at memory address 0x20; PINA is therefore an lvalue you can read and assign like an ordinary variable, not an address. The internal chip architecture then maps that address to the appropriate peripheral register whenever it is accessed.
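To see the mechanism in isolation, here is a minimal self-contained sketch of the same macro chain (the address 0x20 mirrors PINA on this device; on a desktop system you would need a valid address to actually run it):

#include <stdint.h>

/* The same macro chain as in sfr_defs.h and iomxx0_1.h. */
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define __SFR_OFFSET 0x20
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
#define PINA _SFR_IO8(0x00)

void set_bit3(void)
{
    /* Expands to: (*(volatile uint8_t *)((0x00) + 0x20)) |= (1 << 3);
     * The leading '*' is why plain assignment works: PINA is an
     * lvalue of type volatile uint8_t, not a bare address. */
    PINA |= (1 << 3);
}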
From this post I realized that:
the smallest allocation that kmalloc can handle is as big as 32 or 64 bytes
and
The actual memory you get back is dependent on the system's architecture
But the memory page size is also mentioned there and on other sites, and I can't figure out how the page size is related to the smallest kmalloc() allocation. The page size is usually 4096 bytes, but the smallest allocation is 32 or 64 bytes (depending on the architecture).
So what is the relation between the smallest kmalloc() allocation and the page size? And why is the smallest allocation 32 or 64 bytes and not, say, 16?
So what is the relation between the smallest kmalloc() allocation and page size?
None. They are unrelated.
As you can see from the source code:
#ifdef CONFIG_SLAB
// ...
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW 5
#endif
#endif
#ifdef CONFIG_SLUB
// ...
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW 3
#endif
#endif
#ifdef CONFIG_SLOB
// ...
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW 3
#endif
#endif
// ...
#ifndef KMALLOC_MIN_SIZE
#define KMALLOC_MIN_SIZE (1 << KMALLOC_SHIFT_LOW)
#endif
None of the macros that end up defining KMALLOC_MIN_SIZE for different allocators depends on the page size, so there's no relation between page size and kmalloc() minimum allocation size.
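Plugging the numbers in (a userspace sketch that just replicates the kernel's arithmetic with the constants quoted above):

#include <stdio.h>

/* The minimum kmalloc size is purely 1 << KMALLOC_SHIFT_LOW;
 * no page size appears anywhere in the computation. */
#define KMALLOC_SHIFT_LOW_SLAB 5 /* CONFIG_SLAB */
#define KMALLOC_SHIFT_LOW_SLUB 3 /* CONFIG_SLUB and CONFIG_SLOB */

int main(void)
{
    printf("SLAB minimum: %d bytes\n", 1 << KMALLOC_SHIFT_LOW_SLAB); /* 32 */
    printf("SLUB minimum: %d bytes\n", 1 << KMALLOC_SHIFT_LOW_SLUB); /* 8, unless raised by ARCH_DMA_MINALIGN */
    return 0;
}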
On some architectures, though, the minimum size can be larger if kmalloc() memory is also used for DMA (direct memory access). That's why you see the various #ifndef guards above. It's still not related to the page size, though.
/*
* Some archs want to perform DMA into kmalloc caches and need a guaranteed
* alignment larger than the alignment of a 64-bit integer.
* Setting ARCH_KMALLOC_MINALIGN in arch headers allows that.
*/
#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
#else
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
#endif
The value of ARCH_DMA_MINALIGN is architecture-specific and is usually related to the processor's L1 cache line size, as you can see for example for ARM:
#define L1_CACHE_SHIFT CONFIG_ARM_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
/*
* Memory returned by kmalloc() may be used for DMA, so we must make
* sure that all such allocations are cache aligned. Otherwise,
* unrelated code may cause parts of the buffer to be read into the
* cache before the transfer is done, causing old data to be seen by
* the CPU.
*/
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
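To observe the rounding in practice, here is a minimal kernel-module sketch (the module name and messages are illustrative); ksize() reports the usable size of the block actually handed back:

#include <linux/module.h>
#include <linux/slab.h>

static int __init minalloc_demo_init(void)
{
    void *p = kmalloc(1, GFP_KERNEL); /* ask for a single byte */

    if (!p)
        return -ENOMEM;

    /* On a SLAB kernel this typically prints 32; on SLUB, 8, unless
     * ARCH_DMA_MINALIGN raises it (e.g. 64 on many ARM systems). */
    pr_info("kmalloc(1) gave a block of %zu bytes\n", ksize(p));

    kfree(p);
    return 0;
}

static void __exit minalloc_demo_exit(void)
{
}

module_init(minalloc_demo_init);
module_exit(minalloc_demo_exit);
MODULE_LICENSE("GPL");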
I was astonished to find differences between the macro definitions in termios.h.
On RHEL or CentOS 7 I have these (confusingly octal) values:
#define PARENB 0000400
#define PARODD 0001000
In other sources we see:
#define PARENB 0x00001000 /* parity enable */
#define PARODD 0x00002000 /* odd parity, else even */
For PARENB, 0x1000 = 4096 = 10000 in octal, which is not 0000400.
I thought these would be the same on all platforms/distributions (apart maybe from the 32-bit/64-bit distinction).
Why such a difference? Can we by mistake use one value instead of the other? Is there a historical reason?
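One practical note: since POSIX specifies only the macro names in <termios.h>, not their numeric values, each libc is free to choose its own bit positions; portable code manipulates c_cflag exclusively through the macros, so the values never need to match across systems. A minimal sketch:

#include <termios.h>

/* Enable even parity on a terminal using only the macro names;
 * the code is identical whatever bit values this platform chose. */
int enable_even_parity(int fd)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) != 0)
        return -1;
    tio.c_cflag |= PARENB;   /* parity enable */
    tio.c_cflag &= ~PARODD;  /* clear PARODD: even parity */
    return tcsetattr(fd, TCSANOW, &tio);
}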
I'm trying to write a portable program that deals with ustar archives. For device files, these archives store the major and minor device numbers. However, the struct stat as laid out in POSIX only contains a single st_rdev member of type dev_t, described as “Device ID (if file is character or block special).”
How can I convert between a pair of major and minor device numbers and a single st_rdev member as returned by stat() in a portable manner?
While all POSIX programming interfaces use the device number (of type dev_t) as-is, FUZxxl pointed out in a comment to this answer that the common UStar file format -- the most common tar archive format -- does split the device number into major and minor. (They are typically encoded as seven octal digits each, so for compatibility reasons one should limit oneself to 21-bit unsigned major and 21-bit unsigned minor values. This also means that mapping the device number to just the major or just the minor is not a reliable approach.)
The following include file expands on Jonathon Reinhart's answer, after digging on the web through the various systems' man pages and documentation (for makedev(), major(), and minor()), plus the comments to this question.
#if defined(custom_makedev) && defined(custom_major) && defined(custom_minor)
/* Already defined */
#else
#undef custom_makedev
#undef custom_major
#undef custom_minor
#if defined(__linux__) || defined(__GLIBC__)
/* Linux, Android, and other systems using GNU C library */
#ifndef _BSD_SOURCE
#define _BSD_SOURCE 1
#endif
#include <sys/types.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#elif defined(_WIN32)
/* 32- and 64-bit Windows. VERIFY: These are just a guess! */
#define custom_makedev(dmajor, dminor) ((((unsigned int)(dmajor) << 8) & 0xFF00U) | ((unsigned int)(dminor) & 0xFFFF00FFU))
#define custom_major(devnum) (((unsigned int)(devnum) & 0xFF00U) >> 8)
#define custom_minor(devnum) ((unsigned int)(devnum) & 0xFFFF00FFU)
#elif defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__) || defined(__DragonFly__)
/* FreeBSD, OpenBSD, NetBSD, and DragonFlyBSD */
#include <sys/types.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#elif defined(__APPLE__) && defined(__MACH__)
/* Mac OS X */
#include <sys/types.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#elif defined(_AIX) || defined (__osf__)
/* AIX, OSF/1, Tru64 Unix */
#include <sys/types.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#elif defined(hpux)
/* HP-UX */
#include <sys/sysmacros.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#elif defined(sun)
/* Solaris */
#include <sys/types.h>
#include <sys/mkdev.h>
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#else
/* Unknown OS. Try the BSD approach. */
#ifndef _BSD_SOURCE
#define _BSD_SOURCE 1
#endif
#include <sys/types.h>
#if defined(makedev) && defined(major) && defined(minor)
#define custom_makedev(dmajor, dminor) makedev(dmajor, dminor)
#define custom_major(devnum) major(devnum)
#define custom_minor(devnum) minor(devnum)
#endif
#endif
#if !defined(custom_makedev) || !defined(custom_major) || !defined(custom_minor)
#error Unknown OS: please add definitions for custom_makedev(), custom_major(), and custom_minor(), for device number major/minor handling.
#endif
#endif
One could glean additional definitions from existing UStar-format-capable archivers. Compatibility with the existing implementations on each OS/architecture is, in my opinion, the most important thing here.
The above should cover all systems using the GNU C library, Linux (including Android), FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, Mac OS X, AIX, Tru64, HP-UX, and Solaris, plus any that define the macros when <sys/types.h> is included. I'm not sure about the Windows part.
As I understand it, Windows uses device 0 for all normal files, and a HANDLE (a void pointer type) for devices. I am not at all sure whether the above logic is sane on Windows, but many older systems put the 8 least significant bits of the device number into minor, and the next 8 bits into major, and the convention seems to be that any leftover bits would be put (without shifting) into minor, too. Examining existing UStar-format tar archives with references to devices would be useful, but I personally do not use Windows at all.
If a system is not detected, and the system does not use the BSD-style inclusion for defining the macros, the above will error out, stopping the compilation. (I would personally add compile-time machinery that could help find the correct header definitions, using e.g. find, xargs, and grep, in case that happens, with a suggestion to send the addition upstream, too. touch empty.h ; cpp -dM empty.h ; rm -f empty.h should show all predefined macros, to help with identifying the OS and/or C library.)
Originally, POSIX stated that dev_t must be an arithmetic type (thus, theoretically, it might have been some variant of float or double on some systems), but IEEE Std 1003.1, 2013 Edition says it must be an integer type. I would wager that means no known POSIX-y system ever used a floating-point dev_t type. It would seem that Windows uses a void pointer, or HANDLE type, but Windows is not POSIX-compliant anyway.
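As a usage sketch (the helper name write_dev_fields is mine, purely illustrative), this is how the split halves produced by the custom_* macros above would land in the 8-byte octal devmajor and devminor fields of a ustar header, masked to the 21-bit limits mentioned earlier:

#include <stdio.h>
#include <sys/stat.h>

/* devmajor/devminor are 8-byte ustar fields: 7 octal digits + NUL. */
static void write_dev_fields(char devmajor[8], char devminor[8],
                             const struct stat *st)
{
    snprintf(devmajor, 8, "%07o",
             (unsigned int)custom_major(st->st_rdev) & 07777777U);
    snprintf(devminor, 8, "%07o",
             (unsigned int)custom_minor(st->st_rdev) & 07777777U);
}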
Use the major() and minor() macros after defining _BSD_SOURCE.
The makedev(), major(), and minor() functions are not specified in
POSIX.1, but are present on many other systems.
http://man7.org/linux/man-pages/man3/major.3.html
I have a program based on an antique version of ls for Minix, but much mangled and modified by me since then. It has the following code to detect the major and minor macros — and some comments about (now) antique systems where it has worked in the past. It assumes a sufficiently recent version of GCC is available to support #pragma GCC diagnostic ignored etc. You have to be trying pretty hard (e.g. clang -Weverything) to get the -Wunused-macros option in effect unless you include it explicitly.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-macros"
/* Defines to ensure major and minor macros are available */
#define _DARWIN_C_SOURCE /* In <sys/types.h> on MacOS X */
#define _BSD_SOURCE /* In <sys/sysmacros.h> via <sys/types.h> on Linux (Ubuntu 12.04) */
#define __EXTENSIONS__ /* Maybe beneficial on Solaris */
#pragma GCC diagnostic pop
/* From Solaris 2.6 sys/sysmacros.h
**
** WARNING: The device number macros defined here should not be used by
** device drivers or user software. [...] Application software should make
** use of the library routines available in makedev(3). [...] Macro
** routines bmajor(), major(), minor(), emajor(), eminor(), and makedev()
** will be removed or their definitions changed at the next major release
** following SVR4.
**
** #define O_BITSMAJOR 7 -- # of SVR3 major device bits
** #define O_BITSMINOR 8 -- # of SVR3 minor device bits
** #define O_MAXMAJ 0x7f -- SVR3 max major value
** #define O_MAXMIN 0xff -- SVR3 max minor value
**
** #define L_BITSMAJOR 14 -- # of SVR4 major device bits
** #define L_BITSMINOR 18 -- # of SVR4 minor device bits
** #define L_MAXMAJ 0x3fff -- SVR4 max major value
** #define L_MAXMIN 0x3ffff -- MAX minor for 3b2 software drivers.
** -- For 3b2 hardware devices the minor is restricted to 256 (0-255)
*/
/* AC_HEADER_MAJOR:
** - defines MAJOR_IN_MKDEV if found in sys/mkdev.h
** - defines MAJOR_IN_SYSMACROS if found in sys/sysmacros.h
** - otherwise, hope they are in sys/types.h
*/
#if defined MAJOR_IN_MKDEV
#include <sys/mkdev.h>
#elif defined MAJOR_IN_SYSMACROS
#include <sys/sysmacros.h>
#elif defined(MAJOR_MINOR_MACROS_IN_SYS_TYPES_H)
/* MacOS X 10.2 - for example */
/* MacOS X 10.5 requires -D_DARWIN_C_SOURCE or -U_POSIX_C_SOURCE - see above */
#elif defined(USE_CLASSIC_MAJOR_MINOR_MACROS)
#define major(x) (((x) >> 8) & 0x7F)
#define minor(x) ((x) & 0xFF)
#else
/* Hope the macros are in <sys/types.h> or otherwise magically visible */
#endif
#define MAJOR(x) ((long)major(x))
#define MINOR(x) ((long)minor(x))
You will justifiably not be all that keen on the 'hope the macros are … magically visible' part of the code.
The reference to AC_HEADER_MAJOR is to the autoconf macro that deduces this information. It would be relevant if you have a config.h file generated by autoconf.
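For completeness, a small usage sketch of the MAJOR()/MINOR() wrappers above, printing device numbers the way ls does for /dev entries (the function name print_device is illustrative):

#include <stdio.h>
#include <sys/stat.h>

int print_device(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return -1;
    if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode))
        printf("%s: %ld, %ld\n", path,
               MAJOR(st.st_rdev), MINOR(st.st_rdev));
    return 0;
}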
POSIX
Note that the POSIX pax command defines the ustar format and specifies that it includes devmajor and devminor in the information, but adds:
… Represent character special files and block special files respectively. In this case the devmajor and devminor fields shall contain information defining the device, the format of which is unspecified by this volume of POSIX.1-2008. Implementations may map the device specifications to their own local specification or may ignore the entry.
This means that there isn't a fully portable way to represent the numbers. This is not wholly unreasonable (but it is a nuisance); the meanings of the major and minor device numbers vary across platforms and are unspecified too. Any attempt to create block or character devices via the ustar format will only work reasonably reliably if the source and target machines are running the same (version of the same) operating system — though usually the numbers are portable across versions of the same operating system.
I keep having a strange issue lately.
Depending on how I set up my audio configuration in Windows (stereo/quad/5.1), an FFmpeg call to avcodec_open2() either fails with error -22 or just works.
Not being able to find much about that error, I thought I should ask about it here.
The main flow goes like this:
c = st->codec;
avformat_alloc_output_context2(&oc, NULL, NULL, "video.mpeg");
oc->oformat->audio_codec = AV_CODEC_ID_MP2;
AVDictionary* dict = NULL;
ret = av_dict_set(&dict, "ac", "2", 0);
c->request_channels = 2;
ret = avcodec_open2(c, codec, &dict); //HERE IT FAILS WITH -22 if speaker configuration is not stereo
The codec context 'c' is set up like this in a stream:
st = avformat_new_stream(oc, *codec);
c = st->codec;
c->channels = 2;
c->channel_layout = AV_CH_LAYOUT_STEREO;
c->sample_fmt = AV_SAMPLE_FMT_S16;
c->codec_id = codec_id;
Most of it is copied from one of the muxing examples found in the documentation.
Everything works as expected if in windows I have set the output to stereo.
If I set my speaker configuration to 5.1 ( 6 channels ), avcodec_open2 fails with error -22.
So I have a hard time understanding what I am doing wrong; normally there should be no relationship between my speaker configuration and the result of avcodec_open2().
Are there some other parameters that I need to set?
Here is the header for the file libavcodec/avcodec.h, taken from How can I find out what this ffmpeg error code means?
#if EINVAL > 0
#define AVERROR(e) (-(e)) /**< Returns a negative error code from a POSIX error code, to return from library functions. */
#define AVUNERROR(e) (-(e)) /**< Returns a POSIX error code from a library function error return value. */
#else
/* Some platforms have E* and errno already negated. */
#define AVERROR(e) (e)
#define AVUNERROR(e) (e)
#endif
#define AVERROR_UNKNOWN AVERROR(EINVAL) /**< unknown error */
#define AVERROR_IO AVERROR(EIO) /**< I/O error */
#define AVERROR_NUMEXPECTED AVERROR(EDOM) /**< Number syntax expected in filename. */
#define AVERROR_INVALIDDATA AVERROR(EINVAL) /**< invalid data found */
#define AVERROR_NOMEM AVERROR(ENOMEM) /**< not enough memory */
#define AVERROR_NOFMT AVERROR(EILSEQ) /**< unknown format */
#define AVERROR_NOTSUPP AVERROR(ENOSYS) /**< Operation not supported. */
#define AVERROR_NOENT AVERROR(ENOENT) /**< No such file or directory. */
#define AVERROR_EOF AVERROR(EPIPE) /**< End of file. */
#define AVERROR_PATCHWELCOME -MKTAG('P','A','W','E') /**< Not yet implemented in FFmpeg. Patches welcome. */
and then, from the errno.h header file, the value of EINVAL:
#define EINVAL 22 /* Invalid argument */
P.S. So the return value -22 is AVERROR(EINVAL): unwrapping it with AVUNERROR gives -(-22) = 22 = EINVAL, an invalid argument.
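Rather than decoding the number by hand, av_strerror() from libavutil translates any FFmpeg error code into readable text; a minimal sketch:

#include <libavutil/error.h>
#include <stdio.h>

static void report_error(int ret)
{
    char buf[AV_ERROR_MAX_STRING_SIZE];

    /* Prints "Invalid argument" for -22, i.e. AVERROR(EINVAL). */
    if (av_strerror(ret, buf, sizeof(buf)) == 0)
        fprintf(stderr, "avcodec_open2 failed: %s\n", buf);
}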
The channel layouts, from the channel_layout.h header file:
/**
* @file
* audio conversion routines
*/
/* Audio channel masks */
#define AV_CH_FRONT_LEFT 0x00000001
#define AV_CH_FRONT_RIGHT 0x00000002
#define AV_CH_FRONT_CENTER 0x00000004
#define AV_CH_LOW_FREQUENCY 0x00000008
#define AV_CH_BACK_LEFT 0x00000010
#define AV_CH_BACK_RIGHT 0x00000020
#define AV_CH_FRONT_LEFT_OF_CENTER 0x00000040
#define AV_CH_FRONT_RIGHT_OF_CENTER 0x00000080
#define AV_CH_BACK_CENTER 0x00000100
#define AV_CH_SIDE_LEFT 0x00000200
#define AV_CH_SIDE_RIGHT 0x00000400
#define AV_CH_TOP_CENTER 0x00000800
#define AV_CH_TOP_FRONT_LEFT 0x00001000
#define AV_CH_TOP_FRONT_CENTER 0x00002000
#define AV_CH_TOP_FRONT_RIGHT 0x00004000
#define AV_CH_TOP_BACK_LEFT 0x00008000
#define AV_CH_TOP_BACK_CENTER 0x00010000
#define AV_CH_TOP_BACK_RIGHT 0x00020000
#define AV_CH_STEREO_LEFT 0x20000000 ///< Stereo downmix.
#define AV_CH_STEREO_RIGHT 0x40000000 ///< See AV_CH_STEREO_LEFT.
/** Channel mask value used for AVCodecContext.request_channel_layout
to indicate that the user requests the channel order of the decoder output
to be the native codec channel order. */
#define AV_CH_LAYOUT_NATIVE 0x8000000000000000LL
/* Audio channel convenience macros */
#define AV_CH_LAYOUT_MONO (AV_CH_FRONT_CENTER)
#define AV_CH_LAYOUT_STEREO (AV_CH_FRONT_LEFT|AV_CH_FRONT_RIGHT)
#define AV_CH_LAYOUT_2_1 (AV_CH_LAYOUT_STEREO|AV_CH_BACK_CENTER)
#define AV_CH_LAYOUT_SURROUND (AV_CH_LAYOUT_STEREO|AV_CH_FRONT_CENTER)
#define AV_CH_LAYOUT_4POINT0 (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_CENTER)
#define AV_CH_LAYOUT_2_2 (AV_CH_LAYOUT_STEREO|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)
#define AV_CH_LAYOUT_QUAD (AV_CH_LAYOUT_STEREO|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
#define AV_CH_LAYOUT_5POINT0 (AV_CH_LAYOUT_SURROUND|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)
#define AV_CH_LAYOUT_5POINT1 (AV_CH_LAYOUT_5POINT0|AV_CH_LOW_FREQUENCY)
#define AV_CH_LAYOUT_5POINT0_BACK (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
#define AV_CH_LAYOUT_5POINT1_BACK (AV_CH_LAYOUT_5POINT0_BACK|AV_CH_LOW_FREQUENCY)
#define AV_CH_LAYOUT_7POINT0 (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
#define AV_CH_LAYOUT_7POINT1 (AV_CH_LAYOUT_5POINT1|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
#define AV_CH_LAYOUT_7POINT1_WIDE (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)
#define AV_CH_LAYOUT_STEREO_DOWNMIX (AV_CH_STEREO_LEFT|AV_CH_STEREO_RIGHT)
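Finally, a hedged guess at the fix (an assumption on my part, not something confirmed above): if the Windows speaker configuration leaves channels inconsistent with channel_layout somewhere in the pipeline, deriving the count from the layout instead of hard-coding it avoids an EINVAL from the consistency check:

/* Sketch, old channel API: keep channels and channel_layout in sync. */
c->channel_layout = AV_CH_LAYOUT_STEREO;
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);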