How to get RGB pixel values from Linux framebuffer? - c

I want to get the RGB values of screen pixels in the most efficient way, using Linux. So I decided to use the framebuffer interface in C (linux/fb.h) to access the framebuffer device (/dev/fb0) and read from it directly.
This is the code:
#include <stdint.h>
#include <linux/fb.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
int main() {
    int fb_fd;
    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;
    uint8_t *fb_p;

    /* Open the frame buffer device */
    fb_fd = open("/dev/fb0", O_RDWR);
    if (fb_fd < 0) {
        perror("Can't open /dev/fb0");
        exit(EXIT_FAILURE);
    }

    /* Get fixed info */
    if (ioctl(fb_fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        perror("Can't get fixed info");
        exit(EXIT_FAILURE);
    }

    /* Get variable info */
    if (ioctl(fb_fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("Can't get variable info");
        exit(EXIT_FAILURE);
    }

    /* Map the framebuffer memory into the process */
    fb_p = (uint8_t *) mmap(0, finfo.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fb_fd, 0);
    if (fb_p == MAP_FAILED) {
        perror("Can't map memory");
        exit(EXIT_FAILURE);
    }

    /* Print each byte of the frame buffer */
    for (size_t i = 0; i < finfo.smem_len; i++) {
        printf("%d\n", *(fb_p + i));
        // for (int j = 0; j < 500000000; j++); /* Delay */
    }

    munmap(fb_p, finfo.smem_len);
    close(fb_fd);
    return 0;
}
But when I print the values, I don't get what I was expecting...
If I pick the RGB value of the pixel (0, 0) with a tool like grabc, I get this:
#85377e
133,55,126
But the first printings with my code are:
126
145
198
...
It looks like I'm getting the first byte of the first pixel correctly (it matches the blue component), but the rest is wrong.
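For reference, the byte order within a pixel is whatever the driver reports in vinfo, not necessarily RGB; a minimal sketch of decoding a pixel with those offsets (assumes a 32 bpp mode and needs <string.h> for memcpy):

/* Sketch: decode the pixel at (x, y) using the driver-reported layout.
 * Assumes 32 bits per pixel and the fb_p/finfo/vinfo variables from above. */
void print_pixel(uint8_t *fb_p, struct fb_fix_screeninfo *finfo,
                 struct fb_var_screeninfo *vinfo, int x, int y)
{
    size_t offset = (y + vinfo->yoffset) * finfo->line_length
                  + (x + vinfo->xoffset) * (vinfo->bits_per_pixel / 8);
    uint32_t pixel;
    memcpy(&pixel, fb_p + offset, sizeof pixel);

    unsigned r = (pixel >> vinfo->red.offset)   & ((1u << vinfo->red.length)   - 1);
    unsigned g = (pixel >> vinfo->green.offset) & ((1u << vinfo->green.length) - 1);
    unsigned b = (pixel >> vinfo->blue.offset)  & ((1u << vinfo->blue.length)  - 1);
    printf("(%d,%d) = %u,%u,%u\n", x, y, r, g, b);
}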

If you look at the grabc source code (https://www.muquit.com/muquit/software/grabc/grabc.html), you'll see that he's querying X Windows (and NOT the hardware frame buffer per se).
root_window=XRootWindow(display,XDefaultScreen(display));
target_window=selectWindow(display,&x,&y);
...
ximage=XGetImage(display,target_window,x,y,1,1,AllPlanes,ZPixmap);
...
color->pixel=XGetPixel(ximage,0,0);
XDestroyImage(ximage);
For whatever it's worth - and for several different reasons - I'd strongly encourage you to consider doing the same.
You might also be interested in:
Intro to Low-Level Graphics on Linux
DirectFB
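For reference, a minimal sketch of that Xlib approach, reading the pixel at (0, 0) of the root window (assumes a running X session, a TrueColor visual with the usual 0xff0000/0xff00/0xff channel masks, and linking with -lX11):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void) {
    Display *display = XOpenDisplay(NULL);
    if (!display)
        return 1;
    Window root = XRootWindow(display, XDefaultScreen(display));

    /* grab a 1x1 image at (0, 0) and pull the pixel out of it */
    XImage *ximage = XGetImage(display, root, 0, 0, 1, 1, AllPlanes, ZPixmap);
    unsigned long pixel = XGetPixel(ximage, 0, 0);

    unsigned r = (pixel & ximage->red_mask)   >> 16;
    unsigned g = (pixel & ximage->green_mask) >> 8;
    unsigned b = (pixel & ximage->blue_mask);
    printf("#%02x%02x%02x\n", r, g, b);

    XDestroyImage(ximage);
    XCloseDisplay(display);
    return 0;
}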

On some rare buggy devices using fbdev you may need to align the pointer returned by mmap to the next system page in order to get the first pixel correctly, and in doing so you may also need to mmap one extra page.
I don't think that's your case, though: it seems like you're trying to read the X server's desktop pixels through the Linux fbdev device, and unless your X server is configured to use the fbdev driver (and even then I'm not sure it would work), that isn't going to work.
If reading the desktop pixels is your goal, you should look into a screenshot tool instead.
Besides, fbdev drivers are usually really basic: they give low-level access, but there's nothing especially efficient about mmapping and reading pixels. Xlib, or any higher-level graphics system, may be aware of and use your graphics card's acceleration and be far more efficient (perhaps using DMA to read the whole framebuffer into system memory without loading the CPU).

Working on iOS, I used a private framework, IOMobileFramebuffer, to create a lovely screen-recording tool. I've dabbled in this before, so I'll share what I know. iOS is obviously different from the Linux framebuffer (though the lack of documentation is pretty similar...), but I would personally take a look at this answer. Although that question doesn't match yours exactly, the answer explains the pixel buffer along with a small example of exactly what you asked. Give it a read and reply with any questions; I'll answer to the best of my ability.

Related

Memcpy Complete After Segfault

I have a PCIe endpoint device connected to the host. The EP's (endpoint's) 512 MB BAR is mmapped and memcpy is used to transfer data. memcpy is quite slow (~2.5 s). When I map only part of the BAR (100 bytes) but run memcpy for the full 512 MB, I get a segfault within 0.5 s; however, when reading back the end of the BAR, it shows the correct data, meaning the data reads the same as if I had mmapped the whole BAR space.
How is the data being written and why is it so much faster than doing it the correct way (without the segfault)?
Code to map the whole BAR (takes 2.5s):
fd = open(filename, O_RDWR | O_SYNC);
map_base = mmap(NULL, 536870912, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
int rand_fd = open(infile, O_RDONLY);
rand_base = mmap(0, 536870912, PROT_READ, MAP_SHARED, rand_fd, 0);
memcpy(map_base, rand_base, 536870912);
if (munmap(map_base, map_size) == -1)
{
    PRINT_ERROR;
}
close(fd);
Code to map only 100 bytes (takes 0.5s):
fd = open(filename, O_RDWR | O_SYNC);
map_base = mmap(NULL, 100, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
int rand_fd = open(infile, O_RDONLY);
rand_base = mmap(0, 536870912, PROT_READ, MAP_SHARED, rand_fd, 0);
memcpy(map_base, rand_base, 536870912);
if (munmap(map_base, map_size) == -1)
{
    PRINT_ERROR;
}
close(fd);
To check the written data, I am using pcimem
https://github.com/billfarrow/pcimem
Edit: I was being dumb; while consistent data was being 'written' after the segfault, it was not the data that it should have been. Therefore my conclusion that memcpy was completing after the segfault was false. I am accepting the answer as it provided me useful information.
Assuming filename is just an ordinary file (to save the data), leave off O_SYNC. It will just slow things down [possibly, a lot].
When opening the BAR device, consider using O_DIRECT. This may minimize caching effects. That is, if the BAR device does its own caching, eliminate caching by the kernel, if possible.
How is the data being written and why is it so much faster than doing it the correct way (without the segfault)?
The "short" mmap/read is not working. The extra data comes from the prior "full" mapping. So, your test isn't valid.
To ensure consistent results, do unlink on the output file. Do open with O_CREAT. Then, use ftruncate to extend the file to the full size.
Here is some code to try:
#define SIZE (512 * 1024 * 1024)

// remove the output file
unlink(filename);

// open output file (create it)
int ofile_fd = open(filename, O_RDWR | O_CREAT, 0644);

// prevent segfault by providing space in the file
ftruncate(ofile_fd, SIZE);

map_base = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, ofile_fd, 0);

// use O_DIRECT to minimize caching effects when accessing the BAR device
#if 0
int rand_fd = open(infile, O_RDONLY);
#else
int rand_fd = open(infile, O_RDONLY | O_DIRECT);
#endif

rand_base = mmap(0, SIZE, PROT_READ, MAP_SHARED, rand_fd, 0);

memcpy(map_base, rand_base, SIZE);

if (munmap(map_base, SIZE) == -1) {
    PRINT_ERROR;
}

// close the output file
close(ofile_fd);
Depending upon the characteristics of the BAR device, to minimize the number of PCIe read/fetch/transaction requests, it may be helpful to ensure that it is being accessed as 32 bit (or 64 bit) elements.
Does the BAR space allow/support/encourage access as "ordinary" memory?
Usually, memcpy is smart enough to switch to "wide" memory access automatically (if memory addresses are aligned, which they are here). That is, memcpy will automatically use 64 bit fetches, with movq or possibly some XMM instructions, such as movdqa.
It would help to know exactly which BAR device(s) you have. The datasheet/appnote should give enough information.
UPDATE:
Thanks for the sample code. Unfortunately, aarch64-gcc gives 'O_DIRECT undeclared' for some reason. Without using that flag, the speed is the same as my original code.
Add #define _GNU_SOURCE above any #include to resolve O_DIRECT
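For example (the define must precede every include):

#define _GNU_SOURCE        /* must come before any #include so <fcntl.h> exposes O_DIRECT */
#include <fcntl.h>

int rand_fd = open(infile, O_RDONLY | O_DIRECT);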
The PCIe device is an FPGA that we are developing. The bitstream is currently the Xilinx DMA example code. The BAR is just 512MB of memory for the system to R/W to.
Serendipitously, my answer was based on my experience with access to the BAR space of a Xilinx FPGA device (it's been a while, circa 2010).
When we were diagnosing speed issues, we used a PCIe bus analyzer. This can show the byte width of the bus requests the CPU has requested. It also shows the turnaround time (e.g. Bus read request time until data packet from device is returned).
We also had to adjust the parameters in the PCIe config registers (e.g. transfer size, transaction replay) for the device/BAR. This was trial and error, and we (I) tried some 27 different combinations before deciding on the optimum config.
On an unrelated arm system (e.g. nVidia Jetson) about 3 years ago, I had to do memcpy to/from the GPU memory. It may have just been the particular cross-compiler I was using, but the disassembly of memcpy showed that it only used bytewide transfers. That is, it wasn't as smart as its x86 counterpart. I wrote/rewrote a version that used unsigned long long [and/or unsigned __int128] transfers. This sped things up considerably. See below.
So, you may wish to disassemble the generated memcpy code. Either the library function and/or code that it may inline into your function.
Just a thought ... If you're just wanting a bulk transfer, you may wish to have the device driver for the device program the DMA engine on the FPGA. This might be handled more effectively with a custom ioctl call to the device driver that accepts a custom struct describing the desired transfer (vs. read or mmap from userspace).
Are you writing a custom device driver for the device? Or, are you just using some generic device driver?
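If you do go the custom-driver route, the userspace side of such an ioctl could look roughly like this; the struct layout, ioctl number, and device node are hypothetical, purely to illustrate the idea:

/* Hypothetical ioctl interface for a custom FPGA DMA driver; the names,
 * the _IOW number, and /dev/fpga_dma0 are made up for illustration. */
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

struct fpga_dma_xfer {
    uint64_t bar_offset;   /* offset into the BAR */
    uint64_t user_addr;    /* userspace buffer address */
    uint64_t length;       /* bytes to transfer */
    uint32_t to_device;    /* 1 = host-to-BAR, 0 = BAR-to-host */
};

#define FPGA_DMA_XFER _IOW('F', 1, struct fpga_dma_xfer)

int do_dma_write(const void *buf, uint64_t len, uint64_t bar_off)
{
    int fd = open("/dev/fpga_dma0", O_RDWR);   /* hypothetical device node */
    if (fd < 0)
        return -1;

    struct fpga_dma_xfer xfer = {
        .bar_offset = bar_off,
        .user_addr  = (uint64_t)(uintptr_t)buf,
        .length     = len,
        .to_device  = 1,
    };

    int rc = ioctl(fd, FPGA_DMA_XFER, &xfer);  /* driver programs the DMA engine */
    close(fd);
    return rc;
}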
Here's what I had to do to get a fast memcpy on arm. It generates ldp/stp asm instructions.
// qcpy.c -- fast memcpy
#include <string.h>
#include <stddef.h>
#ifndef OPT_QMEMCPY
#define OPT_QMEMCPY 128
#endif
#ifndef OPT_QCPYIDX
#define OPT_QCPYIDX 1
#endif
// atomic type for qmemcpy
#if OPT_QMEMCPY == 32
typedef unsigned int qmemcpy_t;
#elif OPT_QMEMCPY == 64
typedef unsigned long long qmemcpy_t;
#elif OPT_QMEMCPY == 128
typedef unsigned __int128 qmemcpy_t;
#else
#error qmemcpy.c: unknown/unsupported OPT_QMEMCPY
#endif
typedef qmemcpy_t *qmemcpy_p;
typedef const qmemcpy_t *qmemcpy_pc;
// _qmemcpy -- fast memcpy
// RETURNS: number of bytes transferred
size_t
_qmemcpy(qmemcpy_p dst,qmemcpy_pc src,size_t size)
{
size_t cnt;
size_t idx;
cnt = size / sizeof(qmemcpy_t);
size = cnt * sizeof(qmemcpy_t);
if (OPT_QCPYIDX) {
for (idx = 0; idx < cnt; ++idx)
dst[idx] = src[idx];
}
else {
for (; cnt > 0; --cnt, ++dst, ++src)
*dst = *src;
}
return size;
}
// qmemcpy -- fast memcpy
void
qmemcpy(void *dst,const void *src,size_t size)
{
size_t xlen;
// use fast memcpy for aligned size
if (OPT_QMEMCPY > 0) {
xlen = _qmemcpy(dst,src,size);
src += xlen;
dst += xlen;
size -= xlen;
}
// copy remainder with ordinary memcpy
if (size > 0)
memcpy(dst,src,size);
}
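Usage is a drop-in for the bulk copy in the snippets above (both mmap'd pointers are page-aligned, so the whole transfer goes through the wide path):

qmemcpy(map_base, rand_base, SIZE);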
UPDATE #2:
Speaking of serendipity, I am using a Jetson Orin. That is very interesting about the byte-wise behavior.
Just a thought ... If you have a Jetson in the same system as the FPGA, you might get DMA action by judicious use of cuda
Due to requirements, I cannot use any custom kernel modules so I am trying to do it all in userspace.
That is a harsh mistress to serve ... With custom H/W, it is almost axiomatic that you can have a custom device driver. So, the requirement sounds like a marketing/executive one rather than a technical one. If it's something like not being able to ship a .ko file because you don't know the target kernel version, it is possible to ship the driver as a .o and defer the .ko creation to the install script.
We want to use the DMA engine, but I am hiking up the learning curve on this one. We are using DMA in the FPGA, but I thought that as long as we could write to the address specified in the dtb, that meant the DMA engine was set up and working. Now I'm wondering if I have completely misunderstood that part.
You probably will not get DMA doing that. If you start the memcpy, how does the DMA engine know the transfer length?
You might have better luck using read/write vs mmap to get DMA going, depending upon the driver.
But, if it were me, I'd keep the custom driver option open:
If you have to tweak/modify the BAR config registers on driver/system startup, I can't recall if it's even possible to map the config registers to userspace.
When doing mmap, the device may be treated as the "backing store" for the mapping. That is, there is still an extra layer of kernel buffering [just like there is when mapping an ordinary file]. The device memory is only updated periodically from the kernel [buffer] memory.
A custom driver can set up a [guaranteed] direct mapping, using some trickery that only the kernel/driver has access to.
Historical note:
When I last worked with a Xilinx FPGA (12 years ago), the firmware loader utility (provided by Xilinx in both binary and source form) would read bytes from the firmware/microcode .xsvf file using (e.g.) fscanf(fi,"%c",&myint) to get each byte.
This was horrible. I refactored the utility to fix that and the processing of the state machine and reduced the load time from 15 minutes to 45 seconds.
Hopefully, Xilinx has fixed the utility by now.

Is it possible to "punch holes" through mmap'ed anonymous memory?

Consider a program which uses a large number of roughly page-sized memory regions (say 64 kB or so), each of which is rather short-lived. (In my particular case, these are alternate stacks for green threads.)
How would one best allocate these regions, such that their pages can be returned to the kernel once a region isn't in use anymore? The naïve solution would clearly be to simply mmap each of the regions individually, and munmap them again as soon as I'm done with them. I feel this is a bad idea, though, since there are so many of them. I suspect that the VMM may start scaling badly after a while; but even if it doesn't, I'm still interested in the theoretical case.
If I instead just mmap myself a huge anonymous mapping from which I allocate the regions on demand, is there a way to "punch holes" through that mapping for a region that I'm done with? Kind of like madvise(MADV_DONTNEED), but with the difference that the pages should be considered deleted, so that the kernel doesn't actually need to keep their contents anywhere but can just reuse zeroed pages whenever they are faulted again.
I'm using Linux, and in this case I'm not bothered by using Linux-specific calls.
I did a lot of research into this topic (for a different use) at some point. In my case I needed a large hashmap that was very sparsely populated + the ability to zero it every now and then.
mmap solution:
The easiest solution (and a portable one; madvise(MADV_DONTNEED) is Linux-specific) to zero a mapping like this is to mmap a new mapping over it.
void *mapping = mmap(NULL, mapping_length, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
// use the mapping
// zero certain pages by mapping fresh anonymous memory over them
mmap((char *)mapping + page_aligned_offset, length,
     PROT_READ | PROT_WRITE,
     MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
The last call is performance-wise equivalent to a munmap followed by an mmap with MAP_FIXED, but it is thread-safe.
Performance-wise, the problem with this solution is that the pages have to be faulted in again on a subsequent write access, which costs a trap into the kernel and a context change. This is only efficient if very few pages were faulted in in the first place.
memset solution:
After getting such crap performance when most of the mapping had to be unmapped, I decided to zero the memory manually with memset. If roughly over 70% of the pages are already faulted in (and if not, they are after the first round of memset), then this is faster than remapping those pages.
mincore solution:
My next idea was to only memset those pages that had been faulted in before. This solution is NOT thread-safe. Calling mincore to determine whether a page is faulted in and then selectively memsetting it to zero was a significant performance improvement until over 50% of the mapping was faulted in, at which point memsetting the entire mapping became simpler (mincore is a system call and requires one context change).
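A rough sketch of that approach (not thread-safe, as noted; mapping and length are assumed to be the anonymous mapping from above):

/* Sketch: zero only the pages that are currently resident.
 * Not thread-safe; `mapping`/`length` come from the anonymous mmap above. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

void zero_resident_pages(void *mapping, size_t length)
{
    size_t page = (size_t)getpagesize();
    size_t npages = (length + page - 1) / page;
    unsigned char *vec = malloc(npages);      /* one byte per page */
    if (!vec)
        return;

    if (mincore(mapping, length, vec) == 0) {
        for (size_t i = 0; i < npages; i++)
            if (vec[i] & 1)                   /* low bit set => page is resident */
                memset((char *)mapping + i * page, 0, page);
    }
    free(vec);
}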
incore table solution:
The final approach I took was to keep my own in-core table (one bit per page) that says whether a page has been used since the last wipe. This is by far the most efficient way, since in each round you only zero the pages you actually used. It is obviously not thread-safe either and requires you to track in user space which pages have been written to, but if you need the performance, this is by far the most efficient approach.
I don't see why doing lots of calls to mmap/munmap should be that bad. The lookup performance in the kernel for mappings should be O(log n).
Your only option, as it seems to be implemented in Linux right now, for punching holes in a mapping is mprotect(PROT_NONE), and that still fragments the mappings in the kernel, so it's mostly equivalent to mmap/munmap except that something else won't be able to steal that VM range from you. What you'd really want is madvise(MADV_REMOVE), or as it's called in BSD, madvise(MADV_FREE). That is explicitly designed to do exactly what you want: the cheapest way to reclaim pages without fragmenting the mappings. But at least according to the man page on my two flavors of Linux, it's not fully implemented for all kinds of mappings.
Disclaimer: I'm mostly familiar with the internals of BSD VM systems, but this should be quite similar on Linux.
As in the discussion in comments below, surprisingly enough MADV_DONTNEED seems to do the trick:
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>
#include <err.h>

int
main(int argc, char **argv)
{
    int ps = getpagesize();
    struct rusage ru = {0};
    char *map;
    int n = 15;
    int i;

    if ((map = mmap(NULL, ps * n, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)) == MAP_FAILED)
        err(1, "mmap");

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    printf("unnecessary printf to fault stuff in: %d %ld\n", map[0], ru.ru_minflt);

    /* Unnecessary call to madvise to fault in that part of libc. */
    if (madvise(&map[ps], ps, MADV_NORMAL) == -1)
        err(1, "madvise");

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_NORMAL, before touching pages: %d %ld\n", map[0], ru.ru_minflt);

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_NORMAL, after touching pages: %d %ld\n", map[0], ru.ru_minflt);

    if (madvise(map, ps * n, MADV_DONTNEED) == -1)
        err(1, "madvise");

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_DONTNEED, before touching pages: %d %ld\n", map[0], ru.ru_minflt);

    for (i = 0; i < n; i++) {
        map[ps * i] = i + 10;
    }

    if (getrusage(RUSAGE_SELF, &ru) == -1)
        err(1, "getrusage");
    printf("after MADV_DONTNEED, after touching pages: %d %ld\n", map[0], ru.ru_minflt);

    return 0;
}
I'm measuring ru_minflt as a proxy for how many pages we needed to allocate (this isn't exactly accurate, but the next sentence makes it more likely). We can see that we get new pages at the printf after MADV_DONTNEED (before touching the pages again), because the contents of map[0] read back as 0 there.

Linux: Identifying pages in memory

I want to know what parts of a huge file are cached in memory. I'm using some code from fincore for that, which works this way: the file is mmapped, then fincore loops over the address space and checks pages with mincore, but it takes very long (several minutes) because of the file size (several TB).
Is there a way to loop over the RAM pages actually in use instead? It would be much faster, but that means I would need to get the list of used pages from somewhere... However, I can't find a convenient system call that would allow that.
Here comes the code:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/sysinfo.h>

void
fincore(char *filename) {
    int fd;
    struct stat st;
    struct sysinfo info;

    if (sysinfo(&info)) {
        perror("sysinfo");
        return;
    }

    void *pa = (void *)0;
    unsigned char *vec = (unsigned char *)0;
    size_t pageSize = getpagesize();
    size_t pageCount;
    size_t pageIndex;

    fd = open(filename, 0);
    if (0 > fd) {
        perror("open");
        return;
    }

    if (0 != fstat(fd, &st)) {
        perror("fstat");
        close(fd);
        return;
    }

    pa = mmap((void *)0, st.st_size, PROT_NONE, MAP_SHARED, fd, 0);
    if (MAP_FAILED == pa) {
        perror("mmap");
        close(fd);
        return;
    }

    /* one byte per page, rounded up */
    /* 2.2 sec for 8 TB */
    pageCount = (st.st_size + pageSize - 1) / pageSize;
    vec = calloc(1, pageCount);
    if ((void *)0 == vec) {
        perror("calloc");
        close(fd);
        return;
    }

    /* 48 sec for 8 TB */
    if (0 != mincore(pa, st.st_size, vec)) {
        fprintf(stderr, "mincore(%p, %lu, %p): %s\n",
                pa, (unsigned long)st.st_size, vec, strerror(errno));
        free(vec);
        close(fd);
        return;
    }

    /* handle the results */
    /* 2m45s for 8 TB */
    for (pageIndex = 0; pageIndex < pageCount; pageIndex++) {
        if (vec[pageIndex] & 1) {
            printf("%zu\n", pageIndex);
        }
    }

    free(vec);
    vec = (unsigned char *)0;
    munmap(pa, st.st_size);
    close(fd);
    return;
}

int main(int argc, char *argv[]) {
    fincore(argv[1]);
    return 0;
}
The amount of information needed to represent a list is, for the pessimistic case when all or almost all pages are indeed in RAM, much higher than the bitmap - at least 64 vs 1 bits per entry. If there was such an API, when querying it about your 2 billion pages, you would have to be prepared to get 16 GB of data in the reply. Additionally, handling variable-length structures such as lists is more complex than handling a fixed-length array, so library functions, especially low-level system ones, tend to avoid the hassle.
I am also not quite sure about the implementation (how the OS interacts with the TLB and Co in this case), but it may well be that (even size difference aside) filling out the bitmap can be performed faster than creating a list due to the OS- and hardware-level structures the information is extracted from.
If you are not concerned about very fine granularity, you could have a look at /proc/<PID>/smaps. For each mapped region it shows some stats, including how much is loaded into memory (Rss field). If for the purpose of debugging you map some regions of a file with a separate mmap() call (in addition to the main mapping used for performing the actual task), you will probably get separate entries in smaps and thus see separate statistics for these regions. You almost certainly can't make billions of mappings without killing your system, but if the file is structured well, maybe having separate statistics for just a few dozen well-chosen regions can help you find the answers you are looking for.
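A rough sketch of pulling the Rss figures out of smaps for the mappings of one file in the current process (matching the header lines by substring is a simplification, purely for illustration):

/* Sketch: print the Rss line of every mapping of `path` in this process,
 * by scanning /proc/self/smaps. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

void print_rss_for(const char *path)
{
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f)
        return;

    char line[512];
    int in_region = 0;
    while (fgets(line, sizeof line, f)) {
        /* mapping header lines start with a lowercase hex address;
           the per-mapping stat lines (Size:, Rss:, ...) start uppercase */
        if (isdigit((unsigned char)line[0]) || islower((unsigned char)line[0]))
            in_region = (strstr(line, path) != NULL);
        else if (in_region && strncmp(line, "Rss:", 4) == 0)
            printf("%s", line);              /* e.g. "Rss:     123456 kB" */
    }
    fclose(f);
}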
Cached by whom?
Consider after a boot the file sits on disk. No part of it is in memory.
Now the file is opened and random reads are performed.
The file system (e.g. the kernel) will be caching.
The C standard library will be caching.
The kernel will be caching in kernel-mode memory, the C standard library in user-mode memory.
If you could issue such a query, it could also be that immediately after the query, before it even returns to you, the cached data in question is removed from the cache.

mmap, msync and linux process termination

I want to use mmap to implement persistence of certain portions of program state in a C program running under Linux by associating a fixed-size struct with a well known file name using mmap() with the MAP_SHARED flag set. For performance reasons, I would prefer not to call msync() at all, and no other programs will be accessing this file. When my program terminates and is restarted, it will map the same file again and do some processing on it to recover the state that it was in before the termination. My question is this: if I never call msync() on the file descriptor, will the kernel guarantee that all updates to the memory will get written to disk and be subsequently recoverable even if my process is terminated with SIGKILL? Also, will there be general system overhead from the kernel periodically writing the pages to disk even if my program never calls msync()?
EDIT: I've settled the problem of whether the data is written, but I'm still not sure about whether this will cause some unexpected system loading over trying to handle this problem with open()/write()/fsync() and taking the risk that some data might be lost if the process gets hit by KILL/SEGV/ABRT/etc. Added a 'linux-kernel' tag in hopes that some knowledgeable person might chime in.
I found a comment from Linus Torvalds that answers this question
http://www.realworldtech.com/forum/?threadid=113923&curpostid=114068
The mapped pages are part of the filesystem cache, which means that even if the user process that made a change to a page dies, the page is still managed by the kernel, and since all concurrent accesses to that file go through the kernel, other processes will be served from that cache.
In some old Linux kernels it was different; that's the reason why some kernel documents still tell you to force msync.
EDIT: Thanks to RobH for correcting the link.
EDIT:
A new flag, MAP_SYNC, was introduced in Linux 4.15, which can guarantee this coherence.
Shared file mappings with this flag provide the guarantee that
while some memory is writably mapped in the address space of
the process, it will be visible in the same file at the same
offset even after the system crashes or is rebooted.
references:
http://man7.org/linux/man-pages/man2/mmap.2.html (search for MAP_SYNC in the page)
https://lwn.net/Articles/731706/
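A minimal sketch of opting into that guarantee (requires Linux 4.15+ and a DAX-capable filesystem; MAP_SYNC must be combined with MAP_SHARED_VALIDATE, and on older glibc the flag definitions may only be available via <linux/mman.h>):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("state.mm", O_RDWR);          /* a file on a DAX mount */
    if (fd < 0) { perror("open"); return 1; }

    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_SYNC)");               /* fails on non-DAX filesystems */
        close(fd);
        return 1;
    }

    /* stores through p now carry the persistence guarantee quoted above */
    munmap(p, 4096);
    close(fd);
    return 0;
}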
I decided to be less lazy and answer the question of whether the data is written to disk definitively by writing some code. The answer is that it will be written.
Here is a program that kills itself abruptly after writing some data to an mmap'd file:
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>

typedef struct {
    char data[100];
    uint16_t count;
} state_data;

const char *test_data = "test";

int main(int argc, const char *argv[]) {
    int fd = open("test.mm", O_RDWR|O_CREAT|O_TRUNC, (mode_t)0700);
    if (fd < 0) {
        perror("Unable to open file 'test.mm'");
        exit(1);
    }

    size_t data_length = sizeof(state_data);
    if (ftruncate(fd, data_length) < 0) {
        perror("Unable to truncate file 'test.mm'");
        exit(1);
    }

    state_data *data = (state_data *)mmap(NULL, data_length, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, fd, 0);
    if (MAP_FAILED == data) {
        perror("Unable to mmap file 'test.mm'");
        close(fd);
        exit(1);
    }

    memset(data, 0, data_length);

    for (data->count = 0; data->count < 5; ++data->count) {
        data->data[data->count] = test_data[data->count];
    }

    /* die abruptly, with no chance to flush or clean up */
    kill(getpid(), 9);
}
Here is a program that validates the resulting file after the previous program is dead:
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <assert.h>

typedef struct {
    char data[100];
    uint16_t count;
} state_data;

const char *test_data = "test";

int main(int argc, const char *argv[]) {
    int fd = open("test.mm", O_RDONLY);
    if (fd < 0) {
        perror("Unable to open file 'test.mm'");
        exit(1);
    }

    size_t data_length = sizeof(state_data);

    state_data *data = (state_data *)mmap(NULL, data_length, PROT_READ, MAP_SHARED|MAP_POPULATE, fd, 0);
    if (MAP_FAILED == data) {
        perror("Unable to mmap file 'test.mm'");
        close(fd);
        exit(1);
    }

    assert(5 == data->count);

    unsigned index;
    for (index = 0; index < 4; ++index) {
        assert(test_data[index] == data->data[index]);
    }

    printf("Validated\n");
}
I found something that adds to my confusion:
munmap does not affect the object that was mapped; that is, the call to munmap
does not cause the contents of the mapped region to be written
to the disk file. The updating of the disk file for a MAP_SHARED
region happens automatically by the kernel's virtual memory algorithm
as we store into the memory-mapped region.
This is excerpted from Advanced Programming in the UNIX® Environment.
from the linux manpage:
MAP_SHARED  Share this mapping with all other processes that map this
            object. Storing to the region is equivalent to writing to the
            file. The file may not actually be updated until msync(2) or
            munmap(2) are called.
The two seem contradictory. Is APUE wrong?
I did not find a very precise answer to your question, so I decided to add one more:
Firstly, about losing data: both the write mechanism and the mmap/memcpy mechanism write to the page cache and are synced to the underlying storage in the background by the OS, based on its writeback settings/algorithm. For example, Linux has vm.dirty_writeback_centisecs, which determines when pages are considered "old" enough to be flushed to disk. Now, even if your process dies after the write call has succeeded, the data is not lost, since it is already present in kernel pages that will eventually be written to storage. The only case in which you would lose data is if the OS itself crashes (kernel panic, power off, etc.). The way to make absolutely sure your data has reached storage is to call fsync, or msync for mmapped regions, as the case may be.
About the system load concern: yes, calling msync/fsync for each request is going to slow your throughput drastically, so do that only if you have to. Remember you are really protecting against losing data on OS crashes, which I would assume is rare and probably something most could live with. One common optimization is to issue the sync at regular intervals, say every second, to get a good balance.
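For example, the periodic flush could be as small as a helper thread that calls msync with MS_ASYNC (which only schedules the writeback; MS_SYNC would block until it completes). Here, state and state_len stand in for the MAP_SHARED region set up elsewhere:

#include <pthread.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/mman.h>

/* assumed to point at the MAP_SHARED region set up elsewhere */
extern void  *state;
extern size_t state_len;

static void *flusher(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1);
        msync(state, state_len, MS_ASYNC);   /* schedule writeback of dirty pages */
    }
    return NULL;
}

/* start it with: pthread_t t; pthread_create(&t, NULL, flusher, NULL); */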
Either the Linux manpage information is incorrect or Linux is horribly non-conformant. msync is not supposed to have anything to do with whether the changes are committed to the logical state of the file, or whether other processes using mmap or read to access the file see the changes; it's purely an analogue of fsync and should be treated as a no-op except for the purposes of ensuring data integrity in the event of power failure or other hardware-level failure.
According to the manpage,
The file may not actually be
updated until msync(2) or munmap() is called.
So you will need to make sure you call munmap() prior to exiting at the very least.

Reading / writing from using I2C on Linux

I'm trying to read/write to a FM24CL64-GTR FRAM chip that is connected over an I2C bus at address 0b1010011 (0x53).
When I try to write 3 bytes (2 data-address bytes + 1 data byte), I get a kernel message ([12406.360000] i2c-adapter i2c-0: sendbytes: NAK bailout.), and the write returns != 3. See the code below:
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int file;
char filename[20];
int addr = 0x53;            // 0b1010011 /* The I2C address */
uint16_t dataAddr = 0x1234;
uint8_t val = 0x5c;
uint8_t buf[3];

sprintf(filename, "/dev/i2c-%d", 0);
if ((file = open(filename, O_RDWR)) < 0)
    exit(1);

if (ioctl(file, I2C_SLAVE, addr) < 0)
    exit(2);

buf[0] = dataAddr >> 8;
buf[1] = dataAddr & 0xff;
buf[2] = val;
if (write(file, buf, 3) != 3)
    exit(3);
...
However when I write 2 bytes, then write another byte, I get no kernel error, but when trying to read from the FRAM, I always get back 0. Here is the code to read from the FRAM:
uint8_t val;

if ((file = open(filename, O_RDWR)) < 0)
    exit(1);

if (ioctl(file, I2C_SLAVE, addr) < 0)
    exit(2);

if (write(file, &dataAddr, 2) != 2)
    exit(3);

if (read(file, &val, 1) != 1)
    exit(3);
None of the functions return an error value, and I have also tried it with:
#include <linux/i2c.h>

struct i2c_rdwr_ioctl_data work_queue;
struct i2c_msg msg[2];
uint8_t ret;

work_queue.nmsgs = 2;
work_queue.msgs = msg;

work_queue.msgs[0].addr = addr;
work_queue.msgs[0].len = 2;
work_queue.msgs[0].flags = 0;
work_queue.msgs[0].buf = (uint8_t *)&dataAddr;

work_queue.msgs[1].addr = addr;
work_queue.msgs[1].len = 1;
work_queue.msgs[1].flags = I2C_M_RD;
work_queue.msgs[1].buf = &ret;

if (ioctl(file, I2C_RDWR, &work_queue) < 0)
    exit(3);
Which also succeeds, but always returns 0. Does this indicate a hardware issue, or am I doing something wrong?
Are there any FRAM drivers for FM24CL64-GTR over I2C on Linux, and what would the API be? Any link would be helpful.
I do not have experience with that particular device, but in our experience many I2C devices have "quirks" that require a work-around, typically above the driver level.
We use linux (CELinux) and an I2C device driver with Linux as well. But our application code also has a non-trivial I2C module that contains all the work-around intelligence for dealing with all the various devices we have experience with.
Also, when dealing with I2C issues, I often find that I need to re-acquaint myself with the source spec:
http://www.nxp.com/acrobat_download/literature/9398/39340011.pdf
as well as the usage of a decent oscilloscope.
Good luck,
The above link is dead; here are some other links:
http://www.nxp.com/documents/user_manual/UM10204.pdf
and of course wikipedia:
http://en.wikipedia.org/wiki/I%C2%B2C
The NAK was a big hint: the WriteProtect pin was externally pulled up and had to be driven to ground. After that, a single write of the address followed by the data bytes succeeds (first code segment).
For reading, the address can be written out first (using write()), and then sequential data can be read starting from that address.
Note that the method using struct i2c_rdwr_ioctl_data and struct i2c_msg (that is, the last code fragment you've given) is more efficient than the other ones, since with that method you use the repeated-start feature of I2C.
This means you avoid a STA-WRITE-STO -> STA-READ-<data>...-STO sequence, because your communication becomes STA-WRITE-RS-READ-<data>...-STO (RS = repeated start). So it saves you a redundant STO-STA transition.
Not that it differs a lot in time, but if it's not needed, why lose time on it...
Just my 2 ct.
Best rgds,
You had some mistakes!
The address of the IC is 0xAx in hex: x can be anything, but the 4 upper bits should be A = 1010!
