Matching the ls command output in C

I'm trying to code the ls command. I have the following function that prints each file name:
int ft_list(const char *filename)
{
    DIR *dirp;
    struct dirent *dir;

    if (!(dirp = opendir(filename)))
        return (-1);
    while ((dir = readdir(dirp)))
    {
        if (dir->d_name[0] != '.')
            ft_putendl(dir->d_name);
    }
    closedir(dirp);
    return (0);
}
The ls command prints the files organized into columns to fit the screen width. I have read about it and I think it uses the ioctl standard library function, but I can't find any details. How exactly can I do this?

In order to arrange the files in columns, you need to figure out the current width of the terminal window. On many Unix-like systems (including Linux and OS X), you can indeed use ioctl to get that information, using the TIOCGWINSZ request.
This is precisely what ls does (on systems which support that ioctl request), once it has determined that standard output is a terminal (unless single-column format is forced with the -1 flag). If it cannot figure out the terminal width, it uses 80.
Here's a quick example of how to get the information. (On Linux systems, you can probably find the details by typing man tty_ioctl).
For simplicity, the following code assumes that stdout is file descriptor 1. In retrospect, using STDOUT_FILENO would have been clearer. If you wanted to check an arbitrary open file, you would need to use fileno to get the descriptor number for the FILE*.
/* This must come before any include, in order to see the
 * declarations of Posix functions which are not in standard C
 */
#define _XOPEN_SOURCE 700

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>

/* If stdout is a terminal and it is possible to find out how many
 * columns its window has, return that number. Otherwise, return -1
 */
int window_get_columns(void) {
    struct winsize sizes;
    int cols = -1;
    if (isatty(1)) {
        /* Only try this if stdout is a terminal */
        int status = ioctl(1, TIOCGWINSZ, &sizes);
        if (status == 0) {
            cols = sizes.ws_col;
        }
    }
    return cols;
}

/* Example usage */

/* Print a line consisting of 'len' copies of the character 'ch' */
void print_row(int len, int ch) {
    for (int i = 0; i < len; ++i) putchar(ch);
    putchar('\n');
}

int main(int argc, char* argv[]) {
    /* Print the first argument centred in the terminal window,
     * if standard output is a terminal
     */
    if (argc <= 1) return 1; /* No argument, nothing to do */
    int width = window_get_columns();
    /* If we can't figure out the width of the screen, just use the
     * width of the string
     */
    int arglen = strlen(argv[1]);
    if (width < 0) width = arglen;
    int indent = (width - arglen) / 2;
    print_row(width - 1, '-');
    printf("%*s\n", indent + arglen, argv[1]);
    print_row(width - 1, '-');
    return 0;
}
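Once you have the width, laying the names out in columns is mostly arithmetic: find the longest name, work out how many columns of that width fit, and print the entries row by row. The following is a minimal sketch rather than what ls actually does (GNU ls sizes each column individually); it assumes you have already collected the names into an array:
#include <stdio.h>
#include <string.h>

/* Print 'count' names in as many columns as fit in 'width' characters.
 * Every column is as wide as the longest name plus two spaces of padding.
 */
void print_columns(char *names[], int count, int width)
{
    int longest = 0;
    for (int i = 0; i < count; i++) {
        int len = strlen(names[i]);
        if (len > longest)
            longest = len;
    }
    int colwidth = longest + 2;            /* padding between columns */
    int cols = width / colwidth;
    if (cols < 1)
        cols = 1;
    int rows = (count + cols - 1) / cols;  /* round up */

    /* ls fills columns top to bottom, so entry (row, col) is names[col*rows + row] */
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            int idx = col * rows + row;
            if (idx < count)
                printf("%-*s", colwidth, names[idx]);
        }
        putchar('\n');
    }
}
Called with the width returned by window_get_columns() (falling back to 80 when it returns -1), this reproduces the down-then-across ordering of the default multi-column ls output.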
Since writing the above sample, I tracked down the source of the GNU implementation of ls; its (somewhat more careful) invocation of ioctl can be seen here.

Related

Can not read from a pipe, and another stdin issue

So, I asked here just a while ago, but half of that question was just me being dumb. And I still have issues. I hope that this will be clearer than the question before.
I'm writing POSIX cat, and I nearly have it working, but I have a couple of issues:
My cat cannot read from a pipe, and I really do not know why (redirecting with < works fine).
I cannot figure out how to make it continuously read stdin without problems. I had a version that worked "fine", but it would cause a stack overflow. The other version wouldn't stop reading from stdin if there was only stdin, i.e. my-cat < file would read from stdin until it was terminated, which it shouldn't; but it does have to read from stdin and wait for termination if no files are supplied.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char *argv[])
{
    char opt;
    while ((opt = getopt(argc, argv, "u")) != EOF) {
        switch (opt) {
        case 'u':
            /* Make the output un-buffered */
            setbuf(stdout, NULL);
            break;
        default:
            break;
        }
    }
    argc -= optind;
    argv += optind;

    int i = 0, fildes, fs = 0;
    do {
        /* Check for operands, if none or operand = "-". Read from stdin */
        if (argc == 0 || !strcmp(argv[i], "-")) {
            fildes = STDIN_FILENO;
        } else {
            fildes = open(argv[i], O_RDONLY);
        }
        /* Check for directories */
        struct stat fb;
        if (!fstat(fildes, &fb) && S_ISDIR(fb.st_mode)) {
            fprintf(stderr, "pcat: %s: Is a directory\n", argv[i]);
            i++;
            continue;
        }
        /* Get file size */
        fs = fb.st_size;
        /* If bytes are read, write them to stdout */
        char *buf = malloc(fs * sizeof(char));
        while ((read(fildes, buf, fs)) > 0)
            write(STDOUT_FILENO, buf, fs);
        free(buf);
        /* Close file if it's not stdin */
        if (fildes != STDIN_FILENO)
            close(fildes);
        i++;
    } while (i < argc);

    return 0;
}
Pipes don't have a size, and neither do terminals. The contents of the st_size field are undefined for such files. (On my system it seems to always contain 0, but I don't think there is any cross-platform guarantee of that.)
So your plan of reading the entire file in one go and writing it all out again is not workable for non-regular files, and is risky even for regular files (read is not guaranteed to return the full number of bytes requested). It is also an unnecessary memory hog if the file is large.
A better strategy is to read into a fixed-size buffer, and write out only the number of bytes you successfully read. You repeat this until end-of-file is reached, which is indicated by read() returning 0. This is how you solve your second problem.
On a similar note, write() is not guaranteed to write out the full number of bytes you asked it to, so you need to check its return value, and if it was short, try again to write out the remaining bytes.
Here's an example:
#define BUFSIZE 65536 // arbitrary choice, can be tuned for performance

ssize_t nread;
char buf[BUFSIZE]; // or char *buf = malloc(BUFSIZE);

while ((nread = read(filedes, buf, BUFSIZE)) > 0) {
    ssize_t written = 0;
    while (written < nread) {
        ssize_t ret = write(STDOUT_FILENO, buf + written, nread - written);
        if (ret <= 0) {
            // handle error (the braces matter: without them the next line
            // would become the body of this if)
        }
        written += ret;
    }
}
if (nread < 0) {
    // handle error
}
As a final comment, your program lacks error checking in general; e.g. if the file cannot be opened, it will proceed anyway with filedes == -1. It is important to check the return value of every system call you issue, and handle errors accordingly. This would be essential for a program to be used in real life, and even for toy programs created just as an exercise, it will be very helpful in debugging them. (Error checking would probably have given you some clues in figuring out what was wrong with this program, for instance.)
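For instance, a minimal check on the open() call inside your loop could look like this; the message format is just one choice, mirroring your existing "Is a directory" diagnostic, and it needs #include <errno.h> in addition to what you already have:
fildes = open(argv[i], O_RDONLY);
if (fildes == -1) {
    fprintf(stderr, "pcat: %s: %s\n", argv[i], strerror(errno));
    i++;
    continue;   /* skip this operand instead of reading from fd -1 */
}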
Your cat (you can call it my-cat, but I preferred to call it felix, just permit me the pun) should use stdio throughout, to get the benefit of the buffering done by the stdio package. Below is a simplified version of cat that uses the stdio package exclusively (almost exactly as it appears in K&R). It is perfectly efficient, and you'll see that its structure is close to yours, but the data-copying loop is simpler (as in the K&R book) and the argument processing is tidied up (yours is a bit messy):
felix.c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <getopt.h>

#define ERR(_code, _fmt, ...) do {              \
        fprintf(stderr, "%s: " _fmt, progname,  \
                ##__VA_ARGS__);                 \
        if (_code) exit(_code);                 \
    } while (0)

char *progname = "cat";

void process(FILE *f);

int main(int argc, char **argv)
{
    int opt;
    while ((opt = getopt(argc, argv, "u")) != EOF) {
        switch (opt) {
        case 'u': setbuf(stdout, NULL); break;
        }
    }

    /* for the case it has been renamed, calculate the basename
     * of argv[0] (progname is used in the macro ERR above) */
    progname = strrchr(argv[0], '/');
    progname = progname
             ? progname + 1
             : argv[0];

    /* shift options */
    argc -= optind;
    argv += optind;

    if (argc) {
        int i;
        for (i = 0; i < argc; i++) {
            FILE *f = fopen(argv[i], "r");
            if (!f) {
                ERR(EXIT_FAILURE,
                    "%s: %s (errno = %d)\n",
                    argv[i], strerror(errno), errno);
            }
            process(f);
            fclose(f);
        }
    } else {
        process(stdin);
    }
    exit(EXIT_SUCCESS);
}

/* You don't need to complicate things here: fgetc and putchar use the buffering
 * set up in main (no output buffering if you do setbuf(stdout, NULL), and input
 * buffering all the time). The buffer size is best left for stdio to calculate,
 * as it queries the filesystem for the best input/output size and creates
 * buffers of that size, and the processing is simple with a loop like the one
 * below. You'll get no appreciable difference between this and any other
 * input/output approach; you can believe me, I've tested it. */
void process(FILE *f)
{
    int c;
    while ((c = fgetc(f)) != EOF) {
        putchar(c);
    }
}
As you can see, nothing special has been done to support redirection, because redirection is not handled inside a program; it is set up by the program that calls it (in this case, the shell). When your program starts, it receives three already-open file descriptors. They are either the ones the shell itself is using, or the ones the shell has just put in place of descriptors 0, 1 and 2 before starting your program. So your program has nothing to do to cope with redirection; everything is done (in this case) by the shell. That is why redirection works with your program even though you wrote no code for it. You only have to set up redirection yourself if you are going to start another program with its input, output or standard error redirected somewhere else (somewhere other than the standard input, output or error you received from your parent process), and that is not the case for my-cat.
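To make that concrete, here is a rough sketch of what a calling program (such as the shell) does to run my-cat < file; the path and program name are placeholders, and error handling is omitted:
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Run "./my-cat" with its standard input redirected from 'path',
 * the way a shell would for: ./my-cat < path
 */
void run_with_stdin_from(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {                      /* child */
        int fd = open(path, O_RDONLY);
        dup2(fd, STDIN_FILENO);          /* descriptor 0 now refers to the file */
        close(fd);
        execl("./my-cat", "my-cat", (char *)NULL);
        _exit(127);                      /* only reached if exec fails */
    }
    waitpid(pid, NULL, 0);               /* parent waits for the child */
}
By the time my-cat runs, its descriptor 0 already refers to the file; the program itself never knows or cares.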

Why can I not mmap /proc/self/maps?

To be specific: why can I do this:
FILE *fp = fopen("/proc/self/maps", "r");
char buf[513]; buf[512] = '\0';
while (fgets(buf, 512, fp) != NULL) printf("%s", buf);
but not this:
int fd = open("/proc/self/maps", O_RDONLY);
struct stat s;
fstat(fd, &s); // st_size = 0 -> why?
char *file = mmap(0, s.st_size /*or any fixed size*/, PROT_READ, MAP_PRIVATE, fd, 0); // gives EINVAL for st_size (because 0) and ENODEV for any fixed block
write(1, file, s.st_size);
I know that /proc files are not really files, but it seems to have some defined size and content for the FILE* version. Is it secretly generating it on-the-fly for read or something? What am I missing here?
EDIT:
Since I can clearly read() from them, is there any way to find out how many bytes will be available, or am I stuck reading until EOF?
They are created on the fly as you read them. This tutorial, which shows how a proc file can be implemented, may help:
https://devarea.com/linux-kernel-development-creating-a-proc-file-and-interfacing-with-user-space/
tl;dr: you give it a name and read and write handlers, that's it. Proc files are meant to be very simple to implement from the kernel dev's point of view. They do not behave like full-featured files though.
As for the bonus question, there doesn't seem to be a way to indicate the size of the file, only EOF on reading.
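For example, a minimal read-until-EOF loop over /proc/self/maps with a fixed-size buffer might look like this (the buffer size is arbitrary, and short writes are not retried here):
#include <fcntl.h>
#include <unistd.h>

/* Copy /proc/self/maps to stdout by reading until read() returns 0 (EOF). */
int dump_maps(void)
{
    int fd = open("/proc/self/maps", O_RDONLY);
    if (fd == -1)
        return -1;

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);   /* a robust version would retry short writes */

    close(fd);
    return (n < 0) ? -1 : 0;
}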
proc "files" are not really files, they are just streams that can be read/written from, but they contain no pyhsical data in memory you can map to.
https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
As already explained by others, /proc and /sys are pseudo-filesystems, consisting of data provided by the kernel, that does not really exist until it is read – the kernel generates the data then and there. Since the size varies, and really is unknown until the file is opened for reading, it is not provided to userspace at all.
It is not "unfortunate", however. The same situation occurs very often, for example with character devices (under /dev), pipes, FIFOs (named pipes), and sockets.
We can trivially write a helper function to read pseudofiles completely, using dynamic memory management. For example:
// SPDX-License-Identifier: CC0-1.0
//
#define _POSIX_C_SOURCE 200809L
#define _ATFILE_SOURCE
#define _GNU_SOURCE

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

/* For example main() */
#include <stdio.h>

/* Return a directory handle for a specific relative directory.
   For absolute paths and paths relative to current directory, use dirfd==AT_FDCWD.
*/
int at_dir(const int dirfd, const char *dirpath)
{
    if (dirfd == -1 || !dirpath || !*dirpath) {
        errno = EINVAL;
        return -1;
    }
    return openat(dirfd, dirpath, O_DIRECTORY | O_PATH | O_CLOEXEC);
}

/* Read the (pseudofile) contents to a dynamically allocated buffer.
   For absolute paths and paths relative to current directory, use dirfd==AT_FDCWD.
   You can safely initialize *dataptr=NULL, *sizeptr=0 for dynamic allocation,
   or reuse the buffer from a previous call or e.g. getline().
   Returns 0 with errno set if an error occurs. If the file is empty, errno==0.
   In all cases, remember to free (*dataptr) after it is no longer needed.
*/
size_t read_pseudofile_at(const int dirfd, const char *path, char **dataptr, size_t *sizeptr)
{
    char *data;
    size_t size, have = 0;
    ssize_t n;
    int desc;

    if (!path || !*path || !dataptr || !sizeptr) {
        errno = EINVAL;
        return 0;
    }

    /* Existing dynamic buffer, or a new buffer? */
    size = *sizeptr;
    if (!size)
        *dataptr = NULL;
    data = *dataptr;

    /* Open pseudofile. */
    desc = openat(dirfd, path, O_RDONLY | O_CLOEXEC | O_NOCTTY);
    if (desc == -1) {
        /* errno set by openat(). */
        return 0;
    }

    while (1) {

        /* Need to resize buffer? */
        if (have >= size) {
            /* For pseudofiles, linear size growth makes most sense. */
            size = (have | 4095) + 4097 - 32;
            data = realloc(data, size);
            if (!data) {
                close(desc);
                errno = ENOMEM;
                return 0;
            }
            *dataptr = data;
            *sizeptr = size;
        }

        n = read(desc, data + have, size - have);
        if (n > 0) {
            have += n;
        } else if (n == 0) {
            break;
        } else if (n == -1) {
            const int saved_errno = errno;
            close(desc);
            errno = saved_errno;
            return 0;
        } else {
            close(desc);
            errno = EIO;
            return 0;
        }
    }

    if (close(desc) == -1) {
        /* errno set by close(). */
        return 0;
    }

    /* Append zeroes - we know size > have at this point. */
    if (have + 32 > size)
        memset(data + have, 0, 32);
    else
        memset(data + have, 0, size - have);

    errno = 0;
    return have;
}

int main(void)
{
    char *data = NULL;
    size_t size = 0;
    size_t len;
    int selfdir;

    selfdir = at_dir(AT_FDCWD, "/proc/self/");
    if (selfdir == -1) {
        fprintf(stderr, "/proc/self/ is not available: %s.\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    len = read_pseudofile_at(selfdir, "status", &data, &size);
    if (errno) {
        fprintf(stderr, "/proc/self/status: %s.\n", strerror(errno));
        exit(EXIT_FAILURE);
    }
    printf("/proc/self/status: %zu bytes\n%s\n", len, data);

    len = read_pseudofile_at(selfdir, "maps", &data, &size);
    if (errno) {
        fprintf(stderr, "/proc/self/maps: %s.\n", strerror(errno));
        exit(EXIT_FAILURE);
    }
    printf("/proc/self/maps: %zu bytes\n%s\n", len, data);

    close(selfdir);

    free(data); data = NULL; size = 0;

    return EXIT_SUCCESS;
}
The above example program opens a directory descriptor ("atfile handle") to /proc/self. (This way you do not need to concatenate strings to construct paths.)
It then reads the contents of /proc/self/status. If successful, it displays its size (in bytes) and its contents.
Next, it reads the contents of /proc/self/maps, reusing the previous buffer. If successful, it displays its size and contents as well.
Finally, the directory descriptor is closed as it is no longer needed, and the dynamically allocated buffer released.
Note that it is perfectly safe to do free(NULL), and also to discard the dynamic buffer (free(data); data=NULL; size=0;) between the read_pseudofile_at() calls.
Because pseudofiles are typically small, read_pseudofile_at() uses a linear buffer growth policy. If there is no previous buffer, it starts with 8160 bytes, and grows the buffer by 4096 bytes at a time until it is sufficiently large. Feel free to replace this with whatever growth policy you prefer; this one is just an example, but it works quite well in practice without wasting much memory.

<io.h> _filelength and _filelengthi64 always return 0

I am using low-level I/O functions to fetch the size of a file in bytes and write it to stdout. I am on Windows 7 64-bit, using Visual Studio 2017 in x64 debugging mode. The functions _filelength and _filelengthi64 are exclusive to the Windows operating system; however, when I use them, they both return 0 for any file I open. Here is the full code, but the issue should only lie with _sopen_s() or _filelengthi64():
Header
#pragma once
// Headers
#include <io.h>
#include <string.h>
#include <sys\stat.h>
#include <share.h>
#include <fcntl.h>
#include <errno.h>
// Constants
#define stdout 1
#define stderr 2
// Macros
#define werror_exit { werror(); return 1; }
#define werr_exit(s) { _write(stderr, (s), (unsigned int)strlen((s))); return 1; }
// Declarations
extern void werror();
extern void wnum(__int64 num);
Source
#include "readbinaryfile.h"
int main(int argc, char **argv)
{
int fhandle;
__int64 fsize;
// open binary file as read only. deny sharing write permissions. allow write permissions if new file
if (_sopen_s(&fhandle, argv[1], _O_RDONLY | _O_BINARY, _SH_DENYWR, _S_IWRITE) == -1)
werror_exit
else if (fhandle == -1)
werr_exit("\nERROR: file does not exist...\n")
if (fsize = _filelengthi64(fhandle) == -1)
{
if (_close(fhandle) == -1)
werror_exit
werror_exit
}
if (_close(fhandle) == -1)
werror_exit
// write the file size to stdout
wnum(fsize);
return 0;
}
// fetch the string representation of the errno global variable and write it to stderr
void werror()
{
char bufstr[95];
size_t buflen = 95; // MSDN suggested number for errno string length
strerror_s(bufstr, buflen, errno);
_write(stderr, bufstr, (unsigned int)buflen);
_set_errno(0);
}
// recursively write the ascii value of each digit in a number to stdout
void wnum(__int64 num)
{
if (num / 10 == 0)
{
_write(stdout, &(num += 48), 1);
return;
}
wnum(num / 10);
_write(stdout, &((num %= 10) += 48), 1);
}
I have tried passing many different file paths to argv[1], yet they all still show an fsize of 0. In all of those cases, fhandle was assigned a value of 3 after _sopen_s(), which indicates the files were opened without error. I have verified the operation of wnum() and werror(). I appreciate the help!
_filelengthi64(fhandle) doesn't return 0. The expression _filelengthi64(fhandle) == -1, however, evaluates to 0 (assuming the call succeeds), and that 0 is what gets assigned to fsize. You are ignoring C operator precedence: == binds more tightly than =, so the comparison happens before the assignment. You will have to use parentheses to get the order you intended:
if ((fsize = _filelengthi64(fhandle)) == -1)
{
...
If you want to reduce the amount of mental energy required to write (and especially read) code, it is generally a good idea to isolate normal code logic from error handling, e.g.:
// Normal code flow
fsize = _filelengthi64(fhandle);
// Error handling code
if (fsize == -1)
{
...
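Putting both points together, the relevant part of main() could look like the following sketch. It keeps the original werror_exit macro; note, as an aside, that _sopen_s reports failure through a non-zero return value (an errno code) rather than -1:
int fhandle;
__int64 fsize;

// open binary file as read only; _sopen_s returns 0 on success, an error code otherwise
if (_sopen_s(&fhandle, argv[1], _O_RDONLY | _O_BINARY, _SH_DENYWR, _S_IWRITE) != 0)
    werror_exit

// normal code flow: fetch the length first...
fsize = _filelengthi64(fhandle);

// ...then handle the error case separately
if (fsize == -1)
{
    if (_close(fhandle) == -1)
        werror_exit
    werror_exit
}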

How does ls sort filenames?

I'm trying to write a function that mimics the output of the ls command in Unix. I was originally trying to do this using scandir and alphasort, and this did print the files in the directory, and it did sort them, but for some reason the sorted list does not match the order of filenames that ls gives.
For example, if I have a directory that contains file.c, FILE.c, and ls.c.
ls displays them in the order: file.c FILE.c ls.c
But when I sort it using alphasort/scandir, it sorts them as: FILE.c file.c ls.c
How does ls sort the files in the directory such that it gives such a differently ordered result?
To emulate default ls -1 behaviour, make your program locale-aware by calling
setlocale(LC_ALL, "");
near the beginning of your main(), and use
count = scandir(dir, &array, my_filter, alphasort);
where my_filter() is a function that returns 0 for names that begin with a dot ., and 1 for all others. alphasort() is a POSIX function that uses the locale collation order, same order as strcoll().
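If you sort the names yourself instead of letting scandir() do it, the same ordering can be obtained from qsort() with a strcoll()-based comparator. A minimal sketch; the array of name pointers is assumed to come from your own readdir() loop:
#include <string.h>
#include <stdlib.h>

/* Compare two char* elements using the current locale's collation order,
 * which is the same order alphasort() uses. */
static int by_collation(const void *a, const void *b)
{
    const char *const *na = a;
    const char *const *nb = b;
    return strcoll(*na, *nb);
}

/* Usage: qsort(names, count, sizeof names[0], by_collation); */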
The basic implementation is something along the lines of
#define _POSIX_C_SOURCE 200809L
#define _ATFILE_SOURCE

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <locale.h>
#include <string.h>
#include <dirent.h>
#include <stdio.h>
#include <errno.h>

static void my_print(const char *name, const struct stat *info)
{
    /* TODO: Better output; use info too, for 'ls -l' -style output? */
    printf("%s\n", name);
}

static int my_filter(const struct dirent *ent)
{
    /* Skip entries that begin with '.' */
    if (ent->d_name[0] == '.')
        return 0;
    /* Include all others */
    return 1;
}

static int my_ls(const char *dir)
{
    struct dirent **list = NULL;
    struct stat info;
    DIR *dirhandle;
    int size, i, fd;

    size = scandir(dir, &list, my_filter, alphasort);
    if (size == -1) {
        const int cause = errno;

        /* Is dir not a directory, but a single entry perhaps? */
        if (cause == ENOTDIR && lstat(dir, &info) == 0) {
            my_print(dir, &info);
            return 0;
        }

        /* Print out the original error and fail. */
        fprintf(stderr, "%s: %s.\n", dir, strerror(cause));
        return -1;
    }

    /* We need the directory handle for fstatat(). */
    dirhandle = opendir(dir);
    if (!dirhandle) {
        /* Print a warning, but continue. */
        fprintf(stderr, "%s: %s\n", dir, strerror(errno));
        fd = AT_FDCWD;
    } else {
        fd = dirfd(dirhandle);
    }

    for (i = 0; i < size; i++) {
        struct dirent *ent = list[i];

        /* Try to get information on ent. If fails, clear the structure. */
        if (fstatat(fd, ent->d_name, &info, AT_SYMLINK_NOFOLLOW) == -1) {
            /* Print a warning about it. */
            fprintf(stderr, "%s: %s.\n", ent->d_name, strerror(errno));
            memset(&info, 0, sizeof info);
        }

        /* Describe 'ent'. */
        my_print(ent->d_name, &info);
    }

    /* Release the directory handle. */
    if (dirhandle)
        closedir(dirhandle);

    /* Discard list. */
    for (i = 0; i < size; i++)
        free(list[i]);
    free(list);

    return 0;
}

int main(int argc, char *argv[])
{
    int arg;

    setlocale(LC_ALL, "");

    if (argc > 1) {
        for (arg = 1; arg < argc; arg++) {
            if (my_ls(argv[arg])) {
                return EXIT_FAILURE;
            }
        }
    } else {
        if (my_ls(".")) {
            return EXIT_FAILURE;
        }
    }
    return EXIT_SUCCESS;
}
Note that I deliberately made this more complex than strictly needed for your purposes, because I did not want you to just copy and paste the code. It will be easier for you to compile, run, and investigate this program, and then port the needed changes -- possibly just the one setlocale(LC_ALL, ""); line! -- to your own program, than to try to explain to your teacher/lecturer/TA why the code looks like it was copied verbatim from somewhere else.
The above code works even for files specified on the command line (the cause == ENOTDIR part). It also uses a single function, my_print(const char *name, const struct stat *info) to print each directory entry; and to do that, it does call stat for each entry.
Instead of constructing a path to the directory entry and calling lstat(), my_ls() opens a directory handle, and uses fstatat(descriptor, name, struct stat *, AT_SYMLINK_NOFOLLOW) to gather the information in basically the same manner as lstat() would, but with name being a relative path starting at the directory specified by descriptor (dirfd(handle), if handle is an open DIR *).
It is true that calling one of the stat functions for each directory entry is "slow" (especially if you do /bin/ls -1 style output). However, the output of ls is intended for human consumption, and is very often piped through a pager such as more or less to let the human view it at leisure. This is why I personally do not think the "extra" stat() call (even when not really needed) is a problem here. Most human users I know of tend to use ls -l or (my favourite) ls -laF --color=auto anyway. (auto meaning ANSI colors are used only if standard output is a terminal, i.e. when isatty(fileno(stdout)) == 1.)
In other words, now that you have the ls -1 order, I would suggest you modify the output to be similar to ls -l (dash ell, not dash one). You only need to modify my_print() for that.
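As a rough sketch of one possible direction (not what ls actually prints: the permission string, link count, owner and group are left out), my_print() could become something like this:
#include <stdio.h>
#include <time.h>
#include <sys/stat.h>

/* Print one entry roughly in 'ls -l' style: type, size, mtime, name. */
static void my_print(const char *name, const struct stat *info)
{
    char type = S_ISDIR(info->st_mode) ? 'd' :
                S_ISLNK(info->st_mode) ? 'l' : '-';

    char when[32] = "?";
    struct tm tm;
    if (localtime_r(&info->st_mtime, &tm))
        strftime(when, sizeof when, "%b %e %H:%M", &tm);

    printf("%c %10lld %s %s\n", type, (long long)info->st_size, when, name);
}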
In alphanumeric (dictionary) order.
That changes with language, of course. Try:
$ LANG=C ls -1
FILE.c
file.c
ls.c
And:
$ LANG=en_US.utf8 ls -1
file.c
FILE.c
ls.c
That is related to the "collating order". Not a simple issue by any measure.
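You can see the same effect from C by comparing strcmp() with strcoll(); a small demonstration follows (the second line requires the en_US.utf8 locale to be installed, and the exact return values are implementation-defined, only their sign matters):
#include <stdio.h>
#include <string.h>
#include <locale.h>

int main(void)
{
    /* "C" locale: plain byte-value comparison, so 'F' (0x46) sorts before 'f' (0x66). */
    printf("strcmp(\"FILE.c\", \"file.c\")  = %d\n", strcmp("FILE.c", "file.c"));

    /* Natural-language locale: collation largely ignores case, and here
     * "file.c" ends up before "FILE.c", matching the ls output above. */
    if (setlocale(LC_COLLATE, "en_US.utf8"))
        printf("strcoll(\"FILE.c\", \"file.c\") = %d\n", strcoll("FILE.c", "file.c"));
    else
        printf("en_US.utf8 locale not installed\n");

    return 0;
}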

Reading bytes from /dev/random fails

I have a piece of code written in POSIX-compliant C and it doesn't seem to work correctly. The goal is to read from /dev/random, the interface to the Linux/BSD/Darwin kernel's random number generator, and write the bytes read out to a file. I'm not quite sure what I'm overlooking, as I thought I had covered all the bases. Anyway, here it is:
int incinerate(int number, const char * names[]) {
    if (number == 0) {
        // this shouldn't happen, but if it does, print an error message
        fprintf(stderr, "Incinerator: no input files\n");
        return 1;
    }

    // declare some stuff we'll be using
    long long lengthOfFile = 0, bytesRead = 0;
    int myRandomInteger;

    // open the random file block device
    int zeroPoint = open("/dev/random", O_RDONLY);

    // start looping through and nuking files
    for (int i = 1; i < number; i++) {
        int filePoint = open(names[i], O_WRONLY);

        // get the file size
        struct stat st;
        stat(names[i], &st);
        lengthOfFile = st.st_size;
        printf("The size of the file is %llu bytes.\n", lengthOfFile);

        while (lengthOfFile != bytesRead) {
            read(zeroPoint, &myRandomInteger, sizeof myRandomInteger);
            write(filePoint, (const void*) myRandomInteger, sizeof(myRandomInteger));
            bytesRead++;
        }
        close(filePoint);
    }
    return 0;
}
Any ideas? This is being developed on OS X but I see no reason why it shouldn't also work on Linux or FreeBSD.
If it helps, I've included the following headers:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
Instead of
write(filePoint, (const void*) myRandomInteger, sizeof(myRandomInteger));
you surely meant to write
write(filePoint, (const void*) &myRandomInteger, sizeof(myRandomInteger));
didn't you? If you use the random bytes read from /dev/random as a pointer, you're almost certain to encounter a segfault sooner or later.
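A sketch of that inner loop with the corrected call might look like the following; as an aside, it also counts the bytes actually written rather than loop iterations, and bails out on failed reads or writes, which goes beyond the single fix described above:
long long bytesWritten = 0;

while (bytesWritten < lengthOfFile) {
    // read() and write() can fail or return short counts; a fully robust
    // version would handle partial reads and writes as well.
    if (read(zeroPoint, &myRandomInteger, sizeof myRandomInteger) != sizeof myRandomInteger)
        break;
    ssize_t n = write(filePoint, (const void *) &myRandomInteger, sizeof myRandomInteger);
    if (n <= 0)
        break;
    bytesWritten += n;
}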
