Windows equivalent to Linux's readahead syscall? - c

Is there a Windows equivalent to Linux's readahead syscall?
EDIT:
I would like a full function signature if possible, showing the equivalent offset/count parameters (or lower/upper).
E.g.:
The Linux function signature is:
ssize_t readahead(int fd, off64_t offset, size_t count);
and an example of its use is
readahead(file, 100, 500);
Where "file" is a file descriptor previously set by a function like mmap. This call is reading 500 bytes at index 100.
EDIT 2:
Please read this if you are unsure what readahead does: http://linux.die.net/man/2/readahead

Yes. It is FileSystemControl FSCTL_FILE_PREFETCH.
It is used in Windows Vista and above for prefetching both when an application starts and at boot time.
It is also used by the SuperFetch technology that uses heuristics to load applications at approximately the times of day you generally use them.
FSCTL_FILE_PREFETCH itself is not documented on MSDN, but it is easy to figure out the parameter format by examining the DeviceIoControl calls made on app startup. Start an application under the debugger that already has a .pf file in the c:\Windows\Prefetch directory and break on DeviceIoControl (or, if you're using a kernel debugger, break when the NTFS driver receives its first FSCTL_FILE_PREFETCH). Then examine the buffer passed in and compare it with the .pf file and the range actually used later. I did this once out of curiosity but didn't record the details.
In case you are unfamiliar with DeviceIoControl and IRP_MJ_FILESYSTEM_CONTROL, here are some links to look at:
FileSystemControl at the IRP level IRP_MJ_FILESYSTEM_CONTROL
DeviceIoControl, which is used to invoke FileSystemControl IRPs
Structure of IO Control Codes
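If you want to experiment with it, the call itself is just the standard DeviceIoControl pattern. A sketch follows, but note that the control-code value and the input-buffer layout are exactly the undocumented parts you would have to recover with the debugging technique above, so both are placeholders here (SendFilePrefetch is an illustrative name):

#include <windows.h>

/* Placeholder: FSCTL_FILE_PREFETCH is not in the public headers; recover
 * the real control-code value and buffer layout with a debugger first. */
#define FSCTL_FILE_PREFETCH 0   /* <-- placeholder, not the real value */

BOOL SendFilePrefetch(HANDLE hFile, void *rangeBuf, DWORD rangeBufSize)
{
    DWORD bytesReturned = 0;
    /* DeviceIoControl is the documented way to issue an FSCTL on a handle. */
    return DeviceIoControl(hFile, FSCTL_FILE_PREFETCH,
                           rangeBuf, rangeBufSize,   /* undocumented format */
                           NULL, 0, &bytesReturned, NULL);
}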

As of Windows 8, there exists a more or less direct equivalent to madvise(MADV_WILLNEED), which is effectively the same thing (Windows has a unified VM/cache system).
Assuming that you have memory-mapped the file, you can thus use PrefetchVirtualMemory to prefetch it.
This is still slightly more complicated than you'd wish, but not nearly as harsh as DeviceIoControl. Also note that you can easily prefetch several independent, discontinuous ranges.
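A minimal sketch (assuming Windows 8+ with _WIN32_WINNT >= 0x0602, error handling omitted; PrefetchRange is just an illustrative name), mirroring the readahead(file, 100, 500) example from the question:

#include <windows.h>

void PrefetchRange(const wchar_t *path)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    char *view = (char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    /* The equivalent of readahead(file, 100, 500): offset 100, count 500. */
    WIN32_MEMORY_RANGE_ENTRY range;
    range.VirtualAddress = view + 100;
    range.NumberOfBytes  = 500;

    /* A hint, not a guarantee; pass more entries to prefetch several
     * discontiguous ranges in one call. */
    PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
}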

I am not sure if I understand correctly what you said: 'Where "file" is a file descriptor previously set by a function like mmap. This call is reading 500 bytes at index 100.' That sounds suspiciously like seeking to the offset and reading 500 bytes... but you want it to be pre-fetched ahead...
In C code, it would look something like this:
fseek(fp, 100, SEEK_SET);   /* absolute offset 100, so SEEK_SET rather than SEEK_CUR */
fread(data, 1, 500, fp);    /* read 500 bytes into the buffer "data" */
But to prefetch it, I guess you would want to hook up some kind of event using wait handles, and when the event gets raised, the data gets stored somewhere in a buffer...
To be honest, I have not come across such a thing that does pre-fetching of data... but Ray's answer surprised me. Then again, it is only for Vista upwards; if you want to maintain compatibility, that's something to keep in mind... but the links below may be of help...
OK, there was a blog discussing this, and a library written in Delphi; the source code is here, and you can browse the code here. It may not be what you want, but it may help point you in the right direction... Sorry if it's not what you are looking for...
Hope this helps,
Best regards,
Tom.

Related

ALSA API using in Matlab - buffer issue

As part of a lab course, I have to update a simulation of pulse-code modulation (PCM). The simulation was originally written in 1998 using OSS (Open Sound System) and was never updated thereafter. I have rewritten the entire code and ported it to ALSA.
The code itself is a bit long, that's why I haven't put it here but am providing a link.
Now to my issue: Whenever I want to play a vector of random length containing many samples, I start hearing weird periodic random noises. I have a feeling it's due to a buffer underrun. For a better understanding, I have recorded the output.
I believe it has to do something with the parameters I've set. Even though I tried out many cases, I didn't come to a solution.
Just take a look at the period size, buffer size, periods and the sbplay(..) function. PS.: My HW is set such that buffer size = period size * periods
I hope you can help me somehow! Thanks in advance
Code
Output WAV
BTW.: ALSA: buffer underrun on snd_pcm_writei call
didn't help me much...
Efe,
Why don't you try the audioplayer/audiorecorder functions in MATLAB? They use ALSA on Linux. If you want greater control over the latency, try the dsp.AudioPlayer/AudioRecorder System objects.
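If you decide to stay with the raw ALSA API instead, one classic underrun source is requesting exact period/buffer sizes that the hardware cannot honor. A sketch (assumptions: the "default" device, S16_LE, mono) of using the *_near setters and reading back what was actually granted:

#include <alsa/asoundlib.h>

static snd_pcm_t *open_playback(unsigned int rate)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t period = 1024, buffer = 4096;  /* buffer = 4 periods */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return NULL;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 1);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, NULL);
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);
    snd_pcm_hw_params(pcm, hw);

    /* The hardware may have adjusted the values; use what it granted. */
    snd_pcm_hw_params_get_period_size(hw, &period, NULL);
    snd_pcm_hw_params_get_buffer_size(hw, &buffer);
    return pcm;
}

Also, when snd_pcm_writei returns -EPIPE (an underrun), call snd_pcm_prepare on the handle before retrying the write.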
Dinesh

random reading on very large files with fgets seems to bring Windows caching to its limits

I have written a C/C++ program for Windows 7 (64-bit) that works on very large files. In the final step it reads lines from an input file (10GB+) and writes them to an output file. The access to the input file is random; the writing is sequential.
EDIT: Main reason for this approach is to reduce RAM usage.
What I basically do in the reading part is this: (Sorry, very shortened and maybe buggy)
void seekAndGetLine(char* line, size_t lineSize, off64_t pos, FILE* filePointer){
    fseeko64(filePointer, pos, SEEK_SET);  /* SEEK_SET, not ios_base::beg (that's C++ iostreams) */
    fgets(line, (int)lineSize, filePointer);
}
Normally this code is fine, not to say fast, but under some very special conditions it gets very slow. The behaviour doesn't seem to be deterministic, since the performance drops occur on different machines at different parts of the file, or don't occur at all. It even goes so far that the program totally stops reading while there are no disc operations.
Another symptom seems to be the RAM used. My process keeps its RAM steady, but the RAM used by the system sometimes grows very large. After using some RAM tools I found out that the Windows Mapped File grows into several GBs. This behaviour also seems to depend on the hardware, since it occurs on different machines at different parts of the process.
As far as I can tell, this problem doesn't exist on SSDs, so it definitely has something to do with the response time of the HDD.
My guess is that the Windows caching somehow gets "weird". The program is fast as long as the cache does its work. But when caching goes wrong, the behaviour goes either into "stop reading" or "grow cache size", and sometimes even both. Since I'm no expert on the Windows caching algorithms, I would be happy to hear an explanation. Also, is there any way from C/C++ to manipulate/stop/enforce the caching?
Since I've been hunting this problem for a while now, I've already tried some tricks that didn't work out:
filePointer = fopen(fileName, "rbR"); //Just fills the cache till the RAM is full
massive buffering of the reads/writes, to stop the two getting in each other's way
Thanks in advance
Truly random access across a huge file is the worst possible case for any cache algorithm. It may be best to turn off as much caching as possible.
There are multiple levels of caching:
the CRT library (since you're using the f- functions)
the OS and filesystem
probably onboard the drive itself
If you replace your I/O calls via the f- functions in the CRT with the comparable ones in the Windows API (e.g., CreateFile, ReadFile, etc.) you can eliminate the CRT caching, which may be doing more harm than good. You can also warn the OS that you're going to be doing random accesses, which affects its caching strategy. See options like FILE_FLAG_RANDOM_ACCESS and possibly FILE_FLAG_NO_BUFFERING.
You'll need to experiment and measure.
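For example, the Windows-API version of the seek-and-read might look roughly like this (a sketch with error handling trimmed; OpenForRandomReads and ReadAt are illustrative names, and whether FILE_FLAG_RANDOM_ACCESS actually helps here is exactly what you'd have to measure):

#include <windows.h>

/* Bypass the CRT entirely and hint random access to the cache manager. */
HANDLE OpenForRandomReads(const char *name)
{
    return CreateFileA(name, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, NULL);
}

DWORD ReadAt(HANDLE h, long long pos, void *buf, DWORD size)
{
    LARGE_INTEGER li;
    li.QuadPart = pos;
    SetFilePointerEx(h, li, NULL, FILE_BEGIN);   /* absolute seek */

    DWORD got = 0;
    ReadFile(h, buf, size, &got, NULL);
    return got;   /* 0 on failure or EOF; real code should check both */
}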
You might also have to reconsider how your algorithm works. Are the seeks truly random? Can you re-sequence them, perhaps in batches, so that they're in order? Can you limit access to a relatively small region of the file at a time? Can you break the huge file into smaller files and then work with one piece at a time? Have you checked the level of fragmentation on the drive and on the particular file?
Depending on the larger picture of what your application does, you could possibly take a different approach - maybe something like this:
1 - decide which lines you need from the input file and store the line numbers in a list
2 - sort the list of line numbers
3 - read through the input file once, in order, and pull out the lines you need (better yet, seek to the next line and grab it, especially when there are big gaps)
4 - if the list of lines you're grabbing is small enough, you can store them in memory for reordering before output; otherwise, stick them in a smaller temporary file and use that file as input for your current algorithm to reorder the lines for final output
It's definitely a more complex approach, but it would be much kinder to your caching subsystem, and as a result, could potentially perform significantly better.
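In C, the core of that sequential pass might look like this (a sketch; it assumes the line numbers fit in memory, lines are shorter than 64KB, and the reorder-for-output step is left out):

#include <stdio.h>
#include <stdlib.h>

static int cmp_lineno(const void *a, const void *b)
{
    unsigned long x = *(const unsigned long *)a;
    unsigned long y = *(const unsigned long *)b;
    return (x > y) - (x < y);
}

/* One sequential pass over the input: emit the wanted lines in file order. */
void extract_lines(FILE *in, FILE *out, unsigned long *wanted, size_t n)
{
    static char line[1 << 16];
    unsigned long lineno = 0;
    size_t i = 0;

    qsort(wanted, n, sizeof *wanted, cmp_lineno);
    while (i < n && fgets(line, sizeof line, in)) {
        while (i < n && wanted[i] == lineno) {   /* handles duplicate requests */
            fputs(line, out);
            i++;
        }
        lineno++;
    }
}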

How to read a large file in UNIX/LINUX with limited RAM?

I have a large text file to be opened (e.g. 5GB in size), but limited RAM (take 1GB). How can I open and read the file without any memory error? I am running on a Linux terminal with the basic packages installed.
This was an interview question, hence please do not look into the practicality.
I do not know whether to look at it at the system level or the programmatic level... It would be great if someone could shed some light on this issue.
Thanks.
Read it character by character... or X bytes by X bytes... it really depends what you want to do with it... As long as you don't need the whole file at once, that works.
(Ellipses are awesome)
What do they want you to do with the file? Are you looking for something? Extracting something? Sorting? This will affect your approach.
It may be sufficient to read the file line by line or character by character if you're looking for something. If you need to jump around the file or analyze sections of it, then you most likely want to memory-map it. Look up mmap(). Here's a short article on the subject: memory mapped i/o
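A minimal mmap() sketch (assuming a 64-bit process, since 5GB will not fit in a 32-bit address space; map_file is an illustrative name):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the whole file; the kernel pages it in and out on demand, so
 * resident memory stays far below the file size. */
char *map_file(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    fstat(fd, &st);
    *len = (size_t)st.st_size;

    char *p = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);   /* the mapping remains valid after close */
    return p == MAP_FAILED ? NULL : p;
}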
If you are going to use system calls (open() and read()), then reading character by character will generate a lot of system calls that severely slow down your application. Even with the buffer cache, system calls are expensive.
It is better to read block by block, where the block size "SHOULD" be at least 1MB. With a 1MB block size, you will issue 5*1024 system calls for the 5GB file.
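For illustration, a sketch of the block-by-block loop (1MB buffer; process_file is an illustrative name):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE (1024 * 1024)   /* 1MB per read() */

int process_file(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    char *buf = malloc(BLOCK_SIZE);
    ssize_t n;
    while ((n = read(fd, buf, BLOCK_SIZE)) > 0) {
        /* ... process the n bytes in buf; memory use stays at one block ... */
    }

    free(buf);
    close(fd);
    return n < 0 ? -1 : 0;
}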

Display pixel on screen in C

How would I change a pixel on a display, in C?
Assume NOTHING: I am using a Linux machine from the console to do this. I do not want to use GUI toolkits or frameworks to draw the pixel. I do not want to draw the pixel in a window. I want to draw the pixel directly to the screen.
EDIT: I have a screen. I'm on a laptop running Linux from the console. I'd prefer a solution not using X, as I'd rather learn how X works than how to use X.
If there's more information you need, ask, but don't assume. I'm not trying to build a GUI; the main purpose of blocking assumptions is that I don't want people assuming I'm doing things the long way when in reality I'm just tinkering.
EDIT 2: You may use any X11 related libraries provided that you can explain how they work.
If we really assume nothing, can we even assume that X is running? For that matter, can we even assume that there is a video card? Perhaps Linux is running headless and we're accessing it over a serial console.
If we are allowed to assume a few things, let's assume that Linux has booted with framebuffer support. (It's been a couple of years since I worked with Linux framebuffers, so I may get some of the details wrong.) There will be a device created, probably /dev/fb or /dev/fb0. Open that file and start writing RGB values at an offset, and the screen will change, pretty much regardless of anything: text console, graphical console, full-fledged desktop environment, etc. If you want to see whether framebuffer support is working, run dd if=/dev/zero of=/dev/fb on the command line, and the display should go all black.
C doesn't have any graphics capabilities - you'd need to use a third-party library for this.
You cannot assume a display in C. There is literally no way to do what you ask.
Edit: Okay, you have a display, but again, there's not a whole lot you can get from there. The point is that there are a TON of competing standards for graphics displays, and while some of them (VGA interfaces, for example) are standardized, a lot of the others (display driver interfaces, for example) are NOT. Much of what X (and other display systems, such as Windows or the like) does is provide specific interface code for how to talk to the display drivers; it abstracts out the complexity of dealing with them. The windowing systems have HUGE libraries of complicated and specific code for dealing with the display drivers; the fact that these things are relatively transparent is an indication of just how much work they've put into them over time.
Very primitive and making a lot of assumptions:
fd = open("/dev/fb0", O_RDWR);
lseek(fd, 640*y+x, SEEK_SET);
write(fd, "\377\377\377\377", 4);
In reality, you would use mmap rather than write, and use the appropriate ioctl to query the screen mode rather than assuming 640xHHH 32bpp. There are also endian issues, etc.
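A sketch of that more careful version (assuming 32bpp; real code should also take the row stride from the fixed-info ioctl, FBIOGET_FSCREENINFO, instead of assuming xres * bytes-per-pixel):

#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Query the real mode, map the framebuffer, and poke one pixel. */
void put_pixel(int x, int y, unsigned int color)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo vi;
    ioctl(fd, FBIOGET_VSCREENINFO, &vi);

    size_t bytes_pp = vi.bits_per_pixel / 8;
    size_t len = (size_t)vi.yres * vi.xres * bytes_pp;
    unsigned char *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

    *(unsigned int *)(fb + ((size_t)y * vi.xres + x) * bytes_pp) = color;

    munmap(fb, len);
    close(fd);
}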
So in real reality, you might use some sort of library code that handles this kind of thing for you.
I suppose you could paint to the terminal program that you are using as your console. All you have to do is figure out which one that is and look it up.
Whoops I assumed a terminal. :P
I think what you are looking for is information on how to write to the frame buffer. The easiest way would be to use SDL and render to the frame buffer, or else use GTK+ with DirectFB, although that goes against your edict on not using toolkits or frameworks.

Converting Win16 C code to Win32

In general, what needs to be done to convert a 16 bit Windows program to Win32? I'm sure I'm not the only person to inherit a codebase and be stunned to find 16-bit code lurking in the corners.
The code in question is C.
1. The meanings of wParam and lParam have changed in many places. I strongly encourage you to be paranoid and convert as much as possible to use message crackers (see the sketch after this list). They will save you no end of headaches. If there is only one piece of advice I could give you, this would be it.
2. As long as you're using message crackers, also enable STRICT. It'll help you catch the Win16 code base using int where it should be using HWND, HANDLE, or something else. Converting these will greatly help with #9 on this list.
3. hPrevInstance is useless. Make sure it's not used.
4. Make sure you're using Unicode-friendly calls. That doesn't mean you need to convert everything to TCHARs, but it means you'd better replace OpenFile, _lopen, and _lcreat with CreateFile, to name the obvious ones.
5. LibMain is now DllMain, and the entire library format and export conventions are different.
6. Win16 had no VMM. GlobalAlloc, LocalAlloc, GlobalFree, and LocalFree should be replaced with more modern equivalents. When done, clean up calls to LocalLock, LocalUnlock, and friends; they're now useless. Not that I can imagine your app doing this, but make sure you don't depend on WM_COMPACTING while you're there.
7. Win16 also had no memory protection. Make sure you're not using SendMessage or PostMessage to send pointers to out-of-process windows. You'll need to switch to a more modern IPC mechanism, such as pipes or memory-mapped files.
8. Win16 also lacked preemptive multitasking. If you wanted a quick answer from another window, it was totally cool to call SendMessage and wait for the message to be processed. That may be a bad idea now. Consider whether PostMessage isn't a better option.
9. Pointer and integer sizes change. Remember to check carefully anywhere you're reading or writing data to disk, especially if they're Win16 structures. You'll need to manually redo them to handle the shorter values. Again, the least painful way to deal with this will be to use message crackers where possible. Otherwise, you'll need to manually hunt down and convert int to DWORD and so on where applicable.
10. Finally, when you've nailed the obvious, consider enabling 64-bit compilation checks. A lot of the issues faced with going from 16 to 32 bits are the same as going from 32 to 64, and Visual C++ is actually pretty smart these days. Not only will you catch some lingering issues; you'll get yourself ready for your eventual Win64 migration, too.
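For point 1, here's a sketch of what message crackers look like in practice, using the HANDLE_MSG macro from <windowsx.h>:

#include <windows.h>
#include <windowsx.h>

/* The cracker unpacks wParam/lParam into typed arguments, so the
 * Win16-vs-Win32 packing differences stop mattering in your code. */
static void Cls_OnCommand(HWND hwnd, int id, HWND hwndCtl, UINT codeNotify)
{
    /* id, hwndCtl, and codeNotify arrive correctly unpacked for Win32 */
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
        HANDLE_MSG(hwnd, WM_COMMAND, Cls_OnCommand);
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}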
EDIT: As @ChrisN points out, the official guide for porting Win16 apps to Win32 is available archived, and it both fleshes out and adds to my points above.
Apart from getting your build environment right, here are a few specifics you will need to address:
structs containing ints will need to change to short, or will widen from 16 to 32 bits. If the size of a structure changes and it is loaded from or saved to disk, you will need to write data-file upgrade code.
Per-window data is often stored with the window handle using GWL_USERDATA. If you widen some of the data to 32 bits, your offsets will change.
POINT & SIZE structures are 64 bits in Win32. In Win16 they were 32 bits and could be returned as a DWORD (the caller would split the return value into two 16-bit values). This no longer works in Win32 (i.e. Win32 does not return 64-bit results), so the functions were changed to accept pointers to store the return values. You will need to edit all of these. APIs like GetTextExtent are affected by this (see the sketch below). The same issue also applies to some Windows messages.
The use of INI files is discouraged in Win32 in favour of the registry. While the INI file functions still work, you will need to be careful with Vista issues. 16-bit programs often stored their INI file in the Windows system directory.
This is just a few of the issues I can recall. It has been over a decade since I did any Win32 porting. Once you get into it it is quite quick. Each codebase will have its own "feel" when it comes to porting which you will get used to. You will probably even find a few bugs along the way.
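For the POINT/SIZE change mentioned above, the before-and-after looks roughly like this (GetTextExtentPoint32 being the Win32-era replacement; MeasureString is an illustrative name):

#include <windows.h>

/* Win16 packed both extents into one DWORD:
 *     DWORD ext = GetTextExtent(hdc, str, len);
 *     int cx = LOWORD(ext), cy = HIWORD(ext);
 * Win32 returns them through a pointer instead: */
void MeasureString(HDC hdc, const char *str, int len)
{
    SIZE sz;
    GetTextExtentPoint32A(hdc, str, len, &sz);
    /* sz.cx and sz.cy replace the packed LOWORD/HIWORD pair */
}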
There was a definitive guide in the article Porting 16-Bit Code to 32-Bit Windows on MSDN.
The original Win32 SDK had a tool that scanned source code and flagged lines that needed to be changed, but I can't remember the name of the tool.
When I've had to do this in the past, I've used a brute force technique - i.e.:
1 - update makefiles or build environment to use 32 bit compiler and linker. Optionally, just create a new project in your IDE (I use Visual Studio), and add the files manually.
2 - build
3 - fix errors
4 - repeat 2&3 until done
The pain of the process depends on the application you are migrating. I've converted 10,000-line programs in an hour, and 75,000-line programs in less than a week. I've also had some small utilities that I just gave up on and rewrote (mostly) from scratch.
I agree with Alan that trial and error is probably the best way.
Here are some good tips.
Agreed that the compiler will probably catch most of the errors. Also, if you are using "near" and "far" pointers you can remove those designations -- a pointer is just a pointer in Win32.
