Allocating "weak" memory pages - c

I am interested to know whether there is a way to allocate "weak" memory in userspace on common operating systems like Linux, OS X, or Windows (obviously not possible with the standard interfaces). What I mean is something like mmap(), except that the mapping would be invalidated if the OS elects to push the pages out of core.
Say, I want to work with a 10G dataset on a 32-bit system. To get a piece from this dataset, I read it from the file and decompress it into memory. I would prefer to keep the decompressed versions of the pieces around if possible, to avoid decompressing the data on each access, but to be able to access all pieces I must eventually free some of them to avoid exhausting the memory/address space.
I can emulate this by gluing a framework on top of malloc() that frees old pieces when malloc() returns NULL, but that deprives other processes of memory and makes them page out (or pages out the decompressed pieces). Alternatively, I could keep a soft limit in the application, but that seems arbitrary, only alleviates the problem, and is suboptimal when there is free memory around. I feel this is something that the virtual memory manager in a modern OS should handle.
Any tips and information about how this problem is tackled in other modern applications are appreciated.

No, there is no mechanism in common operating systems that implements weak userspace memory as you describe it.
Weak references come from the domain of garbage collection frameworks where an object/allocation is otherwise abandoned (i.e. has no "strong" references). The weak references will be invalidated/nulled if the garbage collector gets to the allocation before the application attempts to recycle it by assigning the weak reference to a "strong" one.
The functionality you describe in your question and in the subsequent comments would be better and more simply implemented as an application cache that discards "pages" when the cache fills up and overwrites them with newly needed "pages". Be careful to add mutexes if the cache can be accessed by multiple threads. The mutexes (if needed) are the most exotic thing here; everything else is pretty standard vanilla programming.
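To make that concrete, here is a minimal, single-threaded sketch of such a cache (the slot count, the LRU policy, and the decompress_piece() stub are all my own invention, not anything the OS provides); wrap get_piece() in a mutex if multiple threads will call it:

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define MAX_SLOTS 64            /* soft limit on resident decompressed pieces */

struct slot {
    int      piece_id;          /* which piece of the dataset this holds */
    void    *data;              /* decompressed bytes, NULL if slot is empty */
    size_t   size;
    unsigned last_used;         /* logical clock for LRU eviction */
};

static struct slot cache[MAX_SLOTS];
static unsigned clock_tick;

/* Stand-in for the application's real routine: reads piece `id` from the
 * compressed file and returns a freshly malloc'd decompressed buffer. */
static void *decompress_piece(int id, size_t *out_size)
{
    *out_size = 1 << 20;                  /* pretend every piece is 1 MiB */
    void *buf = malloc(*out_size);
    if (buf)
        memset(buf, id & 0xff, *out_size);
    return buf;
}

void *get_piece(int id, size_t *out_size)
{
    int free_idx = -1, lru_idx = 0;

    for (int i = 0; i < MAX_SLOTS; i++) {
        if (cache[i].data && cache[i].piece_id == id) {   /* cache hit */
            cache[i].last_used = ++clock_tick;
            *out_size = cache[i].size;
            return cache[i].data;
        }
        if (!cache[i].data)
            free_idx = i;
        else if (cache[i].last_used < cache[lru_idx].last_used)
            lru_idx = i;
    }

    /* Miss: reuse an empty slot, or evict the least recently used piece. */
    int idx = (free_idx >= 0) ? free_idx : lru_idx;
    free(cache[idx].data);

    cache[idx].data      = decompress_piece(id, &cache[idx].size);
    cache[idx].piece_id  = id;
    cache[idx].last_used = ++clock_tick;
    *out_size = cache[idx].size;
    return cache[idx].data;
}

int main(void)
{
    size_t sz;
    for (int id = 0; id < 200; id++)       /* more pieces than slots: evicts */
        get_piece(id % 100, &sz);
    printf("done\n");
    return 0;
}
```

The point of the sketch is only the eviction discipline; the soft limit here is MAX_SLOTS rather than bytes, and a real implementation would size it from the address space it is willing to spend.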
As an academic exercise, you could implement the concept you are describing at the kernel level with a device driver, meaning the capability would be exposed as a pseudo-device. I would be very reluctant to actually base a production implementation on such a pseudo-device: it hurts portability and maintainability, could adversely affect the entire platform's performance and behavior (after all, we're talking kernel code here), and would by comparison take a long time to develop and test.
Good luck in your endeavor.

Related

Why do we even need cache coherence?

In languages like C, unsynchronized reads and writes to the same memory location from different threads are undefined behavior. But in the CPU, cache coherence guarantees that if one core writes to a memory location and another core later reads it, the reading core sees the written value.
Why does the processor need to bother exposing a coherent abstraction of the memory hierarchy if the next layer up is just going to throw it away? Why not just let the caches get incoherent, and require the software to issue a special instruction when it wants to share something?
The acquire and release semantics required for C++11 std::mutex (and equivalents in other languages, and earlier constructs like pthread_mutex) would be very expensive to implement if you didn't have coherent caches. You'd have to write back every dirty line every time you released a lock, and evict every clean line every time you acquired a lock, if you couldn't count on the hardware to make your stores visible and to make your loads not take stale data from a private cache.
But with cache coherency, acquire and release are just a matter of ordering this core's accesses to its own private cache which is part of the same coherency domain as the L1d caches of other cores. So they're local operations and pretty cheap, not even needing to drain the store buffer. The cost of a mutex is just in the atomic RMW operation it needs to do, and of course in cache misses if the last core to own the mutex wasn't this one.
C11 and C++11 added <stdatomic.h> and std::atomic respectively, which make it well-defined to access shared _Atomic int variables, so it's not true that higher-level languages don't expose this. It would hypothetically be possible to implement on a machine that required explicit flushes/invalidates to make stores visible to other cores, but that would be very slow. The language model assumes coherent caches: it doesn't provide explicit flushes of address ranges, but instead has release operations that make every older store visible to other threads that do an acquire load which syncs-with the release store in this thread. (See When to use volatile with multi threading? for some discussion, although that answer is mainly debunking the misconception that caches could hold stale data, which comes from people mixing this up with the fact that the compiler can "cache" non-atomic, non-volatile values in registers.)
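For concreteness, here is a small C11 sketch of that release/acquire publication pattern; note that there is no cache-flush call anywhere, because the language model (and, underneath it, the coherent caches) make the release store and acquire load sufficient:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int payload;                    /* plain, non-atomic data */
static atomic_int ready = 0;           /* publication flag */

static void *producer(void *arg)
{
    payload = 42;                                             /* 1: write the data */
    atomic_store_explicit(&ready, 1, memory_order_release);   /* 2: publish it     */
    return NULL;
}

static void *consumer(void *arg)
{
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                                     /* spin until published */
    /* The acquire load syncs-with the release store, so this read of the
     * plain variable is guaranteed to see 42, with no explicit flushing. */
    printf("%d\n", payload);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```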
In fact, some of the guarantees on C++ atomic are actually described by the standard as exposing HW coherence guarantees to software, like "write-read coherence" and so on, ending with the note:
http://eel.is/c++draft/intro.races#19
[ Note: The four preceding coherence requirements effectively disallow compiler reordering of atomic operations to a single object, even if both operations are relaxed loads. This effectively makes the cache coherence guarantee provided by most hardware available to C++ atomic operations. — end note ]
(Long before C11 and C++11, SMP kernels and some user-space multithreaded programs were hand-rolling atomic operations, using the same hardware support that C11 and C++11 finally exposed in a portable way.)
Also, as pointed out in comments, coherent caches are essential so that writes to different parts of the same line by different cores don't step on each other.
ISO C11 guarantees that a char arr[16] can have arr[0] written by one thread while another thread writes arr[1]. If those bytes are in the same cache line and two conflicting dirty copies of the line could exist, only one could "win" and be written back, losing the other thread's store. See C++ memory model and race conditions on char arrays.
ISO C effectively requires char to be as large as the smallest unit you can write without disturbing surrounding bytes. On almost all machines (not early Alpha, and not some DSPs), that's a single byte, even if a byte store might take an extra cycle to commit to L1d cache compared to an aligned word on some non-x86 ISAs.
The language didn't officially require this until C11, but that just standardized what "everyone knew" the only sane choice had to be, i.e. how compilers and hardware already worked.
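As a tiny illustration of that guarantee (assuming a POSIX system for the thread API): two threads write neighbouring bytes of the same array, almost certainly in the same cache line, with no synchronization, and C11 still requires both stores to survive:

```c
#include <pthread.h>
#include <stdio.h>

static char arr[16];   /* arr[0] and arr[1] almost certainly share a cache line */

static void *writer0(void *arg) { arr[0] = 'A'; return NULL; }
static void *writer1(void *arg) { arr[1] = 'B'; return NULL; }

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, writer0, NULL);
    pthread_create(&t1, NULL, writer1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* C11 guarantees this is not a data race and neither store may clobber
     * the other; on real hardware that relies on coherent caches (or
     * byte-granular stores) rather than write-back of whole stale lines. */
    printf("%c %c\n", arr[0], arr[1]);
    return 0;
}
```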
Ah, a very deep topic indeed!
Cache coherency between cores is used to synthesise (as closely as possible) a Symmetric Multi-Processing (SMP) environment. This harks back to the days, circa the mid 1990s, when multiple single-core CPUs were simply tagged onto the same single memory bus, caches weren't really a thing, etc. With multiple CPUs, each with multiple cores, multiple caches and multiple memory interfaces per CPU, the synthesis of an SMP-like environment is a lot more complicated, and cache coherency is a big part of that.
So, when one asks, "Why does the processor need to bother exposing a coherent abstraction of the memory hierarchy if the next layer up is just going to throw it away?", one is really asking "Do we still need an SMP environment?".
The answer is software. An awful lot of software, including all major OSes, has been written around the assumption that they're running on an SMP environment. Take away the SMP, and we'd have to re-write literally everything.
Various sage commentators are now beginning to wonder in articles whether SMP is in fact a dead end, and whether we should start worrying about how to get out of that dead end. I think it won't happen for a good long while yet; the CPU manufacturers likely have a few more tricks to play to get ever-increasing performance, and while that keeps being delivered no one will want to suffer the pain of software incompatibility. Security is another reason to move away from SMP (Meltdown and Spectre exploit weaknesses in the way SMP has been synthesised), but I'd guess that while other mitigations (however distasteful) are available, security alone will not be sufficient reason to ditch SMP.
"Why not just let the caches get incoherent, and require the software to issue a special instruction when it wants to share something?" Why not, indeed? We have been there before. Transputers (1980s, early 1990s) implemented Communicating Sequential Processes (CSP), where if the application needed a different CPU to process some data, the application would have to purposefully transfer data to that CPU. The transfers are (in CSP speak) through "Channels", which are more like network sockets or IPC pipes and not at all like shared memory spaces.
CSP is having something of a resurgence - as a multiprocessing paradigm it has some very beneficial features - and languages such as Go, Rust, Erlang implement it. The thing about those languages' implementations of CSP is that they're having to synthesise CSP on top of an SMP environment, which in turn is synthesised on top of an electronic architecture much more reminiscent of Transputers!
Having had a lot of experience with CSP, my view is that every multi-process piece of software should use CSP; it's a lot more reliable. The "performance hit" of "copying" data (which is what you have to do to do CSP properly on top of SMP) isn't so bad; it's about the same amount of traffic over the cache-coherency connections to copy data from one CPU to another as it is to access the data in an SMP-like way.
Rust is very interesting because, with its syntax strongly expressing data ownership, I suspect it doesn't have to copy data to implement CSP; it can transfer ownership between threads (processes). Thus it may get the benefits of CSP without having to copy the data, and therefore could be very efficient CSP, even if every thread is running on a single CPU core. I've not yet explored Rust deeply enough to know whether that is what it's doing, but I have hopes.
One of the nice things about CSP is that, with Channels being like network sockets or IPC pipes, one can readily implement CSP across actual network sockets. Raw sockets are not in themselves ideal; they're asynchronous and so more akin to the Actor Model (as is ZeroMQ). The Actor Model is fairly OK, and I've used it, but it's not as guaranteed devoid of runtime problems as CSP is. So one has to implement the CSP bit oneself or find a library. However, with that in place, CSP becomes a software architecture that can more easily span arbitrary networks of computers without changing the software architecture; a local channel and a network channel are "the same", except the network one is a bit slower.
It's a lot harder to take a multithreaded piece of software that assumes SMP, uses semaphores, etc., and scale it up across multiple machines on a network. In fact, it can't be done; it has to be re-written.
More recently than the Transputers, the Cell processor (of PlayStation 3 fame) was a multi-core device that did exactly as you suggest. It had a single general-purpose CPU core and 8 SPE maths cores, each with 256k of on-chip, core-speed static RAM. To use the SPEs you had to write software to ship code and data in and out of that 256k (there was a monster-fast internal ring bus for doing this, and a very fast external memory interface). The result was that, with the right developer, very good results could be attained.
It took Intel about a further 10 years to usefully get x64 up to about the same performance; adding a Fused Multiply-Add instruction to SSE was what finally got them there, an instruction they'd been keeping in Itanium's repertoire in the vain hope of boosting its appeal. Cell (the SPEs were based on the PowerPC equivalent of SSE, AltiVec) had had an FMA instruction from the get-go.
Cache coherency is not needed if developers take care of issuing lock (plus memory barrier) / (memory barrier plus) unlock regardless of it. Seen that way, cache coherency is of little value, or even negative value, in terms of cost, power, performance, validation and so on.
Today, software is more and more distributed, and coherency can't help two processes running on two different machines anyway. Even for multi-threaded software we end up using IPC. Only a small part of the data in multi-threaded software is shared; the large part is not shared, and where it is shared, memory barriers should take care of cache syncing.
Practically, software developers depend on explicit locks to access shared data. With hardware assistance those locks can flush the caches efficiently (efficient meaning: only the lines that are modified and also cached elsewhere), and that is exactly what already happens when we lock/unlock. Since everyone does that lock/unlock anyway, cache coherency is redundant and a waste of silicon space and power, and the hardware engineers could rest easier.
All compilers (at least for C/C++, and the Python VM) generate code for single-threaded, non-shared data. If I need to share data, I can only say that it is shared, not how or why (volatile?). Developers have to manage it themselves anyway (again, lock/unlock across hardware cores, hardware threads and software threads). Most of the time we write in a HLL with non-atomic data. Cache coherency does not add any value for developers, so they fall back to managing it through locks, which instruct the cache system to flush efficiently. All cache systems keep track of which lines are dirty so they can flush efficiently, with or without coherency support (think of cached but non-coherent memory: it still has that bookkeeping, which can be used for efficient flushing).
Cache coherency is very complex in silicon, consuming space and power, and in any case software developers take care of issuing memory barriers (via locks). So I think it would be good to get rid of coherency and tell developers to own it. But I see the trend is the opposite; look at CXL memory and so on, which is coherent. I am looking for a system call where I can just turn off cache coherency for my threads and experiment.

How can I disable the CPU cache for certain memory regions?

I read on Wikipedia that disabling the CPU cache can improve performance:
Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed.
When I googled how to do it in C on Linux, however, I didn't find anything. It's not that I really need this feature, but I'm interested anyway.
And do you know of any projects which use this optimization?
Edit: I'm programming for x86_64
That comment about non-caching doesn't mean what you think it means, and where it is used, it isn't usually a user-accessible feature. That is, CPU cache control is typically a privileged operation.
That said...
-- A normal user program can be built with functions whose attributes are "hot" or "cold", to let the compiler tell the loader to group the functions in ways that will use the cache most effectively.
-- A normal program can use the madvise() function on Linux to tell the paging subsystem various things, including the fact that the memory just used is, or is not, likely to be used again soon (see the sketch after this list).
-- The kernel itself uses the Memory Type Range Registers (MTRR) and, in later kernels, the Page Attribute Table (PAT) flags to tell the hardware that particular ranges of memory (such as the memory-mapped display buffer and the various parts of the PCI bus) are not to be cached.
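As a hedged sketch of the first two points (the file name is hypothetical, and GCC/Clang attribute syntax is assumed), a normal program might combine the hot/cold grouping with madvise() hints like this:

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Hint to the compiler/linker to group rarely executed code away from the
 * hot paths (the "hot"/"cold" attribute point above). */
__attribute__((cold)) static void die(void) { _exit(1); }

int main(void)
{
    int fd = open("big_input.dat", O_RDONLY);    /* hypothetical data file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0)
        die();

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        die();

    /* Tell the paging subsystem we will stream through this sequentially,
     * so it can read ahead aggressively and drop pages behind us. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    /* ... process the data once ... */

    /* Tell it we're done: these pages can be discarded immediately. */
    madvise(p, st.st_size, MADV_DONTNEED);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

None of this makes memory uncacheable; it only shapes how the page cache and instruction layout behave, which is as much as an unprivileged program normally gets.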
"Normal Data™" such as you are likely to use in any C program will essentially never benefit from marking any of its data not cache-worthy. The performance improvement that not-cached data enjoys is the subsequent absence of the various cache-flush and memory barrier operations that memory mapped devices and display buffers would need almost constantly. Laying a cache over a memory mapped device, for example, would require a cache invalidate command before every read and a cache forced write command after every single write to make sure that the reads and writes happen at the exact moment needed. This would "poison" the cache usage, using up and instantly discarding cache lines (a physically limited resource) in a most unfriendly and unhelpful way.
In the rare case that you write a program that gains access to one of these cache harmful regions -- such as if you wrote part of the X display server on a linux system -- the kernel would have already set the registers for the device and the non-cache behavior would be transparent to you.
There is effectively no time where your normal application grade program is going to benefit from any ability to mark a variable as harmful to cache beyond the various madvise() type of usage.
Even then, the cases where you could gain any benefit are so rare that, if you'd ever actually run into one, the problem set would have included the need and the methodology as part of your research, and you'd have been told how and why so explicitly that you'd never have needed to ask this question.
To go back to the same example again, if you'd been writing the necessary driver, when you'd been reading up on the display adapter hardware or the PCI bus the various flags and techniques would have been documented and discussed in the hardware guide.
There are ways to pull off cache-line eviction and the like from user space, such as the CLFLUSH instruction on Intel platforms. These techniques will not improve general performance.
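For illustration only, user space on x86 can reach CLFLUSH through the _mm_clflush compiler intrinsic; note this flushes and evicts lines, it does not make a region uncacheable, and as said above it will not improve general performance:

```c
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence (GCC/Clang/MSVC on x86) */
#include <stdint.h>
#include <stddef.h>

/* Write back and evict every cache line covering [buf, buf + len). */
static void flush_range(const void *buf, size_t len)
{
    const uintptr_t line = 64;                   /* typical x86 line size */
    uintptr_t start = (uintptr_t)buf & ~(line - 1);
    uintptr_t end   = (uintptr_t)buf + len;

    for (uintptr_t p = start; p < end; p += line)
        _mm_clflush((const void *)p);
    _mm_mfence();                                /* order the flushes */
}
```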
Since it's a privileged operation on a Linux system, you could write a kernel driver that acquired and marked a region of memory as uncacheable and then let you map it into your application. But the need for such a region is so rare, and so likely to be misused, that there isn't a normal methodology for doing it in place.
So how do you do it? You don't, at least not the you that you are today. When you become a kernel driver writer with an intimate specialty knowledge of multi-threaded code and data synchronization issues, you'll know how you could do it, and at that point you'll know why you don't want to except as a last resort.
TL;DR :: because of the way Linux uses and manages data and code, there is never a benefit to marking any part of a normal application as uncacheable that doesn't cause more heartbreak than it saves. As such, there is no unprivileged API for doing this.
P.S. Also, that said, someone already pointed to things that lead to this article http://lwn.net/Articles/255364/ which covers ways to make your program very cache-friendly, and some ways you can do certain cache-bypass operations very cheaply. For instance, use of memset() tends to go around the cache while setting memory, and some operations can "stream past" the cache. This isn't the same thing as what you ask, but once you understand all of that article you'll have a much better understanding of why marking a region of memory as uncacheable is usually, as the Jedi say, not the solution you are looking for.
Recently I needed to experiment with uncached memory in a cache-heavy multi-threaded application.
I came up with this kernel module, which allows mapping uncached memory into userspace.
A user process requests uncached memory by calling mmap() on the module's character device (see the test directory for a demo).
What every programmer should know about memory is indeed a must-read!

Which operating systems, and how, can pin pages in a database buffer pool?

Most relational database construction textbooks talk about the concept of being able to pin a page, i.e. prevent the operating system from swapping it out of memory. The idea is that the database software can then use its own buffer replacement algorithm, which might be a better fit than whatever the OS virtual memory policy provides.
It is unclear to me whether typical desktop operating systems actually provide the programmer with the capability to pin pages. The best I can find on OS X, for example, refers to wired pages, but these seem to be only usable by the superuser.
Is the concept of pinning pages, and of defining buffer replacement strategies that supersede the OS's, only of theoretical interest and not really implemented by real relational database systems? Or do typical desktop OSes (Linux, Windows, OS X) include hooks for pinning, and does typical relational DB software (Oracle, SQL Server, PostgreSQL, MySQL, etc.) use them?
In PostgreSQL, the database server copies the pages from the file (or from the OS, really) into a shared memory segment which PostgreSQL controls. The OS doesn't know what the mapping is between the file system blocks and the shared memory blocks, so the OS couldn't write those pages back out to their disk locations even if it wanted to, until PostgreSQL tells it to do so by issuing a seek and a write.
The OS could decide to swap parts of shared memory out to disk into a swap partition (for example, if it were under severe memory stress), but it can't write them back to their native location on disk since it doesn't know what that location is.
There are ways to tell the OS not to page out certain parts of memory, such as shmctl(shmid, SHM_LOCK, NULL). But these are mostly intended for security purposes, not performance. For example, you use them to prevent very sensitive information (like the decrypted copy of a private key) from accidentally getting written to swap partitions, from which it might be recovered by the bad guys.
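For example, here is a minimal sketch of that call on Linux: a System V shared memory segment locked into RAM so its pages are never written to swap (it needs CAP_IPC_LOCK or a large enough RLIMIT_MEMLOCK, and the segment size here is made up for the example):

```c
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* Create a private 1 MiB segment, as a database or crypto library might. */
    int shmid = shmget(IPC_PRIVATE, 1 << 20, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    /* Ask the kernel never to swap these pages out. This is the
     * security-motivated pinning described above, not a performance feature. */
    if (shmctl(shmid, SHM_LOCK, NULL) != 0)
        perror("shmctl(SHM_LOCK)");

    char *secret = shmat(shmid, NULL, 0);
    if (secret == (void *)-1) { perror("shmat"); return 1; }

    strcpy(secret, "decrypted private key lives here");    /* stand-in payload */

    shmdt(secret);
    shmctl(shmid, IPC_RMID, NULL);                          /* clean up */
    return 0;
}
```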
@jjanes is correct to say that the OS can't really write out Pg's shared memory buffers, and can't control what PostgreSQL reads into them, so it doesn't make sense to "pin" them. But that's only half the story.
PostgreSQL does not offer any feature for pinning pages from tables in its shared memory segment. It could do so, and it might arguably be useful, but nobody has implemented it. In most cases the buffer replacement algorithm does a pretty good job by itself.
Partly this is because PostgreSQL relies heavily on the operating system's buffer caches, rather than trying to implement its own. Data might be evicted from shared_buffers, but it's usually still cached in the OS. It's not unreasonable to think of shared_buffers as a first-level cache, and the OS disk cache as the second-level cache.
The features available to control what's kept in the operating system's disk cache are whatever the OS provides. In general, that's not much, because again modern OSes tend to do a better job if you leave them alone and let them manage things themselves.
The idea of manual buffer management, etc, is IMO largely a relic of times when systems had simpler and less effective algorithms for managing caches and buffers automatically.
The main time that automation falls down is if you have something that's used only intermittently, but you want to ensure is available with extremely good response times when it is used; i.e. you wish to degrade the overall system's throughput to make one part of it more responsive. PostgreSQL doesn't offer much control over that; most people simply ensure that they have something regularly querying the data of interest to keep it warm in the cache.
You could write a relatively simple extension to mmap() a file and mlock() its range, but it'd be pretty wasteful and you'd have to fiddle with the default OS limits designed to stop you from locking too much memory.
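Roughly, such an extension would boil down to something like the following sketch (the file name is hypothetical, and mlock() is subject to RLIMIT_MEMLOCK), which also shows why it is wasteful: the locked pages are unavailable to everything else whether or not the data is ever queried again:

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("hot_table.dat", O_RDONLY);     /* hypothetical relation file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fault every page in and pin it; the OS cache can no longer evict it
     * until munlock()/munmap(), no matter how cold the data becomes. */
    if (mlock(p, st.st_size) != 0)
        perror("mlock (check ulimit -l / RLIMIT_MEMLOCK)");

    /* ... serve reads from p ... */

    munlock(p, st.st_size);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```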
(FWIW, I think Oracle offers quite a bit of control over pinning relations, indexes, etc, in tune with its "manually control everything whether you want to or not" philosophy, and it bypasses much of the operating system in the process.)
Speaking for SQL Server (on Windows, obviously), there's an OS setting that allows the SQL engine to ignore requests from the OS in response to memory pressure. That setting is called Lock Pages in Memory (LPIM). The permission is granted on a per-account basis and needs to be granted to the account running your SQL Server service before the service is started.
Keep in mind that this isn't always a good idea. For example, in a virtualized environment, the hypervisor communicates its memory needs via a balloon driver process in the guest. If the hypervisor needs more memory, it inflates the balloon's memory demand in the guest. If your SQL process has LPIM turned on, it won't respond, and the hypervisor can start flagging the guest as a result. And if the hypervisor isn't happy, ain't nobody happy.

Multiprocessors vs Multithreading in the context of PThreads

I have an application level (PThreads) question regarding choice of hardware and its impact on software development.
I have working multi-threaded code tested well on a multi-core single CPU box.
I am trying to decide what to purchase for my next machine:
A 6-core single CPU box
A 4-core dual CPU box
My question is, if I go for the dual CPU box, will that impact the porting of my code in a serious way? Or can I just allocate more threads and let the OS handle the rest?
In other words, is multiprocessor programming any different from (single CPU) multithreading in the context of a PThreads application?
I thought it would make no difference at this level, but when configuring a new box, I noticed that one has to buy separate memory for each CPU. That's when I hit some cognitive dissonance.
More Detail Regarding the Code (for those who are interested): I read a ton of data from disk into a huge chunk of memory (~24GB soon to be more), then I spawn my threads. That initial chunk of memory is "read-only" (enforced by my own code policies) so I don't do any locking for that chunk. I got confused as I was looking at 4-core dual CPU boxes - they seem to require separate memory. In the context of my code, I have no idea what will happen "under the hood" if I allocate a bunch of extra threads. Will the OS copy my chunk of memory from one CPU's memory bank to another? This would impact how much memory I would have to buy (raising the cost for this configuration). The ideal situation (cost-wise and ease-of-programming-wise) is to have the dual CPU share one large bank of memory, but if I understand correctly, this may not be possible on the new Intel dual core MOBOs (like the HP ProLiant ML350e)?
Modern CPUs [1] handle RAM locally and use a separate channel [2] to communicate between them. This is a consumer-level version of the NUMA architecture, created for supercomputers more than a decade ago.
The idea is to avoid a shared bus (the old FSB) that can cause heavy contention because it's used by every core to access memory. As you add more NUMA cells, you get higher bandwidth. The downside is that memory becomes non-uniform from the point of view of the CPU: some RAM is faster than others.
Of course, modern OS schedulers are NUMA-aware, so they try to reduce the migration of a task from one cell to another. Sometimes it's okay to move from one core to another in the same socket; sometimes there's a whole hierarchy specifying which resources (L1/L2/L3 cache, RAM channel, IO, etc.) are shared and which aren't, and that determines whether there would be a penalty for moving the task. Sometimes the scheduler can determine that waiting for the right core would be pointless and that it's better to shovel the whole thing over to another socket...
In the vast majority of cases, it's best to let the scheduler do what it knows best. If not, you can play around with numactl.
As for the specific case of a given program: the best architecture depends heavily on the level of resource sharing between threads. If each thread has its own playground and mostly works alone within it, a smart enough allocator will prioritize local RAM, making it less important which cell each thread happens to be on.
If, on the other hand, objects are allocated by one thread, processed by another and consumed by a third, performance will suffer if they're not on the same cell. You could try to create small thread groups and limit heavy sharing to within the group; then each group could go on a different cell without problems.
The worst case is when all threads participate in a great orgy of data sharing. Even if you have all your locks and processes well debugged, there won't be any way to optimize it to use more cores than are available on a single cell. It might even be best to limit the whole process to the cores of one cell, effectively wasting the rest.
[1] By modern, I mean any AMD 64-bit chip, and Nehalem or better for Intel.
[2] AMD calls this channel HyperTransport, and Intel's name is QuickPath Interconnect.
EDIT:
You mention that you initialize "a big chunk of read-only memory" and then spawn a lot of threads to work on it. If each thread works on its own part of that chunk, it would be a lot better to initialize that part in the thread, after spawning it. That lets the threads spread across several cores, and the allocator will choose local RAM for each, a much more effective layout. Maybe there's some way to hint the scheduler to migrate the threads away as soon as they're spawned, but I don't know the details.
EDIT 2:
If your data is read verbatim from disk, without any processing, it might be advantageous to use mmap instead of allocating a big chunk and read()ing. There are some common advantages:
No need to preallocate RAM.
The mmap operation is almost instantaneous and you can start using it. The data will be read lazily as needed.
The OS can be way smarter than you when choosing between application, mmaped RAM, buffers and cache.
It's less code!
Data that isn't needed won't be read and won't use up RAM.
You can specifically mark the mapping as read-only. Any bug that tries to write to it will cause a coredump.
Since the OS knows it's read-only, it can't be 'dirty', so if the RAM is needed, the pages will simply be discarded and reread when needed.
but in this case, you also get:
Since the data is read lazily, each RAM page is faulted in after the threads have spread across all available cores; this allows the OS to choose pages close to where they are used.
So, I think that if two conditions hold:
the data isn't processed in any way between disk and RAM
each part of the data is read (mostly) by one single thread, not touched by all of them.
then, just by using mmap, you should be able to take advantage of machines of any size.
If each part of the data is read by more than one thread, maybe you could identify which threads will (mostly) share the same pages, and try to hint the scheduler to keep those threads in the same NUMA cell.
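Putting the two EDITs together, here is a minimal sketch of the approach (file name and thread count are hypothetical): map the file read-only, spawn the workers, and let each worker fault in only its own slice so that, under a first-touch NUMA policy, those pages land near the core that reads them:

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

#define NTHREADS 8

struct slice { const char *base; size_t len; unsigned long sum; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    /* The first access to each page happens here, on this thread, so the
     * kernel reads it from disk lazily and places it near this thread. */
    for (size_t i = 0; i < s->len; i++)
        s->sum += (unsigned char)s->base[i];
    return NULL;
}

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY);          /* hypothetical data file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0)
        return 1;

    /* Read-only, private mapping: no RAM is committed up front, any stray
     * write faults immediately, and clean pages can be dropped and re-read. */
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
        return 1;

    pthread_t tid[NTHREADS];
    struct slice s[NTHREADS];
    size_t chunk = st.st_size / NTHREADS;

    for (int i = 0; i < NTHREADS; i++) {
        s[i] = (struct slice){ data + i * chunk, chunk, 0 };
        pthread_create(&tid[i], NULL, worker, &s[i]);
    }

    unsigned long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += s[i].sum;
    }
    printf("%lu\n", total);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}
```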
For the x86 boxes you're looking at, the fact that memory is physically wired to different CPU sockets is an implementation detail. Logically, the total memory of the machine appears as one large pool; you wouldn't need to change your application code for it to run correctly across both CPUs.
Performance, however, is another matter. There is a speed penalty for cross-socket memory access, so the unmodified program may not run to its full potential.
Unfortunately, it's hard to say ahead of time whether your code will run faster on the 6-core, one-node box or the 8-core, two-node box. Even if we could see your code, it would ultimately be an educated guess. A few things to consider:
The cross-socket memory access penalty only kicks in on a cache miss, so if your program has good cache behaviour then NUMA won't hurt you much;
If your threads are all writing to private memory regions and you're limited by write bandwidth to memory, then the dual-socket machine will end up helping;
If you're compute-bound rather than memory-bandwidth-bound then 8 cores is likely better than 6;
If your performance is bounded by cache read misses then the 6 core single-socket box starts to look better;
If you have a lot of lock contention or writes to shared data, then again this tends to favour the single-socket box.
There are a lot of variables, so the best thing to do is to ask your HP reseller for loaner machines matching the configurations you're considering. You can then test your application, see where it performs best, and order your hardware accordingly.
Without more details, it's hard to give a detailed answer. However, hopefully the following will help you frame the problem.
If your threading code is correct (e.g. you properly lock shared resources), you should not experience any bugs introduced by the change of hardware architecture. Incorrect threading code is sometimes masked by the specifics of how a particular platform handles things like CPU cache access/sharing.
You may experience a change in application performance per equivalent core due to differing approaches to memory and cache management in the single chip, multi core vs. multi chip alternatives.
Specifically, if you are looking at hardware that has separate memory per CPU, I would assume that each thread is going to be locked to the CPU it starts on (otherwise, the system would have to incur significant overhead to move a thread's memory to the memory attached to a different socket). That may reduce overall system efficiency, depending on your specific situation. However, separate memory per CPU also means that the different CPUs do not compete with each other for a given cache line (the 4 cores on each of the dual CPUs will still potentially compete for cache lines, but that is less contention than if 6 cores were competing for the same cache lines).
This type of cache line contention is called false sharing. I suggest the following read to understand whether that may be an issue you are facing:
http://www.drdobbs.com/parallel/eliminate-false-sharing/217500206?pgno=3
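To see the effect the article describes, here is a small sketch (assuming 64-byte cache lines; volatile is used only so the increments actually hit memory): the first pair of threads bumps counters that share a line, the second pair bumps padded counters, and on a multi-socket box the difference in elapsed time is typically substantial.

```c
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL

/* Bad: both counters live in the same cache line, so two cores continually
 * steal the line from each other even though the data is logically private. */
static struct { volatile unsigned long a, b; } shared_line;

/* Better: pad each counter onto its own 64-byte line. */
struct padded { volatile unsigned long v; char pad[64 - sizeof(unsigned long)]; };
static struct padded separate[2];

static void *bump_a(void *x) { for (unsigned long i = 0; i < ITERS; i++) shared_line.a++; return NULL; }
static void *bump_b(void *x) { for (unsigned long i = 0; i < ITERS; i++) shared_line.b++; return NULL; }
static void *bump_0(void *x) { for (unsigned long i = 0; i < ITERS; i++) separate[0].v++; return NULL; }
static void *bump_1(void *x) { for (unsigned long i = 0; i < ITERS; i++) separate[1].v++; return NULL; }

int main(void)
{
    pthread_t t[2];

    /* Falsely shared pair... */
    pthread_create(&t[0], NULL, bump_a, NULL);
    pthread_create(&t[1], NULL, bump_b, NULL);
    pthread_join(t[0], NULL); pthread_join(t[1], NULL);

    /* ...versus the padded pair; time the two phases to compare. */
    pthread_create(&t[0], NULL, bump_0, NULL);
    pthread_create(&t[1], NULL, bump_1, NULL);
    pthread_join(t[0], NULL); pthread_join(t[1], NULL);

    printf("%lu %lu %lu %lu\n", shared_line.a, shared_line.b, separate[0].v, separate[1].v);
    return 0;
}
```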
Bottom line is, application behavior should be stable (other than things that naturally depend on the details of thread scheduling) if you followed proper thread development practices, but performance could go either way depending on exactly what you are doing.

Memory Protection without MMU

I would like to know how memory can be protected without MMU support. I have tried to google it, but have not seen any worthwhile papers or research on it. Those which do deal with it only deal with bugs, such as uninitialized pointers, and not with memory corruption due to a soft error, that is, a hardware transient fault corrupting an instruction that writes to a memory location.
The reason I want to know this is that I am working on a proprietary manycore platform without any memory protection. Now my question is: can software be used to protect memory, especially against wild writes due to soft errors (as opposed to mistakes by a programmer)? Any help on this would be really appreciated.
If you're looking for runtime memory protection, the only sane option is hardware support. Hardware is the only way to intervene in a bad memory access before it can cause damage. Any software solution would be vulnerable to the very memory errors it is trying to protect against.
With software you could possibly implement a verification/detection scheme. You could periodically check portions of memory that the currently running program should not have access to and see whether they have changed (probably by CRCing those areas). But of course, if the rogue program damages the area where the checksums are held, or where the checking program's code is held, then all bets are off.
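A minimal sketch of such a scheme, with a toy checksum standing in for a real CRC and a made-up region table; the table and this code are themselves just more unprotected memory, which is exactly the weakness just described:

```c
#include <stdint.h>
#include <stddef.h>

/* Trivial stand-in for a real CRC; any decent CRC-32 routine would do. */
static uint32_t checksum(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum = (sum << 1 | sum >> 31) ^ *p++;
    return sum;
}

#define NREGIONS 4

struct guarded { const uint8_t *base; size_t len; uint32_t crc; };
static struct guarded regions[NREGIONS];   /* filled in once at startup */

/* Call periodically (timer tick, idle loop). Returns the index of the first
 * region whose contents changed unexpectedly, or -1 if everything matches. */
int verify_regions(void)
{
    for (int i = 0; i < NREGIONS; i++)
        if (regions[i].base &&
            checksum(regions[i].base, regions[i].len) != regions[i].crc)
            return i;
    return -1;
}
```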
Even this software checking solution would be more of a debugging utility than a permanent runtime protection. It is likely that a device with no MMU is a small embedded device which won't have the spare cycles to be constantly checking the device's memory.
Usually devices without MMUs are designed to run a single program with no kernel or anything else, and thus there is nothing to protect. If you need to run multiple programs and feel you need protection, you probably need a more advanced piece of hardware that supports the kind of features you're looking for.
If you want software implemented memory protection, then you will need support from your compiler and its associated libraries. I expect that there is one compiler only on this platform and so you should contact the vendor. I wouldn't hold out much hope for a positive response. Even if they had such tools, I would expect the performance of software memory protection to be unacceptable.
MMU-less systems are present in several embedded solutions.
The memory is managed by the kernel code. The entire memory (heap) is divided into heap lists of various sizes (heap lists can be of sizes 4 bytes, 8 bytes, 16 bytes, ... up to 1024 bytes), and there's a header attached to each heap block that tells whether that particular block is taken or not. When you need to assign a new heap block, you browse through the heap lists, see which blocks are free, and assign them to the requesting application. The same happens when you free a heap block of a particular size: the header of that block is updated to reflect that it has been freed.
Now, this implementation has to take care of the scenario where the application requests a heap block of a particular size and that size's heap list is full. In that case you break up a block from the next-larger heap list, or join together smaller heap blocks and add them to the requested size's heap list.
The implementation is much simpler than it seems.
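A rough sketch of that header-plus-size-lists scheme (the block sizes, names, list setup and the split/coalesce logic mentioned above are left out and purely illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* One header precedes every block in a size-segregated heap list. */
struct block_hdr {
    uint16_t size;              /* 4, 8, 16, ... up to 1024 bytes */
    uint8_t  taken;             /* 0 = free, 1 = handed out */
    struct block_hdr *next;     /* next block in the same size list */
};

#define NLISTS 9                /* 4, 8, 16, 32, 64, 128, 256, 512, 1024 */
static struct block_hdr *heap_list[NLISTS];   /* built once from the raw heap */

static int list_index(size_t want)
{
    int i = 0;
    for (size_t sz = 4; sz < want && i < NLISTS - 1; sz <<= 1)
        i++;
    return i;
}

/* Walk the matching size list and hand out the first free block. A real
 * implementation would split a larger block (or coalesce smaller ones)
 * when the list is exhausted, as described above. */
void *heap_alloc(size_t want)
{
    for (struct block_hdr *b = heap_list[list_index(want)]; b; b = b->next) {
        if (!b->taken) {
            b->taken = 1;
            return b + 1;       /* payload starts right after the header */
        }
    }
    return NULL;                /* no free block of this size */
}

void heap_free(void *p)
{
    struct block_hdr *b = (struct block_hdr *)p - 1;
    b->taken = 0;
}
```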
It depends on what applications the platform will run. There are type-safe languages (ATS, for instance) which can protect against software errors, and such languages can have good performance (again, ATS for instance).
