Equivalent of mmap with MAP_GROWSDOWN in Windows - c

In Linux, I could use mmap with the MAP_GROWSDOWN flag to allocate memory for an automatically-growing stack. To quote the manpage,
MAP_GROWSDOWN
This flag is used for stacks. It indicates to the kernel virtual memory system that the mapping should extend downward in memory. The return address is one page lower than the memory area that is actually created in the process's virtual address space. Touching an address in the "guard" page below the mapping will cause the mapping to grow by a page. This growth can be repeated until the mapping grows to within a page of the high end of the next lower mapping, at which point touching the "guard" page will result in a SIGSEGV signal.
Is there some equivalent technique in Windows? Even something ugly, like asking the OS to notify you about page faults so you can allocate a new page underneath (and make it look contiguous by asking the OS to fiddle with page tables)?
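For reference, the Linux-side call being described looks roughly like this (a sketch, with an arbitrary one-page initial size and minimal error handling):

```c
#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
    size_t initial = 4096;   /* one page to start with */

    /* Anonymous, private mapping flagged as a grow-down stack. */
    void *stack = mmap(NULL, initial, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN,
                       -1, 0);
    if (stack == MAP_FAILED)
        return 1;

    /* In practice the region is used as an actual stack (e.g. handed to
       clone() or switched to in assembly); the kernel extends the mapping
       downward when faults occur just below it, near the stack pointer. */

    munmap(stack, initial);
    return 0;
}
```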

With VirtualAlloc you can reserve a block of memory, commit the top two pages, and set the lower of the two pages with PAGE_GUARD. When the stack grows down and accesses the guard page, a structured exception is thrown that you can handle to commit the next page down and set PAGE_GUARD on it.
The above is similar to how the stack is handled in Windows processes. Below is a description of such a stack as shown by Sysinternals VMMAP.EXE: a 256KB stack with 32K committed, 12K of guard pages, and 212K of reserved stack remaining.
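A minimal sketch of this technique (Windows-only, built with MSVC for the __try/__except structured exception handling; the 1 MB reservation size and the write loop are just illustrative, and error handling is omitted):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    SIZE_T page = si.dwPageSize;
    SIZE_T reserve = 1024 * 1024;      /* illustrative 1 MB reservation */

    /* Reserve the whole range, then commit only the top two pages. */
    char *base = (char *)VirtualAlloc(NULL, reserve, MEM_RESERVE, PAGE_NOACCESS);
    char *top  = base + reserve;
    VirtualAlloc(top - 2 * page, 2 * page, MEM_COMMIT, PAGE_READWRITE);

    /* Mark the lower of the two committed pages as the guard page. */
    DWORD old;
    VirtualProtect(top - 2 * page, page, PAGE_READWRITE | PAGE_GUARD, &old);

    __try {
        /* Writing downward eventually touches the guard page. */
        for (char *p = top - 1; ; p--)
            *p = 0;
    }
    __except (GetExceptionCode() == EXCEPTION_GUARD_PAGE
                  ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        /* A real growable stack would commit the next page down here,
           re-apply PAGE_GUARD to it, and resume execution instead. */
        puts("guard page touched");
    }
    return 0;
}
```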

Related

Is it possible to control page-out and page-in by user programming? If yes then how?

My questions are as follows:
I mmap (memory-map) a file into the virtual address space.
When I access the first byte of the file through a pointer for the first time, the access fails and raises a page fault, because the data is not present in memory yet. The OS then brings the data from disk into memory, and my access succeeds.
(here comes the question)
After I modify the data in memory and write it back to the disk file, how can I free the physical memory for other uses while keeping the virtual mapping, so that the data can be fetched back into memory as needed?
This sounds like the normal page-out and page-in behavior: when memory is exhausted, the OS swaps LRU (or similar) pages out to disk (the swap file) to free physical memory for other processes, and fetches the evicted data back into memory as needed. But this mechanism is controlled by the OS.
For some reasons, I need to control the page-out and page-in behavior myself. So how should I do that? Hack the kernel?
You can use the madvise system call. Its behaviour is affected by the advice argument; there are many choices for advice and the optimal one should be picked based on the specifics of your application.
The flag MADV_DONTNEED means that the given range of physical backing frames should be unconditionally freed (i.e. paged out). Also:
After a successful MADV_DONTNEED operation, the semantics of memory access in the specified region are changed: subsequent accesses of pages in the range will succeed, but will result in either repopulating the memory contents from the up-to-date contents of the underlying mapped file (for shared file mappings, shared anonymous mappings, and shmem-based techniques such as System V shared memory segments) or zero-fill-on-demand pages for anonymous private mappings.
This could be useful if you're absolutely certain that it will be a long time before you access the same position again.
However, it might not be necessary to force the kernel to actually page out. Another possibility, if you're accessing the mapping sequentially, is to use madvise with MADV_SEQUENTIAL to tell the kernel that you will access your memory mapping mostly sequentially:
Expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)
or MADV_RANDOM
Expect page references in random order. (Hence, read ahead may be less useful than normally.)
These are not as aggressive as explicitly calling MADV_DONTNEED to page out. (Of course you can combine these with MADV_DONTNEED as well)
In recent kernel versions there is also the MADV_FREE flag which will lazily free the page frames; they will stay mapped in if enough memory is available, but are reclaimed by the kernel if the memory pressure grows.
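A rough sketch of the madvise calls discussed above (the file name "data.bin" is made up and error handling is omitted):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);            /* hypothetical data file */
    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);

    /* Hint: mostly sequential access -> aggressive read-ahead, and pages
       may be freed soon after they are used. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    /* ... process the first half of the file, msync() if required ... */

    /* Drop the physical frames for the first half; the virtual mapping
       stays valid and a later access re-reads the data from the file. */
    madvise(p, st.st_size / 2, MADV_DONTNEED);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```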
You can check out mlock + munlock to lock/unlock pages. This gives you control over pages being swapped out.
You need to have the CAP_IPC_LOCK capability to perform this operation, though.
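For completeness, a small sketch of mlock/munlock on a heap buffer (assuming CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK limit):

```c
#include <sys/mman.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 4096 * 16;
    void *buf = malloc(len);

    /* Pin the pages in RAM: the kernel will not swap them out. */
    if (mlock(buf, len) != 0)
        return 1;   /* likely EPERM/ENOMEM without the capability/limit */

    /* ... use buf for data that must stay resident ... */

    munlock(buf, len);  /* allow the pages to be swapped out again */
    free(buf);
    return 0;
}
```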

How programmatically get Linux process's stack start and end address?

For a single-threaded program, I want to check whether or not a given virtual address is in the process's stack. I want to do that from inside the process, which is written in C.
I am thinking of reading /proc/self/maps to find the line labelled [stack] to get start and end address for my process's stack. Thinking about this solution led me to the following questions:
/proc/self/maps shows a stack of 132k for my particular process, and the maximum size for the stack (ulimit -s) is 8 MB on my system. How does Linux know that a page fault occurring beyond the current stack area belongs to the stack (so that the stack must be grown) rather than that we are reaching another memory area of the process?
Does Linux shrink back the stack ? In other words, when returning from deep function calls for example, does the OS reduce the virtual memory area corresponding to the stack ?
How much virtual space is initially allocated for the stack by the OS ?
Is my solution correct and is there any other cleaner way to do that ?
Lots of the stack setup details depend on which architecture you're running on, executable format, and various kernel configuration options (stack pointer randomization, 4GB address space for i386, etc).
At the time the process is exec'd, the kernel picks a default stack top (for example, on the traditional i386 arch it's 0xc0000000, i.e. the end of the user-mode area of the virtual address space).
The type of executable format (ELF vs a.out, etc) can in theory change the initial stack top. Any additional stack randomization and any other fixups are then done (for example, the vdso [system call springboard] area generally is put here, when used). Now you have an actual initial top of stack.
The kernel now allocates whatever space is needed to construct argument and environment vectors and so forth for the process, initializes the stack pointer, creates initial register values, and initiates the process. I believe this provides the answer for (3): i.e. the kernel allocates only enough space to contain the argument and environment vectors, other pages are allocated on demand.
Other answers, as best as I can tell:
(1) When a process attempts to store data in the area below the current bottom of the stack region, a page fault is generated. The kernel fault handler determines where the next populated virtual memory region within the process' virtual address space begins. It then looks at what type of area that is. If it's a "grows down" area (at least on x86, all stack regions should be marked grows-down), and if the process' stack pointer (ESP/RSP) value at the time of the fault is less than the bottom of that region and if the process hasn't exceeded the ulimit -s setting, and the new size of the region wouldn't collide with another region, then it's assumed to be a valid attempt to grow the stack and additional pages are allocated to satisfy the process.
(2) Not 100% sure, but I don't think there's any attempt to shrink stack areas. Presumably normal LRU page sweeping would be performed making now-unused areas candidates for paging out to the swap area if they're really not being re-used.
(4) Your plan seems reasonable to me: /proc/NN/maps should give you the start and end addresses of the stack region as a whole. This would be the largest your stack has ever been, I think. The current actual working stack area, OTOH, should reside between your current stack pointer and the end of the region (ordinarily nothing should be using the area of the stack below the stack pointer).
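A rough sketch of the /proc/self/maps check proposed in the question and endorsed in (4) above (single-threaded process assumed; the parsing is deliberately simplistic and unchecked):

```c
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

/* Return 1 if addr falls inside the region labelled [stack] in
   /proc/self/maps, 0 otherwise.  Assumes the usual "start-end perms ..."
   line format and that a [stack] entry is present. */
static int addr_in_stack(const void *addr)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[512];
    uintptr_t a = (uintptr_t)addr;
    int found = 0;

    while (f && fgets(line, sizeof line, f)) {
        if (strstr(line, "[stack]")) {
            uintptr_t start, end;
            if (sscanf(line, "%" SCNxPTR "-%" SCNxPTR, &start, &end) == 2)
                found = (a >= start && a < end);
            break;
        }
    }
    if (f) fclose(f);
    return found;
}

int main(void)
{
    int on_stack;   /* a local variable, so its address should be in [stack] */
    printf("&on_stack in [stack]? %d\n", addr_in_stack(&on_stack));
    return 0;
}
```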
My answer is for Linux on x86-64 with kernel 3.12.23 only. It might or might not apply to other versions or architectures.
(1)+(2) I'm not sure here, but I believe it is as Gil Hamilton said before.
(3) You can see the amount in /proc/pid/maps (or /proc/self/maps if you target the calling process). However, not all of it is actually usable as stack by your application. The argument (argv[]) and environment vectors (__environ[]) usually consume quite a bit of space at the bottom (highest address) of that area.
To actually find the area the kernel designated as "stack" for your application, you can have a look at /proc/self/stat. Its values are documented here. As you can see, there is a field for "startstack". Together with the size of the mapped area, you can compute the current amount of stack reserved. Along with "kstkesp", you could determine the amount of free stack space or actually used stack space (keep in mind that any operation done by your thread most likely will change those values).
Also note that this works only for the process's main thread! Other threads won't get a labeled "[stack]" mapping, but will either use anonymous mappings or might even end up on the heap. (Use the pthreads API to find those values, or remember the stack start in the thread's main function.)
(4) As explained in (3), your solution is mostly OK, but not entirely accurate.
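Staying with the /proc/self/stat approach from (3) above, here is a rough sketch of pulling out startstack (field 28 per proc(5); the parse skips past the last ')' because the comm field may itself contain spaces; error handling omitted):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[4096];
    FILE *f = fopen("/proc/self/stat", "r");
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    buf[n] = '\0';
    fclose(f);

    char *p = strrchr(buf, ')') + 2;     /* skip ") " -> field 3 (state) */
    unsigned long startstack = 0;
    int field = 3;
    for (char *tok = strtok(p, " "); tok; tok = strtok(NULL, " "), field++) {
        if (field == 28) {               /* (28) startstack, see proc(5) */
            sscanf(tok, "%lu", &startstack);
            break;
        }
    }
    printf("startstack = 0x%lx\n", startstack);
    return 0;
}
```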

Does the Virtual Memory area struct only come into the picture when there is a page fault?

Virtual memory is quite a complex topic for me, and I am trying to understand it. Here is my understanding for a 32-bit system with, say, just 2 GB of RAM. I have tried reading many links, and I am not confident at the moment. I would like you people to help me clear up my concepts. Please acknowledge my points, and also answer wherever you feel I am wrong. I also have a confused section in my points. So, here starts the summary.
Every process thinks it is the only one running. It can access the 4 GB of memory of its virtual address space.
When a process accesses a virtual address, it is translated to a physical address via the MMU.
The MMU is part of the CPU; it is hardware.
When the MMU cannot translate the address to a physical one, it raises a page fault.
On a page fault, the kernel is notified. The kernel checks the VM area structs. If it can find the address (the data may be on disk), it will do some page-in/page-out and bring this memory into RAM.
Now the MMU will try again and will succeed this time.
If the kernel cannot find the address, it will raise a signal; for example, an invalid access will raise a SIGSEGV.
Confused points.
Is the page table maintained in the kernel? Does this VM area struct have a page table?
How can the MMU fail to find the address in physical RAM? Let's say it translates to some wrong address in RAM: the code will still execute, but it will be a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
Is the mapping table from virtual to physical inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address: is it segment + 12-bit shift -> page frame number, and then the addition of the offset (bits 0 to 11) -> gives a physical address?
Does that mean that, for a 32-bit architecture, with this calculation in mind I can determine the physical address from a virtual address?
cat /proc/pid_value/maps shows me the current mapping of the VM areas; basically, it reads the vm_area_structs and prints them. That means this is important, but I am not able to fit this piece into the complete picture. Is the vm_area_struct generated when the program is executed? Do the VM areas only come into the picture when the MMU cannot translate an address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the mapped file descriptor, and an offset. I am sure this file descriptor is the one for the file on the hard disk, and the offset is into that file.
The high-mem concept is that the kernel cannot directly access memory regions greater than about 1 GB, so it needs page tables to map them indirectly, temporarily setting up mappings for those addresses. Does HIGHMEM come into the picture every time? User space can translate its addresses directly via the MMU. In what scenario does the kernel really want to access high memory? I believe kernel drivers mostly use kmalloc, which returns direct-mapped (memory + offset) addresses, so no extra mapping is really required there. So, the question is: in what scenario does a kernel need to access high mem?
Does the processor have to come with MMU support? Can those without MMU support not run Linux?
Is the page table maintained in the kernel? Does this VM area struct have a page table?
Yes, and not exactly: each process has an mm_struct, which contains a list of vm_area_structs (which represent abstract, processor-independent memory regions, a.k.a. mappings), and a field called pgd, which is a pointer to the processor-specific page table (which contains the current state of each page: valid, readable, writable, dirty, ...).
The page table doesn't need to be complete, the OS can generate each part of it from the VMAs.
How can the MMU fail to find the address in physical RAM? Let's say it translates to some wrong address in RAM: the code will still execute, but it will be a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
The translation fails, e.g. because the page was marked as invalid, or a write access was attempted against a readonly page.
Is the mapping table from virtual to physical inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address: is it segment + 12-bit shift -> page frame number, and then the addition of the offset (bits 0 to 11) -> gives a physical address?
Does that mean that, for a 32-bit architecture, with this calculation in mind I can determine the physical address from a virtual address?
There are two kinds of MMUs in common use. One of them only has a TLB (Translation Lookaside Buffer), which is a cache of the page table. When the TLB doesn't have a translation for an attempted access, a TLB miss is generated, the OS does a page table walk, and puts the translation in the TLB.
The other kind of MMU does the page table walk in hardware.
In any case, the OS maintains a page table per process; this maps virtual page numbers to physical frame numbers. This mapping can change at any moment: when a page is paged in, the physical frame it is mapped to depends on the availability of free memory.
cat /proc/pid_value/maps shows me the current mapping of the VM areas; basically, it reads the vm_area_structs and prints them. That means this is important, but I am not able to fit this piece into the complete picture. Is the vm_area_struct generated when the program is executed? Do the VM areas only come into the picture when the MMU cannot translate an address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the mapped file descriptor, and an offset. I am sure this file descriptor is the one for the file on the hard disk, and the offset is into that file.
To a first approximation, yes. Beyond that, there are many reasons why the kernel may decide to fiddle with a process' memory, e.g: if there is memory pressure it may decide to page out some rarely used pages from some random process. User space can also manipulate the mappings via mmap(), execve() and other system calls.
The high-mem concept is that the kernel cannot directly access memory regions greater than about 1 GB, so it needs page tables to map them indirectly, temporarily setting up mappings for those addresses. Does HIGHMEM come into the picture every time? User space can translate its addresses directly via the MMU. In what scenario does the kernel really want to access high memory? I believe kernel drivers mostly use kmalloc, which returns direct-mapped (memory + offset) addresses, so no extra mapping is really required there. So, the question is: in what scenario does a kernel need to access high mem?
Totally unrelated to the other questions. In summary, high memory is a hack to be able to access lots of memory in a limited address space computer.
Basically, the kernel has a limited address space reserved to it (on x86, a typical user/kernel split is 3 GB/1 GB). [Processes can run in user space or kernel space; a process runs in kernel space when a syscall is invoked. To avoid having to switch the page table on every context switch, on x86 the address space is typically split between user space and kernel space.] So the kernel can directly access up to ~1 GB of memory. To access more physical memory, there is some indirection involved, which is what high memory is all about.
Does the processor have to come with MMU support? Can those without MMU support not run Linux?
Laptop/desktop processors come with an MMU; x86 has supported paging since the 386.
Linux, specifically the variant called µClinux, supports processors without MMUs (!MMU). Many embedded systems (ADSL routers, ...) use processors without an MMU. There are some important restrictions, among them:
Some syscalls don't work at all, e.g. fork().
Some syscalls work with restrictions and non-POSIX-conforming behavior, e.g. mmap().
The executable file format is different, e.g. bFLT or ELF-FDPIC instead of ELF.
The stack cannot grow, and its size has to be set at link-time.
When a program is loaded, the kernel first sets up the VM areas for that process, right? These VM areas record where the program's sections live, in memory or on the HDD. Then the whole story of updating the CR3 register and doing the page table walk or TLB lookup comes into the picture, right? So whenever there is a page fault, the kernel updates the page table by looking at the VM areas, is that correct? But they say the VM areas keep being updated. How is that possible, given that cat /proc/pid_value/maps keeps changing and the map isn't constant from start to end? So the real information is in the VM area structs, is it? That is the actual information about where the sections of the program lie, whether on the HDD or in physical memory (RAM)? Is that filled in during process loading, as the first job? The kernel does the page-in/page-out on a page fault and updates the VM areas, right? So it must also know where the whole program is located on the HDD for page-in/page-out, right? Please correct me here. This continues my first question from the previous comment.
When the kernel loads a program, it will set up several VMAs (mappings) according to the segments in the executable file (which, for ELF files, you can see with readelf --segments): the text/code segment, data segment, etc. During the lifetime of the program, additional mappings may be created by the dynamic/runtime linkers, by the memory allocator (malloc(), which may also extend the data segment via brk()), or directly by the program via mmap(), shm_open(), etc.
The VMAs contain the necessary information to generate the page table, e.g. they tell whether that memory is backed by a file or by swap (anonymous memory). So, yes, the kernel will update the page table by looking at the VMAs. The kernel will page in memory in response to page faults, and will page out memory in response to memory pressure.
Using x86 no PAE as an example:
On x86 with no PAE, a linear address can be split into 3 parts: the top 10 bits point to an entry in the page directory, and the middle 10 bits point to an entry in the page table pointed to by that page directory entry. The page table entry may contain a valid physical frame number: the top 20 bits of a physical address. The bottom 12 bits of the virtual address are an offset into the page and go untranslated into the physical address.
Each time the kernel schedules a different process, the CR3 register is written with a pointer to the page directory for the current process. Then, each time a memory access is made, the MMU tries to look for a translation cached in the TLB; if it doesn't find one, it looks for one by doing a page table walk starting from CR3. If it still doesn't find one, a page fault is raised, the CPU switches to Ring 0 (kernel mode), and the kernel tries to find one in the VMAs.
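As a concrete illustration of that 10/10/12 split (a sketch, not kernel code; the example address is arbitrary):

```c
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t linear = 0xc0100234;                 /* arbitrary example address */

    uint32_t pde_index = (linear >> 22) & 0x3ff;  /* top 10 bits: page directory entry */
    uint32_t pte_index = (linear >> 12) & 0x3ff;  /* next 10 bits: page table entry    */
    uint32_t offset    =  linear        & 0xfff;  /* bottom 12 bits: offset in page    */

    printf("PDE index %" PRIu32 ", PTE index %" PRIu32 ", offset 0x%03" PRIx32 "\n",
           pde_index, pte_index, offset);

    /* Physical address = (frame number found in the PTE << 12) | offset. */
    return 0;
}
```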
Also, I believe this reading from CR3, page directory -> page table -> page frame number -> memory address, is all done by the MMU. Am I correct?
On x86, yes, the MMU does the page table walk. On other systems (e.g: MIPS), the MMU is little more than the TLB, and on TLB miss exceptions the kernel does the page table walk by software.
Though this is not going to be the best answer, I would like to share my thoughts on the confused points.
1. Is the page table maintained...
Yes, the kernel maintains the page tables. In fact it maintains nested page tables, and the top of the page tables is stored in top_pmd (pmd, I suppose, stands for page mapping directory). You can traverse all the page tables using this structure.
2. How can the MMU fail to find the address in physical RAM.....
I am not sure I understood the question. But if, because of some problem, an instruction faults or something outside its instruction area is accessed, you generally get an undefined-instruction exception resulting in an abort. If you look at the crash dumps, you can see it in the kernel log.
3. Is the mapping table from virtual to physical inside the MMU...
Yes. The MMU is SW + HW. The HW part is things like the TLB, and the mapping tables are stored there. For instructions, that is for the code section, I always converted between physical and virtual addresses and they always matched, and almost all of the time they match for data sections as well.
4. cat /proc/pid_value/maps shows me the current mapping of the VM areas....
This is mostly used for analyzing the virtual addresses of user-space programs. As you know, virtually all user-space programs can have 4 GB of virtual address space, so unlike in the kernel, if I say 0xc0100234 you cannot directly go and point to the instruction. You need this mapping and the virtual address to locate the instruction based on the data you have.
5. The high-mem concept is that the kernel cannot directly access memory...
High-mem corresponds to user-space memory (someone correct me if I am wrong). When the kernel wants to read some data from an address in user space, it will be accessing HIGHMEM.
6. Does the processor have to come with MMU support? Can those without MMU support not run Linux?
The MMU, as I mentioned, is HW + SW, so the HW mostly comes with the chipset, and the SW is generally architecture dependent. You can disable the MMU in the kernel config and build; I have never tried it, though. These days almost all chipsets have one, but I think some small boards disable the MMU. I am not entirely sure, though.
As all these are conceptual questions, I may be lacking some knowledge and be wrong in places. If so, others please correct me.

How does a program run in memory, and how is memory handled by the operating system?

I am not clear on how memory is managed while a process is executing at run time.
Here is a diagram.
I am not clear about the following in the image:
1) What is the stack which this image is referring to?
2) What is the memory mapping segment, which refers to file mappings?
3) What does the heap have to do with a process? Is the heap handled only within the process, or is it something maintained by the operating system kernel, with memory then allocated by malloc (using the heap) whenever a user-space application calls it?
The article mentions
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
virtual address space, which in 32-bit mode is always a 4GB block of memory addresses. These virtual addresses are mapped to physical memory by page tables,
4) Does this mean that only one program runs in memory at a time, occupying the entire 4 GB of RAM?
The same article also mentions
Linux randomizes the stack, memory mapping segment, and heap by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and hampering its effectiveness.
5) Is it referring to randomizing the stack within a process, or to whatever space is left after accounting for the space of all the processes?
1) What is the stack which this image is referring to?
The stack is for allocating local variables and function call frames (which include things like function parameters, where to return after the function has called, etc.).
2) What is the memory mapping segment, which refers to file mappings?
The memory mapping segment holds linked libraries. It is also where mmap allocations go. In general, a memory-mapped file is simply a region of memory backed by a file.
3) What does the heap have to do with a process? Is the heap handled only within the process, or is it something maintained by the operating system kernel, with memory then allocated by malloc (using the heap) whenever a user-space application calls it?
The heap is process specific and is managed by the process itself; however, it must request memory from the OS to begin with (and as needed). You are correct, this is typically where malloc allocations come from. However, most malloc implementations make use of mmap to request chunks of memory, so there is really less of a distinction between the heap and the memory mapping segment. Really, the heap could be considered part of the memory mapped segment.
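As a rough illustration of how an allocator obtains a chunk from the OS via mmap (a sketch with an arbitrary 1 MB size, not the actual code of any malloc implementation):

```c
#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
    size_t len = 1 << 20;   /* arbitrary 1 MB chunk */

    /* Anonymous, private mapping: this is roughly how malloc implementations
       obtain large blocks, which then live in the "memory mapping segment"
       rather than in the brk()-managed heap. */
    void *chunk = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED)
        return 1;

    /* ... the allocator would carve this chunk into smaller allocations ... */

    munmap(chunk, len);
    return 0;
}
```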
4) Does this mean that only one program runs in memory at a time, occupying the entire 4 GB of RAM?
No. It means that the amount of addressable memory available to the program is limited to 4 GB; what is actually in physical memory at any given time depends on how the OS allocates physical memory, and is beyond the scope of this question.
5) Is it referring to randomizing the stack within a process, or to whatever space is left after accounting for the space of all the processes?
I've never seen anything that suggests 4 GB of space "hampers" the effectiveness of memory allocation strategies used by the OS. Additionally, as @Jason notes, the locations of the stack, memory mapped segment, and heap are randomized "to prevent predictable security exploits, or at least make them a lot harder than if every process the OS managed had each portion of the executable in the exact same virtual memory location." To be specific, the OS randomizes the virtual addresses of the stack, memory mapped region, and heap. Everything the process sees is a virtual address, which is then mapped to a physical address in memory, depending on where the specific page is located. More information about the mapping between virtual and physical addresses can be found here.
This Wikipedia article on paging is a good starting point for learning how the OS manages memory between processes, and is a good resource for answering questions 4 and 5. In short, memory is allocated to processes in pages, and these pages either exist in main memory or have been "paged out" to disk. When a process accesses a memory address whose page is not resident, the OS will bring the page from disk into main memory, replacing another page if needed. There are various page replacement strategies, and I refer you to the article to learn more about the advantages and disadvantages of each.
Part 1. The Stack ...
A function can call a function, which might call another function. Any local variables allocated end up on the stack with each call, and are de-allocated as each function exits, hence "stack". You might consult Wikipedia for this stuff ... http://en.wikipedia.org/wiki/Stack_%28abstract_data_type%29
Linux randomizes the stack, memory mapping segment, and heap by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and hampering its effectiveness.
I believe this is more of a generalization being made in the article when comparing the ability to randomize in 32 vs. 64-bits. 3GB of addressable memory in 32-bits is still quite a bit of space to "move around" ... it's just not as much room as can be afforded in a 64-bit OS, and there are certain applications, such as image-editors, etc. that are very memory intensive, and can easily use up the entire 3GB of addressable memory available to them. Keep in mind I'm saying "addressable" memory ... this is dependent on the platform and not the amount of physical memory available in the system.

Do mmap/mprotect-readonly zero pages count towards committed memory?

I want to keep virtual address space reserved in my process for memory that was previously used but is not presently needed. I'm interested in the situation where the host kernel is Linux and it's configured to prevent overcommit (which it does by detailed accounting for all committed memory).
If I just want to prevent the data that my application is no longer using from occupying physical memory or getting swapped to disk (wasting resources either way), I can madvise the kernel that it's unneeded, or mmap new zero pages over top of it. But neither of these approaches will necessarily reduce the amount of memory that counts as committed, which other processes are then prevented from using.
What if I replace the pages with fresh zero pages that are marked read-only? My intent is that they don't count towards committed memory, and further that I can later use mprotect to make them writable, and that it would fail if making them writable would go over the committed memory limit. Is my understanding correct? Will this work?
If you're not using the page (reading or writing to it), it won't be committed to your address space (only reserved).
But your address space is limited, so you can't play with it as much as you'd like.
See, for example, ElectricFence, which may fail for a large number of allocations because of the insertion of "null"/guard pages (anonymous memory with no access).
Have a look at this thread: "mprotect() failed: Cannot allocate memory":
http://thread.gmane.org/gmane.comp.lib.glibc.user/538/focus=976052
On Linux, assuming overcommit has not been disabled, you can use the MAP_NORESERVE flag to mmap, which will ensure that the page in question will not be accounted as allocated memory prior to being accessed. If overcommit has been completely disabled, see below about multiple-mapping pages.
Note that Linux's behavior for zero pages has changed at times in the past; with some kernel versions, simply reading the page would cause it to be allocated. With others, a write is necessary. Note that the protection flags do not cause allocation directly; however they can prevent you from accidentally triggering an allocation. Therefore, for most reliable results you should avoid accessing the page at all by mprotecting with PROT_NONE.
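A sketch of combining MAP_NORESERVE with PROT_NONE and enabling access later (whether the mprotect call itself gets charged against the commit limit depends on the kernel's overcommit mode; error handling is minimal):

```c
#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 4096;   /* arbitrary reservation size */

    /* Reserve address space without touching it: PROT_NONE prevents any
       accidental access, and MAP_NORESERVE asks the kernel not to account
       for the pages up front. */
    char *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Later, when the memory is actually needed, make part of it usable. */
    if (mprotect(p, 4096, PROT_READ | PROT_WRITE) != 0)
        return 1;

    p[0] = 42;                /* now backed by a real page */
    munmap(p, len);
    return 0;
}
```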
As another, more portable option, you can map the same page at multiple locations. That is, create and open an empty temp file, unlink it, ftruncate to some reasonable number of pages, then mmap repeatedly at offset 0 into the file. This will absolutely guarantee the memory only counts once against your program's memory usage. You can even use MAP_PRIVATE to auto-reallocate it when you write to the page.
This may have higher memory usage than the MAP_NORESERVE technique (both for kernel tracking data, and for the pages of the temp file itself), however, so I would recommend using MAP_NORESERVE instead when available. If you do use this technique, try to make the region being mapped reasonably large (and put it in /dev/shm if on Linux, to avoid actual disk IO). Each individual mmap call will consume a certain amount of (non-swappable) kernel memory to track it, so it's good to keep that count down.
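A sketch of the temp-file trick just described (the /dev/shm path and the 16-page size are arbitrary choices; error handling is minimal):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t chunk = 16 * (size_t)page;         /* arbitrary size per mapping */

    /* Create an anonymous temp file and size it to one chunk. */
    char tmpl[] = "/dev/shm/zeroXXXXXX";
    int fd = mkstemp(tmpl);
    if (fd < 0)
        return 1;
    unlink(tmpl);                             /* file lives only as long as fd */
    ftruncate(fd, chunk);

    /* Map the same file pages at several addresses; the physical pages
       (and the commit charge) are counted only once. */
    void *a = mmap(NULL, chunk, PROT_READ, MAP_SHARED, fd, 0);
    void *b = mmap(NULL, chunk, PROT_READ, MAP_SHARED, fd, 0);
    (void)a; (void)b;

    close(fd);
    return 0;
}
```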
