Decrease datomic memory usage - datomic

So I'm on a tiny server with not much RAM to spare, and when I try to run Datomic, it gets angry at me!
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b5a00000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1662.log
I came across this thread: https://groups.google.com/forum/#!topic/datomic/5_ZPZBFmCJg which says I need to change more than just
object-cache-max in my transactor .properties file. Unfortunately it doesn't go on to say exactly what else I need to change. Help would be appreciated.

You might read the docs on capacity planning for more context on configuring a Datomic transactor. As the group thread mentions, the -Xms1g and -Xmx1g settings are asking for a full gigabyte of heap. The docs I've linked show part of the solution in this case:
You can set the maximum memory available to a JVM process with the
-Xmx flag to java (or to bin/transactor).
Micro instances are not supported for Datomic deployment, though some are being run successfully in the wild (with very low write loads). You might try, for example, a configuration like this:
memory-index-threshold=16m
memory-index-max=64m
object-cache-max=128m
with -Xmx set to 512MB. This may take additional steps in AWS, etc., as reported here. The basic answer, though, is that you'll need to decrease the max heap size and experiment with reduced values for each of the other memory settings to accommodate the smaller heap.

To add some concrete details, this is what I had to do to get it working:
In some-transactor.properties:
memory-index-threshold=16m
memory-index-max=64m
object-cache-max=128m
Run the transactor with:
$ /path/to/bin/transactor -Xms512m -Xmx512m /path/to/some-transactor.properties

Related

Better Memory (Heap) management on Solaris 10

I have C code with embedded SQL for Oracle through Pro*C.
Whenever I do an insert or update (an update example is given below):
update TBL1 set COL1 = :v, . . . where rowid = :v
To manage bulk insertions and updates, I allocate several memory chunks so I can insert in bulk and commit once. There are other memory allocations going on as well, as and when necessary. How do I better manage the heap for these dynamic allocations?
One option is to make the heap size configurable at GNU link time. I'm using g++ version 2.95; I know it's quite old, but I have to use it for legacy reasons. Since the executable (which runs on Solaris 10), once built, could run on several production environments with varied resources, a one-size-fits-all heap size may not be appropriate. As an alternative, I need some mechanism where the heap can grow elastically as needed.
Unlike Linux, Solaris, I think, does not overcommit memory, so allocations can fail with ENOMEM when no space is left. What would be a good strategy for knowing that we are approaching the danger level, so that we can either deallocate chunks we have finished using, or flush the still-pending chunks to the Oracle DB and then deallocate them? Any strategy you could suggest?
C is not Java, where the heap size is fixed at startup.
The heap and the stack of a C compiled application both share the same virtual memory space and adjust dynamically.
The size of this space depends on whether you are compiling a 32 bit or a 64 bit binary, and also whether your kernel is a 32 bit or a 64 bit one (on SPARC hardware, it's always 64 bit).
If you don't have enough RAM and want Solaris to accept large memory reservations anyway, similar to the way Linux overcommits memory, you can simply add enough swap for the reservation to be backed by actual storage.
If, for some reason, you are unhappy with the Solaris libc memory allocator, you can evaluate the bundled alternatives like libumem and mtmalloc, or the third-party hoard. See http://www.oracle.com/technetwork/articles/servers-storage-dev/mem-alloc-1557798.html for details.
One solution would be to employ soft limits in your code for various purposes, e.g. only 100 transactions are handled at a time, and further transactions have to wait until the previous ones are deallocated. This guarantees predictable behavior, as no part of the code can use more memory than allowed. A sketch of such a soft limit is shown below.
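As an illustration of that idea, here is a minimal sketch of a soft-limited allocation wrapper. The function names, the size-prefix trick, and the 64 MB budget are all made up for this example, not part of any existing API; the point is simply to track outstanding allocations against a configurable budget so the caller knows when to flush pending chunks to the DB and free them.

#include <stdlib.h>

/* Hypothetical soft-limit wrapper: names and the 64 MB budget are
   illustrative only. */
static size_t soft_limit = 64 * 1024 * 1024;  /* configurable budget */
static size_t in_use     = 0;                 /* bytes currently handed out */

/* Returns NULL when the request would exceed the soft limit, so the
   caller can flush pending chunks to the DB and free them first. */
void *limited_malloc(size_t n)
{
    if (in_use + n > soft_limit)
        return NULL;                  /* over budget: flush/free before retrying */

    /* store the size in front of the block so limited_free can account for it */
    size_t *p = (size_t *)malloc(sizeof(size_t) + n);
    if (p == NULL)
        return NULL;                  /* genuine ENOMEM from the system */

    *p = n;
    in_use += n;
    return p + 1;
}

void limited_free(void *ptr)
{
    if (ptr == NULL)
        return;
    size_t *p = (size_t *)ptr - 1;
    in_use -= *p;
    free(p);
}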
The question is:
Do you really run out of memory in your application, or do you fragment your memory and fail to get a sufficiently large contiguous block? The strategies for handling each case are different.

Benchmarking memory use in the Keil uVision 5 simulator

I have a Keil uVision project that I would like to benchmark extensively. The code is currently running in simulator mode. To visualize results we simply store characters in a memory region and display said region as ASCII.
This approach works pretty well for obtaining timings using the Cortex-M system ticks. However, I don't have an approach for measuring the RAM usage of the code:
Ideally I would like the simulator to stop execution when the maximum amount of RAM has been used.
I would also like to see maximum heap usage (or even per-function usage).
Is there a way to obtain these values? I'm aware of the maximum stack size reported by the build system.
Is there a way to limit the amount of RAM available in the uVision simulator?
Thanks
There is a fairly obvious solution: just count the RAM in the memory window. First find the memory regions allocated for the heap and the stack (these can usually be found in the startup assembly file). Then go through the memory window in the debugger and look at where the memory did not get changed.
Keil usually initializes the memory with 0 so the stack boundaries can easily be seen.
The total stack usage can then be computed as:
$TOTAL = $TOP - $BOTTOM
If the boundary can't be seen it might make sense to first initialize the memory with a pattern (see here).
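If you want something less manual than eyeballing the memory window, a common trick is to paint the stack region with a known pattern at startup and later scan for the high-water mark. The sketch below assumes a full-descending stack; the region addresses, the fill value, and the margin are placeholders that would really come from your linker script or startup file.

#include <stdint.h>

/* Placeholder stack region bounds and fill pattern; in a real project
   these come from the linker script or the startup file. */
#define STACK_BOTTOM ((uint32_t *)0x20000400u)   /* lowest stack address  */
#define STACK_TOP    ((uint32_t *)0x20000800u)   /* initial stack pointer */
#define STACK_FILL   0xDEADBEEFu

/* Call this as early as possible, before the stack has grown far.
   The margin below STACK_TOP avoids overwriting frames already in use. */
void stack_paint(void)
{
    for (uint32_t *p = STACK_BOTTOM; p < STACK_TOP - 32; ++p)
        *p = STACK_FILL;
}

/* Scan upward from the bottom; the first word that no longer holds the
   pattern marks the deepest point the stack has reached so far. */
uint32_t stack_high_water_bytes(void)
{
    const uint32_t *p = STACK_BOTTOM;
    while (p < STACK_TOP && *p == STACK_FILL)
        ++p;
    return (uint32_t)((uintptr_t)STACK_TOP - (uintptr_t)p);
}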

profiling maximum memory usage in C application - linux

I'm developing a C module for PHP under Linux and I'm trying to find a way to profile my code's maximum memory spike (peak usage).
Using valgrind I can get the total memory allocated within the code. But allocated memory comes and goes ;). What I need is the highest memory usage that occurred during the C application's run, so I can get an overall view of the memory requirements and have a measurement point for optimizing the code.
Does anyone know any tool/trick/good practice that could help?
Take a look at Massif: http://valgrind.org/docs/manual/ms-manual.html
Have you checked massif (one of Valgrind's tools)? This is actually what you are looking for.
Another possibility would be memusage (one of glibc's utilities, from glibc-utils).
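For reference, a typical massif session looks roughly like this (the binary name is just a placeholder); ms_print then prints a text graph of the heap snapshots, including the peak:
$ valgrind --tool=massif ./my_php_test      # writes massif.out.<pid>
$ ms_print massif.out.<pid>                 # text graph of snapshots, including the peak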

Increment Memory Usage in SQL CLR

To get the total memory used by SQL CLR, you would run the following query:
select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb
from sys.dm_os_memory_clerks
where type = 'MEMORYCLERK_SQLCLR'
The result I am getting is:
Is there any way to increase this memory? If so, how, besides buying more RAM...
Based on personal, if not overly informed, experience, I'm pretty certain you (i.e. the "outside user") cannot control how much memory SQL Server allocates to CLR processes.
Further information that may or may not help here: there are limits, ratios, and (our big headache) fragmentation of the memory assigned over time (that's days of regular use). Our issues could only be addressed by stopping and restarting the SQL service. Again, I'm pretty certain that it doesn't matter how much memory is available on the box so much as the internal means by which SQL Server addresses and allocates it. The problems we were having back then were tangled, confusing, recurrent, and very irritating... and then, based on my research, we upgraded to the 64-bit edition (SQL 2008), which has very different means of addressing and allocating all that memory we had installed on the box. All of our problems went away, and I haven't had to think about the situation since.

Keeping address space size to 4 GB in 64 bit

I want some applications to run with a 4 GB address space on a 64-bit OS running on a 64-bit processor (x86 Xeon, 8 cores). I know there is an option to compile with the -m32 flag, but at the moment the computer I'm working with doesn't have the support required for compiling with -m32, so I can't use it, nor can I install anything on that computer, as I don't have the rights.
Now my question is whether there is any way to restrict the address space to 4 GB. Please don't ask me why I want to do this, just tell me how, if it is possible. Thanks.
The ulimit/setrlimit mechanism, accessible in bash via ulimit, can do this with minor drawbacks.
ulimit -v 4000000
should limit the memory available to the current shell and the processes it starts to roughly 4 GB (the value is in kilobytes).
However, it limits the total size of your address space mappings, not where those mappings are placed, so you may still end up with pointers larger than 2^32.
You could try using setrlimit to impose a virtual memory limit on the process during its initialization, via the RLIMIT_AS resource. I have used setrlimit to increase resource allocations for a user-space process before, but I don't see why it wouldn't work in the other direction as well.
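For illustration, here is a minimal sketch of that approach; the hard 4 GB cap and the reduced error handling are simplifications for the example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the total address space of this process at 4 GB. */
    struct rlimit rl;
    rl.rlim_cur = 4ULL * 1024 * 1024 * 1024;   /* soft limit */
    rl.rlim_max = 4ULL * 1024 * 1024 * 1024;   /* hard limit */

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }

    /* From here on, mmap/brk growth past 4 GB fails and malloc
       returns NULL instead of succeeding. */
    return EXIT_SUCCESS;
}

As with ulimit -v, this caps the total size of the process's mappings; it does not force those mappings to live below the 4 GB boundary, so pointers can still exceed 2^32.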
