OpenSearchServer: Why am I getting this error Error (java.lang.NullPointerException) - database

I am using OpenSearchServer v1.2.4 rc3.
For the first few days it worked fine.
But once the index size reached 1.0 GB, I got the error
"Error (java.lang.NullPointerException)"
when starting my crawler. The crawler now works fine for some time and then stops with the same error.
What's wrong?

Depending on the size of your index, a memory parameter must be added. By default, OpenSearchServer is set up to run on small servers with the default RAM values provided by the Java Virtual Machine (from 64 MB to 512 MB only).
For large indexes, you must set a higher value. On a Unix/Linux server, just create an /etc/opensearchserver file with the following content:
CATALINA_OPTS="-Xms2G -Xmx2G -server"
export CATALINA_OPTS
On a Windows server, edit the start.bat file and add the following line just after :okExec
set CATALINA_OPTS="-Xms2G -Xmx2G -server"
Replace 2G (which means 2 GB) with the amount of memory you want to allocate to OpenSearchServer.
With a 32-bit JVM, the memory is limited to about 2.5 GB. You can use more memory with a 64-bit operating system, using the following lines (on Unix/Linux):
CATALINA_OPTS="-Xms12G -Xmx12G -d64 -server"
For 64-bit Windows:
set CATALINA_OPTS="-Xms12G -Xmx12G -d64 -server"
After restarting OpenSearchServer, just check in the Runtime tab panel that the expected amount of memory is available.
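As a quick cross-check from a shell (Unix/Linux only; a rough sanity check, not an official OpenSearchServer procedure), you can confirm that the running Java process picked up the new flags:
ps aux | grep -- "-Xmx"
The -Xms/-Xmx values you configured should show up in the process arguments.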
Regarding the error details, the full stack trace is more useful than the message alone. You can find it in the log file (data/logs/oss.log) or in the Runtime/Logs tab panel.
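For example, assuming the default log location mentioned above:
grep -B 2 -A 20 NullPointerException data/logs/oss.log
This prints each occurrence with surrounding context, which usually includes the full stack trace.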

Related

Memory failure when running gem5 SE RISCV code

When I try to run a simulation in SE mode in gem5 I get the following output:
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
build/RISCV/mem/mem_interface.cc:791: warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7000
build/RISCV/sim/simulate.cc:194: info: Entering event queue # 0. Starting simulation...
build/RISCV/sim/mem_state.cc:443: info: Increasing stack size by one page.
build/RISCV/sim/mem_state.cc:99: panic: Someone allocated physical memory at VA 0x4000000000000000 without creating a VMA!
Memory Usage: 619616 KBytes
Program aborted at tick 2222000
I'm using the ELF-Linux cross compiler. Compiling with the Newlib-ELF cross compiler simulates just fine, but the thing is that I need to use pthreads (OpenMP) and the Newlib compilation doesn't support it. To get a grip on things I tried to simulate in x86, and found out that it won't work either with a simple gnu/gcc compilation. Then I compiled replicating what the test-progs folder did with docker, and then it worked fine. Is this the way to go? Since the error says there are problems with physical memory, would compiling with docker help out, or am I missing an obvious fix? How would I go about compiling RISCV with docker (I couldn't find examples of docker+RISCV)?

How to recover from infinite reboot loops in NodeMCU?

My NodeMCU program has gone into an infinite reboot loop.
My code is functionally working, but on any action I try, e.g. file.remove("init.lua") or even just =node.heap(), it panics and reboots, saying: PANIC: unprotected error in call to Lua API (not enough memory).
Because of this, I'm not able to change any code or delete init.lua to stop the automatic code execution.
How do I recover?
I tried re-flashing another version of NodeMCU, but it started emitting garbage on the serial port.
Then, I recalled that NodeMCU had two extra files: blank.bin and esp_init_data_default.bin.
I flashed them at 0x7E000 and 0x7C000 respectively.
They are also available as INTERNAL://BLANK and INTERNAL://DEFAULT in the NodeMCU flasher.
This booted the new NodeMCU firmware; all my files were gone, and I was out of the infinite reboot loop.
Flash the following files:
0x00000.bin to 0x00000
0x10000.bin to 0x10000
And, the address for esp_init_data_default.bin depends on the size of your module's flash.
0x7c000 for 512 kB, modules like ESP-01, -03, -07 etc.
0xfc000 for 1 MB, modules like ESP8285, PSF-A85
0x1fc000 for 2 MB
0x3fc000 for 4 MB, modules like ESP-12E, NodeMCU devkit 1.0, WeMos D1 mini
Then, after flashing those binaries, format the file system (run file.format() using ESPlorer) before flashing any other binaries.
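With esptool.py, those writes might look like this (the serial port and the 4 MB addresses are assumptions based on the table above; adjust both for your module):
python esptool.py --port /dev/ttyUSB0 write_flash 0x00000 0x00000.bin 0x10000 0x10000.bin 0x3fc000 esp_init_data_default.bin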
I've just finished working through a similar problem. In my case it was end-user error that caused a need to forcibly wipe init.lua, but I think both problems could be solved similarly. (For completeness, my problem was putting a far-too-short dsleep() call in init.lua, leaving the board resetting itself immediately upon starting init.lua.)
I tried flashing new NodeMCU firmware, writing blank.bin and esp_init_data_default.bin to 0x7E000 and 0x7C000, and also writing 0x00000.bin to 0x00000 and 0x10000.bin to 0x10000. None of these things helped in my case.
My hardware is an Adafruit Huzzah ESP8266 breakout (ESP-12), with 4MB of flash.
What worked for me was:
Download the NONOS SDK from Espressif (I used version 1.5.2 from http://bbs.espressif.com/viewtopic.php?f=46&t=1702).
Unzip it to get at boot_v1.2.bin, user1.1024.new.2.bin, blank.bin, and esp_init_data_default.bin (under bin/ and bin/at/).
Flash the following files to the specified memory locations (shown as separate esptool.py invocations at the end of this answer):
boot_v1.2.bin to 0x00000
user1.1024.new.2.bin to 0x010000
esp_init_data_default.bin to 0xfc000
blank.bin to 0x7e000
Note about flashing:
I used esptool.py 1.2.1.
Because of the nature of my problem, I was only able to write changes to the flash when in programming mode (i.e. after booting with GPIO0 held down to GND).
I found that I needed to reset the board between each step (else invocations of esptool.py after the first would fail).
Erased the flash:
esptool.py --port <your/port> erase_flash
Then I was able to write a new firmware. I used a stock nodeMCU 0.9.5 just to isolate variables, but I strongly suspect any firmware would work at this point.
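For reference, here are the four writes above as separate esptool.py invocations (the port name is an assumption; re-enter programming mode before each command):
esptool.py --port /dev/ttyUSB0 write_flash 0x00000 boot_v1.2.bin
esptool.py --port /dev/ttyUSB0 write_flash 0x010000 user1.1024.new.2.bin
esptool.py --port /dev/ttyUSB0 write_flash 0xfc000 esp_init_data_default.bin
esptool.py --port /dev/ttyUSB0 write_flash 0x7e000 blank.bin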
The only thing that worked for me was the Python flash tool esptool on Ubuntu; the Windows flash tool never deleted init.lua, and the reboot loop persisted.
Commands (ubuntu):
git clone https://github.com/themadinventor/esptool.git
cd esptool
python esptool.py -h
ls -l /dev/tty*
nodemcu_latest.bin can be downloaded from GitHub or elsewhere.
sudo python esptool.py -p /dev/ttyUSB0 --baud 460800 write_flash --flash_size=8m 0 nodemcu_latest.bin

Sybase initializes but does not run

I am using Red Hat 5.5 and I am trying to run Sybase ASE 12.5.4.
Yesterday I was trying to use the command "service sybase start", and the console showed Sybase repeatedly trying, and failing, to initialize the main database server.
UPDATE:
I initialized a database at /ims_systemdb/master using the following commands:
dataserver -d /ims_systemdb/master -z 2k -b 51204 -c $SYBASE/ims.cfg -e db_error.log
chmod a=rwx /ims_systemdb/master
ls -al /ims_systemdb/master
And it gives me a nice database at /ims_systemdb/master with a size of 104865792 bytes (2048 × 51204).
But when I run
service sybase start
The error log at /logs/sybase_error.log goes like this:
00:00000:00000:2013/04/26 16:11:45.18 kernel Using config area from primary master device.
00:00000:00000:2013/04/26 16:11:45.19 kernel Detected 1 physical CPU
00:00000:00000:2013/04/26 16:11:45.19 kernel os_create_region: can't allocate 11534336000 bytes
00:00000:00000:2013/04/26 16:11:45.19 kernel kbcreate: couldn't create kernel region.
00:00000:00000:2013/04/26 16:11:45.19 kernel kistartup: could not create shared memory
I read "os_create_region" is normal if you don't set shmmax in sysctl high enough, so I set it to 16000000000000, but I still get this error. And sometimes, when I'm playing around with the .cfg file, I get this error message instead:
00:00000:00000:2013/04/25 14:04:08.28 kernel Using config area from primary master device.
00:00000:00000:2013/04/25 14:04:08.29 kernel Detected 1 physical CPU
00:00000:00000:2013/04/25 14:04:08.85 server The size of each partitioned pool must have atleast 512K. With the '16' partitions we cannot configure this value f
Why do these two errors appear and what can I do about them?
UPDATE:
Currently, I'm seeing the 1st error message (os cannot allocate bytes). The contents of /etc/sysctl.conf are as follows:
kernel.shmmax = 4294967295
kernel.shmall = 1048576
kernel.shmmni = 4096
But the log statements earlier state that
os_create_region: can't allocate 11534336000 bytes
So why is the region it is trying to allocate so big, and where did that get set?
The Solution:
When you get a message like "os_create_region: can't allocate 11534336000 bytes", it means that Sybase's configuration file is asking the kernel to create a shared-memory region larger than the shmmax limit in /etc/sysctl.conf allows. (As a sanity check: 11534336000 bytes is exactly 5632000 × 2048, which suggests the config file had max memory set to 5632000 2 KB pages, i.e. about 11 GB.)
The main thing to do is to edit ims.cfg (or whatever configuration file you are using) and lower the max memory variable in the [Physical Memory] section:
[Physical Memory]
max memory = 64000
additional network memory = 10485760
shared memory starting address = DEFAULT
allocate max shared memory = 1
For your information, my /etc/sysctl.conf file ended with these three lines:
kernel.shmmax = 16000000000
kernel.shmall = 16000000000
kernel.shmmni = 8192
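After editing /etc/sysctl.conf, apply the new values without a reboot (as root):
sysctl -p
You can then verify them with sysctl kernel.shmmax before restarting Sybase.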
And once this is done, type "showserver" to reveal what processes are running.
For more information, consult the Sybase System Administrator's Guide, volume 2 as well as Michael Gardner's link to Red Hat memory management in the comments earlier.

Using cscope with VIM: adding database returns errno 75

I've got a pretty large cscope.out database (over 2GB) and an inverted index of over 1GB, and when I issue the command :cscope add "path to database", I get the following error:
E563: stat("path to database") error: 75
Looking at the source code, it seems to return the errno, where 75 means "value too large for defined data type".
How can I get it to load my db?
32-bit Vim imposes a 2 GB limit on cscope databases. Use 64-bit Vim to overcome this limitation.
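A quick way to check which build you are running (on Unix-like systems) is to inspect the binary:
file $(which vim)
A 64-bit build is reported as an "ELF 64-bit" executable.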

How many files can I have open at once?

On a typical OS, how many files can I have open at once using standard C disk I/O?
I tried to read some constant that should tell it, but on 32-bit Windows XP that was a measly 20 or something. It seemed to work fine with over 30, though I haven't tested it extensively.
I need about 400 files open at once at most, so if most modern OSes support that, it would be awesome. It doesn't need to support XP, but it should support Linux, Win7, and recent versions of Windows Server.
The alternative is to write my own mini file system, which I want to avoid if possible.
On Linux, this depends on the number of available file descriptors.
You can use ulimit -n to set or show the number of available FDs per shell.
See these instructions on how to check (or change) the total number of available FDs in Linux.
This IBM support article suggests that on Windows the number is 512, and you can change it in the registry (as instructed in the article).
Since open() returns the fd as an int, the size of int also bounds the upper limit (irrelevant in practice, since INT_MAX is a lot).
A process can query the limit using the getrlimit system call.
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rlim;
    getrlimit(RLIMIT_NOFILE, &rlim); /* per-process open-file limit */
    printf("Max number of open files: %lld\n", (long long)rlim.rlim_cur - 1);
    return 0;
}
FYI: as root, you first have to modify the 'nofile' item in /etc/security/limits.conf. For example:
* hard nofile 10240
* soft nofile 10240
(changes in limits.conf typically take effect when the user logs in)
Then, users can use the ulimit -n bash command. I've tested this with up to 10,240 files on Fedora 11.
ulimit -n <max_number_of_files>
Lastly, all of this is capped by the kernel limit, given by the following (I guess you could echo a value into this to go even higher... at your own risk):
cat /proc/sys/fs/file-max
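You can also compare current system-wide usage against that limit:
cat /proc/sys/fs/file-nr
The three fields are the number of allocated file handles, the number of allocated-but-unused handles, and the maximum.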
Also, see http://www.karakas-online.de/forum/viewtopic.php?t=9834
