npm package error: <--- JS stacktrace ---> FATAL ERROR: invalid table size Allocation failed - JavaScript heap out of memory - reactjs

Running other ReactJS projects works, but some others run into this error. Increasing memory is not solving the issue, and clearing the cache is not healing Node either. I am stuck.
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Starting the development server...
<--- Last few GCs --->
[24628:0000025F59DB78F0] 10935 ms: Scavenge 318.7 (375.5) -> 318.7 (375.5) MB, 37.4 / 0.0 ms (average mu = 0.990, current mu = 0.984) allocation failure
[24628:0000025F59DB78F0] 13187 ms: Scavenge 510.7 (567.5) -> 510.7 (567.5) MB, 165.5 / 0.0 ms (average mu = 0.990, current mu = 0.984) allocation failure
[24628:0000025F59DB78F0] 18581 ms: Scavenge 894.7 (951.6) -> 894.7 (951.6) MB, 315.3 / 0.0 ms (average mu = 0.990, current mu = 0.984) allocation failure
<--- JS stacktrace --->
FATAL ERROR: invalid table size Allocation failed - JavaScript heap out of memory
1: 00007FF635DA7B7F v8::internal::CodeObjectRegistry::~CodeObjectRegistry+114079
2: 00007FF635D34546 DSA_meth_get_flags+65542
3: 00007FF635D353FD node::OnFatalError+301
4: 00007FF63666B29E v8::Isolate::ReportExternalAllocationLimitReached+94
5: 00007FF63665587D v8::SharedArrayBuffer::Externalize+781
6: 00007FF6364F8C4C v8::internal::Heap::EphemeronKeyWriteBarrierFromCode+1468
7: 00007FF635FC8D89 v8::internal::Isolate::FatalProcessOutOfHeapMemory+25
8: 00007FF63632D115 v8::internal::HashTable<v8::internal::NumberDictionary,v8::internal::NumberDictionaryShape>::EnsureCapacity<v8::internal::Isolate>+341
9: 00007FF63632AE66 v8::internal::Dictionary<v8::internal::NumberDictionary,v8::internal::NumberDictionaryShape>::Add<v8::internal::Isolate>+86
10: 00007FF6363C8595 v8::internal::FeedbackNexus::ic_state+32581
11: 00007FF6363C29F2 v8::internal::FeedbackNexus::ic_state+9122
12: 00007FF636375714 v8::internal::JSObject::AddDataElement+1092
13: 00007FF63633442B v8::internal::StringSet::Add+1835
14: 00007FF63637700C v8::internal::JSObject::DefineAccessor+1644
15: 00007FF6363764AB v8::internal::JSObject::AddProperty+3083
16: 00007FF63637667B v8::internal::JSObject::AddProperty+3547
17: 00007FF636240658 v8::internal::Runtime::GetObjectProperty+5064
18: 00007FF6366F8F91 v8::internal::SetupIsolateDelegate::SetupHeap+494417
19: 00007FF636722E5D v8::internal::SetupIsolateDelegate::SetupHeap+666141
20: 00007FF63670CD2A v8::internal::SetupIsolateDelegate::SetupHeap+575722
21: 00007FF63668B53E v8::internal::SetupIsolateDelegate::SetupHeap+45310
22: 0000025F5C052EC8

This looks like a corrupted installation of Node.js.
Uninstall and reinstall your Node.js.
Clean your node_modules and reinstall all your dependencies.
Run it again and tell us if that solves anything.
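In case it helps, a typical cleanup along those lines looks roughly like this on Windows (a sketch assuming an npm-based project; the heap-size value is only an example):
# PowerShell: remove installed dependencies and the lockfile, then reinstall from scratch
Remove-Item -Recurse -Force node_modules
Remove-Item package-lock.json
npm cache clean --force
npm install

# Optionally raise the V8 heap limit before starting the dev server
$env:NODE_OPTIONS = "--max-old-space-size=4096"
npm start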

Related

Why doesn't the isBefore method work as expected in dayjs?

I am trying to create a calendar with dayjs, but it doesn't work, even though the same code works in momentjs.
My code is:
import dayjs from 'dayjs';

const calendar = [];
const today = dayjs();
const startDay = today.clone().startOf('month').startOf('week');
const endDay = today.clone().endOf('month').endOf('week');
let day = startDay.clone().subtract(1, 'day');
while (day.isBefore(endDay, 'day')) {
  calendar.push(
    Array(7)
      .fill(0)
      .map(() => day.add(1, 'day').clone())
  );
}
After running the code, it behaves like an infinite loop and shows this error:
<--- Last few GCs --->
[9148:000002DE6245A350] 188279 ms: Scavenge (reduce) 1987.6 (1992.3) -> 1986.8 (1993.3) MB, 5.3 / 0.0 ms (average mu = 0.253, current mu = 0.221) allocation failure
[9148:000002DE6245A350] 188284 ms: Scavenge (reduce) 1987.7 (1992.3) -> 1987.0 (1993.5) MB, 3.3 / 0.0 ms (average mu = 0.253, current mu = 0.221) allocation failure
[9148:000002DE6245A350] 188323 ms: Scavenge (reduce) 1987.8 (1992.5) -> 1987.0 (1993.8) MB, 5.3 / 0.0 ms (average mu = 0.253, current mu = 0.221) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 00007FF742FB481F napi_wrap+110783
2: 00007FF742F57F26 v8::base::CPU::has_sse+61862
3: 00007FF742F58E26 node::OnFatalError+294
4: 00007FF7438323BE v8::Isolate::ReportExternalAllocationLimitReached+94
5: 00007FF74381718D v8::SharedArrayBuffer::Externalize+781
6: 00007FF7436C02CC v8::internal::Heap::EphemeronKeyWriteBarrierFromCode+1516
7: 00007FF7436CB6EA v8::internal::Heap::ProtectUnprotectedMemoryChunks+1258
8: 00007FF7436C8829 v8::internal::Heap::PageFlagsAreConsistent+2457
9: 00007FF7436BD3D1 v8::internal::Heap::CollectGarbage+2049
10: 00007FF7436BB5D5 v8::internal::Heap::AllocateExternalBackingStore+1349
11: 00007FF7436DBA3B v8::internal::Factory::NewFillerObject+203
12: 00007FF74340A0B1 v8::internal::interpreter::JumpTableTargetOffsets::iterator::operator=+1409
13: 00007FF7438BB27D v8::internal::SetupIsolateDelegate::SetupHeap+465325
14: 000001673571CCAD
If I use the same code in momentjs, there is no issue and it works well.
What's the problem with dayjs that makes it not work?
Day.js is immutable: you always get a new clone back from day.add(1, 'day'), so the day value itself is never updated.
Have you tried logging the day value inside the while loop?
Put a day = day.add(1, 'day'); at the end of the loop (a short illustration of this immutability follows the corrected snippet below):
const calendar = [];
const today = dayjs();
const startDay = today.clone().startOf('month').startOf('week');
const endDay = today.clone().endOf('month').endOf('week');
let day = startDay.clone().subtract(1, 'day');
while (day.isBefore(endDay, 'day')) {
  console.log(day);
  calendar.push(
    Array(7)
      .fill(0)
      .map(() => day.add(1, 'day').clone())
  );
  day = day.add(1, 'day');
}
console.log(calendar);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dayjs/1.10.6/dayjs.min.js"></script>
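As a small illustration of the immutability mentioned above (hypothetical dates, not from the original post):
// add() returns a new dayjs instance; the object you call it on is left untouched
const day = dayjs('2021-01-01');
const next = day.add(1, 'day');
console.log(day.format('YYYY-MM-DD'));  // "2021-01-01" -- unchanged
console.log(next.format('YYYY-MM-DD')); // "2021-01-02"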

Using a loop in a Vagrantfile to create X number of disks

I am trying to create a VM with Vagrant with multiple disks, and I want the Vagrantfile to use a loop to create them. Vagrant does not seem to handle loops correctly, as it appears to traverse the loop two or three times, and then fails because it has already created disk1.vdi.
Warning: I am no expert on Ruby...
I have tried using arrays and Ruby's .each method, and tried a while loop. All fail with the same problem.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider "virtualbox" do |v|
    Drives = [1, 2, 3, 4, 5]
    Drives.each do |hd|
      puts "harddrive #{hd}"
      v.customize ['createhd', '--filename', "./disk#{hd}.vdi", '--variant', 'Fixed', '--size', 20 * 1024]
      v.customize ['storageattach', :id, '--storagectl', 'IDE', '--device', hd + 1, '--type', 'hdd', '--medium', "./disk#{hd}.vdi"]
    end
  end
end
What I expect is a VM with 5+1 drives.
What I get is:
$ vagrant up
harddrive 1
harddrive 2
harddrive 3
harddrive 4
harddrive 5
/home/brian/projects/centos/Vagrantfile:7: warning: already initialized constant Drives
/home/brian/projects/centos/Vagrantfile:7: warning: previous definition of Drives was here
harddrive 1
harddrive 2
harddrive 3
harddrive 4
harddrive 5
/home/brian/projects/centos/Vagrantfile:7: warning: already initialized constant Drives
/home/brian/projects/centos/Vagrantfile:7: warning: previous definition of Drives was here
harddrive 1
harddrive 2
harddrive 3
harddrive 4
harddrive 5
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'centos/7' version '1905.1' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
A customization command failed:
["createhd", "--filename", "./disk1.vdi", "--variant", "Fixed", "--size", 20480]
The following error was experienced:
#<Vagrant::Errors::VBoxManageError: There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["createhd", "--filename", "./disk1.vdi", "--variant", "Fixed", "--size", "20480"]
Stderr: 0%...
Progress state: VBOX_E_FILE_ERROR
VBoxManage: error: Failed to create medium
VBoxManage: error: Could not create the medium storage unit '/home/brian/projects/centos/disk1.vdi'.
VBoxManage: error: VDI: cannot create image '/home/brian/projects/centos/disk1.vdi' (VERR_ALREADY_EXISTS)
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component MediumWrap, interface IMedium
VBoxManage: error: Context: "RTEXITCODE handleCreateMedium(HandlerArg*)" at line 462 of file VBoxManageDisk.cpp
>
Please fix this customization and try again.
I think the important parts are the --portcount option on storagectl and the unless File.exist? guard, which avoids trying to recreate the disks. Vagrant evaluates the Vagrantfile more than once during a single run (that is what the "already initialized constant Drives" warnings are telling you), so the createhd calls have to be safe to repeat:
servers = [
  { :hostname => "node01", :ip => "192.168.1.10", :memory => "2048", :disks => 2 },
  { :hostname => "node02", :ip => "192.168.1.20", :memory => "2048", :disks => 2 },
  { :hostname => "node03", :ip => "192.168.1.30", :memory => "2048", :disks => 2 },
]

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  servers.each do |conf|
    config.vm.define conf[:hostname] do |node|
      node.vm.hostname = conf[:hostname]
      node.vm.network "private_network", ip: conf[:ip]
      node.vm.provider "virtualbox" do |vb|
        vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--portcount', conf[:disks] + 1]
        (1..conf[:disks]).each do |x|
          file_to_disk = './disk_' + conf[:hostname] + '_' + x.to_s + '.vdi'
          unless File.exist?(file_to_disk)
            vb.customize ['createhd', '--filename', file_to_disk, '--size', 20 * 1024]
          end
          vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', x, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
        end
        vb.memory = conf[:memory]
      end
    end
  end
end

do_rootfs function failed in yocto project

I am just getting started with the Yocto Project and am trying to build an image for the x86 architecture to be emulated with the QEMU emulator (running on Ubuntu 16.04). I am getting the following error while building the OS image.
ERROR: core-image-sato-1.0-r0 do_rootfs: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_rootfs(d)
0003:
File: '/home/rahul/poky/poky/meta/classes/image.bbclass', lineno: 258, function: do_rootfs
0254: progress_reporter.next_stage()
0255:
0256: # generate rootfs
0257: d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
*** 0258: create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
0259:
0260: progress_reporter.finish()
0261:}
0262:do_rootfs[dirs] = "${TOPDIR}"
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 1010, function: create_rootfs
1006: env_bkp = os.environ.copy()
1007:
1008: img_type = d.getVar('IMAGE_PKGTYPE')
1009: if img_type == "rpm":
*** 1010: RpmRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
1011: elif img_type == "ipk":
1012: OpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
1013: elif img_type == "deb":
1014: DpkgRootfs(d, manifest_dir, progress_reporter, logcatcher).create()
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 201, function: create
0197: if self.progress_reporter:
0198: self.progress_reporter.next_stage()
0199:
0200: # call the package manager dependent create method
*** 0201: self._create()
0202:
0203: sysconfdir = self.image_rootfs + self.d.getVar('sysconfdir')
0204: bb.utils.mkdirhier(sysconfdir)
0205: with open(sysconfdir + "/version", "w+") as ver:
File: '/home/rahul/poky/poky/meta/lib/oe/rootfs.py', lineno: 450, function: _create
0446: rpm_pre_process_cmds = self.d.getVar('RPM_PREPROCESS_COMMANDS')
0447: rpm_post_process_cmds = self.d.getVar('RPM_POSTPROCESS_COMMANDS')
0448:
0449: # update PM index files
*** 0450: self.pm.write_index()
0451:
0452: execute_pre_post_process(self.d, rpm_pre_process_cmds)
0453:
0454: if self.progress_reporter:
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 543, function: write_index
0539:
0540: def write_index(self):
0541: lockfilename = self.d.getVar('DEPLOY_DIR_RPM') + "/rpm.lock"
0542: lf = bb.utils.lockfile(lockfilename, False)
*** 0543: RpmIndexer(self.d, self.rpm_repo_dir).write_index()
0544: bb.utils.unlockfile(lf)
0545:
0546: def insert_feeds_uris(self, feed_uris, feed_base_paths, feed_archs):
0547: from urllib.parse import urlparse
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 105, function: write_index
0101: else:
0102: signer = None
0103:
0104: createrepo_c = bb.utils.which(os.environ['PATH'], "createrepo_c")
*** 0105: result = create_index("%s --update -q %s" % (createrepo_c, self.deploy_dir))
0106: if result:
0107: bb.fatal(result)
0108:
0109: # Sign repomd
File: '/home/rahul/poky/poky/meta/lib/oe/package_manager.py', lineno: 21, function: create_index
0017:def create_index(arg):
0018: index_cmd = arg
0019:
0020: bb.note("Executing '%s' ..." % index_cmd)
*** 0021: result = subprocess.check_output(index_cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8")
0022: if result:
0023: bb.note(result)
0024:
0025:"""
File: '/usr/lib/python3.5/subprocess.py', lineno: 626, function: check_output
0622: # empty string. That is maintained here for backwards compatibility.
0623: kwargs['input'] = '' if kwargs.get('universal_newlines', False) else b''
0624:
0625: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
*** 0626: **kwargs).stdout
0627:
0628:
0629:class CompletedProcess(object):
0630: """A process that has finished running.
File: '/usr/lib/python3.5/subprocess.py', lineno: 708, function: run
0704: raise
0705: retcode = process.poll()
0706: if check and retcode:
0707: raise CalledProcessError(retcode, process.args,
*** 0708: output=stdout, stderr=stderr)
0709: return CompletedProcess(process.args, retcode, stdout, stderr)
0710:
0711:
0712:def list2cmdline(seq):
Exception: subprocess.CalledProcessError: Command '/home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/recipe-sysroot-native/usr/bin/createrepo_c --update -q /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo' returned non-zero exit status 1
Subprocess output:
Temporary repodata directory /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/.repodata/ already exists! (Another createrepo process is running?)
ERROR: core-image-sato-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/rahul/poky/poky/build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/temp/log.do_rootfs.5019
ERROR: Task (/home/rahul/poky/poky/meta/recipes-sato/images/core-image-sato.bb:do_rootfs) failed with exit code '1'
The build process runs up to almost 90 percent, after which this error comes up and terminates the process. What could be the issue?
I got the same error when my host machine shut down abruptly, but everything worked well after I deleted the .repodata folder with sudo rm -r build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/.repodata/ and then built again.
I stopped a build using Ctrl-C and got the Python error described in the original question.
The .repodata folder (please see the answer from jmiranda) was empty, so I deleted the oe-rootfs-repo folder instead, and this worked for me.
I get the same issue, but with the error "Directory not empty" instead, when building in a Docker container. Deleting the destination directory using rm -r and running the build again works.
None of these methods worked for me.
I then cleaned the build using bitbake -c clean mybuildname, made the build again, and it worked flawlessly. I hope it helps someone.
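Taken together, the recovery steps from these answers boil down to the following sequence (a sketch using the paths from the log above; the bitbake target for this question would be core-image-sato rather than mybuildname):
# 1. Remove the stale temporary repodata left behind by an interrupted createrepo_c run
sudo rm -r build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/.repodata/

# 2. If the build still fails, remove the whole oe-rootfs-repo directory and retry
rm -r build/tmp/work/qemux86-poky-linux/core-image-sato/1.0-r0/oe-rootfs-repo/

# 3. As a last resort, clean the image recipe and rebuild
bitbake -c clean core-image-sato
bitbake core-image-sato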

Zeppelin 0.7.3 got local class incompatible error

I simply installed a Spark 2.2.1 standalone cluster and Zeppelin 0.7.3, and tried some logistic regression code like the one below:
# Imports not shown in the original post; these are the standard pyspark.ml ones
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# `training` and `test` DataFrames are prepared earlier in the notebook
lr = LogisticRegression(maxIter=10000, regParam=0.2)
model1 = lr.fit(training)
trainingSummary = model1.summary

objectiveHistory = trainingSummary.objectiveHistory
for objective in objectiveHistory:
    print(objective)

trainingSummary.roc.show()
print("areaUnderROC(training): " + str(trainingSummary.areaUnderROC))

prediction = model1.transform(test)
prediction_train = model1.transform(training)

evaluator = BinaryClassificationEvaluator().setLabelCol("label").setRawPredictionCol("probability").setMetricName("areaUnderROC")
pred_test = prediction.select("label", "probability", "rawPrediction")
pred_train = prediction_train.select("label", "probability", "rawPrediction")
ROC_test = evaluator.evaluate(pred_test)
ROC_train = evaluator.evaluate(pred_train)

print("areaUnderROC(training): " + str(ROC_train))
print("areaUnderROC(testing): " + str(ROC_test))
and got the following error. I googled and found that a similar problem was fixed in 0.7.1 when reading JSON.
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, 192.168.0.17, executor 0): java.io.InvalidClassException: org.apache.commons.lang3.time.FastDateParser; local class incompatible: stream classdesc serialVersionUID = 2, local class serialVersionUID = 3
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:687)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
at
....
Caused by: java.io.InvalidClassException:
org.apache.commons.lang3.time.FastDateParser; local class incompatible:
stream classdesc serialVersionUID = 2, local class serialVersionUID = 3
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:687)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
....

netty pipelines not getting released from memory

I have a high-volume Netty server that keeps consuming memory. Using jmap, I've tracked it down to the fact that pipelines just seem to keep growing and growing (along with NIO sockets, etc.). It is as if the sockets are never disconnecting.
My initialization of the ServerBootstrap is:
ServerBootstrap bootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(coreThreads, workThreads, Runtime.getRuntime().availableProcessors()*2));
bootstrap.setOption("child.keepAlive", false);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setPipelineFactory(new HttpChannelPipelineFactory(this, HttpServer.IdleTimer));
bootstrap.bind(new InetSocketAddress(host, port));
coreThreads and workThreads are java.util.concurrent.Executors.newCachedThreadPool().
IdleTimer is private static Timer IdleTimer = new HashedWheelTimer();
My pipeline factory is:
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("idletimer", new HttpIdleHandler(timer));
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("chunkwriter", new ChunkedWriteHandler());
pipeline.addLast("http.handler" , handler);
pipeline.addLast("http.closer", new HttpClose());
HttpIdleHandler is the basic stock idle handler given in the examples, except it uses the "all" timeout. It doesn't get executed that often. The timeout is 500 milliseconds (i.e. half a second), and the idle handler calls close on the channel. HttpClose() simply closes the channel for anything that makes it that far down the pipeline, in case the handler doesn't process it. It executes very irregularly.
Once I've sent the response in my handler (derived from SimpleChannelUpstreamHandler), I close the channel regardless of the keep-alive setting. I've verified that I'm closing channels by adding a listener to the ChannelFuture returned by close(), and the value of isSuccess in the listener is true.
Some examples from the jmap output (columns are rank, number of instances, size in bytes, classname):
3: 147168 7064064 java.util.HashMap$Entry
4: 90609 6523848 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext
6: 19788 3554584 [Ljava.util.HashMap$Entry;
8: 49893 3193152 org.jboss.netty.handler.codec.http.HttpHeaders$Entry
11: 11326 2355808 org.jboss.netty.channel.socket.nio.NioAcceptedSocketChannel
24: 11326 996688 org.jboss.netty.handler.codec.http.HttpRequestDecoder
26: 22668 906720 org.jboss.netty.util.internal.LinkedTransferQueue
28: 5165 826400 [Lorg.jboss.netty.handler.codec.http.HttpHeaders$Entry;
30: 11327 815544 org.jboss.netty.channel.AbstractChannel$ChannelCloseFuture
31: 11326 815472 org.jboss.netty.channel.socket.nio.DefaultNioSocketChannelConfig
33: 12107 774848 java.util.HashMap
34: 11351 726464 org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout
36: 11327 634312 org.jboss.netty.channel.DefaultChannelPipeline
38: 11326 634256 org.jboss.netty.handler.timeout.IdleStateHandler$State
45: 10417 500016 org.jboss.netty.util.internal.LinkedTransferQueue$Node
46: 9661 463728 org.jboss.netty.util.internal.ConcurrentIdentityHashMap$HashEntry
47: 11326 453040 org.jboss.netty.handler.stream.ChunkedWriteHandler
48: 11326 453040 org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteRequestQueue
51: 11326 362432 org.jboss.netty.handler.codec.http.HttpChunkAggregator
52: 11326 362432 org.jboss.netty.util.internal.ThreadLocalBoolean
53: 11293 361376 org.jboss.netty.handler.timeout.IdleStateHandler$AllIdleTimeoutTask
57: 4150 323600 [Lorg.jboss.netty.util.internal.ConcurrentIdentityHashMap$HashEntry;
58: 4976 318464 org.jboss.netty.handler.codec.http.DefaultHttpRequest
64: 11327 271848 org.jboss.netty.channel.SucceededChannelFuture
65: 11326 271824 org.jboss.netty.handler.codec.http.HttpResponseEncoder
67: 11326 271824 org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteTask
73: 5370 214800 org.jboss.netty.channel.UpstreamMessageEvent
74: 5000 200000 org.jboss.netty.channel.AdaptiveReceiveBufferSizePredictor
81: 5165 165280 org.jboss.netty.handler.codec.http.HttpHeaders
84: 1562 149952 org.jboss.netty.handler.codec.http.DefaultCookie
96: 2048 98304 org.jboss.netty.util.internal.ConcurrentIdentityHashMap$Segment
98: 2293 91720 org.jboss.netty.buffer.BigEndianHeapChannelBuffer
What am I missing? What thread is responsible for releasing its reference to the pipeline (or socket? channel?) so that the garbage collector can collect this memory? There appears to be some large hashtable holding on to them (several references to hashtable entries that I filtered out of the above list).
Unless you have a reference to a Channel, ChannelPipeline, or ChannelHandlerContext in your application, they should become unreachable as soon as the connection is closed. Please double-check whether your application is holding a reference to one of them somewhere. Sometimes an anonymous class is a good suspect, but a precise answer will not be possible without the heap dump file.
According to this answer: https://stackoverflow.com/a/12242390/8425783, there was an issue in Netty, and it was fixed in version 3.5.4.Final.
Netty issue: https://github.com/netty/netty/issues/520
